© 2019 IJRAR, June 2019, Volume 6, Issue 2 | www.ijrar.org | E-ISSN 2348-1269, P-ISSN 2349-5138 | Paper ID: IJRAR19K3893
Texture Classification Approach and Texture Datasets: A Review
Shrikant Bhosle (Research Scholar) and Dr. Prakash Khanale (Professor & HOD)
Department of Computer Science, College of Arts, Commerce & Science, Parbhani, India.
Abstract: Texture is one of the most important visual characteristics of an image. Texture classification is the process of assigning an unknown texture to a known texture class. Application areas of texture classification include medical image analysis, object recognition, biometrics, content-based image retrieval, remote sensing, industrial inspection, document analysis and many more. In this paper we discuss several feature extraction and classification methods used for texture classification, namely local binary pattern, scale invariant feature transform, speeded up robust features, Fourier transformation, texture spectrum, gray level co-occurrence matrix, K-nearest neighbor, artificial neural network and support vector machine. We also discuss the popular texture datasets Brodatz, Outex, CUReT and VisTex used for texture classification.
Keywords - Feature extraction, Texture classification, LBP, SURF, SIFT, GLCM, SVM, K-NN, ANN, Fourier transformation, TS.
I. INTRODUCTION
One can recognize texture by observing it, but it is very difficult to describe. Texture is a fundamental feature of an image and can be defined as a homogeneous pattern that carries information about the structural arrangement of, and relationships between, pixels. Examples of different texture images are given below.
Figure 1: Examples of different texture images.
Texture analysis is one of the most important techniques used in the analysis and interpretation of texture. Research problems in this domain can be divided into four categories: texture segmentation, texture synthesis, texture classification and shape from texture. Of these four primary issues of texture analysis, we focus only on texture classification. How to extract powerful features that efficiently characterize a texture image is a challenging and still developing research area within texture classification. The goal of texture classification is to classify an unknown texture sample image into one of a set of known texture classes. Successful classification of a texture image requires an efficient feature extraction and classification method. Texture classification is used in a large variety of real-world applications such as medical image analysis, pattern recognition, biometrics, content-based image retrieval, remote sensing, industrial inspection and document analysis [1].
The general framework of a texture classification technique is shown in figure 2. It consists of three main steps: preprocessing, feature extraction and texture classification.
Preprocessing
The preprocessing step removes noise and enhances the quality of the input image for further processing. It also includes operations such as color to gray level conversion, interest area selection and gray level normalization.
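As an illustration, the snippet below is a minimal preprocessing sketch in Python, assuming OpenCV is available; the file name is hypothetical, and histogram equalization stands in for gray-level normalization.

```python
# Minimal preprocessing sketch (assumes OpenCV; the file name is hypothetical).
import cv2

def preprocess(path, size=(128, 128)):
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # color to gray level conversion
    gray = cv2.resize(gray, size)                  # interest area / size selection
    gray = cv2.equalizeHist(gray)                  # gray level normalization
    return gray

patch = preprocess("texture_sample.png")
```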
Feature extraction
Feature extraction is the most important and difficult step in texture classification. In this step texture features are extracted from the image. Extraction of powerful texture features plays an important role: if poor features are used, even the best classifier will fail to achieve good results. Consequently, most research in texture classification focuses on feature extraction methods.
Texture classification
The decision about which category a texture image belongs to is taken in the texture classification step. The dataset is divided into two parts: training samples and test samples. The training samples are used to train the classifier, while the test samples are used to measure its accuracy. The most widely used classifiers are the support vector machine (SVM), artificial neural network (ANN) and K-nearest neighbor (KNN).
Figure 2: General Framework of Texture Classification Technique.
Texture classification is divided into two categories: supervised and unsupervised classification. In supervised classification the classifier is trained with features of known classes. In unsupervised classification, the classifier recognizes different classes based on the similarity of input features. In this paper we present a review of various feature extraction and classification methods used for texture classification, as well as texture datasets used in the training and testing process. The remainder of this paper is organized as follows: Section 2 details feature extraction methods, Section 3 details the classifiers used in texture classification, Section 4 describes texture datasets and Section 5 concludes the paper.
II. FEATURE EXTRACTION
In texture classification, it is important to extract texture features and classify them using a suitable classifier. A wide variety of feature extraction methods for describing the texture features of an image has been developed and used in texture classification. In this section we discuss only some popular feature extraction methods.
2.1 Local Binary Pattern (LBP)
Local binary pattern is a powerful texture descriptor introduced by Ojala et al. [2] that describes the relationship of a pixel to its neighborhood. LBP computes local texture features of an image and is often used for texture classification problems. LBP works by encoding the eight-pixel neighborhood of each pixel as a binary code and summarizing all codes into a histogram, which serves as the texture feature. In simple words, LBP labels each pixel of an image with a decimal number called the local binary pattern or LBP code. Figure 3 shows an example of the basic LBP descriptor.
Figure 3: An example of basic LBP descriptor.
As shown in the figure above, each pixel is compared with its eight neighbors: the central pixel value is subtracted from each neighboring pixel value. A negative result is encoded as 0 and the others as 1. A binary number is obtained by concatenating these binary codes in a clockwise direction starting from the top-left one, and its corresponding decimal value is used to label the pixel. The 256-bin histogram of LBP labels is then computed and used as the texture descriptor of the image. LBP variants for texture classification include rotation invariant LBP, uniform LBP, multi-dimensional LBP, LBP variance (LBPV), adaptive LBP (A-LBP), multi-scale spatial pyramid LBP (MSSP-LBP), pyramid LBP (P-LBP) and completed LBP (C-LBP) [3].
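A minimal sketch of the basic LBP descriptor is given below, assuming scikit-image and a grayscale image stored as a NumPy array; the 256-bin histogram of the codes is the texture feature described above.

```python
# Basic LBP histogram sketch (assumes scikit-image and an 8-bit grayscale NumPy array).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, radius=1, n_points=8):
    codes = local_binary_pattern(gray, n_points, radius, method="default")
    # 8 neighbors give 2^8 = 256 possible codes; their histogram is the descriptor.
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()
```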
2.2 Scale Invariant Feature Transform (SIFT)
Scale invariant feature transform is an algorithm in computer vision for detecting and describing local features in images. The algorithm was published by David Lowe in 1999. Matching features across different images is a common problem in computer vision.
SIFT image features provide a set of features that are not affected by changes in rotation, scale and illumination [4]. When all images are similar in nature (same scale, orientation, etc.) simple corner detectors can work, but when images have different scales and rotations, a scale invariant method is needed. SIFT bundles a feature detector and a feature descriptor. The detector extracts a number of frames (attributed regions) from an image in a way that is consistent with some variations of illumination, viewpoint and other viewing conditions. The descriptor associated with each region identifies its appearance compactly and robustly. Applications of SIFT include object recognition, image stitching, 3D modeling, gesture recognition, video tracking, etc.
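The sketch below shows how SIFT detection, description and matching might look with OpenCV (version 4.4 or later, where SIFT_create is available); the image file names are hypothetical.

```python
# SIFT detection and matching sketch (assumes opencv-python >= 4.4; file names are hypothetical).
import cv2

img1 = cv2.imread("texture_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("texture_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints and 128-D descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching of descriptors between the two images.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "matches found")
```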
2.3 Speed up Robust Feature (SURF)
Speeded up robust features (SURF) is a local feature detector and descriptor commonly used for object detection and matching in applications such as object recognition, image classification and image retrieval [5]. SURF is invariant to scale, rotation and illumination variations of an image. Repeatability, distinctiveness and robustness are its outstanding characteristics. It selects interest points at distinctive locations in an image such as corners, blobs and T-junctions. SURF not only speeds up the computation but also achieves a significant matching rate.
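A comparable SURF sketch is shown below; note that it assumes an OpenCV build with the contrib (non-free) modules enabled, since SURF is patented and not included in the default packages, and the file name is hypothetical.

```python
# SURF interest point sketch (assumes an opencv-contrib build with non-free modules enabled).
import cv2

img = cv2.imread("texture_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)   # higher threshold -> fewer, stronger points
keypoints, descriptors = surf.detectAndCompute(img, None)
print(len(keypoints), "interest points,", descriptors.shape[1], "values per descriptor")
```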
2.4 Fourier Transformation (FT)
Fourier transform based methods usually work on textures showing strong periodicity. Various features can be extracted from the Fourier power spectrum. Liu and Jernigan introduced 28 texture features derived from normalized Fourier transform coefficients [6], such as rings, wedges, inertia, entropy and anisotropy. The power spectrum of a texture image can be used to measure the periodicity and directionality of the texture; for example, a fine texture has strong high-frequency components, while a coarse one has strong low-frequency components. Figure 4 shows some texture images and their corresponding Fourier spectra. Methods based purely on the Fourier transformation tend to perform poorly in practice because they lack spatial information.
Figure 4: Texture images and their Fourier spectrums.
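A simple power-spectrum sketch is given below, assuming NumPy and a 2-D grayscale patch; the ring-energy helper is one hypothetical way to summarize low- versus high-frequency content in the spirit of the ring/wedge features mentioned above.

```python
# Fourier power spectrum sketch (assumes NumPy and a 2-D grayscale array).
import numpy as np

def power_spectrum(gray_patch):
    f = np.fft.fftshift(np.fft.fft2(gray_patch))   # move zero frequency to the center
    return np.abs(f) ** 2

def ring_energy(spectrum, r_inner, r_outer):
    # Fraction of spectral energy inside a ring of radii [r_inner, r_outer).
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = (r >= r_inner) & (r < r_outer)
    return spectrum[mask].sum() / spectrum.sum()
```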
2.5 Texture Spectrum (TS)
He and Wang stated that a texture image can be decomposed into a set of essential small units called texture units [7]. The texture spectrum approach is based on these texture units, which characterize the local texture information in all eight directions. The method has been applied to texture feature extraction, texture classification, edge detection and texture filtering. One advantage of the texture spectrum approach is that the texture aspects of an image are fully characterized by the corresponding texture spectrum, which can be used directly for image classification and analysis. The basic notions used for estimating the texture spectrum (TS) are the texture unit (TU) and the texture unit number (NTU). A texture unit is represented by a 3*3 window: the central pixel X0 is the pixel currently being processed, and its neighborhood is denoted as X = {X1, X2, X3, ..., X8}, where X1 is the pixel value at location 1, as shown in the following figure.
Figure 5: A Texture Unit.
The corresponding texture unit set is TU = {E1, E2, E3, ..., E8}, where each Ei takes the value 0, 1 or 2 depending on whether Xi is less than, equal to or greater than X0. NTU therefore has 3^8 = 6561 possible standard texture units, which are considered the smallest units covering all aspects in all eight directions around the central pixel.
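The sketch below is one straightforward NumPy implementation of the texture unit number under the three-way coding just described; the histogram of these numbers over the image is the texture spectrum.

```python
# Texture unit number (NTU) sketch: each of the 8 neighbors is coded 0, 1 or 2,
# giving 3^8 = 6561 possible texture units (assumes a 2-D grayscale NumPy array).
import numpy as np

def texture_unit_numbers(gray):
    gray = gray.astype(np.int32)
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    ntu = np.zeros_like(center)
    # Eight neighbor offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for i, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        e = np.where(neigh < center, 0, np.where(neigh == center, 1, 2))
        ntu += e * (3 ** i)
    return ntu  # the histogram of these values is the texture spectrum
```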
2.6 Gray Level Co-occurrence Matrices (GLCM)
The gray level co-occurrence matrix was introduced by Haralick [8]. It is a statistical method used to calculate second-order statistical (texture) features, in which the relationship between pairs of pixels is considered. The matrix records how often a pair of pixels with given gray level values occurs at a given relative position in the image, where the relative position is defined by a displacement vector. Two parameters of the displacement vector, the distance (D) and the position angle (P), affect the calculation of the GLCM. The four commonly used directions for the position angle parameter are shown in the figure.
Figure 6: GLCM angle.
Another important parameter that affects the GLCM feature vector is the number of gray levels (NUMLEVELS), which determines the size of the GLCM. After calculating the GLCM, it needs to be normalized, because without normalization the results will be poor [9]. The GLCM is also called the gray level spatial dependency matrix. Haralick defined 14 statistical features extracted from the GLCM; some of them are given below.
Angular second moment: $\mathrm{ASM} = \sum_{m=0}^{N-1}\sum_{n=0}^{N-1} C_{mn}^{2}$

Contrast: $\mathrm{CON} = \sum_{m=0}^{N-1}\sum_{n=0}^{N-1} (m-n)^{2}\,C_{mn}$

Correlation: $\mathrm{COR} = \sum_{m=0}^{N-1}\sum_{n=0}^{N-1} \frac{(m-\mu_{x})(n-\mu_{y})\,C_{mn}}{\sigma_{x}\sigma_{y}}$

Homogeneity: $\mathrm{HOM} = \sum_{m=0}^{N-1}\sum_{n=0}^{N-1} \frac{C_{mn}}{1+(m-n)^{2}}$

where $C_{mn}$ is the normalized co-occurrence matrix, $N$ is the number of gray levels, and $\mu_{x}, \mu_{y}, \sigma_{x}, \sigma_{y}$ are the means and standard deviations of its row and column marginal distributions.
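A GLCM feature sketch using scikit-image is shown below (version 0.19 or later; older versions spell the functions greycomatrix/greycoprops). The gray levels are first quantized to keep the matrix small, and the four extracted properties match the features listed above.

```python
# GLCM feature sketch (assumes scikit-image >= 0.19 and an 8-bit grayscale NumPy array).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray, levels=32):
    # Quantize to NUMLEVELS gray levels, then build a symmetric, normalized GLCM
    # for distance D = 1 at angles 0, 45, 90 and 135 degrees.
    q = (gray.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ("ASM", "contrast", "correlation", "homogeneity")
    return np.array([graycoprops(glcm, p).mean() for p in props])
```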
III. TEXTURE CLASSIFICATION
Classification is the step that takes texture features (training samples) as input and compares them with a test sample to give the texture class as output. In this section we discuss three classifiers that are frequently used in texture classification: K-nearest neighbor (K-NN), artificial neural network (ANN) and support vector machine (SVM).
3.1 K-Nearest Neighbor (KNN)
K-nearest neighbor is the simplest classifier; it uses a supervised learning method for texture classification. The K-NN classifier stores the features of all training samples and uses them directly for classification. In the K-NN classifier the nearest neighbors are found using a distance metric, which means a test image is classified by calculating the distances between the test image and the training images. From the literature, the distance metrics commonly used in the K-NN classifier are:
Euclidean: $d(X, Y) = \sqrt{\sum_{i=1}^{k} (X_{i} - Y_{i})^{2}}$

Manhattan: $d(X, Y) = \sum_{i=1}^{k} \lvert X_{i} - Y_{i} \rvert$

where $X$ and $Y$ are the $k$-dimensional feature vectors of the test and training images.
After calculating the distances between the training images and the test image, the training images are sorted by increasing distance and the K closest training images are selected as the K nearest neighbors. K is a user-defined constant. When K=1, the test image is assigned to the class of the single closest training image. Generally, a larger value of K decreases the probability of misclassification and reduces the effect of noise on classification [10]. For example, if K=5 the K-NN classifier chooses the five closest images from the training database and the test image is assigned to the most common class among these K samples.
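A K-NN classification sketch with scikit-learn is shown below; random vectors stand in for real texture descriptors (e.g. LBP histograms), and k = 5 with the Euclidean metric mirrors the example above.

```python
# K-NN classification sketch (assumes scikit-learn; random features stand in for texture descriptors).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.random((100, 256))            # 100 training descriptors (e.g. LBP histograms)
y_train = rng.integers(0, 4, 100)           # 4 hypothetical texture classes
X_test = rng.random((20, 256))

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")  # or metric="manhattan"
knn.fit(X_train, y_train)                   # K-NN simply stores the training samples
y_pred = knn.predict(X_test)                # majority vote among the 5 nearest neighbors
```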
3.2 Artificial Neural Network (ANN)
An artificial neural network, usually called simply a neural network, is a mathematical model used for classification. Its function resembles that of neurons in the human brain [11]. An ANN is a set of connected input-output units in which a weight is associated with each connection. It consists of an interconnected group of artificial neurons (processing elements) that process information. A typical three-layer neural network is shown in figure 7. It consists of three layers: the input layer, the hidden layer and the output layer.
Figure 7: Basic structure of three layers artificial neural network.
As shown in the figure above, the bottom layer of neurons that receives the input is called the input layer, the upper layer of neurons that provides the output is called the output layer, and the layer between them is called the hidden layer. In an ANN each neuron has weighted inputs, a transfer function and one output. The behavior of a neural network depends on the transfer function, the learning rule and the architecture of the neurons. A neuron is activated by the weighted sum of its inputs, and the activation signal is passed through a transfer function to produce the neuron's output. During training, the weights are adjusted until the error is minimized and the network reaches the specified level of accuracy. Once the network is trained and tested, it can be given new input to predict the output.
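One possible way to realize the three-layer network above is scikit-learn's MLPClassifier, sketched below with random placeholder features; the hidden layer size, transfer function and iteration count are illustrative choices, not values from the paper.

```python
# Three-layer neural network sketch (assumes scikit-learn; features and labels are placeholders).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 64))                 # placeholder texture feature vectors
y_train = rng.integers(0, 4, 200)               # 4 hypothetical texture classes

ann = MLPClassifier(hidden_layer_sizes=(32,),   # one hidden layer of 32 neurons
                    activation="relu",          # transfer function
                    max_iter=500)               # weights adjusted until the error is small
ann.fit(X_train, y_train)
print(ann.predict(X_train[:5]))
```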
3.3 Support Vector Machine (SVM)
The support vector machine was introduced by Vladimir Vapnik in 1995 [12]. It is a linear classifier that uses a supervised learning method for pattern classification and recognition tasks. SVM is widely used in various applications such as medical diagnosis, text recognition, data analysis, face recognition and bioinformatics. It was originally developed to classify data of two classes and was later extended to multiple classes. The goal of training an SVM is to construct the optimal hyperplane.
For example, suppose we have two classes of input data in an N-dimensional feature space. SVM tries to find a hyperplane that divides the space into two parts and separates the two groups from each other. However, there are usually many hyperplanes that split these two groups. SVM therefore minimizes the classification error by maximizing the margin; that is, it tries to find the hyperplane such that the distance from the closest point of each class to the hyperplane is maximal. This hyperplane is called the maximum margin hyperplane (or optimal hyperplane). An example of such a hyperplane is shown in figure 8(a).
In figure 8(b) there are three different hyperplanes. H3 does not separate the two groups of points. H1 separates the two groups but does not satisfy the maximum margin condition. Finally, H2 separates the two groups and maximizes the distance from the hyperplane to the support vectors; H2 is therefore the maximum margin hyperplane. When there are multiple classes, a hyperplane is created between every two classes or between each class and the rest, separating the data points with a one-against-one or one-against-all strategy. SVM can also perform non-linear classification using different kernel functions [13].
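The sketch below trains an SVM with scikit-learn on placeholder features; a linear kernel corresponds to the maximum margin hyperplane described above, while an RBF kernel gives non-linear classification, and multi-class data is handled internally with a one-against-one strategy.

```python
# SVM classification sketch (assumes scikit-learn; features and labels are placeholders).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.random((200, 64))             # placeholder texture feature vectors
y_train = rng.integers(0, 4, 200)           # 4 hypothetical texture classes

svm = SVC(kernel="linear", C=1.0)           # kernel="rbf" would give a non-linear classifier
svm.fit(X_train, y_train)                   # constructs the maximum margin hyperplane(s)
print(svm.predict(X_train[:5]))
```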
IV. TEXTURE DATASETS
Brodatz, Outex, CUReT and VisTex are popular standard texture datasets used in texture classification, and they are freely available for research. Details of these datasets are given below.
Figure 8: SVM Classification.
4.1 Brodatz Texture Dataset
The Brodatz texture dataset is derived from the Brodatz album [14], a standard texture database prepared in 1966 by Phil Brodatz, a professional photographer. It consists of 112 grayscale texture images of size 512*512 pixels. All images in this database are captured under a single illumination and viewing direction. Figure 9 shows sample images from the Brodatz texture dataset.
Figure 9: Sample images from Brodatz texture album.
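Since each Brodatz class is a single 512*512 image, a common practice is to split it into non-overlapping patches to obtain multiple training and test samples per class; the sketch below illustrates this with OpenCV, using a hypothetical file name.

```python
# Sketch: split a 512*512 Brodatz texture into non-overlapping 128*128 patches
# (assumes OpenCV; the file name is hypothetical).
import cv2

def split_into_patches(path, patch=128):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = img.shape
    return [img[y:y + patch, x:x + patch]
            for y in range(0, h, patch)
            for x in range(0, w, patch)]

patches = split_into_patches("D1.png")      # 16 patches per 512*512 Brodatz image
```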
4.2 Outex Texture Dataset
Outex stands for the University of Oulu texture dataset [15]. The Outex dataset is a popular target dataset widely used for the evaluation of rotation and illumination invariant texture classification approaches. It contains 16 sets of natural and artificial texture images, each captured under different illumination and rotation conditions. TC_00010 and TC_00012 are the rotation and illumination variant sets in the Outex dataset; they contain 24 textures rotated in 9 directions. All images are 128*128 pixels, captured at 100 dpi spatial resolution. Sample images of the TC_00010 and TC_00012 sets are shown in figure 10.
Figure 10: Samples of the 24 textures in TC10 and TC12 set.
4.3 CUReT Dataset
The Columbia-Utrecht Reflectance and Texture (CUReT) dataset is a challenging dataset used for texture classification tasks. The CUReT dataset is a collaborative research effort between Columbia University and Utrecht University. It contains 61 different texture classes, and each class contains 205 texture images captured under different rotation and illumination conditions [16]. This makes the CUReT database more challenging for texture classification. The images in this dataset are stored as 24-bit .BMP files. Sample images of the CUReT dataset are shown in figure 11.
Figure 11: Sample images from CUReT dataset.
V. CONCLUSION
Texture is an important feature used to describe the texture pattern of an image. Texture classification involves feature extraction and classification of texture images. Extracting optimal texture features is a challenging task in texture classification. In this paper, we discussed the general framework of a texture classification technique and some popular texture datasets used for training and testing of
classification methods. From this review we conclude that the texture classification technique is easy to understand and is still an active research area with potential to increase classification rates.
REFERENCES
[1] Smriti H. Bhandari, Amruta G. Yadrave, “Local Binary Pattern approach for Rotation Invariant Texture Classification”, ICCPCT, 2015.
[2] T. Ojala, M. Pietikinen, and D. Harwood, “A comparative study of texture measures with classification based on featured
distributions”, Pattern Recognition, vol. 29, pp.51 – 59,1996.
[3] Niraj Doshi, Gerald Schaefer, “Rotation-invariant Local Binary Pattern Texture Classification”, 54th International Symposium
ELMAR, 2012.
[4] Leila Kabbai, Mehrez Abdellaoui, Ali Douik, “Content Based Image Retrieval using Local and Global features descriptor”,
published by IEEE, 2016.
[5] Baofeng Zhang, Yingkui Jiao, Zhijun Ma, Yongchen Li, Junchao Zhu, “An Efficient Image Matching Method Using Speed Up
Robust Features”, published by IEEE, 2014.
[6] S. S. Liu and M. E. Jernigan, “Texture analysis and discrimination in additive noise”, Computer Vision, Graphics and Image
Processing, 1990.
[7] Chih-Cheng Hung, Minh Pham, Sara Arasteh, “ Image Texture Classification Using Texture Spectrum and Local Binary Pattern ”,
published by IEEE.
[8] Haralick R. M., Shanmugam K., Dinstein I., “Textural features for image classification” IEEE Trans. Systems. Man. Cybernetics,
610-621, 1973.
[9] Ben Salem, Y. Nasri, “Automatic recognition of woven fabric using SVM”, Signal, image and video processing, 429-434, 2009.
[10] B. S. Everitt, S. Landau, M. Leese, and D. Stahl, “Miscellaneous Clustering Methods, in Cluster Analysis (5th Edition)”, January
2011.
[11] P.G.H.H. Gunasekara, J. V. Wijayakulasooriya and H. A. C. Dharmagunawardhana “Image Texture Analysis Using Deep Neural
Networks”, published by IEEE, 2017.
[12] V. Vapnik, C. Cortes, “Support-vector networks”, Machine Learning, vol. 20, Issue. 3, pp. 273–297, 1995.
[13] Van Shang, Yan-hua Diao, Chun-ming Li, “Rotation Invariant Texture Classification Algorithm Based on Curvelet Transform and SVM”, IEEE Proceedings of the Seventh International Conference on Machine Learning and Cybernetics, Kunming, 2008.
[14] P. Brodatz, “Textures: A Photographic Album for Artists and Designers”, Dover, New York, 1966.
[15] Ojala, T., T. Mäenpää, M. Pietikäinen, J. Viertola, J. Kyllönen, and S. Huovinen, “Outex- New Framework for Empirical
Evaluation of Texture Analysis Algorithms”, Proc. IEEE 16th Int. Conf. on Pattern Recognition, 1, 701-706, 2002.
[16] K. J. Dana, B. van Ginneken, K. N. Shree, and J. J. Koenderink, “Reflectance and texture of real world surfaces”, IEEE ACM
Transactions on Graphics (TOG), vol. 18, pp.1–34,1999.
[17] Yassine Ben Salem, Salem Nasri, “Rotation Invariant Texture Classification using Support Vector Machines”, IEEE
International Conference on Electronic Systems, Signal Processing and Computing Technologies, 2014.
[18] Vishal S. Thakare, Nitin N. Patil, “Classification of Texture Using Gray Level Co-Occurrence Matrix and Self-Organizing Map”,
IEEE International Conference on Electronic Systems, Signal Processing and Computing Technologies, 2014.
[19] Sourajit Das, Uma Ranjan Jena, “Texture Classification using Combination of LBP and GLRLM Features along with KNN and
Multiclass SVM Classification”, IEEE International Conference on Communication, Control and Intelligent Systems (CCIS), 2016.
[20] Eftekhar Hossain, Md. Farhad Hossain and Mohammad Anisur Rahaman, “A Color and Texture Based Approach for the
Detection and Classification of Plant Leaf Disease Using KNN Classifier”, IEEE International Conference on Electrical, Computer
and Communication Engineering (ECCE), 2019.
[21] A.Jayasudha, D.Pugazhenthi, “Colour Texture Classification Using Wavelet Transform From Its Gray Scale”, IEEE 2nd
International Conference on Current Trends in Engineering and Technology, 2014.
[22] Gandham Girish, Jatindra Kumar Dash, ”Adaptive Fuzzy Local Binary Pattern for Texture Classification”,IEEE 2nd International
Conference on Man and Machine Interfacing, 2017.
[23] Paul Schumacher, Jun Zhang, “Texture Classification Using Neural Networks And Discrete Wavelet Transform”, published by
IEEE, 1994.
[24] Mohammed W. Ashour, Mahmoud F. Hussin, and Khaled M. Mahar, “Supervised Texture Classification Using Several Features
Extraction Techniques Based on ANN And SVM”, published by IEEE, 2008.
[25] Yassine Ben Salem, Salem Nasri, “Rotation Invariant Texture Classification using Support Vector Machines”, published by IEEE, 2014.
[26] Li Meng, Fu Ping, Sun Shenghe, “3D Texture Classification Using 3D Texture Histogram Model and SVM”, IEEE Eighth
International Conference on Electronic Measurement and Instruments, 2007.
[27] Rosalind W. Picard, Tanweer Kabir, Fang Liu, “Real-time Recognition with the entire Brodatz Texture Database”, published by
IEEE, 1993.
[28] Wenda He, Reyer Zwiggelaar, “Image Classification: A Novel Texture Signature Approach”, IEEE 17th International Conference
on Image Processing, 2010.