IJRET: International Journal of Research in Engineering and Technology | ISSN: 2319-1163
Volume: 02, Issue: 03 | Mar-2013 | Available @ http://www.ijret.org
OBJECT RECOGNITION FROM IMAGE USING GRID BASED COLOR
MOMENTS FEATURE EXTRACTION METHOD
Amit Thakur¹, Avinash Dhole²
¹Computer Science and Engineering, ²Professor, Computer Science and Engineering; Head, Department of Computer Science & Engineering,
¹,²Raipur Institute of Technology, Mandir Hasaud, Raipur, Chhattisgarh, INDIA
amitthakur744@gmail.com, avi_dhole33@rediffmail.com
Abstract
Image processing is a mechanism for converting an image into digital form and performing various operations on it in order to obtain an enhanced image or to extract useful information from it. An image processing system treats images as two-dimensional signals while applying a number of image processing methods to them. The growing use of such systems leads to an increasing number of generated digital images, so automatic systems are required to recognize the objects they contain. Such systems may collect a number of features and specifications of an image, and the different features of an object are then used to identify that object in the image. Image processing is among the most rapidly growing technologies today, with applications in many aspects of business, and it forms a core research area within the engineering and computer science disciplines. The most common approach is to retrieve features from the image using various methods, but most of these methods do not yield sufficiently accurate features. There is therefore a need for an effective and efficient method of feature extraction. Moreover, images are rich in content, so some approaches are based on features derived directly from the content of the image: these are the grid-based color moments (GBCM) approaches. They allow users to search for a desired object in an image by specifying visual features (e.g., colour, texture and shape). Once the features have been defined and extracted, retrieval becomes a task of measuring similarity between image features. In this paper, we review a number of existing methods and apply a GBCM-based approach to object recognition.
Keywords: object, grid-based color moments (GBCM), features, feature extraction, textual features, image processing, digital form, object identification
---------------------------------------------------------------------***------------------------------------------------------------------------
1. INTRODUCTION
Object recognition is the task of finding a given desired object in an image or video sequence [1]. It is thus a problem of matching features from a database with representations extracted from the image data. Object recognition is therefore concerned with determining the identity of an object observed in an image from a set of known labels [7]. Oftentimes it is assumed that the object being observed has already been detected, or that there is a single object in the image [5].
Object recognition is one of the most fascinating abilities that humans possess easily from childhood. With a simple glance at an object, humans can tell its identity or category despite appearance variations due to changes in pose, illumination, texture, deformation, and occlusion. Furthermore, humans can easily generalize from observing a set of objects to recognizing objects that have never been seen before. For example, children are able to generalize the concept of "chair" or "cup" after seeing just a few examples [3].
Color moments analysis (CMA) is a very popular and effective technique for color-based image analysis [1]. It is especially important for the classification of images based on color and texture properties, face recognition, image retrieval, and identification of image orientation at various angles. Here we discuss the basic methodology for calculating the color moments (CMs) of a given image, along with sample code. The color moments to be calculated are in fact statistical moments.
The image first has to be partitioned into sub-blocks. Deciding the optimal number of sub-blocks is a qualitative question and has to be settled according to the type of application. In general, at least 7*7 and not more than 9*9 is a good choice. Here we partition an image into a 9*9 grid of sub-blocks, giving a total of 81 blocks [8].
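As an illustration of this step, the MATLAB sketch below partitions an RGB image into a 9*9 grid of sub-blocks; the test image and the rounding of the block boundaries are our own assumptions rather than details taken from the paper.

% Partition an RGB image into a 9x9 grid of sub-blocks (81 blocks in total).
% Boundaries are rounded so the grid also covers images whose dimensions
% are not exact multiples of 9.
img = imread('peppers.png');            % any RGB test image (assumed here)
[rows, cols, ~] = size(img);
rEdge = round(linspace(0, rows, 10));   % 10 edges -> 9 row bands
cEdge = round(linspace(0, cols, 10));   % 10 edges -> 9 column bands
blocks = cell(9, 9);                    % blocks{r,c} holds one sub-block
for r = 1:9
    for c = 1:9
        blocks{r, c} = img(rEdge(r)+1:rEdge(r+1), cEdge(c)+1:cEdge(c+1), :);
    end
end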
Since any color distribution can be characterized by its moments, and most of the information is concentrated in the low-order moments, only the first moment (mean), the second moment (variance) and the third moment (skewness) are taken as the feature vector [2]. The similarity between two sets of color moments is measured by the Euclidean distance: two similar images will have a small distance and hence a high similarity [6]. However, if two images share only a similar sub-region, their corresponding moments will differ and the similarity measure will be low.
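For concreteness, a minimal sketch of this similarity measure follows; the feature vectors f1 and f2 and the mapping from distance to a similarity score are illustrative choices, not values from the paper.

% Euclidean distance between two color-moment feature vectors.
% A smaller distance means the two images are more similar in color.
f1 = [0.42 0.11 0.03 0.55 0.09 0.01 0.38 0.14 0.02];   % moments of image 1 (example values)
f2 = [0.40 0.12 0.04 0.57 0.08 0.02 0.35 0.15 0.02];   % moments of image 2 (example values)
d  = norm(f1 - f2);               % Euclidean distance between the vectors
similarity = 1 / (1 + d);         % one simple way (our choice) to map distance to a score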
Color moments are measures that can be used to differentiate images based on their color features. Once calculated, these
moments provide a measurement of color similarity between images [4]. These similarity values can then be compared with the values of images indexed in a database for tasks such as image retrieval [1]. The basis of color moments lies in the assumption that the distribution of color in an image can be interpreted as a probability distribution [8]. Probability distributions are characterized by a number of unique moments (e.g., normal distributions are differentiated by their mean and variance). It therefore follows that if the color in an image follows a certain probability distribution, the moments of that distribution can be used as features to identify that image based on color [9]. Moments are calculated for each color channel of an image, so an image is characterized by nine moments: three moments for each of its three color channels.
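As a minimal sketch (our own illustration, assuming an arbitrary test image and MATLAB's built-in statistics functions; skewness() requires the Statistics and Machine Learning Toolbox), the nine global moments of an image could be computed as:

% Nine global color moments of an RGB image: mean, standard deviation and
% skewness of each of the three channels.
img  = double(imread('peppers.png')) / 255;   % assumed test image, scaled to [0,1]
feat = zeros(1, 9);
for ch = 1:3
    p = reshape(img(:, :, ch), 1, []);        % all pixel values of one channel
    feat((ch-1)*3 + 1) = mean(p);             % first moment
    feat((ch-1)*3 + 2) = std(p);              % second moment (the paper's code stores std(p)^2, i.e. the variance)
    feat((ch-1)*3 + 3) = skewness(p);         % third moment
end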
Section 1 has introduced image processing and object recognition. Section 2 reviews related research in the field of object identification, where many papers and studies have been put forward. Section 3 presents the problem statement. Section 4 details the proposed GBCM method. Section 5 describes the experimental setup and results, and the last section presents the conclusion and future scope.
2. RELATED RESEARCH
Many scholars have published extensive research on object recognition techniques and methods. Mas Rina Mustaffa, Fatimah Ahmad, Rahmita Wirza O.K. Rahmat and Ramlan Mahmod presented content-based image retrieval based on color-spatial features [2]. Jau-Ling Shih and Ling-Hwei Chen proposed a new color image retrieval method based on primitives of color moments [3]. David G. Lowe introduced object recognition from local scale-invariant features [4]. J. F. Dale Addison, Stefan Wermter and Garen Z. Arevian presented a comparison of various feature extraction and selection techniques [5]. Noah Keen focused on the color moments of an image [6].
3. PROBLEM STATEMENT
The problem is to classify an image's object into certain classes using a rich set of datasets of various image features. Until now, the datasets of object features have not been clearly identified [1]. We have identified more than 15 properties for extracting the features of an object, which helps to increase the possibilities and the efficiency of identifying objects in an image. The relationship among the various models is illustrated in Fig. 1.
Fig 1: Feature Extraction Process
The proposed system collects a number of features and specifications of an image, and the different features of an object are then used to identify the object in the image. As the number of properties increases, the efficiency and correctness of object identification also increase. The main task is to choose the most closely matched data sets and hence the object in the image.
4. PROBLEM SOLVING USING GBCM METHOD
Fig 2: Various stages of the object recognition process (acquiring the image, feature extraction, testing the image with trained data sets, image classification using SVM, and object recognition)
4.1 FEATURE EXTRACTION
In this paper, we propose a new method for solving the object recognition problem. Let P_ij denote the value of the i-th color channel at the j-th image pixel, and let N be the number of pixels in the image (or image block).
The three color moments can then be defined as follows.

Mean:
$E_i = \frac{1}{N}\sum_{j=1}^{N} P_{ij}$
The mean can be understood as the average color value in the image.

Standard deviation:
$\sigma_i = \sqrt{\frac{1}{N}\sum_{j=1}^{N}\left(P_{ij} - E_i\right)^2}$
The standard deviation is the square root of the variance of the distribution.

Skewness:
$s_i = \sqrt[3]{\frac{1}{N}\sum_{j=1}^{N}\left(P_{ij} - E_i\right)^3}$
Skewness can be understood as a measure of the degree of asymmetry in the distribution.
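To make these formulas concrete, the short MATLAB sketch below evaluates them directly for one color channel; the image name and variable names are our own choices. Note that the built-in std() divides by N-1 by default and skewness() returns the standardized third moment, so their values differ slightly from these definitions.

% Direct evaluation of the three color moments for one channel, following
% the formulas above (P holds the pixel values P_ij, N the number of pixels).
img   = double(imread('peppers.png')) / 255;   % assumed test image
P     = reshape(img(:, :, 1), 1, []);          % channel 1 as a row vector
N     = numel(P);
E     = sum(P) / N;                            % mean
sigma = sqrt(sum((P - E).^2) / N);             % standard deviation
s     = nthroot(sum((P - E).^3) / N, 3);       % skewness (cube root of the third central moment)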
Many other image properties have been introduced to improve the performance of the system. They are listed below, and a sketch of one way to compute several of them follows the list:
• Auto correlation
• Contrast
• Energy
• Entropy
• Homogeneity
• Sum variance
• Sum average
• Difference entropy
• Maximum probability
• Dissimilarity
• Cluster prominence
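The paper does not say how these properties are obtained; as one possibility, several of them can be computed in MATLAB from a gray-level co-occurrence matrix using the Image Processing Toolbox, as in the hedged sketch below.

% One possible way (our assumption) to obtain several of the listed texture
% properties via a gray-level co-occurrence matrix (Image Processing Toolbox).
gray  = rgb2gray(imread('peppers.png'));                 % assumed test image
glcm  = graycomatrix(gray, 'Offset', [0 1]);             % co-occurrence matrix, horizontal neighbor
props = graycoprops(glcm, {'Contrast', 'Energy', 'Homogeneity', 'Correlation'});
entropyValue = entropy(gray);                            % entropy of the grayscale image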
GBCM():
The GBCM() routine is used to extract the three features, i.e. the mean, the second moment (stored as the variance, the square of the standard deviation) and the skewness, of an image block. The variable 'i' is the loop index of the current block, and block(:,:,1) is the first color channel of that block; analogous lines with index offsets 4 to 9 handle the other two channels.
GBCM((i-1)*9+1) = mean(reshape(block(:,:,1), 1, []));      % first moment (mean) of channel 1
GBCM((i-1)*9+2) = std(reshape(block(:,:,1), 1, []))^2;     % second moment, stored as the variance
GBCM((i-1)*9+3) = skewness(reshape(block(:,:,1), 1, []));  % third moment (MATLAB's skewness() is the standardized third moment)
reshape():
B = reshape(A,m,n) returns the m-by-n matrix B whose elements are taken column-wise from A; an error results if A does not have m*n elements. In the code above, reshape(block(:,:,1), 1, []) flattens one channel of a block into a single row vector, with [] letting MATLAB infer the number of columns.
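Putting the pieces together, one possible end-to-end implementation of the grid-based extraction is sketched below; it loops over all 81 blocks and all three channels, producing an 81*9 = 729-element feature vector. The exact vector layout, the test image, and the normalisation are our assumptions (the experiments in Section 5 report 18 features, so the paper's final feature set evidently differs).

% Grid-based color moments (GBCM) sketch: 9 moments per block, 81 blocks.
% skewness() requires the Statistics and Machine Learning Toolbox.
img = double(imread('peppers.png')) / 255;      % assumed test image
[rows, cols, ~] = size(img);
rEdge = round(linspace(0, rows, 10));
cEdge = round(linspace(0, cols, 10));
GBCM = zeros(1, 81 * 9);
i = 0;                                          % running block index
for r = 1:9
    for c = 1:9
        i = i + 1;
        block = img(rEdge(r)+1:rEdge(r+1), cEdge(c)+1:cEdge(c+1), :);
        for ch = 1:3
            p = reshape(block(:, :, ch), 1, []);
            GBCM((i-1)*9 + (ch-1)*3 + 1) = mean(p);
            GBCM((i-1)*9 + (ch-1)*3 + 2) = std(p)^2;     % variance, as in the paper's snippet
            GBCM((i-1)*9 + (ch-1)*3 + 3) = skewness(p);
        end
    end
end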
4.2 CLASSIFICATION USING SVM
Support vector machines (SVMs) are supervised learning models with associated learning algorithms that analyse data and recognize patterns; they are used for classification and regression analysis [7].
Fig 3: The SVM Algorithm
An SVM object can be created in one of two ways: an existing SVM can be loaded from a file, or a new SVM can be created and trained on a dataset. Support vector machines have recently gained prominence in the fields of machine learning and pattern classification [8]. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
Fig 4: Block Diagram of SVM
The data sets are provided to the SVM classifier as input, and the classified objects are produced as output.
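The paper does not state which SVM implementation was used. As one possibility, the sketch below trains a multi-class SVM on GBCM feature vectors with MATLAB's Statistics and Machine Learning Toolbox; fitcecoc combines binary SVM learners into a multi-class classifier, and the training data here are synthetic placeholders rather than the paper's data.

% Hedged sketch: train a multi-class SVM on precomputed GBCM feature vectors.
trainFeatures = rand(50, 729);                    % 50 training images, 729 GBCM features each (placeholder)
trainLabels   = randi(5, 50, 1);                  % 5 hypothetical object classes (placeholder)
svmTemplate   = templateSVM('KernelFunction', 'linear', 'Standardize', true);
model         = fitcecoc(trainFeatures, trainLabels, 'Learners', svmTemplate);

New images can then be classified by extracting their GBCM vectors and calling predict(model, testFeatures).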
5. EXPERIMENTAL SET UP AND RESULT
Fig 5: Comparing testing data with trained data (the figure shows the input data, the training data T1-T6, and the resulting output model)
First, the input data are considered, and each input is compared with the training data, which may number up to 'n' items. The result obtained from the training data set, i.e. the maximum match found, is passed to the next stage, where the data sets are combined; the result of this combination is passed to the final stage, called the output model. Finally, the output model is created.
In our system, a number of images have been passed through the MATLAB programs. We prepared training data sets for images of different types, and 18 features were extracted. Around 50 training images were tested against the trained data sets, and 20 further images were input to the SVM program to test the final result; an overall output accuracy of 81% was observed.
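As a rough illustration of how such a percentage can be obtained, the lines below predict labels for a held-out test set and compute the accuracy; the test data are placeholders and 'model' refers to the SVM sketch in Section 4.2, so none of this reproduces the paper's actual figures.

% Evaluate the trained model on a held-out test set (placeholder data).
testFeatures = rand(20, 729);                     % 20 test images, as in the experiment
testLabels   = randi(5, 20, 1);                   % hypothetical ground-truth classes
predicted    = predict(model, testFeatures);      % 'model' from the SVM sketch above
accuracy     = 100 * mean(predicted == testLabels);   % percentage of correct classifications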
CONCLUSION AND FUTURE WORK
In this paper we have focused on different types of feature extraction techniques and applied them to the input training data sets. In future, more techniques can be involved for better efficiency and results. There are many more complex modifications that can be made to the images; for example, a variety of filters can be applied. Filters use mathematical algorithms to modify the image; some are easy to use, while others require a great deal of technical knowledge. There are many further possibilities in the field of image processing, and a number of algorithms exist for the pre-processing of images.
REFERENCES
[1] M. Swain and D. Ballard, "Color Indexing," International Journal of Computer Vision, vol. 7, no. 1, pp. 11-32, 1991.
[2] Mas Rina Mustaffa, Fatimah Ahmad, Rahmita Wirza O.K. Rahmat, Ramlan Mahmod, "Content-Based Image Retrieval Based on Color-Spatial Features," Malaysian Journal of Computer Science, vol. 21, no. 1, 2008.
[3] Jau-Ling Shih and Ling-Hwei Chen, "Color Image Retrieval Based on Primitives of Color Moments," 1001 Ta Hsueh Rd., Hsinchu, Taiwan 30050, R.O.C., May 2011.
[4] David G. Lowe, "Object Recognition from Local Scale-Invariant Features," Proc. of the International Conference on Computer Vision, Corfu, Sept. 1999.
[5] J. F. Dale Addison, Stefan Wermter, Garen Z. Arevian, "A Comparison of Feature Extraction and Selection Techniques," International Journal of Computer Applications (0975-8887), vol. 9, no. 12, pp. 36-40, November 2010.
[6] Noah Keen, "Color Moments," February 10, 2005.
[7] Christopher J. C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition," Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121-167, Kluwer Academic Publishers, 1998.
[8] V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, 2nd edition, 2000.
[9] S. V. N. Vishwanathan and M. Narasimha Murty, "Geometric SVM: A Fast and Intuitive SVM Algorithm," Technical Report IISC-CSA-2001-14, Dept. of CSA, Indian Institute of Science, Bangalore, India, November 2001. Submitted to ICPR 2002.
[10] J. Pradeep, E. Shrinivasan, S. Himavathi, "Diagonal Based Feature Extraction for Handwritten Character Recognition System Using Neural Network," IEEE, 2011.
BIOGRAPHIES:
Amit Thakur received the B.E. degree in Computer Science & Engineering from Pt. RSU, Raipur (C.G.), India, in 2006. He is currently pursuing the M.Tech. degree in Computer Science Engineering from CSVTU Bhilai (C.G.), India, and is working as an Assistant Professor in the Department of Computer Science & Engineering at BIT, Raipur (C.G.), India. His research areas include feature extraction, pattern recognition, and image processing.

Avinash Dhole is Professor of Computer Science & Engineering and Head of the Computer Science & Engineering Department at Raipur Institute of Technology, Raipur (C.G.), India. He obtained his M.Tech. degree in Computer Science & Engineering from RCET, Bhilai, India, in 2005, and has published over 15 papers in reputed national and international journals, conferences, and seminars. He also serves as faculty at Chhattisgarh Swami Vivekanand Technical University, Bhilai, India (a State Government University). His areas of research include operating systems, editors & IDEs, information system design & development, software engineering, modelling & simulation, and operations research.