International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 06 Issue: 04 | Apr 2019 www.irjet.net p-ISSN: 2395-0072
FACE RECOGNITION USING MACHINE LEARNING
Ishan Ratn Pandey, Mayank Raj, Kundan Kumar Sah, Tojo Mathew, M. S. Padmini
1,2,3 B.E., Department of Computer Science and Engineering, The National Institute of Engineering, Mysuru, India
4,5 Assistant Professor, Department of Computer Science and Engineering, The National Institute of Engineering, Mysuru, India
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - Face recognition is of great importance for real-world applications such as video surveillance, human-machine interaction, and security systems. Deep learning based methods have shown better performance than traditional machine learning methods in image recognition, both in accuracy and in processing speed. This paper presents a modified Convolutional Neural Network (CNN) architecture in which a normalization operation is added to two of the layers. This operation, batch normalization, accelerated training of the network. The CNN architecture was used to extract distinctive facial features, and a Softmax classifier was used to classify faces in the CNN's fully connected layer. Experiments on our face database show that the proposed approach improves face recognition performance and yields better recognition results.
Key Words: face recognition, convolutional neural network,
softmax classifier, deep learning
1. INTRODUCTION
Face recognition is the process of recognizing a person's face through a vision system. Because of its use in security systems, video surveillance, and commercial applications, it has become an important human-computer interaction tool, and it is also used in social networks such as Facebook. With the rapid development of artificial intelligence, face recognition has attracted attention because of its non-intrusive nature and because it is the main method of human identification compared with other biometric methods: a face can easily be checked in an uncontrolled environment without the knowledge of the subject.
Throughout its history, face recognition has appeared in many research papers, e.g. [1]-[6]. Traditional methods based on shallow learning face challenges such as pose variation, scene lighting, and facial expression changes, as in references [7]-[17]. Shallow learning methods use only a few basic image characteristics and rely on handcrafted expertise to extract sample features, whereas deep learning methods can extract more complex facial features [18]-[27]. Deep learning is making crucial progress on problems that for many years resisted the artificial intelligence community's best attempts. It has proved outstanding at discovering complex structure in high-dimensional data and is therefore applicable to many domains in science, business, and government. It addresses the problem of learning hierarchical representations with a single algorithm, or a few algorithms, and has broken records in image recognition, natural language processing, semantic segmentation, and many other real-world scenarios [28]-[35]. There are various deep learning approaches, such as the Convolutional Neural Network (CNN), the Deep Belief Network (DBN) [36], [37], and the Stacked Autoencoder [38]. In image and face recognition, the CNN is the most frequently used algorithm. A CNN is a kind of artificial neural network that uses the convolution operation to extract features from the input data and increase the number of features. The CNN was first proposed by LeCun and was first applied to handwriting recognition [39]. His network was the source of many recent architectures and an inspiration for many researchers. Krizhevsky, Sutskever, and Hinton achieved strong results in the ImageNet competition [40]. Their publication is regarded as one of computer vision's most influential and showed that CNNs outperform handcrafted-feature methods in recognition performance. Backed by the computational power of Graphics Processing Units (GPUs), CNNs have achieved state-of-the-art results in a number of areas, including image recognition, scene recognition, and edge detection.
The main contribution of this paper is a robust, high-accuracy recognition algorithm. We developed a new CNN architecture by adding a Batch Normalization process after two different layers.
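As background on what the added operation does, batch normalization standardizes each feature over the mini-batch and then applies a learned scale and shift. The following is only a minimal NumPy sketch of the forward computation in training mode; the epsilon value and the learnable gamma/beta parameters are the usual ingredients of the technique, not values taken from this paper.

import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # x: activations of shape (batch, channels, height, width).
    # Normalize each channel over the batch and spatial dimensions,
    # then scale by gamma and shift by beta (both of shape (channels,)).
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)

# example: a batch of 8 feature maps with 16 channels
x = np.random.randn(8, 16, 32, 32)
out = batch_norm_forward(x, gamma=np.ones(16), beta=np.zeros(16))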
In this paper, the overall face recognition process consists of three stages. It starts with the pre-processing stage (color-space conversion and image resizing), continues with the extraction of facial features, and ends with the classification of the extracted feature set. The final stage in our system, classification based on the facial features extracted by the CNN, is realized by a Softmax classifier.
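The paper does not state how the pre-processing stage was implemented (the experiments used MatConvNet), so the following is only an illustrative Python/OpenCV sketch of the color-space conversion and resizing step; grayscale conversion is assumed for the single-channel input sizes, and the file name is hypothetical.

import cv2

def preprocess(path, size=64, grayscale=False):
    # Read an image, optionally convert BGR -> grayscale, and resize to size x size.
    img = cv2.imread(path)                              # OpenCV loads images as BGR
    if grayscale:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # 1-channel input, e.g. 64x64x1
    return cv2.resize(img, (size, size))                # e.g. 64x64x3 or 64x64x1

# example (hypothetical file): face = preprocess("subject01_001.jpg", size=64)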
The rest of this paper is organized as follows. The architecture of the CNN is introduced in Section 2. The proposed algorithm is discussed in Section 3. Section 4 presents the face database used in this paper. Section 5 presents the experimental results. Finally, Section 6 concludes the paper.
2. METHODOLOGY
CNNs are a category of neural networks that have proved highly effective in areas such as face recognition and classification. They are feed-forward, multi-layered neural networks composed of neurons with learnable weights. Each filter takes certain inputs, convolves them, and follows the result with a non-linearity [41]. A typical CNN architecture is shown in Fig. 1. The structure of a CNN contains Convolution, Pooling, Rectified Linear Unit (ReLU), and Fully Connected layers.
Fig 1: Layers of CNN
2.1 Convolution Layer
The convolution layer is the core building block of a convolutional network and performs most of the heavy computation. Its main purpose is to extract features from the image-based input data. By learning image features over small squares of the input image, convolution preserves the spatial relationship between pixels. The input image is convolved with a set of learnable filters (neurons); this produces feature maps, which are then fed to the next convolution layer as input.
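To make the feature-map computation concrete, here is a minimal NumPy sketch of a single-channel 2D convolution with stride 1 and no padding (as in most CNN libraries, this is implemented as cross-correlation, without flipping the kernel); the 3x3 edge-like filter is an arbitrary illustration, not a learned filter from the paper.

import numpy as np

def conv2d(image, kernel):
    # "Valid" 2D convolution of an (H, W) image with a (kH, kW) kernel, stride 1.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]])          # simple vertical-edge filter
feature_map = conv2d(image, kernel)      # shape (6, 6)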
2.2 Pooling Layer
The pooling layer reduces the dimensionality of each activation map while retaining the most important information. The input images are divided into a set of non-overlapping rectangles, and each region is down-sampled by an operation such as max or average pooling. This layer gives better generalization, faster convergence, and robustness to translation and distortion, and it is usually placed between convolution layers.
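A minimal sketch of max pooling over non-overlapping regions follows; the paper does not specify the pooling window, so a 2x2 window with stride 2 is an assumption.

import numpy as np

def max_pool2x2(feature_map):
    # 2x2 max pooling with stride 2 over non-overlapping regions.
    h, w = feature_map.shape
    h, w = h - h % 2, w - w % 2                  # drop an odd remainder row/column, if any
    blocks = feature_map[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

fm = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2x2(fm))   # [[ 5.  7.]
                         #  [13. 15.]]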
2.3 ReLU Layer
ReLU is a non-linear operation carried out by units that apply the rectifier. It is an element-wise operation, meaning it is applied per pixel, replacing every negative value in the feature map with zero. To see how ReLU operates, consider a neuron input denoted x, as is customary in the neural network literature; the rectifier is then defined as f(x) = max(0, x).
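The element-wise rectifier f(x) = max(0, x) can be written in one line; a small sketch applying it to a feature map:

import numpy as np

def relu(x):
    # Element-wise rectifier: negative values become 0, others pass through unchanged.
    return np.maximum(0, x)

fm = np.array([[-2.0, 3.0], [0.5, -0.1]])
print(relu(fm))   # [[0.  3. ]
                  #  [0.5 0. ]]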
2.4 Fully Connected Layer
The term Fully Connected Layer (FCL) refers to a layer in which every neuron is connected to every neuron in the previous layer. The outputs of the convolution, pooling, and ReLU layers represent high-level features of the input image. The purpose of the FCL is to use these features to classify the input image into different classes based on the training dataset. The FCL follows the final pooling layer and feeds the features to a classifier that uses the Softmax activation function. Using Softmax as the activation function ensures that the output probabilities of the Fully Connected Layer sum to 1: the Softmax function takes a vector of arbitrary real-valued scores and transforms it into a vector of values between 0 and 1 that sum to 1.
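The Softmax transformation described above can be sketched directly; subtracting the maximum score before exponentiating is a standard numerical-stability step, not something stated in the paper.

import numpy as np

def softmax(scores):
    # Map arbitrary real-valued scores to probabilities that sum to 1.
    shifted = scores - np.max(scores)      # for numerical stability
    exps = np.exp(shifted)
    return exps / exps.sum()

scores = np.array([2.0, 1.0, 0.1, -1.5])   # e.g. one score per identity
probs = softmax(scores)
print(probs, probs.sum())                  # probabilities, sum equals 1.0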
3. THE PROPOSED ALGORITHM
The block scheme of the proposed CNN recognition algorithm is shown in Fig. 2. The algorithm is performed in the following three steps:
1) Resize the input images to 16x16x1, 16x16x3, 32x32x1, 32x32x3, 64x64x1, and 64x64x3.
2) Build a CNN structure with eight layers, consisting of convolutional, max pooling, convolutional, max pooling, convolutional, max pooling, convolutional, and convolutional layers, respectively.
3) Use the Softmax classifier for classification after extracting all the features.
Fig 2: CNN Block Diagram
The structure of the feature extraction block of the proposed CNN is shown in Fig. 3.
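Since Fig. 3 is not reproduced here, the following PyTorch sketch shows one plausible reading of the eight-layer structure listed above (conv, max pool, conv, max pool, conv, max pool, conv, conv), with batch normalization after the first and final convolution layers as stated in the conclusion. The paper's experiments used MatConvNet, so this is not the authors' implementation, and all channel counts and kernel sizes are illustrative assumptions.

import torch
import torch.nn as nn

class ProposedCNN(nn.Module):
    # Sketch of the eight-layer feature extractor plus Softmax classification.
    def __init__(self, in_channels=3, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),  # conv 1 (+ BN)
            nn.MaxPool2d(2),                                                          # max pool 1
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),                               # conv 2
            nn.MaxPool2d(2),                                                          # max pool 2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),                               # conv 3
            nn.MaxPool2d(2),                                                          # max pool 3
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),                              # conv 4
            nn.Conv2d(128, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),        # conv 5 (+ BN)
        )
        self.fc = nn.Linear(128 * 8 * 8, num_classes)   # assumes a 64x64x3 input image

    def forward(self, x):
        x = self.features(x)
        logits = self.fc(torch.flatten(x, 1))
        return torch.softmax(logits, dim=1)             # Softmax classification

# example: ProposedCNN()(torch.randn(1, 3, 64, 64)).shape -> torch.Size([1, 4])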
4. DATABASE
Our face database contains images of 4 people taken at different times, under different lighting conditions, between 2/03/2019 and 8/03/19. Each individual in the database is represented by one hundred colored JPEG images with cluttered backgrounds, taken at 640-pixel resolution. In these images, the average face size is 150x150 pixels. The pictures show frontal and/or inclined faces under varying conditions of lighting and scale. Fig. 4 presents some face images of different subjects from our face database [42].
5. EXPERIMENTAL RESULTS
We designed our CNN with the MatConvNet software tool, version Beta23. After the pre-processing stage, each image was resized to 16x16x1, 16x16x3, 32x32x1, 32x32x3, 64x64x1, or 64x64x3. 70% of the pictures were assigned to the training set and 30% to the test set. We ran various tests by changing the image size, learning rate, batch size, and so on. The CNN was trained for 35 epochs. The performance of the proposed CNN was assessed using the top-1 and top-5 error rates: the top-1 error rate checks whether the top predicted class is identical to the target label, and the top-5 error rate checks whether the target label is among the top five predictions. Table 1 shows the structure of the proposed algorithm in brief. The results are better than those reported in the literature for shallow learning techniques, as in references [43]-[45].
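For completeness, the top-1 and top-5 error rates described above can be computed as in the following sketch; this is a generic illustration, not the MatConvNet evaluation code used by the authors.

import numpy as np

def top_k_error(scores, labels, k):
    # scores: (num_samples, num_classes) prediction scores; labels: true class indices.
    # Returns the fraction of samples whose true label is NOT among the top-k predictions.
    top_k = np.argsort(-scores, axis=1)[:, :k]
    hits = np.any(top_k == labels[:, None], axis=1)
    return 1.0 - hits.mean()

scores = np.random.rand(30, 10)            # e.g. 30 test images, 10 classes
labels = np.random.randint(0, 10, size=30)
print(top_k_error(scores, labels, 1))      # top-1 error rate
print(top_k_error(scores, labels, 5))      # top-5 error rate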
Figure 5 shows the performance of the proposed CNN architecture with respect to the top-1 error rate. As seen from Figure 5, the lowest top-1 error rate was obtained for the 64x64x3 image size. This result matters when the goal is to find the target label of any subject in the database. The top-5 error rate is given in Figure 6; for every image size, the lowest rate was achieved with 3 channels.
6. CONCLUSION
This paper presents an empirical assessment of a CNN-based face recognition system. The prominent feature of the proposed algorithm is that it applies batch normalization to the outputs of the first and final convolution layers, which allows the network to achieve higher accuracy. A Softmax classifier is used to classify the faces in the fully connected layer step. The performance of the proposed algorithm was tested on our face database. The results showed satisfactory recognition rates compared with studies in the literature.
REFERENCES
[1] S. G. Bhele and V. H. Mankar, “A Review Paper on Face
Recognition Techniques,” Int. J. Adv. Res. Comput. Eng.
Technol., vol. 1, no. 8, pp. 2278–1323, 2012.
[2] V. Bruce and A. Young, “Understanding face recognition,”
Br. J. Psychol., vol. 77, no. 3, pp. 305–327, 1986.
[3] D. N. Parmar and B. B. Mehta, “Face Recognition Methods
& Applications,” Int. J. Comput. Technol. Appl., vol. 4, no. 1,
pp. 84–86, 2013.
[4] W. Zhao et al., “Face Recognition: A Literature Survey,”
ACM Comput. Surv., vol. 35, no. 4, pp. 399–458, 2003.
[5] K. Delac, Recent Advances in Face Recognition. 2008.
[6] A. S. Tolba, A. H. El-baz, and A. A. El-Harby, “Face Recognition: A Literature Review,” Int. J. Signal Process., vol. 2, no. 2, pp. 88–103, 2006.
[7] C. Geng and X. Jiang, “Face recognition using SIFT features,” in Proceedings - International Conference on Image Processing, ICIP, pp. 3313–3316, 2009.
[8] S. J. Wang, J. Yang, N. Zhang, and C. G. Zhou, “Tensor
Discriminant Color Space for Face Recognition,” IEEE Trans.
Image Process., vol. 20, no. 9, pp. 2490–501, 2011.
[9] S. N. Borade, R. R. Deshmukh, and S. Ramu, “Face
recognition using fusion of PCA and LDA: Borda count
approach,” in 24th Mediterranean Conference on Control
and Automation, MED 2016, pp. 1164–1167, 2016.
[10] M. A. Turk and A. P. Pentland, “Face Recognition Using
Eigenfaces,” Journal of Cognitive Neuroscience, vol. 3, no. 1.
pp. 72–86, 1991.
[11] M. O. Simón, “Improved RGB-D-T based face
recognition,” IET Biometrics, vol. 5, no. 4, pp. 297–303, Dec.
2016.
[12] O. Déniz, G. Bueno, J. Salido, and F. De La Torre, “Face recognition using Histograms of Oriented Gradients,” Pattern Recognit. Lett., vol. 32, no. 12, pp. 1598–1603, 2011.
[13] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 210–227, 2009.
[14] C. Zhou, L. Wang, Q. Zhang, and X. Wei, “Face recognition based on PCA image reconstruction and LDA,” Opt. - Int. J. Light Electron Opt., vol. 124, no. 22, pp. 5599–5603, 2013.
[15] Z. Lei, D. Yi and S. Z. Li, “Learning Stacked Image
Descriptor for Face Recognition,” IEEE Trans. Circuits Syst.
Video Technol., vol. 26, no. 9, pp. 1685–1696, Sep. 2016.
[16] P. Sukhija, S. Behal, and P. Singh, “Face Recognition
System Using Genetic Algorithm,” in Procedia Computer
Science, vol. 85, 2016.
[17] S. Liao, A. K. Jain, and S. Z. Li, “Partial face recognition: Alignment-free approach,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 5, pp. 1193–1205, 2013.
[18] Z. Zhang, P. Luo, C. C. Loy, and X. Tang, “Learning Deep
Representation for Face Alignment with Auxiliary
Attributes,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 38,
no. 5, pp. 918–930, 2016.
[19] G. B. Huang, H. Lee, and E. Learned-Miller, “Learning
hierarchical representations for face verification with
convolutional deep belief networks,” in Proceedings of the
IEEE Computer Society Conference on Computer Vision and
Pattern Recognition, pp. 2518–2525, 2012.
[20] S. Lawrence, C. L. Giles, Ah Chung Tsoi, and A. D. Back,
“Face recognition: a convolutional neural-network
approach,” IEEE Trans. Neural Networks, vol. 8, no. 1, pp.
98–113, 1997.
[21] O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep Face Recognition,” in Proceedings of the British Machine Vision Conference 2015, pp. 41.1–41.12, 2015.
[22] Z. P. Fu, Y. N. Zhang, and H. Y. Hou, “Survey of deep learning in face recognition,” in IEEE International Conference on Orange Technologies, ICOT 2014, pp. 5–8, 2014.
[23] X. Chen, B. Xiao, C. Wang, X. Cai, Z. Lv, and Y. Shi, “Modular hierarchical feature learning with deep neural networks for face verification,” in Image Processing (ICIP), 2013 20th IEEE International Conference on, pp. 3690–3694, 2013.
[24] Y. Sun, D. Liang, X. Wang, and X. Tang, “DeepID3: Face Recognition with Very Deep Neural Networks,” CVPR, pp. 2–6, 2015.
[25] G. Hu, “When Face Recognition Meets with Deep
Learning: An Evaluation of Convolutional Neural Networks
for Face Recognition,” 2015 IEEE Int. Conf. Comput. Vis.
Work., pp. 384–392, 2015.
[26] C. Ding and D. Tao, “Robust Face Recognition via
Multimodal Deep Face Representation,” IEEE Trans.
Multimed., vol. 17, no. 11, pp. 2049– 2058, 2015.
[27] A. Bharati, R. Singh, M. Vatsa, and K. W. Bowyer, “Detecting Facial Retouching Using Supervised Deep Learning,” IEEE Trans. Inf. Forensics Secur., vol. 11, no. 9, pp. 1903–1913, 2016.
[28] M. Liang and X. Hu, “Recurrent convolutional neural network for object recognition,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3367–3375, 2015.
[29] P. Pinheiro and R. Collobert, “Recurrent convolutional neural networks for scene labeling,” Proc. 31st Int. Conf. Mach. Learn., vol. 32, pp. 82–90, Jun. 2014.
[30] W. Shen, X. Wang, Y. Wang, X. Bai, and Z. Zhang,
“DeepContour: A deep convolutional feature learned by
positive-sharing loss for contour detection,” in Proceedings
of the IEEE Computer Society Conference on Computer
Vision and Pattern Recognition, vol. 07–12, pp. 3982–3991,
June 2015.
[31] M. A. K. Mohamed, A. El-Sayed Yarub, and A. Estaitia, “Automated Edge Detection Using Convolutional Neural Network,” Int. J. Adv. Comput. Sci. Appl., vol. 4, no. 10, pp. 11–17, 2013.
[32] D. Cireşan, “Deep Neural Networks for Pattern Recognition.”
[33] R. Collobert, J. Weston, L. Bottou, M. Karlen, K.
Kavukcuoglu, and P. Kuksa, “Natural Language Processing
(Almost) from Scratch,” J. Mach. Learn. Res., vol. 12, pp.
2493–2537, 2011.
[34] R. Collobert and J. Weston, “A unified architecture for
natural language processing: Deep neural networks with
multitask learning,” Proc. 25th Int. Conf. Mach. Learn., pp.
160–167, 2008.
[35] E. Shelhamer, J. Long, and T. Darrell, “Fully
Convolutional Networks for Semantic Segmentation,” IEEE
Trans. Pattern Anal. Mach. Intell., vol. 39, no. 4, pp. 640–651,
2017.
[36] R. Xia, J. Deng, B. Schuller, and Y. Liu, “Modeling gender
information for emotion recognition using Denoising
autoencoder,” in ICASSP, IEEE International Conference on
Acoustics, Speech and Signal Processing – Proceedings, pp.
990–994, 2014.
[37] G. E. Hinton, S. Osindero, and Y. W. Teh, “A fast learning algorithm for deep belief nets,” Neural Comput., vol. 18, no. 7, pp. 1527–1554, 2006.
[38] Y. Bengio, “Learning Deep Architectures for AI,” vol. 2, no. 1, 2009.
[39] Y. LeCun, “Backpropagation Applied to Handwritten Zip Code Recognition,” Neural Comput., vol. 1, no. 4, pp. 541–551, Dec. 1989.
[40] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Adv. Neural Inf. Process. Syst. 25, pp. 1–9, 2012.
[41] A. Uçar, Y. Demir, and C. Guzelis, “Object Recognition
and Detection with Deep Learning for Autonomous Driving
Applications,” Simulation, pp. 1-11, 2017.
[42] “Georgia Tech face database,” 10-Jan-2017 [Online]. Available: http://www.anefian.com/research/facereco.htm.
[43] Nischal K N, Praveen Nayak M, K Manikantan, and S Ramachandran, “Face Recognition using Entropy-augmented face isolation and Image folding as pre-processing techniques,” 2013 Annual IEEE India Conference (INDICON), 2013.
[44] Katia Estabridis, “Face Recognition and Learning via Adaptive Dictionaries,” IEEE Conference on Technologies for Homeland Security (HST), 2012.
[45] Qiong Kang and Lingling Peng, “An Extended PCA and
LDA for color face recognition,” International Conference on
Information Security and Intelligence Control (ISIC), 2012.
