Convolutional Neural Network for Visual Recognition
Outline
• Quick overview of Artificial Neural Network (ANN)
• What is Convolution? Convolutional Neural Network (CNN)? Why?
• How does it work?
• Demo
• Code
• References
• Discussion
Neural Network
source: http://www.kurzweilai.net/images/neuron_structure1.jpg and https://theclevermachine.files.wordpress.com/2014/09/perceptron2.png
Feedforward and Backpropagation
source: https://theclevermachine.wordpress.com/2014/09/11/a-gentle-introduction-to-artificial-neural-networks/
Activation Function
image source: https://www.gabormelli.com/RKB/Neuron_Activation_Function
Why Convolutional Neural Networks?
Image source: https://www.coursera.org/lecture/convolutional-neural-networks/why-convolutions-Xv7B5
• Reduce the number of weights required for training (see the parameter-count sketch below).
• Use filters to capture local information: a more meaningful search that moves from pixel recognition to pattern recognition.
• Sparsity of connections: most of the possible connections are absent (equivalently, most weights are zero), which can improve space and time efficiency.
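To make the weight-reduction point concrete, here is a small Python sketch comparing a fully connected layer with a convolutional layer on a 32x32x3 image (the 6 filters of size 5x5x3 follow the sizes used later in this deck; the exact numbers are illustrative):

```python
# Rough parameter-count comparison for one layer on a 32x32x3 image.
input_size = 32 * 32 * 3                       # 3072 input values

# Fully connected layer mapping the flattened image to 3072 hidden units:
fc_weights = input_size * input_size           # 3072 * 3072 = 9,437,184 weights

# Convolutional layer with 6 filters of size 5x5x3 (weights are shared
# across all spatial positions, so the count is independent of image size):
num_filters, filter_h, filter_w, channels = 6, 5, 5, 3
conv_weights = num_filters * (filter_h * filter_w * channels + 1)   # +1 bias per filter

print(f"fully connected: {fc_weights:,} weights")
print(f"convolution:     {conv_weights:,} weights")
```

Because the filter weights are shared across all spatial positions, the convolutional count does not grow with image size.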
What is Convolution?
Image source: https://www.youtube.com/watch?v=cOmkIsWfAcg
• In mathematics, a convolution is the integral measuring how much two functions overlap as one passes over the other.
• A convolution mixes two functions by multiplying one with a shifted copy of the other and summing the results (see the formula below).
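For reference, the usual definition of convolution (a math aside added here, not on the original slide): the continuous form and the discrete 2-D form used for images are

```latex
(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau
\qquad
(f * g)[i, j] = \sum_{m}\sum_{n} f[m, n]\, g[i - m,\, j - n]
```

Note that most deep-learning libraries actually compute cross-correlation (the filter is not flipped), but the term convolution is used anyway.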
Image Convolution
image source: https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
• Original image: function f
• Filter: function g
• Image convolution: f * g
• Applying several filters g1, g2, …, gn produces one output (feature map) per filter (a NumPy sketch follows).
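A minimal NumPy sketch of this sliding dot product (valid convolution, stride 1, single channel; the filter values are made up for illustration and, as is common in CNN code, the kernel is not flipped):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image and take a dot product at each position
    (valid convolution, stride 1, single channel)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)                  # toy grayscale image: function f
edge_filter = np.array([[1., 0., -1.],        # toy filter: function g
                        [1., 0., -1.],
                        [1., 0., -1.]])
feature_map = convolve2d(image, edge_filter)  # f * g
print(feature_map.shape)                      # (6, 6)
```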
Approach
image source: cs231n_2017_lecture5.pdf slide-38
Convolution
image source: cs231n_2017_lecture5.pdf slide-39
CNN Layers
source: partially from cs231n_2017
A simple ConvNet for CIFAR-10 classification could have the architecture
[INPUT - CONV - RELU - POOL - FC].
In more detail:
• INPUT [e.g. 32x32x3]
• Holds the raw pixel values of the image, width 32, height 32, and with three color channels R,G,B.
• CONV layer [32x32x6]
• Holds the output of neurons that are connected to local regions in the input,
• each computing a dot product between their weights and a small region they are connected to in the input volume. This may result in a volume such as [32x32x6] if we decide to use 6 filters.
• RELU layer [32x32x6]
• will apply an elementwise activation function, such as the max(0,x) thresholding at zero. This leaves the size of the volume
unchanged ([32x32x6]).
• POOL layer [16x16x6]
• will perform a downsampling operation along the spatial dimensions (width, height), resulting in a volume such as [16x16x6].
• FC (i.e. fully-connected) layers [400x1] > [120x1] > [84x1]
• will compute the class scores, resulting in a volume of size [1x1x10], where each of the 10 numbers corresponds to a class score among the 10 categories of CIFAR-10. As with ordinary Neural Networks and as the name implies, each neuron in this layer is connected to all the numbers in the previous volume.
Notes: the original cs231n example uses 12 filters; this deck uses 6. (A Keras sketch of this stack follows below.)
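As a sketch only, the [INPUT - CONV - RELU - POOL - FC] stack above could be written in Keras roughly as follows (6 filters with "same" padding to keep 32x32, then 2x2 pooling; the FC sizes 120/84/10 mirror the slide, but all hyper-parameters here are illustrative, not taken from the linked notebooks):

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),                  # INPUT  [32x32x3]
    layers.Conv2D(6, kernel_size=5, padding="same"),  # CONV   [32x32x6]
    layers.ReLU(),                                    # RELU   [32x32x6]
    layers.MaxPooling2D(pool_size=2),                 # POOL   [16x16x6]
    layers.Flatten(),                                 # flatten to a vector
    layers.Dense(120, activation="relu"),             # FC     [120x1]
    layers.Dense(84, activation="relu"),              # FC     [84x1]
    layers.Dense(10, activation="softmax"),           # class scores for CIFAR-10
])
model.summary()
```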
Convolution
source cs231n
Calculation Demo:
http://cs231n.github.io/convolutional-networks/
Image source: cs231n_2017_lecture5.pdf slide-39
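The referenced cs231n slide walks through the convolution arithmetic; the general output-volume rule (also quoted in the speaker notes at the end of this deck) is

```latex
W_2 = \frac{W_1 - F + 2P}{S} + 1, \qquad
H_2 = \frac{H_1 - F + 2P}{S} + 1, \qquad
D_2 = K
```

where the input volume is W1 x H1 x D1, F is the filter size, S the stride, P the zero padding, and K the number of filters. For example, a 32x32x3 input with F=5, S=1, P=0 and K=6 gives (32 - 5)/1 + 1 = 28, i.e. a 28x28x6 output volume.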
Activation Function - ReLU
• Removes negative values: outputs max(0, x).
• When we use ReLU, we should watch for dead units in the network (units that never activate). If there are many dead units while training our network, we might want to consider using Leaky ReLU instead (see the sketch below).
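A small NumPy illustration of ReLU versus Leaky ReLU (the 0.01 slope is a common default, used here purely for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)             # negative values become exactly 0

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)  # negatives keep a small, non-zero slope

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))        # [0.    0.    0.    1.5   3.  ]
print(leaky_relu(x))  # [-0.02 -0.005 0.    1.5   3.  ]
```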
Max-Pooling
Image source: cs231n
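A minimal NumPy sketch of 2x2 max-pooling with stride 2, matching the cs231n figure (the feature-map values are made up):

```python
import numpy as np

def max_pool_2x2(feature_map):
    """2x2 max-pooling with stride 2: keep the largest value in each 2x2 block."""
    h, w = feature_map.shape
    return feature_map[:h // 2 * 2, :w // 2 * 2] \
        .reshape(h // 2, 2, w // 2, 2) \
        .max(axis=(1, 3))

fm = np.array([[1, 3, 2, 1],
               [4, 6, 5, 0],
               [7, 2, 9, 8],
               [1, 0, 3, 4]], dtype=float)
print(max_pool_2x2(fm))
# [[6. 5.]
#  [7. 9.]]
```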
Architecture Example
source: https://medium.com/machine-learning-bites/deeplearning-series-convolutional-neural-networks-a9c2f2ee1524
Conv Layer
image source: cs231n_2017_lecture5.pdf slide-39
Operation – Convolution
image source: https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
Operation – Activation
Image source: https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
Operation – Pooling
image source: https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
Architecture Example
AlexNet - Trained Filters
source: cs231n
Example filters learned by Krizhevsky et al. Each of the 96 filters shown here is of size [11x11x3], and each one is shared by the 55*55 neurons in one depth slice. Notice that the parameter-sharing assumption is relatively reasonable: if detecting a horizontal edge is important at some location in the image, it should intuitively be useful at some other location as well, due to the translationally-invariant structure of images. There is therefore no need to relearn to detect a horizontal edge at every one of the 55*55 distinct locations in the Conv layer output volume.
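To quantify the parameter-sharing point from cs231n: with sharing, this CONV layer needs only 96 x (11 x 11 x 3) weights plus 96 biases; without sharing, every one of the 55 x 55 x 96 output neurons would carry its own 11 x 11 x 3 weights and bias. A quick check (illustrative arithmetic following the cs231n example):

```python
filters, fh, fw, depth = 96, 11, 11, 3
out_h, out_w = 55, 55

shared = filters * (fh * fw * depth) + filters                # 96*363 + 96 = 34,944
unshared = (out_h * out_w * filters) * (fh * fw * depth + 1)  # 290,400 * 364 = 105,705,600

print(f"with parameter sharing:    {shared:,}")
print(f"without parameter sharing: {unshared:,}")
```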
Summary
source: partially from cs231n_2017_lecture5.pdf slide-76
• Workflow
1. Initialize all filter weights and parameters with random numbers.
2. Use original images as input,
2.1 Apply Filters to Original Image > Conv layer
2.2 Apply Activation Function (e.g. ReLU) to Conv layer > Feature Map
2.3 Apply Pooling Filter to Feature Map > Smaller Feature Map (optional)
2.4 Flatten the Feature Map > Fully Connected (FC) network
2.5 Apply ANN training (forward and backward propagation) to the FC network
2.6 Calculate the error, adjust the weights, and loop over the training images until the probability of the correct class is high.
3. Test the result; if satisfactory, save the filters (weights and parameters) for future use, otherwise keep training. (A Keras sketch of steps 2.5-3 follows this slide.)
• ConvNets stack CONV, POOL, and FC layers
[(CONV-RELU)*N-POOL?]*M-(FC-RELU)*K, SOFTMAX
where - N is usually up to ~5, M is large, 0 <= K <= 2
- Trend towards smaller filters and deeper architectures
- Trend towards getting rid of POOL/FC layers (just CONV)
• But:
- Recent advances such as ResNet/GoogLeNet challenge this paradigm.
- The proposed Capsule Neural Network may overcome some shortcomings of ConvNets.
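Workflow steps 2.5-2.6 and 3 correspond to the standard compile/fit/evaluate loop. A hedged Keras sketch continuing the earlier CIFAR-10 model (it assumes the `model` defined in the CNN Layers sketch; epochs, batch size, and optimizer are illustrative, not the linked notebooks' actual settings):

```python
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0              # scale pixels to [0, 1]
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

# Forward pass, error calculation, and weight updates (backpropagation) per batch.
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, batch_size=64, validation_split=0.1)

# Step 3: test the result; if satisfactory, save the weights and parameters for reuse.
model.evaluate(x_test, y_test)
model.save("cnn_cifar10.h5")
```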
Various CNN Architectures
From https://www.jeremyjordan.me/convnet-architectures/
These architectures serve as rich feature extractors which can be used for image
classification, object detection, image segmentation, and many other more
advanced tasks.
Classic network architectures (included for historical purposes)
• [LeNet-5](https://www.jeremyjordan.me/convnet-architectures/#lenet5)
• [AlexNet](https://www.jeremyjordan.me/convnet-architectures/#alexnet)
• [VGG 16](https://www.jeremyjordan.me/convnet-architectures/#vgg16 )
Modern network architectures
• [Inception](https://www.jeremyjordan.me/convnet-architectures/#inception)
• [ResNet](https://www.jeremyjordan.me/convnet-architectures/#resnet)
• [DenseNet](https://www.jeremyjordan.me/convnet-architectures/#densenet )
Network Performance
Source: https://www.semanticscholar.org/paper/An-Analysis-of-Deep-Neural-Network-Models-for-Canziani-Paszke/28ee688947cf9d31fc48f07a0497cd75200a9485 and
https://arxiv.org/pdf/1605.07678.pdf
References
• [How to Select Activation Function for Deep Neural Network](https://engmrk.com/activation-function-for-dnn/)
• [Using Convolutional Neural Networks for Image Recognition](https://ip.cadence.com/uploads/901/cnn_wp-pdf)
• [Activation Functions: Neural Networks](https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6)
• [Convolutional Neural Networks Tutorial in TensorFlow](http://adventuresinmachinelearning.com/convolutional-neural-networks-tutorial-tensorflow/)
• [Rethinking the Inception Architecture for Computer Vision](https://arxiv.org/pdf/1512.00567.pdf)
Demo
[Demo - filtering](https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/) building image
[Demo - cs231n](http://cs231n.stanford.edu/) end-to-end architecture in real time
[Demo - convolution calculation](http://cs231n.github.io/convolutional-networks/) dot product
[Demo - cifar10](https://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html) filter/ReLU in detail
Code
[image classification with Tensorflow](https://github.com/rkuo/ml-tensorflow/blob/master/cnn-cifar10/cnn-cifar10-keras-v0.2.0.ipynb) uses TensorFlow locally
[image classification with Keras](https://github.com/rkuo/ml-tensorflow/blob/master/cnn-cifar10/cnn-cifar10-keras-v0.2.0.ipynb) uses Keras locally
[catsdogs](https://github.com/rkuo/fastai/blob/master/lesson1-catsdogs/Fastai_2_Lesson1.ipynb) uses fastai with a pre-trained model (resnet34)
[tableschairs](https://github.com/rkuo/fastai/blob/master/lesson1-tableschairs/Fastai_2_Lesson1a-tableschairs.ipynb) same approach with a different dataset
Image Classification with TensorFlow
Image Classification with Keras
TablesChairs with Fastai
Catsdogs Model with Fastai
Supplement Slides
Why Convolutional Neural Networks?
Image source:
https://www.youtube.com/watch?v=QsxKKyhYxFQ
• Reduce the number of weights required for training.
• Use filters to capture local information: a more meaningful search that moves from pixel recognition to pattern recognition.
• Sparsity of connections: most of the possible connections are absent (equivalently, most weights are zero), which can improve space and time efficiency.
LeNet-5
source: Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proc. IEEE 86(11): 2278–2324, 1998.
- 2 Conv
- 2 Subsampling
- 2 FC
- Gaussian connections
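A hedged Keras sketch of the LeNet-5 layout summarized above (2 conv, 2 subsampling, 2 FC; modern ReLU/softmax layers stand in for the original activations and the Gaussian-connection output, so treat this as an approximation rather than the 1998 network):

```python
from tensorflow.keras import layers, models

lenet5 = models.Sequential([
    layers.Input(shape=(32, 32, 1)),                      # original LeNet-5 input size
    layers.Conv2D(6, kernel_size=5, activation="relu"),   # C1: 6 filters   -> 28x28x6
    layers.AveragePooling2D(pool_size=2),                 # S2: subsampling -> 14x14x6
    layers.Conv2D(16, kernel_size=5, activation="relu"),  # C3: 16 filters  -> 10x10x16
    layers.AveragePooling2D(pool_size=2),                 # S4: subsampling -> 5x5x16
    layers.Flatten(),                                     # 400 values
    layers.Dense(120, activation="relu"),                 # FC [120x1]
    layers.Dense(84, activation="relu"),                  # FC [84x1]
    layers.Dense(10, activation="softmax"),               # stands in for Gaussian connections
])
lenet5.summary()
```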
Inception v3
Editor's Notes
  • #2: Convolutional Neural Network for Visual Recognition
  • #7: Max-Pooling. Use 6 filters of size 5 x 5 x 3; 3072 x 3072 = 9.43M vs. 156 x 4704 = 733,824. Stride.
  • #13: 9 + 1 + (-2) + 1 (bias) = 9. Hyper-parameters: accepts a volume of size W1×H1×D1 and requires four hyper-parameters: the number of filters K, their spatial extent F, the stride S, and the amount of zero padding P. Produces a volume of size W2×H2×D2 where W2 = (W1−F+2P)/S + 1, H2 = (H1−F+2P)/S + 1 (i.e. width and height are computed equally by symmetry), and D2 = K. With parameter sharing, it introduces F⋅F⋅D1 weights per filter, for a total of (F⋅F⋅D1)⋅K weights and K biases. In the output volume, the d-th depth slice (of size W2×H2) is the result of performing a valid convolution of the d-th filter over the input volume with a stride of S, and then offset by the d-th bias. A common setting of the hyper-parameters is F=3, S=1, P=1.
  • #14: For consistency, function f should be g
  • #16: Max-Pooling. http://www.ais.uni-bonn.de/papers/icann2010_maxpool.pdf shows that max-pooling is effective.
  • #17: Source cs231n: Example Architecture: Overview: We will go into more details below, but a simple ConvNet for CIFAR-10 classification could have the architecture [INPUT - CONV - RELU - POOL - FC]. In more detail: INPUT [32x32x3] will hold the raw pixel values of the image, in this case an image of width 32, height 32, and with three color channels R,G,B. CONV layer will compute the output of neurons that are connected to local regions in the input, each computing a dot product between their weights and a small region they are connected to in the input volume. This may result in volume such as [32x32x12] if we decided to use 12 filters. Use 6 here. RELU layer will apply an elementwise activation function, such as the max(0,x) thresholding at zero. This leaves the size of the volume unchanged ([32x32x12]). POOL layer will perform a downsampling operation along the spatial dimensions (width, height), resulting in volume such as [16x16x12]. FC (i.e. fully-connected) layer will compute the class scores, resulting in volume of size [1x1x10], where each of the 10 numbers correspond to a class score, such as among the 10 categories of CIFAR-10. As with ordinary Neural Networks and as the name implies, each neuron in this layer will be connected to all the numbers in the previous volume.
  • #19: Each Filter Generates One Feature Map
  • #21: In particular, pooling makes the input representations (feature dimension) smaller and more manageable; reduces the number of parameters and computations in the network, therefore controlling overfitting [4]; makes the network invariant to small transformations, distortions and translations in the input image (a small distortion in the input will not change the output of pooling, since we take the maximum / average value in a local neighborhood); and helps us arrive at an almost scale-invariant representation of our image (the exact term is "equivariant"). This is very powerful since we can detect objects in an image no matter where they are located (read [18] and [19] for details).
  • #22: [INPUT – [CONV – RELU]*2 – POOL]*3 – [FC]*2 - SoftMax
  • #23: Alexnet - https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax
  • #24: Concept: find a set of filters (function g, a matrix of weights) and parameters which can create proper feature maps and cause various activation functions to fire at different layers, so that the correct class ends up with the highest probability: f*g*a*p*fc -> max y. This should include the option of DROPOUT. Given an image function f, find a filter g, an activation function a, and a pooling function p that lead to the maximum y value (associated with f). Analogy: using a red glass filter to look at a red letter A written on white paper, we see a white letter A on a black background.
  • #25: Source cs231n: Example Architecture: Overview: We will go into more details below, but a simple ConvNet for CIFAR-10 classification could have the architecture [INPUT - CONV - RELU - POOL - FC]. In more detail: INPUT [32x32x3] will hold the raw pixel values of the image, in this case an image of width 32, height 32, and with three color channels R,G,B. CONV layer will compute the output of neurons that are connected to local regions in the input, each computing a dot product between their weights and a small region they are connected to in the input volume. This may result in volume such as [32x32x12] if we decided to use 12 filters. RELU layer will apply an elementwise activation function, such as the max(0,x) thresholding at zero. This leaves the size of the volume unchanged ([32x32x12]). POOL layer will perform a downsampling operation along the spatial dimensions (width, height), resulting in volume such as [16x16x12]. FC (i.e. fully-connected) layer will compute the class scores, resulting in volume of size [1x1x10], where each of the 10 numbers correspond to a class score, such as among the 10 categories of CIFAR-10. As with ordinary Neural Networks and as the name implies, each neuron in this layer will be connected to all the numbers in the previous volume.
  • #34: Demo: http://cs231n.stanford.edu/
  • #35: Max-Pooling. Use 6 filters of size 5 x 5 x 3; 3072 x 3072 = 9.43M vs. 156 x 4704 = 733,824. Stride.