Hands-On Convolutional Neural Networks with TensorFlow
Solve computer vision problems with modeling in TensorFlow and Python

Product type: Paperback | Published: Aug 2018 | Publisher: Packt | ISBN-13: 9781789130331 | Length: 272 pages | Edition: 1st
Authors: Richard Burton, Giounona Tzanidou, Iffat Zafar, Leonardo Araujo, Nimesh Patel
Table of Contents (12)

Preface
1. Setup and Introduction to TensorFlow
2. Deep Learning and Convolutional Neural Networks
3. Image Classification in TensorFlow
4. Object Detection and Segmentation
5. VGG, Inception Modules, Residuals, and MobileNets
6. Autoencoders, Variational Autoencoders, and Generative Adversarial Networks
7. Transfer Learning
8. Machine Learning Best Practices and Troubleshooting
9. Training at Scale
10. References
11. Other Books You May Enjoy

Variational autoencoders


Our first true generative model, one that can create new data resembling the training data, will be the variational autoencoder (VAE). The VAE looks like a normal autoencoder, but adds a constraint that forces the compressed representation (the latent space) to follow a zero-mean, unit-variance Gaussian distribution.
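
To make the constraint concrete, here is the standard VAE formulation (from Kingma and Welling's original work, given as background rather than quoted from this book): the encoder outputs a mean vector $\mu$ and a variance vector $\sigma^2$ for each input, and training adds a KL-divergence penalty measuring how far the resulting Gaussian is from the unit Gaussian:

$$ D_{\mathrm{KL}}\left(\mathcal{N}(\mu, \sigma^2) \,\|\, \mathcal{N}(0, I)\right) = -\frac{1}{2} \sum_{i=1}^{d} \left(1 + \log \sigma_i^2 - \mu_i^2 - \sigma_i^2\right) $$

This term is zero exactly when $\mu = 0$ and $\sigma^2 = 1$, so minimizing it pushes the latent space towards the required distribution.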

The idea behind forcing this constraint on the latent space is that, when we want to use the VAE to generate new data, we can simply sample vectors from a unit Gaussian distribution and feed them to the trained decoder. This constraint on the latent vector is what separates a VAE from a normal autoencoder: it gives us a principled way to create new latent vectors that can be fed to the decoder to generate data.
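
As an illustration, here is a minimal sketch of this idea (assuming a TensorFlow 2.x tf.keras setup; the layer sizes and latent dimension are made up, and this is not the book's own code). The encoder predicts a mean and log-variance, a latent vector is sampled with the reparameterization trick, and after training the decoder alone turns unit-Gaussian samples into new data:

    import tensorflow as tf

    latent_dim = 2  # illustrative latent-space size

    # Encoder: flattened 28x28 input -> parameters of a diagonal Gaussian.
    inputs = tf.keras.Input(shape=(784,))
    h = tf.keras.layers.Dense(256, activation="relu")(inputs)
    z_mean = tf.keras.layers.Dense(latent_dim)(h)
    z_log_var = tf.keras.layers.Dense(latent_dim)(h)

    # Reparameterization trick: z = mean + sigma * eps with eps ~ N(0, I),
    # so the sampling step stays differentiable.
    def sample_z(args):
        mean, log_var = args
        eps = tf.random.normal(tf.shape(mean))
        return mean + tf.exp(0.5 * log_var) * eps

    z = tf.keras.layers.Lambda(sample_z)([z_mean, z_log_var])

    # Decoder: latent vector -> reconstruction in image space.
    decoder = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(784, activation="sigmoid"),
    ])
    outputs = decoder(z)

    vae = tf.keras.Model(inputs, outputs)

    # Loss = reconstruction error + KL penalty pulling the latent
    # distribution towards a zero-mean, unit-variance Gaussian.
    recon = 784.0 * tf.keras.losses.binary_crossentropy(inputs, outputs)
    kl = -0.5 * tf.reduce_sum(
        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)
    vae.add_loss(tf.reduce_mean(recon + kl))
    vae.compile(optimizer="adam")
    # vae.fit(x_train, epochs=10)  # x_train: images flattened to 784 values

    # Generation: sample latent vectors from a unit Gaussian and decode them.
    new_images = decoder(tf.random.normal((16, latent_dim)))

Note that generation uses the decoder on its own; the encoder is only needed during training to shape the latent space.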

The following figure shows that the VAE has exactly the same structure as the autoencoder, except for the constraint on the hidden space:

[Figure: VAE structure, identical to a standard autoencoder apart from the constrained latent space]
Parameters to define a normal distribution

We need two parameters...
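
Although the text is cut off here, the two parameters that define a normal distribution are standard background (not quoted from the locked portion): its mean $\mu$ and its variance $\sigma^2$. For reference, the density is

$$ \mathcal{N}(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) $$

In practice, as in the sketch above, the encoder typically predicts $\log \sigma^2$ rather than $\sigma^2$ directly, so that the variance is always positive.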
