This chapter introduces the concept of deep belief networks and the significance of this type of deep unsupervised learning. It illustrates these concepts by introducing deep autoencoders, together with two regularization techniques that can help create robust models. These techniques, batch normalization and dropout, are known to facilitate the learning of deep models and have been widely adopted. We will demonstrate the power of a deep autoencoder on MNIST and on a much harder dataset, CIFAR-10, which contains color images.
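To give a flavor of what this looks like in practice, the following is a minimal sketch (assuming TensorFlow/Keras) of a deep autoencoder regularized with batch normalization and dropout, trained on MNIST; the layer sizes and hyperparameters are illustrative rather than the chapter's exact architecture:

```python
# A minimal sketch of a deep autoencoder with batch normalization and
# dropout (illustrative layer sizes; not the chapter's exact model).
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense, BatchNormalization, Dropout
from tensorflow.keras.models import Model

# Load MNIST, flatten 28x28 images to 784-d vectors, scale to [0, 1].
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0

# Encoder: progressively smaller dense layers, each followed by
# batch normalization and dropout to regularize the deep stack.
inputs = Input(shape=(784,))
x = Dense(256, activation='relu')(inputs)
x = BatchNormalization()(x)
x = Dropout(0.2)(x)
x = Dense(64, activation='relu')(x)
x = BatchNormalization()(x)
x = Dropout(0.2)(x)
latent = Dense(2, activation='relu')(x)   # low-dimensional latent code

# Decoder: mirror of the encoder, reconstructing the 784-d input.
x = Dense(64, activation='relu')(latent)
x = BatchNormalization()(x)
x = Dense(256, activation='relu')(x)
x = BatchNormalization()(x)
outputs = Dense(784, activation='sigmoid')(x)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# Unsupervised training: the model learns to reconstruct its own input.
autoencoder.fit(x_train, x_train, epochs=10, batch_size=256,
                validation_data=(x_test, x_test))
```

The same pattern carries over to CIFAR-10, where the inputs are larger color images and the reconstruction task is correspondingly harder.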
By the end of this chapter, you will appreciate the benefits of building deep belief networks by observing the ease of modeling and the quality of the output they provide. You will ...