Examples of autoencoders
In this chapter, we will demonstrate some examples of different variations of autoencoders using the MNIST dataset. As a concrete example, suppose the inputs x are the pixel intensity values of a 28 x 28 image, so each input sample has n=784 dimensions. There are s2=392 hidden units in layer L2, and since the output must have the same dimensions as the input, y ∈ R784. The network therefore has 784 neurons in the input layer, followed by 392 neurons in the middle layer L2; so the network is forced to learn a lower-dimensional representation, which is a compressed version of the input. The network then feeds this compressed lower representation of the input, a(L2) ∈ R392, to the second part of the network, which tries to reconstruct the 784 input pixels from this compressed version.
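The forward pass of this 784-392-784 architecture can be sketched in a few lines of NumPy. This is a minimal, untrained illustration of the shapes involved; the weights here are random placeholders, whereas a real autoencoder would learn them by minimizing the reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions from the text: 784 input pixels, 392 hidden units in L2
n_input, n_hidden = 784, 392

# Randomly initialized weights (illustrative only; training would learn these)
W1 = rng.normal(0.0, 0.01, (n_input, n_hidden))   # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.01, (n_hidden, n_input))   # decoder weights
b2 = np.zeros(n_input)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(x):
    # a(L2) in R^392: the compressed representation of the input
    return sigmoid(x @ W1 + b1)

def decode(a):
    # Reconstruction y in R^784, the same dimensions as the input
    return sigmoid(a @ W2 + b2)

# One stand-in 28 x 28 image, flattened to a 784-dimensional vector
x = rng.random(n_input)
a_L2 = encode(x)
y = decode(a_L2)

print(a_L2.shape)  # (392,)
print(y.shape)     # (784,)
```

The key point the shapes make visible is the bottleneck: all 784 input values must pass through only 392 numbers before the decoder can attempt the reconstruction.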
Autoencoders rely on the fact that the image pixels in the input samples are somehow correlated, and they exploit this correlation to reconstruct the input. So autoencoders...