
In the previous section, we learned that the principles behind GANs are straightforward. We also learned that GANs can be implemented with familiar network layers such as CNNs and RNNs. What differentiates GANs from other networks is that they are notoriously difficult to train: something as simple as a minor change in the layers can drive the network into training instability.
In this section, we'll examine one of the early successful implementations of GANs using deep CNNs, called DCGAN (Deep Convolutional GAN) [3].
Figure 4.2.1 shows the DCGAN used to generate fake MNIST images. DCGAN recommends the following design principles:
- Use of strides > 1 convolutions instead of MaxPooling2D or UpSampling2D. With strides > 1, the CNN learns how to resize the feature maps (see the first sketch after this list).
- Avoid using Dense layers. Use CNN in all layers. The Dense layer is used only as the first layer of the generator to accept the z-vector. The output of the Dense layer is resized and becomes the input of the succeeding CNN layers (see the generator sketch after this list).
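The following is a minimal sketch of the first principle, not the book's exact model. It assumes the tf.keras API and illustrative filter and kernel sizes; the point is that a strided Conv2D replaces MaxPooling2D for downsampling, and a strided Conv2DTranspose replaces UpSampling2D for upsampling, so the resizing is learned rather than fixed:

```python
# Sketch: learned resizing with strided convolutions (illustrative sizes).
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, Input
from tensorflow.keras.models import Model

inputs = Input(shape=(28, 28, 1))
# Downsample 28x28 -> 14x14 with strides=2 instead of Conv2D + MaxPooling2D.
x = Conv2D(filters=32, kernel_size=5, strides=2, padding='same',
           activation='relu')(inputs)
# Upsample 14x14 -> 28x28 with strides=2 instead of UpSampling2D + Conv2D.
x = Conv2DTranspose(filters=1, kernel_size=5, strides=2, padding='same',
                    activation='sigmoid')(x)
model = Model(inputs, x)
model.summary()
```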
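The second principle can be sketched with a DCGAN-style MNIST generator. This is a hedged example, not the network of Figure 4.2.1: it assumes a 100-dim z-vector and illustrative layer sizes. The only Dense layer accepts the z-vector; its output is reshaped into a small feature map, and strided transposed convolutions handle all further resizing:

```python
# Sketch: generator whose only Dense layer accepts the z-vector (assumed sizes).
from tensorflow.keras.layers import (Input, Dense, Reshape,
                                     Conv2DTranspose, Activation)
from tensorflow.keras.models import Model

latent_dim = 100  # assumed size of the z-vector
z = Input(shape=(latent_dim,), name='z_input')
# The only Dense layer: its output is resized (reshaped) into a feature map
# that feeds the succeeding CNN layers.
x = Dense(7 * 7 * 128)(z)
x = Reshape((7, 7, 128))(x)
# Strided transposed convolutions resize 7x7 -> 14x14 -> 28x28; no UpSampling2D.
x = Conv2DTranspose(64, kernel_size=5, strides=2, padding='same',
                    activation='relu')(x)
x = Conv2DTranspose(1, kernel_size=5, strides=2, padding='same')(x)
outputs = Activation('sigmoid')(x)
generator = Model(z, outputs, name='generator')
generator.summary()
```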