GAN Neural Networks
adityakhanna

Generative Adversarial Networks (GANs)

GANs are a regular feature in the news these days because of what they can do. Deepfakes and face-swapping systems convincing enough to fool people stem from GANs, but how do they actually work?

This tutorial is a follow-up to my other neural network tutorials, since you will need some understanding of neural networks to follow along.

In simple terms, a GAN is two neural networks: one called the discriminator and one called the generator.

The generator's job is to produce fake samples, and the discriminator's job is to classify each sample as real or fake. The two networks compete against each other, and that competition improves them both.

By "compete", I mean that whenever the discriminator misclassifies a sample, it tunes its weights, and whenever the generator gets caught producing a fake, it tunes its weights, and this back-and-forth repeats for however many epochs you train.

And that's all there is to it in terms of fundamentals. One thing to note, though, is that the generator itself doesn't generate at random; the input it takes is random noise (the z in the code below), which the network learns to transform into convincing samples.
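For example, the noise input might be sampled like this (just a sketch; the 100-dimensional z here is a common choice, not something fixed by the method):

import numpy as np

# One batch of random noise vectors; each row is one z fed to the generator
batch_size, z_dim = 100, 100
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))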
I can't attach a demo for this tutorial specifically (or for my other Python/TensorFlow tutorials) because of a glitch that prevents the TensorFlow package from being used after it is downloaded. However, I did upload all the code to GitHub (this is actually code I made as part of a course many months back): https://github.com/adityakhannaadk/mnist_tf_gan/blob/master/digit_generator.py.
It simply generates MNIST digits instead of classifying them (the overused use of the MNIST dataset), and the main lines of code are these:

import tensorflow as tf

# Generator in the GAN: maps a noise vector z to a fake 784-pixel "image"
def generator(z, reuse=None):
  with tf.variable_scope('gen', reuse=reuse):
    hidden1 = tf.layers.dense(inputs=z, units=128)
    # Leaky ReLU, written by hand as max(alpha*x, x)
    alpha = 0.01
    hidden1 = tf.maximum(alpha*hidden1, hidden1)
    hidden2 = tf.layers.dense(inputs=hidden1, units=128)
    hidden2 = tf.maximum(alpha*hidden2, hidden2)
    # tanh squashes the 784 outputs into the range [-1, 1]
    output = tf.layers.dense(inputs=hidden2, units=784, activation=tf.nn.tanh)
    return output

# Discriminator in the GAN: maps a 784-pixel sample to a real/fake score
def discriminator(X, reuse=None):
  with tf.variable_scope('dis', reuse=reuse):
    hidden1 = tf.layers.dense(inputs=X, units=128)
    alpha = 0.01
    hidden1 = tf.maximum(alpha*hidden1, hidden1)
    hidden2 = tf.layers.dense(inputs=hidden1, units=128)
    hidden2 = tf.maximum(alpha*hidden2, hidden2)
    # One output unit: sigmoid(logits) is the probability the sample is real
    logits = tf.layers.dense(inputs=hidden2, units=1)
    output = tf.sigmoid(logits)
    return output, logits
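To show how these two functions get wired together: the discriminator is called twice per training step, once on real images and once on the generator's output, which is where the reuse argument comes in. Roughly like this (the placeholder names here are my own for illustration, not necessarily the ones in the linked code):

# Placeholders for a batch of real images and a batch of noise vectors
real_images = tf.placeholder(tf.float32, shape=[None, 784])
z = tf.placeholder(tf.float32, shape=[None, 100])

G_sample = generator(z)
D_real, D_logits_real = discriminator(real_images)
# reuse=True so this second call shares the first call's weights
D_fake, D_logits_fake = discriminator(G_sample, reuse=True)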

One thing to note: this was written in TensorFlow 1.4, and I think it's better to explain it this way than with TensorFlow 2.0. If you do want to convert my code, you can do so with TensorFlow's 1.x-to-2.0 upgrade script (tf_upgrade_v2).

The generator's output layer has 784 units because one sample from the MNIST dataset is 28 by 28 pixels (28 × 28 = 784), so the generated data must match that shape. The numpy.reshape function is then applied to turn the flat 784-value output back into a 28 by 28 square; this happens at the end of my code when displaying the nth sample.
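As a rough sketch of that final step (using random values here where the real generator output would go):

import numpy as np
import matplotlib.pyplot as plt

# Stand-in for one flat 784-value sample out of the generator
sample = np.random.uniform(-1, 1, 784)
plt.imshow(sample.reshape(28, 28), cmap='gray')  # back to a 28x28 square
plt.show()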
There are two hidden layers in the generator, each made up of 128 neurons, and a leaky ReLU activation is applied to each hidden layer's output. That's what the tf.maximum(alpha*hidden1, hidden1) lines are doing: max(alpha*x, x) with alpha = 0.01 passes positive values through unchanged and scales negative ones down instead of zeroing them.
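If I remember correctly, TensorFlow 1.4 also has a built-in for this, so the hand-rolled version above could be written as:

hidden1 = tf.nn.leaky_relu(hidden1, alpha=0.01)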

The discriminator is not too different from a general classifier network: it has two hidden layers of 128 neurons each, similar to the generator, and applies the sigmoid activation to its single output unit to get a real-vs-fake probability.
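And since the discriminator is just a binary classifier, the "competition" boils down to ordinary sigmoid cross-entropy with opposite targets. A minimal sketch of the losses (again using my placeholder names from above, not necessarily the exact code in the repo):

def loss_fn(logits, labels):
  return tf.reduce_mean(
      tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))

# Discriminator: score real samples as 1 and fakes as 0
D_loss = loss_fn(D_logits_real, tf.ones_like(D_logits_real)) + \
         loss_fn(D_logits_fake, tf.zeros_like(D_logits_fake))
# Generator: try to get its fakes scored as 1
G_loss = loss_fn(D_logits_fake, tf.ones_like(D_logits_fake))

# The variable scopes ('gen'/'dis') let each optimizer update only its own network
tvars = tf.trainable_variables()
d_vars = [v for v in tvars if 'dis' in v.name]
g_vars = [v for v in tvars if 'gen' in v.name]
D_trainer = tf.train.AdamOptimizer(0.001).minimize(D_loss, var_list=d_vars)
G_trainer = tf.train.AdamOptimizer(0.001).minimize(G_loss, var_list=g_vars)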

Thanks for reading, and I hope you learned something about how GANs work.
