Understanding neural networks

Introduction

In this tutorial, I will go over the main workings of a neural network and code one from scratch for a particular purpose.
Neural networks in general are powerful and versatile. The one I will introduce you to today is neither: it is called a perceptron, and it consists of three input neurons connected to one output neuron in a single layer. The goal here isn't to create a powerful, usable tool, but to understand the workings and intricacies of a neural network.

Structure

As mentioned previously, this neural network is three nodes connected to one. The values stored in the three nodes are known as
the input vector, because they are what the neural network takes as input. The nodes also apply functions called activation functions, but
we will get to those when discussing the code. The connections between the three input nodes and the output node have weights attributed to them.
Each weight defines how much one node affects the node it's connected to, and these weights change as the neural network learns.
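To make this concrete, here is a minimal sketch of what the output node sees. The input values and weights here are made up for illustration; the real weights come from training later.

inputs = [10, 8, 6]            # the input vector: one value per input node
weights = [0.5, -0.2, 0.1]     # hypothetical weights, one per connection
weighted_sum = sum(x * w for x, w in zip(inputs, weights))
print(weighted_sum)            # 4.0 -- the raw value arriving at the output node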

Context

The problem solved by this network isn't really applicable to the real world, but it is at least easy to understand.

  • You are a teacher.
  • You have three students.
  • Student 1 is the brightest.
  • Student 2 is brighter than student 3.
  • Based upon past data, the computer needs to classify whether or not they are cheating.

Converting this to neural network terms, the students' new scores are what will be used as the input vector of our neural network.
The training data consists of the scores these students attained in the past, together with whether or not they were cheating at the time.

Code and explanation

This is the code, with each function explained step by step.

from numpy import random
from numpy import dot
from numpy import array
import numpy as np

class network():
    def __init__(self):
        # Seed the random generator so the starting weights are reproducible
        random.seed(1)
        # Three weights, one per input node, each in the range -1 to 1
        self.synaptic_weights = 2 * random.random((3, 1)) - 1

    # Neural net sigmoid (activation) function
    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    # Derivative of the sigmoid, written in terms of its output
    def adjust(self, x):
        return x * (1 - x)

    def train(self, In, Out, iterations):
        for iteration in range(iterations):
            # Predict with the current weights
            output = self.process(In)
            # How far off each prediction is
            error = Out - output
            # Scale the error by the sigmoid's slope, weight by the inputs
            adjustment = dot(In.T, error * self.adjust(output))
            self.synaptic_weights += adjustment

    def process(self, inputs):
        # Weighted sum of the inputs, squashed through the sigmoid
        return self.sigmoid(dot(inputs, self.synaptic_weights))

cheating_classify = network()
training_labels_cheating_in = array([[10, 10, 10], [8, 9, 10], [10, 7, 6],
                                     [10, 9, 10], [9, 3, 2], [7, 6, 5]])
training_labels_cheating_out = array([[1, 1, 0, 1, 0, 0]]).T
# Training:
cheating_classify.train(training_labels_cheating_in, training_labels_cheating_out, 10000)

def predict(scores):
    return cheating_classify.process(array(scores))

print(predict([8, 5, 1]))
print(predict([8, 9, 10]))

Now I will explain every part of this code in detail.
First, the class at the beginning. It contains five methods, including __init__.
In __init__, random synaptic weights (the variable synaptic_weights) are generated: three values, one per input node, each between -1 and 1.
These will be adjusted gradually as the network trains.
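If you're curious what those starting weights look like, you can reproduce them outside the class. This uses the same seed as the class, so the values below should match, roughly:

from numpy import random

random.seed(1)
print(2 * random.random((3, 1)) - 1)
# Roughly [[-0.166], [0.441], [-0.9998]]: three values in the range -1 to 1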

The second method is the sigmoid function. Its equation is 1/(1+e^-x), and its graph looks like an S, with x ranging from
negative infinity to infinity and y ranging between 0 and 1. It's responsible for squashing values so that they are compatible with the other structures of the neural network.
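A quick way to see the squashing in action (the x values here are just examples):

import numpy as np

for x in [-5, -1, 0, 1, 5]:
    print(x, 1 / (1 + np.exp(-x)))
# -5 -> ~0.007, -1 -> ~0.269, 0 -> 0.5, 1 -> ~0.731, 5 -> ~0.993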

The train method makes use of the process method, so I'll go through process first. In this neural network, the output value is the squashed (sigmoid) value of the dot product of the input values and the current weights.
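In matrix terms, our training input is a (6, 3) array (six score rows, three students) and the weights are (3, 1), so the dot product is a (6, 1) column of raw values, one per row, which the sigmoid then squashes. A small sketch with hypothetical, untrained weights:

import numpy as np

In = np.array([[10, 10, 10], [8, 9, 10], [10, 7, 6],
               [10, 9, 10], [9, 3, 2], [7, 6, 5]])
w = np.array([[0.5], [-0.2], [0.1]])   # hypothetical weights
z = np.dot(In, w)                      # shape (6, 1): one raw value per row
print(1 / (1 + np.exp(-z)))            # six squashed outputs between 0 and 1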

In the train method, the synaptic weights are adjusted based upon the difference between the predicted answer at the time and the true answer. This difference is called the "error". The error is multiplied by the slope of the sigmoid at the current output, which is what the adjust function computes: f(x) = x(1 - x), the derivative of the sigmoid written in terms of its output. The adjustment added to the weights is then the dot product of the transposed input matrix with error * f(output). Adding it each iteration nudges the weights toward better answers (a += b is the same as a = a + b).
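To see each piece of that update, here is one training step spelled out on a single row. The weights and the row are hypothetical numbers, not the actual trained values:

import numpy as np

In = np.array([[10, 9, 10]])               # one training row
Out = np.array([[1]])                      # its true label
w = np.array([[-0.17], [0.44], [-0.99]])   # hypothetical current weights

output = 1 / (1 + np.exp(-np.dot(In, w)))  # forward pass
error = Out - output                       # how far off the prediction is
gradient = error * output * (1 - output)   # error scaled by the sigmoid's slope
w += np.dot(In.T, gradient)                # each weight moves by input * gradient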
And that's the whole neural network.

Next we have the training on our own dataset.

cheating_classify = network()
training_labels_cheating_in = array([[10, 10, 10], [8, 9, 10], [10, 7, 6],
                                     [10, 9, 10], [9, 3, 2], [7, 6, 5]])
training_labels_cheating_out = array([[1, 1, 0, 1, 0, 0]]).T
# Training:
cheating_classify.train(training_labels_cheating_in, training_labels_cheating_out, 10000)

We have the example score sets in "training_labels_cheating_in" and, in binary form, whether each set of scores came from cheating (1) or not (0) in "training_labels_cheating_out".

After this point, it should be clear what the last line does: it trains on the in values and the out values for the number of iterations we specified. To reiterate, the network starts off with random weights and gets more accurate over time by changing those weights systematically.
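If you want to watch the learning happen, one way (a sketch, assuming the class and training arrays above are in scope) is to train in chunks and print how the error shrinks:

net = network()
for step in range(10):
    net.train(training_labels_cheating_in, training_labels_cheating_out, 1000)
    error = training_labels_cheating_out - net.process(training_labels_cheating_in)
    print(step, abs(error).mean())   # mean absolute error should fall toward 0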

Finally, we have the predict() function.

def predict(scores):
    return cheating_classify.process(array(scores))

This uses the process function to process the scores.
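Since the output is a squashed value between 0 and 1 rather than a hard yes/no, you could round it into a label. Thresholding at 0.5 is a common, if arbitrary, cutoff; a small sketch:

def classify(scores):
    # 1 = cheating, 0 = not cheating; 0.5 is an assumed cutoff
    return int(predict(scores)[0] > 0.5)

print(classify([8, 9, 10]))   # 1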
And finally the tests:

print(predict([8, 5, 1]))
print(predict([8, 9, 10]))

The computer outputted 0.00076284 for the first call and 0.9999991 for the second.
That looks reasonable: an output near 0 means "not cheating" and near 1 means "cheating". In [8, 5, 1] the brightest student scores highest, matching the expected pattern, while in [8, 9, 10] the weakest student scores highest, which the training data associates with cheating.

Thanks for reading and I hope this boosted your understanding of neural networks.
In my next tutorial I'll introduce you to TensorFlow and give you a basic overview of it.

Special thanks to @ved07 for being a nice critic.