As I am intrigued by the capabilities of AI, I attended Dr James’ talk with the first years on Neural Networks; I was certainly not disappointed.
You can find my notes in the A5 brown notebook labelled Talks pages 1 – 8.
Some backstory on what neural networks are:
They are computational structures used to train a computer to perform certain tasks. They are based, in principle, on the way the biological neurons in our brains work. Pretty cool, huh.
These networks can even work out functions that we didn’t know existed; they can invent them for us. This is an example of ‘true machine learning’.
Neural networks, or NNs as I’ll refer to them from now on, are used for prediction or categorisation based on the inputs they’re given; they can also work independently or together.
Above is an example of a NN and how it works. You have the initial inputs, and then you have weighted connections that are randomly assigned to start with but are altered once the network starts to work out what works and what doesn’t. Then the activation function acts on the total input to create the output – AF(total input).
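That “AF(total input)” idea can be sketched in a few lines of Python. This is my own sketch, not Dr James’ code – I’m assuming the sigmoid as the activation function, which is one common choice:

```python
import math

def neuron_output(inputs, weights):
    # Total input: each input multiplied by its connected weight, then summed
    total = sum(x * w for x, w in zip(inputs, weights))
    # The activation function acting on the total input - AF(total input).
    # Sigmoid squashes any total into the range (0, 1).
    return 1 / (1 + math.exp(-total))

print(neuron_output([1.0, 0.5], [0.4, -0.2]))  # sigmoid(0.3) ≈ 0.574
```

Whatever the weights are, the sigmoid keeps the neuron’s output between 0 and 1, which is handy when the outputs feed into the next layer.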
It sounds super complicated, but in theory I actually sort of understood how it worked – wooo, magical miracles! Below are the steps explained fully as to how they learn:
Step 1: Assign random weights to all neuron connections
Step 2: Assign initial inputs and expected outputs
Step 3: The input neurons “feed forward” their outputs to each hidden-layer neuron they’re connected to
Step 4: Hidden layers calculate their total sums by multiplying each input by its connected weight and then adding each of these together.
Step 5: Hidden layers calculate their outputs by putting their total sum into the activation function.
Step 6: Output layer neurons calculate their total sums by multiplying each hidden layer output by its connected weight and then adding each of these together.
Step 7: Output layers calculate their outputs by putting their total sum into their activation function.
Step 8: Calculate the error of the network, e.g. ½(Expected Output − Actual Output)²
Step 9: Calculate the gradients for each neuron in the output and hidden layers
Step 10: Alter the weights (this is what causes learning to happen)
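The ten steps above can be strung together into a tiny working network. This is a minimal sketch of my own, not what Dr James showed us: a 2-input, 2-hidden, 1-output network with sigmoid activations, no bias terms, and a learning rate of 0.5 that I’ve picked arbitrarily. It trains on a single made-up example and the error shrinks:

```python
import math
import random

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

random.seed(0)

# Step 1: assign random weights to all neuron connections
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
w_ho = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output

# Step 2: assign initial inputs and an expected output (invented for this sketch)
inputs, expected = [1.0, 0.0], 1.0
lr = 0.5  # learning rate - my assumption, the talk didn't mention one

for epoch in range(1000):
    # Steps 3-5: feed forward through the hidden layer (weighted sum, then activation)
    hidden = [sigmoid(sum(inputs[i] * w_ih[i][h] for i in range(2))) for h in range(2)]
    # Steps 6-7: feed forward through the output layer
    output = sigmoid(sum(hidden[h] * w_ho[h] for h in range(2)))
    # Step 8: error = 1/2 (expected - actual)^2
    error = 0.5 * (expected - output) ** 2
    # Step 9: gradients for the output and hidden neurons
    d_out = (output - expected) * output * (1 - output)
    d_hid = [d_out * w_ho[h] * hidden[h] * (1 - hidden[h]) for h in range(2)]
    # Step 10: alter the weights - this is the learning
    for h in range(2):
        w_ho[h] -= lr * d_out * hidden[h]
        for i in range(2):
            w_ih[i][h] -= lr * d_hid[h] * inputs[i]

print(error)  # small after 1000 passes: the network has 'learned' the example
```

Each pass through the loop is one run of steps 3–10; the random starting weights (step 1) only happen once, which is why the same network gets better over time.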
After this, Dr James explained by showing us. He made a computer learn. It did a thing in less than a second. It was alive! It created quite a buzz from everyone, and this is when we got onto the subject of ethics. To demonstrate, Dr James took the file that held the learning program and trashed it – he had essentially ‘killed’ the computer. Does this mean he had just committed murder? This sort of topic made me think, and I reckon it could be something to consider for my reflective journal piece.
Attending this talk has made me think about areas outside of my project which evoke some sort of argument. I intend to attend more sessions such as these to get some inspiration and learn something new.