You are probably on this page because you saw the title and thought: well, it wouldn’t hurt to have some idea of what a neural network is, especially with AIs about to dominate our jobs (and my job… just kidding). While I don’t claim to be a neural network expert, I do know enough about them to give you a very basic summary of how they work. Hopefully, by the end, you will be able to piece together a neural network in your head. You probably won’t be able to code one from scratch yet, but at least you will understand how one works.
What is a neural network?
A neural network is an artificial brain that takes in arrays of data points (numbers) as input and returns a classification, or a probability that the input meets some criteria. Now you must be thinking: “Wait a minute… I thought they could classify bigger stuff than points.” And you’re right. They can, but they have to break all the data down into points first. Think about an image, for example. An image is made up of pixels, which are basically small points with color.
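To make that concrete, here’s a tiny Python sketch (the image and its pixel values are made up just for illustration):

```python
# A tiny 2x2 grayscale "image": each number is one pixel's brightness (0 to 255).
image = [
    [  0, 255],
    [128,  64],
]

# Neural networks want a flat array of data points, so we flatten the grid.
data_points = [pixel for row in image for pixel in row]
print(data_points)  # [0, 255, 128, 64]
```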
What does a neural network do with the data points?
When you are thinking about stuff, you can loosely picture your brain cells as calculating a bunch of little equations to produce whatever you are imagining. So, thinking about how to win a video game? Your neurons are, in a sense, making equations for that. Pretty cool, huh?
A neural network starts with the equation result = inputs * weights + bias. The inputs are an array of numbers, as I already explained, and the weights are also an array of numbers. The bias is just a single number on its own. You combine the inputs and weights using something called a dot product: multiply each input by its matching weight, then add all the results together. That means the inputs and weights must be the same size.
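If it helps to see that as actual code, here’s a minimal Python sketch of a single neuron (the input, weight, and bias values are made up just for illustration):

```python
def neuron(inputs, weights, bias):
    # result = inputs * weights + bias, where "*" is the dot product:
    # multiply each input by its matching weight, then add it all up.
    return sum(i * w for i, w in zip(inputs, weights)) + bias

inputs = [0.5, 0.3, 0.2]   # three data points
weights = [0.4, 0.7, 0.2]  # how much each data point matters
bias = 0.1                 # the tiebreaker, explained below

print(neuron(inputs, weights, bias))  # 0.2 + 0.21 + 0.04 + 0.1, about 0.55
```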
As you have probably figured out, the weights weigh different parts of the input, making some parts more important than others. The bias is sort of a tiebreaker for when the weights and the inputs all cancel each other out to zero. Basically, it’s like two people arguing and someone having to step in and settle it.
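Here’s the tiebreaker in action: if every input is zero, the dot product is zero, and the bias alone decides the result.

```python
inputs = [0.0, 0.0, 0.0]   # every input is zero...
weights = [0.4, 0.7, 0.2]
bias = 0.1

# ...so the dot product is zero and the bias alone decides the result.
result = sum(i * w for i, w in zip(inputs, weights)) + bias
print(result)  # 0.1
```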
Now, if you think about it, you probably have more than one brain cell in your brain; otherwise, you wouldn’t be reading this article. So what are all the other neurons doing?
Neural networks have layers
The other neurons are chilling out and waiting for the first neuron to finish its calculation so that they can make their own, more complicated calculations with that result. All of these calculations are called layers, just because it would sound awkward to normal people if we kept calling them calculations.
There are a bunch of different equations we can use for these layers, such as binary step, sigmoid, ReLU, Gaussian, softplus, maxout, and so on. They have weird names, I know. Some people just call these calculations activation functions. The last layer is where we calculate the error of the prediction. It’s simply the prediction we got, minus the target prediction we were supposed to get, squared.
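Here’s what a couple of those activation functions, plus the error calculation, look like as Python sketches (real libraries have fancier, optimized versions, but the math is the same):

```python
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1).
    return 1 / (1 + math.exp(-x))

def relu(x):
    # Keeps positive numbers and turns negatives into zero.
    return max(0.0, x)

def squared_error(prediction, target):
    # The prediction we got, minus the target we were supposed to get, squared.
    return (prediction - target) ** 2

print(sigmoid(0.55))             # about 0.63
print(relu(-2.0))                # 0.0
print(squared_error(0.63, 1.0))  # about 0.14
```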
If you still don’t understand how a neural network is put together, just imagine a subway sandwich with one of the bread slices being the weights-and-bias equation and the other slice being the error calculation. The rest of the ingredients inside the sandwich are just different layer calculations.
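Putting the whole sandwich together as code, bread, fillings, bread, might look something like this toy sketch (one neuron, one sigmoid filling, and made-up numbers):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Bottom slice of bread: the weights-and-bias equation.
inputs = [0.5, 0.3, 0.2]
weights = [0.4, 0.7, 0.2]
bias = 0.1
result = sum(i * w for i, w in zip(inputs, weights)) + bias

# The fillings: activation function layers (just one sigmoid here).
prediction = sigmoid(result)

# Top slice of bread: the error calculation.
target = 1.0
error = (prediction - target) ** 2

print(prediction, error)  # about 0.63 and 0.14
```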
What happens when the neural network is wrong?
If the neural network gives the wrong answer, that means the error is too high, and we have to fix it by doing something with that error. What most people would normally do is just go to ChatGPT and ask it how to fix their neural network. Don’t do this. All you have to do is take the derivative of the error equation. Then take the derivative of all the other layers. Then multiply them together.
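For our one-neuron sandwich, that multiplication looks like this (the derivative formulas here are the standard textbook ones for squared error and sigmoid):

```python
# Derivative of the squared error (prediction - target) ** 2
# with respect to the prediction:
def d_error(prediction, target):
    return 2 * (prediction - target)

# Derivative of the sigmoid layer, written in terms of its own output:
def d_sigmoid(output):
    return output * (1 - output)

# Multiply the layer derivatives together:
prediction, target = 0.63, 1.0
gradient = d_error(prediction, target) * d_sigmoid(prediction)
print(gradient)  # about -0.17
```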
Now, in case you were like me and did not study derivatives properly before reading this article, all you have to do is search Google for a list of derivatives for each activation function. That way, you won’t have to do any hair-splitting calculus. Besides, can you imagine a CPU having to run calculus instead of code for all these AI models?
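Here are a few entries you’d find on such a list, translated into Python. One thing to know: the sigmoid and tanh derivatives are usually written in terms of the layer’s own output, which is how these cheat sheets state them.

```python
# Derivatives straight off a cheat sheet, so no calculus required.
derivatives = {
    "sigmoid": lambda output: output * (1 - output),
    "tanh":    lambda output: 1 - output ** 2,
    "relu":    lambda x: 1.0 if x > 0 else 0.0,
}

print(derivatives["sigmoid"](0.63))  # about 0.23
print(derivatives["relu"](-2.0))     # 0.0
```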
Neural networks actually need two derivatives: one for the weights and one for the bias. You can use the number we just calculated directly as the derivative for the bias, but for the weights, you have to multiply it by the input array first. Now that we have our derivatives, all we have to do is subtract the derivative weights from the original weights, and the derivative bias from the original bias. (In practice, the derivatives usually get scaled down by a small number called the learning rate first, so each step stays small.) Then we make a new prediction and repeat this process until our error is small enough.
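And here’s the whole loop in one sketch (toy numbers again; the learning rate of 0.5 is just a value I picked that happens to work for this example):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

inputs = [0.5, 0.3, 0.2]
weights = [0.4, 0.7, 0.2]
bias = 0.1
target = 1.0
learning_rate = 0.5  # scales each step down so we don't overshoot

for step in range(100):
    # Forward pass: bread, filling, bread.
    result = sum(i * w for i, w in zip(inputs, weights)) + bias
    prediction = sigmoid(result)
    error = (prediction - target) ** 2

    # Backward pass: multiply the layer derivatives together.
    gradient = 2 * (prediction - target) * prediction * (1 - prediction)

    # The bias uses the gradient directly; the weights multiply it by the inputs.
    weights = [w - learning_rate * gradient * i for w, i in zip(weights, inputs)]
    bias = bias - learning_rate * gradient

print(error)  # much smaller than when we started
```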
Now you know how to make a neural network inside your head.