To build flying machines, we don't build airplanes that flap their wings or that are made of bones, muscle, and feathers.
Likewise, in artificial neural networks, the internal mechanism of the neurons is usually ignored, and artificial neurons are often much simpler than their natural counterparts.
If the goal is to build AI systems rather than to simulate biological phenomena, the electrochemical signaling mechanisms between natural neurons are also mostly ignored.
The vocabulary in modern artificial neural networks and in the study of human cognition is partially shared – but the explanations behind words often differ.
What is actually different, and what is similar?
Neurons and Perceptrons
What is a neuron? Let’s look into biological neurons and their artificial counterparts. Here’s an illustration of activity between two motor neurons:
You will see that biological neurons and artificial neurons have little in common in their inner workings. Communication between biological neurons is electrochemical in nature, whereas artificial neurons communicate purely with numbers. What is similar, though, is that both kinds of neurons form interconnected networks.
Perceptron: an artificial neuron
The perceptron is a learning algorithm: inspired by how connections between neurons in the human brain work, it learns associations between inputs and outputs. A perceptron is a classifier: it can sort items into two distinct classes (e.g. cat or non-cat).
A single neuron can't do much on its own: the connections between nodes (called weights) and the overall network structure govern the flow of information and learning.
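To make this concrete, here is a minimal sketch of a single perceptron (a toy example, not part of the course material): it learns the logical AND function, a simple two-class problem, by nudging its weights whenever it makes a mistake.

```python
def perceptron_train(samples, labels, epochs=10, lr=0.1):
    """Learn weights and a bias with the classic perceptron update rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Predict: weighted sum of inputs passed through a step function
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # If the prediction was wrong, shift the weights toward the target
            error = target - pred
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def perceptron_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]  # AND: only (1, 1) belongs to the positive class
w, b = perceptron_train(samples, labels)
```

After training, the learned weights separate the two classes correctly for all four inputs. The learning rate and epoch count here are arbitrary illustration values.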
A perceptron becomes a multilayer perceptron (a neural network) when many perceptrons are connected in layers, the outputs of one layer feeding into the next.
In artificial neural networks, depth is created by adding new layers between the input and output layers (this is what the "deep" in deep learning usually refers to). These added layers are called hidden layers because of their in-between location and because we don't explicitly tell the network what kind of features it should look for in each layer.
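The idea of stacking layers can be sketched in a few lines. This is a hypothetical illustration with hand-picked numbers, not a trained model: each layer computes a weighted sum of its inputs and squashes the result through an activation function.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# Weights and biases below are arbitrary values chosen for illustration.
# A two-input network with one hidden layer of two neurons and one output:
hidden = layer([1.0, 0.0], weights=[[0.5, -0.4], [0.3, 0.8]], biases=[0.1, -0.2])
output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
```

Adding more `layer` calls between input and output is exactly what makes the network deeper; during training, the network decides for itself what each hidden layer should represent.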
The structure of a neural network can vary widely: a layer may hold any number of neurons, and neurons within a single layer may be interconnected.
Information may flow from one neuron to another in a feed-forward way, or it can loop back to earlier neurons in a recurrent way. The illustration above shows a toy example of both.
Recurrent neural networks (RNNs) are used in models where sequential information, or context (a kind of short-term memory), is important, such as in machine translation.
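The "short-term memory" of a recurrent network comes from a hidden state that mixes each new input with what the network has seen before. A minimal sketch, with arbitrary illustrative weights:

```python
import math

def rnn_step(x, h, w_in=0.8, w_rec=0.5):
    """One recurrent step: the new state blends the input with the previous state."""
    return math.tanh(w_in * x + w_rec * h)

# Feed in a sequence: a signal followed by two silent steps
h = 0.0
for x in [1.0, 0.0, 0.0]:
    h = rnn_step(x, h)
# h still carries a fading trace of the first input, even after zero inputs
```

A feed-forward network, by contrast, has no such state: each input is processed in isolation, with no memory of earlier inputs.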
Learning Changes Neural Connections
In both biological and artificial neural networks, learning changes the strength of the connections between neurons.
In artificial neural networks, these connections are called weights. Weights can be updated through backpropagation, for example: as the model makes predictions, its errors are propagated backwards through the network, nudging the weights so that future predictions improve.
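For a single linear neuron, this error-driven weight change reduces to one line of calculus. A toy sketch (gradient descent on a squared error, with made-up numbers): the neuron should learn to multiply its input by 2.

```python
def train_step(w, x, target, lr=0.1):
    """One gradient step for a single linear neuron with squared error."""
    pred = w * x
    error = pred - target
    # Backpropagation here is just the chain rule: d(error**2)/dw = 2 * error * x
    return w - lr * 2 * error * x

w = 0.0
for _ in range(50):
    w = train_step(w, x=1.0, target=2.0)
# The weight converges toward 2.0, the value that makes the error zero
```

In a real multi-layer network the same chain rule is applied layer by layer, which is where the name "backpropagation" comes from.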
An important difference between artificial neural networks and you: unlike most artificial networks, you do not need to be exposed to the same object multiple times to remember having seen it before.
That said, with humans the problem is often not storage but retrieval: it's completely normal to "forget" a friend's name even though you recognize their face. If an artificial neural network recognizes what's in a picture, it has no problem fetching the name (and all other information) associated with it.
Connecting previous knowledge to new information
You know that this thing could be used for sitting even though you've never seen it before. Maybe you would even call it a chair. That's you using your previous knowledge to generalize to something new.
Generalizing is also something artificial neural networks can do, but only to an extent. If an artificial network has learned what a chair is only from four-legged wooden stools, it will not be able to classify a Ball Chair as a chair.