What is a perceptron and how does it work?

Biological Neuron

We’ll start by focusing on a biological neuron. It has dendrites that receive signals (inputs) from other neurons. These inputs are summed in the cell body, and the axon transmits the resulting output to the next neuron.

Perceptrons, the basic building blocks of artificial neural networks, work with a similar mechanism.

Perceptron Architecture

A perceptron receives multiple inputs x1, x2, x3, …, and produces a single binary output. Weights w1, w2, w3, … are real numbers that express the importance of the respective inputs to the output. The neuron’s output, 0 or 1, is determined by whether the weighted sum ∑j wj xj is less than or greater than some threshold value.

We can express the output in algebraic terms:

output = 0 if ∑j wj xj ≤ threshold
output = 1 if ∑j wj xj > threshold
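This decision rule can be sketched in Python. A minimal illustration; the weights and threshold below are arbitrary example values, not from any particular model:

```python
def perceptron_output(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > threshold else 0

# Two binary inputs, arbitrary weights, threshold of 2.5
print(perceptron_output([1, 0], [3.0, 1.0], 2.5))  # weighted sum 3.0 > 2.5 -> 1
print(perceptron_output([0, 1], [3.0, 1.0], 2.5))  # weighted sum 1.0 <= 2.5 -> 0
```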

Basically, a perceptron makes decisions by weighing the evidence. Let’s look at a real-life example. Suppose you want to go swimming, and you’re undecided based on a number of factors:

Is the weather good? Will your friends accompany you or not?

We’ll represent the above factors with binary variables x1, x2 and have x1=1 if the weather is good, x1=0 if the weather is bad; and x2=1 if your friends will accompany you, x2=0 if they won’t.

We can use a perceptron to model this kind of decision making. We’ll choose a weight w1 = 6 for the weather and w2 = 2 for friends. The larger the value of w1, the more important the weather is to your final decision. If we choose a threshold of 5, the perceptron implements this model: it outputs 1 whenever the weather is good (since 6 > 5, regardless of your friends) and 0 whenever it’s bad (since 2 ≤ 5). By varying the weights and the threshold, we can get different models of decision-making.
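To check the arithmetic, here is a minimal sketch of the swimming decision (the function and variable names are illustrative, not from any library):

```python
def perceptron_output(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > threshold else 0

weights = [6, 2]   # w1 = 6 (weather), w2 = 2 (friends)
threshold = 5

# Inputs are [x1, x2]: x1 = weather good?, x2 = friends coming?
print(perceptron_output([1, 1], weights, threshold))  # good weather + friends: 8 > 5 -> 1
print(perceptron_output([1, 0], weights, threshold))  # good weather alone: 6 > 5 -> 1
print(perceptron_output([0, 1], weights, threshold))  # bad weather: 2 <= 5 -> 0
```

Note how the weather alone is enough to push the sum over the threshold, matching the model described above.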

An activation function is then applied, after which the neuron gives an output. There are various types of activation functions, which we’ll discuss in depth in the second part of this series.

What is the difference between a perceptron and a neuron?

A perceptron has a single layer, while a multi-layer perceptron is a neural network. Neural networks have hidden layers that information passes through before a final output is produced.
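One classic way to see why hidden layers matter: a single perceptron cannot compute XOR, but a two-layer network can. A minimal sketch with hand-picked textbook weights (not learned, purely illustrative):

```python
def neuron(inputs, weights, bias):
    # Step activation: fire (1) if the weighted sum plus bias is positive
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

def xor_network(x1, x2):
    # Hidden layer: two neurons computing OR and NAND (hand-picked weights)
    h1 = neuron([x1, x2], [1, 1], -0.5)    # OR
    h2 = neuron([x1, x2], [-1, -1], 1.5)   # NAND
    # Output layer: AND of the hidden activations -> XOR overall
    return neuron([h1, h2], [1, 1], -1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))  # 1 only when exactly one input is 1
```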

Neural Network Architecture

What are the different types of deep learning models?

Deep learning models can be grouped as:

Supervised
Semi-supervised
Unsupervised

Supervised learning

The machine has a “supervisor” that gives it all the answers. The data is already labeled (e.g. “cat” or “dog” for a given image), and the machine uses these examples to learn a rule that can then be applied to future examples.

Each example is made up of an input object (vector) and output (supervisory signal). The algorithm learns from the labeled training data and then produces an inferred function that can be used to map new examples.
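As a sketch of this idea, the classic perceptron learning rule infers a function from labeled examples. The toy dataset below is the AND function; all names and values are illustrative assumptions:

```python
# Toy labeled dataset: (input vector, label) pairs.
# Label is 1 only when both inputs are 1 (the AND function we want to infer).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

def predict(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# Perceptron learning rule: nudge weights toward the correct answer
# whenever the current prediction disagrees with the label.
for epoch in range(20):
    for x, label in data:
        error = label - predict(x)
        for i in range(len(weights)):
            weights[i] += lr * error * x[i]
        bias += lr * error

print([predict(x) for x, _ in data])  # -> [0, 0, 0, 1], matching the labels
```

The trained `predict` is the “inferred function” the paragraph describes: it maps new inputs to outputs based only on what the labeled examples taught it.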

Most classification tasks depend on supervised learning. The tasks include:

Identifying objects in images, such as text, street signs, and pedestrians
Speech and language recognition
Spam detection
Sentiment analysis

Semi-supervised learning

A great example of semi-supervised learning is a child growing up: they learn from their parents (labeled information) combined with what they observe on their own (unlabeled information), such as trees, houses, etc.

Semi-supervised learning makes use of both labeled and unlabeled data for training. The majority of training data is unlabeled with a few instances of labeled data.

We can group semi-supervised learning as:

Transductive: This is where we infer the correct labels for the given data.
Inductive: This is where we infer the correct mapping from X to Y.

Unsupervised learning / Hebbian learning

Unsupervised learning is when machines learn the relationship between elements in a dataset on their own.

The data is then classified without the help of labels. The algorithm looks for hidden patterns (features) in order to analyze the data.
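As an illustration, a tiny k-means clustering sketch groups unlabeled 1-D points with no labels at all (a toy example written from scratch, not a library implementation):

```python
import random

def kmeans_1d(points, k, iterations=10, seed=0):
    """Minimal k-means for 1-D data: alternate assignment and update steps."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k random data points
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]  # two obvious groups, no labels given
print(kmeans_1d(data, k=2))  # two centroids, one near 1.0 and one near 9.1
```

The algorithm discovers the two groups purely from the structure of the data, which is exactly the “hidden patterns without labels” idea described above.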

The common algorithms include:

Anomaly detection

Clustering

Neural networks

Clustering is the most common unsupervised learning algorithm, which detects similarities and anomalies within a given dataset. It’s commonly used in: