For many of our readers (mostly students preparing for their undergraduate or graduate studies), the terms neural networks, deep learning, machine learning, and artificial intelligence can be confusing. In fact, a lot of people use the terms interchangeably. In this post, I will try to dissect and demystify the basics of neural networks, deep learning, machine learning, and artificial intelligence.

You have probably noticed that Facebook can recognize human faces in an image. So, how does Facebook do that? It uses a neural network. Modern AI chatbots and personal assistants like Siri, Alexa, and Cortana also leverage neural networks, and it is neural networks that let YouTube, Spotify, and Netflix send you lists of recommended videos or songs.

For some people, Neural Network, Deep Learning, Machine Learning, and Artificial Intelligence are just buzzwords. That is not entirely true. The concepts of neural networks, deep learning, machine learning, and artificial intelligence are genuinely fascinating, but a real understanding of how they work is still limited to relatively few experts (folks at MIT, Stanford, Google, Facebook, and Amazon).

Additionally, there are a few organizations and startups that use these terms merely to get media attention and/or to raise funding, without actually implementing the technology. But there is a difference between knowing the name of something and actually understanding it. So, let's try to understand these terms at a basic level.

Demystifying Neural Networks, Deep Learning, Machine Learning, and Artificial Intelligence

A neural network is a computing system modeled on the human brain. In simple words, a neural network is a computer simulation of the way biological neurons work within a human brain.

As per Dr. Robert Hecht-Nielsen, the inventor of one of the first neurocomputers, a neural network or artificial neural network (ANN) is

“…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”

An ANN is a family of algorithms used for machine learning (more precisely, deep learning). Alternatively, think of it this way: ANNs are the building blocks of deep learning, deep learning is a type of machine learning, and machine learning is a subfield of artificial intelligence.

Artificial Intelligence (AI)

Artificial intelligence is the field of study concerned with giving computers (and computer systems) the ability to successfully accomplish complex tasks that usually require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. AI is often defined as the "science of making computers do things that require intelligence when done by humans". In other words, artificial intelligence is concerned with solving tasks that are easy for humans but hard for computers.

Machine Learning (ML)

Machine learning is a field of study that applies the principles of computer science and statistics to create statistical models, which are used to make predictions about the future (based on past data, often Big Data) and to identify (discover) patterns in data. Machine learning is itself a type of artificial intelligence that allows software applications to become more accurate at predicting outcomes without being explicitly programmed.

Machine learning is the ability of a computer to output or do something that it wasn't explicitly programmed to do. While machine learning emphasizes making predictions about the future, artificial intelligence typically concentrates on programming computers to make decisions. A program that exhibits intelligent, human-like behavior can be called artificial intelligence; however, if its parameters are not automatically learned (or derived) from data, it is not machine learning.
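To make the "learned from data" distinction concrete, here is a minimal sketch in Python. The rule relating input to output is never hard-coded; it is estimated from example data. The data and the simple linear model are purely illustrative:

```python
# Hypothetical example: the rule y = 2x + 1 is never written into the
# program; its parameters are estimated from past observations instead.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]  # "past data" observed from the world

# Ordinary least squares for a line y = a*x + b, in closed form.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(a, b)        # learned parameters: 2.0 and 1.0
print(a * 10 + b)  # prediction for an unseen input x = 10 -> 21.0
```

The program was never told "multiply by 2 and add 1"; that relationship emerged from the data, which is the essence of a parameter being learned rather than programmed.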

Read: Beginners Guide to Machine Learning, Artificial Intelligence, Deep Learning, and Big Data

"At its core, ML is simply a way of achieving AI." ML is a type of AI that can include, but isn't limited to, neural networks and deep learning. Read more about the difference between ML & AI.

Deep Learning

Deep learning, also known as deep neural networks, is one approach to machine learning. Other major approaches include decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks.

Deep learning is a special type of machine learning. It involves ANNs and related ML algorithms that contain more than one hidden layer.

Deep learning involves mathematical modeling, which can be thought of as a composition of simple blocks of a certain type, and where some of these blocks can be adjusted to better predict the final outcome.

The word "deep" means that the composition has many of these blocks stacked on top of each other, in a hierarchy of increasing complexity. The model is trained via something called back-propagation, inside a larger process called gradient descent, which lets you adjust the parameters in a way that improves the model.

Many traditional machine learning algorithms are linear (or otherwise shallow). Deep learning algorithms are stacked in a hierarchy of increasing complexity and abstraction.
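The parameter-adjustment step of gradient descent can be sketched in a few lines. The single-parameter loss below is a toy stand-in for a real network's loss, and the learning rate is an illustrative choice:

```python
# A minimal sketch of gradient descent: repeatedly nudge a parameter w
# in the direction that lowers a loss. Here the toy loss is (w - 3)**2,
# whose derivative with respect to w is 2 * (w - 3).
w = 0.0
learning_rate = 0.1
for _ in range(100):
    gradient = 2 * (w - 3)       # slope of the loss at the current w
    w -= learning_rate * gradient  # step downhill

print(round(w, 4))  # w has been driven very close to the minimum at 3
```

In a real deep network the same idea applies, except there are millions of parameters and back-propagation is the bookkeeping that computes each parameter's gradient through all the stacked blocks.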

Read: Best Online Courses on Deep Learning, Machine Learning, and Artificial Intelligence

You should also read the following two blog posts:

Difference Between Artificial Intelligence, Machine Learning, and Deep Learning – NVIDIA

Difference Between Artificial Intelligence, Machine Learning, and Deep Learning – Qualcomm

Imagine a baby trying to learn what a dog is by pointing a finger at objects. The parents will either say "Yes, that is a dog" or "No, that is not a dog". As the baby continues to point to objects, the baby becomes more aware of the features and characteristics that all dogs possess.

In this case, the baby is clarifying a complex abstraction (the concept of a dog) by building a hierarchy of increasing complexity. In each step, the baby applies the knowledge gained from the preceding layer of the hierarchy.

Software programs use the deep learning approach in a similar manner. Deep learning is a way to automate predictive analytics. The difference is that the baby might take weeks to learn something new and complex, while a computer program could do it in a few minutes.

Related Posts:

Top Resources and Platforms to Learn Data Science & Machine Learning Skills

How to get Jobs in Data Science, Machine Learning & AI

Data Scientist Jobs in India

Deep learning, which is built on neural networks, aims to produce predictive models for solving complex tasks by exposing a system to a large amount of data. The system is then allowed to learn on its own how to make the best predictions.

You can also put it in this way – deep learning is an advanced version of the neural network.

Read: Scopes of Machine Learning & AI in FinTechs

Supervised Learning and Unsupervised Learning

Some neural nets use supervised learning, while others use unsupervised learning.

Supervised Learning is a type of machine learning that is used when one wants to discover known patterns in unknown (new) data.

Say a machine learning algorithm is provided with images of objects of different types (for example, animals and buildings). In supervised learning, the algorithm learns to say which type of object is in a particular image that was NOT presented to it during its training stage. Supervised learning is guided by the labeled data fed to the machine.
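A toy illustration of supervised learning: made-up labeled points stand in for images, and a simple nearest-neighbor rule stands in for a trained model. All the data here is invented for the example:

```python
# Hypothetical labeled training data: each point comes with its answer.
training = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
            ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

def classify(point):
    # 1-nearest-neighbor: answer with the label of the closest example.
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(training, key=lambda ex: dist2(ex[0], point))[1]

print(classify((0.9, 1.1)))  # near the labeled "cat" examples
print(classify((5.1, 4.9)))  # near the labeled "dog" examples
```

The two test points were never seen during "training", yet the labeled examples guide the prediction, which is exactly the supervised setting described above.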

Unsupervised Learning is the type of machine learning algorithm used when one wants to discover unknown patterns in known data.

Suppose a supermarket has a dataset of customers' shopping lists. An analyst at the supermarket can use unsupervised learning to understand what kinds of products customers are likely to buy together, so that staff can place products that tend to be bought together near each other.
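A toy sketch of that supermarket example: counting which pairs of products co-occur across (hypothetical) shopping baskets. No labels are involved; the structure comes from the data alone:

```python
from collections import Counter
from itertools import combinations

# Hypothetical shopping baskets, invented for illustration.
baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "cereal"},
    {"bread", "butter", "cereal"},
]

# Count every pair of products that appears together in a basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most common pair suggests which products to shelve together.
print(pair_counts.most_common(1))  # [(('bread', 'butter'), 3)]
```

Real market-basket analysis uses more sophisticated techniques (such as association-rule mining), but the principle of finding co-occurrence patterns without labels is the same.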

In some cases, an ANN uses Reinforcement Learning. Here, the ANN makes a decision by observing its environment; if the feedback on that decision is negative, the network adjusts its weights so that it can make a different, better decision the next time.
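A toy sketch of the reinforcement idea: the agent's value estimates shift toward each observed reward, so actions that draw negative feedback get chosen less often. The two-action environment is invented for illustration:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

true_reward = {"left": -1.0, "right": +1.0}  # hidden from the agent
value = {"left": 0.0, "right": 0.0}          # the agent's estimates
step = 0.1

for _ in range(200):
    # Explore occasionally, otherwise exploit the current best estimate.
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(value, key=value.get)
    reward = true_reward[action]
    value[action] += step * (reward - value[action])  # shift toward reward

print(max(value, key=value.get))  # the agent settles on "right"
```

A real reinforcement-learning agent would update the weights of a network rather than a two-entry table, but the feedback loop of act, observe, adjust is the same.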

Neural Networks and Human Brain

Computers and human brains have much in common, but they are essentially very different. What if you could combine the best of both worlds: the systematic power of a computer and the densely interconnected cells of a brain? This is what scientists have been doing over the last couple of decades with the help of neural networks.

A typical human brain contains something like 100 billion neurons (brain cells). Each neuron is made up of a cell body (the central mass of the cell) with a number of connections coming off it: numerous dendrites (the cell’s inputs—carrying information toward the cell body) and a single axon (the cell’s output—carrying information away).

Neurons interact and communicate with one another through an interface consisting of axon terminals connected to dendrites across a gap (the synapse), as shown below. A neuron passes a message to another neuron across this interface only if the sum of the weighted input signals from one or more neurons (summation) is great enough (exceeds a threshold) to cause the transmission.

Ultimately, the neurons do something like move the muscles in your arm or trigger a memory from your childhood.
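The summation-and-threshold behaviour described above can be sketched as a single artificial neuron. The weights and threshold below are illustrative values, not taken from any particular network:

```python
# A minimal artificial neuron: it fires (outputs 1) only when the
# weighted sum of its inputs exceeds a threshold, loosely mirroring
# the biological summation-and-threshold behaviour.
def neuron(inputs, weights, threshold):
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# With these illustrative weights and threshold, the neuron happens to
# behave like a logical AND gate: it fires only when both inputs are on.
weights, threshold = [0.6, 0.6], 1.0
print(neuron([1, 1], weights, threshold))  # 1 (fires: 1.2 > 1.0)
print(neuron([1, 0], weights, threshold))  # 0 (stays silent: 0.6 <= 1.0)
```

Modern networks replace the hard threshold with smooth activation functions so that gradients can flow, but the weighted-sum core is unchanged.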

The basic idea behind a neural network is to simulate lots of densely interconnected brain cells. The system will learn things, recognize patterns, and make decisions like a human. The system doesn’t need to be programmed explicitly. The system will learn all by itself – just like a brain.

Here is an example in which a neural network is tasked with predicting the color of points in 2-D space according to their positions.

From the above picture, you can see that a neural network consists of three components: the inputs (the features in the data); the network (layers of neurons); and the output (the prediction of the trained model). Additionally, a method of training the network is required in order to make its predictions useful.
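Putting those components together, here is a minimal sketch of a forward pass through a tiny network. The layer sizes, weights, and sigmoid activation are all hypothetical choices for illustration:

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Each hidden neuron: weighted sum of the inputs, then an activation.
    hidden = [sigmoid(sum(i * w for i, w in zip(inputs, ws)))
              for ws in hidden_weights]
    # Output neuron: weighted sum of the hidden activations.
    return sigmoid(sum(h * w for h, w in zip(hidden, output_weights)))

# Two inputs -> two hidden neurons -> one output (illustrative weights).
hidden_weights = [[0.5, -0.6], [0.8, 0.2]]
output_weights = [1.0, -1.0]
prediction = forward([1.0, 0.0], hidden_weights, output_weights)
print(0.0 < prediction < 1.0)  # the output is a probability-like score
```

Training would then adjust `hidden_weights` and `output_weights` via gradient descent until the predictions match the labeled data, which is the piece this untrained sketch leaves out.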

What is the advantage of Neural Networks over Traditional Computing?

Conventional or traditional computing involves a central processor that can address an array of memory locations where data and instructions are stored. The processor reads an instruction, as well as any data the instruction requires, from memory addresses; the instruction is then executed and the results are saved in a specified memory location as required.

ANNs are not sequential or necessarily deterministic. There is no complex central processor; rather, there are many simple ones, which generally do nothing more than take the weighted sum of their inputs from other processors. ANNs do not execute programmed instructions; they respond in parallel (either simulated or actual) to the pattern of inputs presented to them. There are also no separate memory addresses for storing data. Instead, information is contained in the overall activation 'state' of the network. 'Knowledge' is thus represented by the network itself, which is quite literally more than the sum of its individual components.

Conclusion

Artificial neural networks (ANNs) and the more complex deep learning technique are some of the most capable AI tools for solving very complex problems and will continue to be developed and leveraged in the future.

At the moment, deep neural nets are the most promising avenue in the quest for true Artificial Intelligence.

How Do Neural Networks Work?

Further Resources to Understand How Neural Networks Work:

Introduction to Neural Networks

Artificial Neural Networks: Part1

Artificial Intelligence, Deep Learning, and Neural Networks Explained

How does Artificial Neural Network (ANN) algorithm work?

Online Courses on Neural Networks:

Introduction to Deep Learning

An Introduction to Practical Deep Learning

Deep Learning Specialization

Neural Networks and Deep Learning

Neural Networks for Machine Learning

Deep Learning for Business