IT KNOWS WHAT EVEN YOU DON’T KNOW ABOUT YOURSELF

You have been on YouTube for two hours when all you wanted to do was watch a 5-minute video. How did this happen?

Today, tactically placed machine learning algorithms discreetly influence our life decisions. Undoubtedly, some of these influences optimize our lives by saving us valuable time. For instance, shopping suggestions on Amazon and dating suggestions on Tinder and Bumble help us make informed choices. However, if abused, the same learning algorithms have the prowess to reform the political outlook of a nation; the infamous Cambridge Analytica and Facebook scandal was one such incident. The subsequent session between Mark Zuckerberg and the senators was arguably an even worse punishment for the viewers, but it accentuated a pivotal concern: the lack of cognizance about machine learning and data-sharing algorithms. Thus, it is imperative to comprehend what constitutes such innately powerful learning algorithms. Let’s take a deeper look.

Mark Zuckerberg at the Senate hearing for the Cambridge Analytica scandal. Facebook was accused of mishandling the data of more than 50 million users.

LEARNING TO LEARN

The most straightforward approach to comprehending the meaning of machine learning is to scrutinize the two words in isolation. As we know, a “machine” is something that alleviates cumbersome tasks from our lives. “Learning”, however, is associated with acquiring knowledge about a phenomenon. In reality, the true meaning of “learning” can be grasped through its provenance. For most of us, learning began in kindergarten with mental exercises such as sorting colors and recognizing shapes and patterns. Surprisingly, the majority of machine learning algorithms find their footholds in analogous practices.

THE UNDERPINNINGS OF MACHINE LEARNING

Ultimately, machine learning is nothing but a reward for decades of diligent data gathering and processing by mankind. However, until recently, the norm had been to pedantically spoon-feed machines the exact way we wanted the data to be processed. Fortunately, even though datasets have become gargantuan in modern times, contemporary algorithms have automated the majority of the data processing.

Exoskeleton of conventional programming

Image Courtesy: Introduction to Machine Learning (MIT OpenCourseware) by Prof. Eric Grimson

Modern programming with a dash of machine learning

Image Courtesy: Introduction to Machine Learning (MIT OpenCourseware) by Prof. Eric Grimson

MACHINE LEARNING: A.I.’s COVETED OFFSPRING

Artificial intelligence and machine learning are used interchangeably by the uninitiated populace in offhand discussions. Smartphone manufacturers are responsible for much of this misconception, as most of them propagate the idea that their devices utilize “A.I.” to enhance the user experience. In reality, however, machine learning algorithms are what optimize our surfing and media experiences: every tap, scroll, like, and comment optimizes these algorithms further.

Although it is true that the two stalwarts, machine learning and A.I., are anything but mutually exclusive, each has its idiosyncrasies. For instance, music suggestions on Spotify, or the “you might be interested” shows on Netflix are consequences of sophisticated learning algorithms. These algorithms track users’ daily activities to provide felicitous recommendations. Ultimately, machine learning analyzes our past choices to predict our future habits to improve our present. But what distinguishes machine learning from A.I.?

A.I. ≠ Machine Learning; A.I. = Machine Learning + Deep Learning + Neural Networks + …

For starters, A.I. is somewhat of a virtual blueprint of an animal brain. A.I. finds its application in places where improvisation is a necessity. Virtual assistants such as Siri and Alexa are paragons of applied A.I.; not many of us can deny asking these assistants both thoughtful and gratuitous questions. Surprisingly, these assistants now have answers to questions that even some pseudo-intellectuals fail to answer. For instance, if you were to ask Siri “Is there one person out there for all of us?”, you would get a response like “it’s your opinion that counts”. Based on these responses, some could argue that Apple could even charge for a “therapist” version of Siri; I hope Apple is not reading this.

It is noteworthy that Siri wasn’t always so eloquent in its responses. So, how did it get there? Well, part of it was due to machine learning. Several learning algorithms would have processed millions of responses to similar questions and then created an A.I. application that could adapt its responses. Therefore, as depicted by this specific example, machine learning is a tool or an enabler for A.I. — and so are neural networks and deep learning.

The contemporary ecosystem of learning and programming

Image Courtesy: https://towardsdatascience.com/cousins-of-artificial-intelligence-dda4edc27b55

WHEN MACHINES SURREPTITIOUSLY BECAME MORE “HUMAN”

The computing ecosystem was disrupted when Arthur Samuel, an engineer at IBM, began developing a checkers program in 1949. This program was light years ahead of its time, as it had the capability of learning from its mistakes. Samuel’s work marked the inception of the term “machine learning”, which he coined in 1959. The technology was analogous to an infant learning the basic rules of life by trial and error; just like an infant, the machine recognized which moves were inauspicious and tried to eschew them.

Almost a decade after the advent of the revolutionary checkers program, the world of machine learning was astounded yet again. An algorithm called “Nearest Neighbor” (NN) caught everyone’s attention this time. Machine learning was now in its kindergarten stages, learning to recognize patterns in datasets: a development that would soon underpin modern applications for navigation, e-commerce, and more.
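The intuition behind the Nearest Neighbor rule is simple enough to sketch in a few lines: classify an unlabeled point by copying the label of its closest labeled example. The points and labels below are invented purely for illustration.

```python
import math

def nearest_neighbor(query, examples):
    """Classify a point by copying the label of its closest labeled example."""
    closest = min(examples, key=lambda ex: math.dist(query, ex[0]))
    return closest[1]

# Hypothetical 2-D points, each tagged with a category label
examples = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((5.0, 5.0), "B")]
print(nearest_neighbor((0.2, 0.1), examples))  # "A"
```

Real systems use the k-nearest-neighbors generalization (vote among the k closest points) and clever data structures to avoid scanning every example, but the core idea is exactly this distance comparison.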

Arthur Samuel, developer of the checkers program, demonstrating the revolutionary learning algorithm on television in 1956

COMPONENTS OF LEARNING

The most astounding peculiarity of learning algorithms is that they align closely with the learning curve of a human. As we have seen, machine learning has three major aspects: prediction, error, and learning. YouTube is a familiar example for understanding these functional aspects. The learning process is initiated when we sign in and start perusing the content. Every click from that point helps populate a dataset from which the algorithm creates a regression model. This model is then used to make “predictions”: suggestions the algorithm “thinks” the user will be fond of.

Then comes the error component, the moment of truth: the model is deemed correct if the user chooses to go down the “suggestions” rabbit hole and squander hours on end, whereas it might need optimization if the user shuns the suggestions for a prolonged duration.

Finally, in the learning phase, the algorithm records which kinds of videos lie outside the regression model and should be avoided. This information is consolidated to train the model continuously and to optimize upcoming suggestions and predictions.
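The predict–error–learn loop described above can be sketched as a toy online learner. This is not YouTube’s actual algorithm; the features, click history, and learning rate are all invented to show the three components in code.

```python
import math

# Toy online learner: predict whether a user will click a suggested video,
# observe the outcome, and nudge the model's weights using the error.
def predict(weights, features):
    score = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))  # squash score to a probability

def update(weights, features, clicked, lr=0.1):
    error = clicked - predict(weights, features)                    # error step
    return [w + lr * error * x for w, x in zip(weights, features)]  # learning step

weights = [0.0, 0.0]  # one weight per feature, e.g. (is_cat_video, is_news)
history = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 0.0], 1)]
for features, clicked in history:
    weights = update(weights, features, clicked)
# The weight for the clicked feature grows; the shunned one shrinks.
```

Each pass through the loop is one prediction, one measured error, and one small correction, which is the whole cycle the article describes, just in miniature.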

TYPES OF ALGORITHMS: THE TRAINEE BECOMES THE TRAINER

Since understanding different algorithms is an involved process, a deep dive into each type of machine learning algorithm deserves a separate post. However, a high-level overview of each type is still imperative to close the loop on this topic. Learning algorithms are widely grouped into three categories: supervised, unsupervised, and reinforcement learning. The nomenclature itself is a stepping stone towards deciphering the functionality behind each algorithm.

SUPERVISED LEARNING: THE “TRAINING WHEELS” PROTOCOL

Supervised learning is just a notch above traditional programming practices: the “training” phase is commenced by inundating the model with relevant data. Since these datasets are already classified based on defining features, the algorithm learns the basis of the classification through observation. Email filtering is a classic example of one such algorithm. Certain emails are labeled undesirable, and based on the commonalities among them, the algorithm elects whether a new email is worthy of alighting in your inbox.
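A crude version of such a filter can be sketched from labeled examples alone. The training emails below are invented, and real filters use probabilistic models (e.g. naive Bayes) rather than this bare word-set comparison, but the supervised pattern is the same: learn from labeled data, then judge new data.

```python
# Minimal supervised spam filter: learn which words appear only in emails
# labeled as spam, then flag new emails containing those words.
def train(labeled_emails):
    spam_words, ham_words = set(), set()
    for text, is_spam_label in labeled_emails:
        (spam_words if is_spam_label else ham_words).update(text.lower().split())
    return spam_words - ham_words  # words seen exclusively in spam

def is_spam(email, spam_only_words):
    return any(word in spam_only_words for word in email.lower().split())

# Hypothetical labeled training data: (text, is_spam)
training = [("win a free prize now", True),
            ("meeting agenda for monday", False),
            ("free prize inside claim now", True)]
spam_only = train(training)
print(is_spam("claim your free prize", spam_only))  # True
```

The “supervision” is the label attached to each training email; the algorithm never invents categories, it only learns the boundary between the ones it was given.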

UNSUPERVISED LEARNING: TIME TO TAKE TRAINING WHEELS OFF

However, the learning approach is deemed “unsupervised” if the same dataset is fed in without any labels whatsoever. It is an innate function of such algorithms to recognize the defining traits and categorize the data. Once the data has been compartmentalized, all incoming data serves only to optimize the algorithm further. A rudimentary yet lucid example of unsupervised learning would be a recruiting algorithm that accepts or rejects applicants by analyzing the pool of existing employees (LinkedIn, are you listening?). If the candidate turns out to be an outlier relative to the defined cluster, he or she might not be a felicitous choice for the firm. This approach is eerily like the earlier example of infants learning to sort objects based on shapes and colors.
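The classic algorithm for this kind of label-free grouping is k-means clustering, which can be sketched in plain Python. The data below (hypothetical one-dimensional “years of experience” values) is invented, but it shows how clusters emerge from unlabeled numbers.

```python
import random

def kmeans(points, k, iterations=20):
    """Group unlabeled 1-D points into k clusters around moving centroids."""
    random.seed(0)
    centroids = random.sample(points, k)  # start from k random data points
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Two natural groups hide in this unlabeled data; no one tells the algorithm that.
points = [1.0, 1.5, 2.0, 9.0, 9.5, 10.0]
print(sorted(len(c) for c in kmeans(points, 2)))  # [3, 3]
```

Nobody labels the points “junior” or “senior”; the structure is discovered from the distances alone, which is precisely what distinguishes this from the supervised case.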

Distinction between types of machine learning.

Image courtesy: https://towardsdatascience.com/unsupervised-learning-with-python-173c51dc7f03

REINFORCEMENT LEARNING: THE ART OF SNAKES AND LADDERS

The next progression, reinforcement learning, consolidates various facets of machine learning and is also slightly more involved. The game of snakes and ladders perfectly encapsulates the concept. At its pith, the algorithm operates on a reward and punishment system. The objective is to reach the final target, i.e. the top square, with the least punishment. “Reinforcement” implies that the algorithm learns every time it is punished by a venomous snake bite and remembers not to repeat the action. Figuratively, this is like a child learning the ropes of life, where every experience teaches the child the best way to live without garnering significant penalties.
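The snakes-and-ladders metaphor maps directly onto tabular Q-learning, a standard reinforcement learning method. The toy board below (six squares, one snake, made-up rewards and hyperparameters) is invented for illustration; the point is only that repeated punishment teaches the agent to avoid the snake.

```python
import random

# Toy reinforcement learning on a 1-D "snakes and ladders" strip:
# squares 0..5; a snake on square 3 sends the player back to square 0.
SNAKE, GOAL = 3, 5

def step(state, action):  # action: move 1 or 2 squares
    nxt = min(state + action, GOAL)
    if nxt == SNAKE:
        return 0, -10.0                            # punished by the snake bite
    return nxt, (10.0 if nxt == GOAL else -1.0)    # small cost per move

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2):
    random.seed(0)
    q = {(s, a): 0.0 for s in range(GOAL) for a in (1, 2)}
    for _ in range(episodes):
        state = 0
        while state != GOAL:
            # Mostly follow the best-known move, sometimes explore at random
            if random.random() < epsilon:
                action = random.choice((1, 2))
            else:
                action = max((1, 2), key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            future = 0.0 if nxt == GOAL else max(q[(nxt, a)] for a in (1, 2))
            q[(state, action)] += alpha * (reward + gamma * future - q[(state, action)])
            state = nxt
    return q

q = q_learning()
# From square 2, moving 1 lands on the snake, so moving 2 should score higher.
print(q[(2, 2)] > q[(2, 1)])  # True
```

After enough episodes the learned values encode the lesson the article describes: the action that triggered the snake bite carries a lower score, so the trained agent steers around it without being told the rule explicitly.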

FINAL CALL: IS IT POSSIBLE TO GO OFFLINE — EVER?

Undoubtedly, machine learning and A.I. have made massive headway since their inception. However, the advancements are astounding and scary at the same time. It is not only the layperson who has reservations about the capacity of machine learning and A.I., but also the tech revolutionaries at the vanguard of this data-driven revolution. Elon Musk, the non-fictional rendition of Tony Stark, has voiced his qualms by stating that A.I. might become an obstreperous curse for society; he foresees an imminent threat to humanity within the next decade. Furthermore, Musk co-founded an open-source A.I. firm called OpenAI to create more amicable robots. Musk states that the most ideal scenario would be for humans to merge their consciousness with the advanced framework of A.I.

WAS THE “I, ROBOT” PROPHECY CORRECT?

Based on the depictions and perceptions of A.I. in movies and other digital publications, it might seem that a superior robot army is going to oust humans from the driving seat. Although A.I. and machine learning have bolstered the quality of life by empowering revolutionary creations such as self-driving cars and complex data processing, they possess the potential to inflict permanent damage on our society. So, what can we do, as users and inputs, to minimize the tyrannical aspects of such technologies?

Firstly, we need to be more cognizant about our data. This requires users to shun their passiveness and become warier of sharing information with unsolicited applications. Moreover, users should be educated about the basic underpinnings of such technologies. Furthermore, society can advocate for a regulatory body for A.I. development; it should be patent, however, that the purpose of this body would not be to curb development but to steer it in the correct direction. I believe that while an unopposed “A.I. takeover” is not an imminent threat, users and developers need to closely monitor every development in the field to prevent such a mishap.