Many people think of artificial intelligence (AI) as a recent development, but the idea goes back at least to the 19th century. Ada Lovelace (1815-1852), writing about Charles Babbage's proposed Analytical Engine, realized that the data stored and manipulated inside a computing machine was not obliged to represent numerical quantities; it could instead represent more abstract concepts, such as musical notes. As Ada wrote:

Supposing that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent. […] We might even invent laws for series or formulæ in an arbitrary manner, and set the engine to work upon them, and thus deduce numerical results which we might not otherwise have thought of obtaining.

Alan Turing (1912-1954) also pondered various aspects of machine intelligence, but the field didn't really coalesce until the Dartmouth Workshop of 1956, which coined the term "artificial intelligence" and laid the groundwork for what was to come.

From that time until only around five years ago, AI was largely confined to academia. There was a burst of activity around rule-based expert systems in the late 1980s and early 1990s, but they failed to live up to the hype. Part of the problem was that marketing groups attached the AI label to anything they could, with the result that engineers and researchers ended up shunning the term for many years.

All this started to change about ten years ago. Machine learning wasn't even a blip on Gartner's 2014 Hype Cycle, but just one year later it had already crested the Peak of Inflated Expectations.

New algorithms and architectures, coupled with advances in computing power, have driven exponential progress in artificial neural networks, machine learning, and deep learning, all of which are entwined aspects of AI.

Of particular interest is the way in which AI is now starting to appear in everyday products and applications, such as handwriting recognition (see The (Electronic) Pencil is Mightier than the Keyboard and Artificial Intelligence-Based Handwriting Recognition). Another interesting application is using AI-based computer vision to assist people who are blind or visually impaired (see Deep Learning Machine Vision System Aids Blind and Visually Impaired).

Of course, there is also the potential to apply AI to nefarious ends, such as cracking computer security or faking voice signatures (see Thinking of Using Voice Authentication? Think Again!).

A lot of people still don't fully appreciate how widespread AI deployment is destined to be, but it will soon be all around us. Earlier this week, for example, McDonald's purchased the Israeli AI startup Dynamic Yield in order to customize its drive-through experience.

You know how, when you go to the drive-through, the screen currently shows a random assortment of additional items with which to tempt you? Well, in the not-so-distant future, each customer will be presented with a unique list based on environmental factors (temperature, humidity, precipitation), what they've already ordered, and, over time, what other people have purchased after making similar orders. A minimal sketch of how such a system might work follows below.
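To make that idea concrete, here's a minimal sketch in Python of a contextual recommender that blends those two signals. Everything in it, the menu items, the co-purchase counts, and the hot-day rule, is hypothetical; a real system like Dynamic Yield's would use models learned from actual sales data rather than hand-coded rules.

```python
from collections import Counter

# Hypothetical co-purchase history: for each item already ordered,
# counts of items that other customers added to similar orders.
CO_PURCHASES = {
    "burger": Counter({"fries": 120, "soda": 95, "ice cream": 30}),
    "fries": Counter({"soda": 80, "burger": 60, "milkshake": 25}),
}

def recommend(ordered_items, temperature_c, top_n=3):
    """Score candidate add-ons from co-purchase counts, then nudge the
    scores with a simple environmental rule (hot day -> cold items)."""
    scores = Counter()
    for item in ordered_items:
        scores.update(CO_PURCHASES.get(item, Counter()))

    # Don't recommend anything the customer has already ordered.
    for item in ordered_items:
        scores.pop(item, None)

    # Hypothetical context rule: boost cold items in hot weather.
    cold_items = {"soda", "ice cream", "milkshake"}
    if temperature_c > 25:
        for item in cold_items:
            if item in scores:
                scores[item] = int(scores[item] * 1.5)

    return [item for item, _ in scores.most_common(top_n)]

print(recommend(["burger"], temperature_c=30))
# ['soda', 'fries', 'ice cream'] -- the cold items get a hot-day boost
```

The shape of the decision is the same in production: blend what's in the current order with the crowd's behavior, then adjust for context; only the hand-coded rules would be replaced by learned models.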

My own belief is that full-up augmented reality is still five to ten years out, but that when AI-enhanced AR comes online, it will dramatically impact the way in which we see and interact with the world around us. In the meantime, it's worth paying attention to all of the AI-related articles that are appearing on a daily basis. For example, the following list presents a few such articles from our sister site EEWeb.com:

–Max Maxfield is Editor-in-Chief of EEWeb.com, an Aspencore sister site of EE Times, serving users of electronic design tools.