It’s been a fruitful year for Google’s efforts in the AI space, as evidenced by the number of new products the company introduced, as well as by some critical improvements made to its existing services.

The largest number of announcements came out of Google I/O, the company’s annual developer conference held in May. Among other things, Google introduced Smart Compose for Gmail, made some really impressive updates to Google Maps and, perhaps most importantly, announced its new artificial intelligence-powered virtual assistant, dubbed Google Duplex (see a good summary of all new products and features introduced at Google I/O in 2018 here).

In the company’s own words:

“[Google Duplex is] a new technology for conducting natural conversations to carry out “real world” tasks over the phone. The technology is directed toward completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, as they would to another person, without having to adapt to a machine.”

The recording of a phone call to a hair salon made by Duplex was so impressive that it even led some to question whether Google Duplex had passed the Turing test (hint: it hasn’t; at the very least, the person making the judgment has to be aware that they might be talking to a machine). It also sparked a heated conversation about whether it’s appropriate to use technology in such a fashion without making people on the receiving end aware that they are not interacting with an actual human being, but rather talking to a bot. While it might be hard to answer such questions definitively, we’ll probably see more of this discussion soon enough, since Google started rolling out Duplex to some of its smartphones in December.

Another interesting update from Google arrived with the latest additions to its Pixel line of smartphones (Pixel 3 & 3 XL), which came with some really impressive new camera capabilities enabled by AI (we’ll touch on these again later in this post, in the section dedicated to advancements in computational photography).

Finally, DeepMind Technologies, fully owned by Alphabet Inc., achieved a major milestone with the latest iteration of its AlphaZero program.

We’ve already seen the impressive achievements of AlphaGo and AlphaGo Zero in the game of Go in 2015 and 2016, when they handily won most of their games against two of the strongest Go champions; in 2018, however, DeepMind’s team achieved something even more interesting: the newest AlphaZero engine demonstrated clear superiority over the strongest existing engines in chess, shogi and Go.

What is particularly interesting about AlphaZero is that it achieved this feat without studying any logs of games played by humans; instead, the program taught itself how to play all three games, starting from nothing but the basic rules. As it turns out, operating without the limitations that come from learning from previously played games resulted in AlphaZero adopting “a ground-breaking, highly dynamic and ‘unconventional’ style of play” that differed from anything seen before. That, in turn, makes the engine more useful to the community, which can learn new tactics by observing machine-developed strategies. It also holds promise for real-world applications of this technology, given AlphaZero’s ability to learn from scratch and tackle perfect-information problems.
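To make the self-play idea concrete, here is a minimal tabular sketch in the same spirit: an agent that is given only the rules of a game and improves purely by playing against itself. This is a toy stand-in (a single-pile Nim game and a simple Monte Carlo value update), not DeepMind’s actual network-plus-search method; all names and parameters below are illustrative.

```python
import random

# Toy illustration of learning from self-play given only the rules.
# Game: single-pile Nim — players alternate removing 1-3 stones;
# whoever takes the last stone wins.

def legal_moves(n):
    return [m for m in (1, 2, 3) if m <= n]

def train(pile=12, episodes=20000, eps=0.2, alpha=0.5):
    # Q[(n, m)] = estimated value of removing m stones from a pile of n,
    # from the perspective of the player making that move.
    Q = {}
    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            moves = legal_moves(n)
            if random.random() < eps:          # explore
                m = random.choice(moves)
            else:                               # exploit current estimates
                m = max(moves, key=lambda mv: Q.get((n, mv), 0.0))
            history.append((n, m))
            n -= m
        # The player who made the last move won; walking backwards
        # through the game, rewards alternate between +1 and -1.
        reward = 1.0
        for (s, m) in reversed(history):
            q = Q.get((s, m), 0.0)
            Q[(s, m)] = q + alpha * (reward - q)
            reward = -reward
    return Q

def best_move(Q, n):
    return max(legal_moves(n), key=lambda mv: Q.get((n, mv), 0.0))
```

With no human games to imitate, the agent still discovers the known optimal strategy for this toy game (always leave the opponent a multiple of four stones) — e.g. `best_move(Q, 6)` converges to removing 2 stones.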

DeepMind is also working toward systems that can handle imperfect-information problems, as demonstrated by its recent success with AlphaStar, which beat several professional players in StarCraft II (whereas in the past, AI had struggled to play StarCraft successfully due to the game’s complexity).

Microsoft