Amazon’s Alexa recently received some deservedly bad press for a spectacular failure: She mistakenly sent a recording of a couple’s private conversation to one of their employees!

Now, in their defense, one could say that any complex technology is bound to fail in some rare, unforeseen circumstances. Amazon explained:

Echo woke up due to a word in background conversation sounding like “Alexa.” Then, the subsequent conversation was heard as a “send message” request. At which point, Alexa said out loud “To whom?” At which point, the background conversation was interpreted as a name in the customer’s contact list. Alexa then asked out loud, “[contact name], right?” Alexa then interpreted background conversation as “right.”

However, we should not find this kind of error acceptable. This invasion of privacy was rooted in inferior technology and a poor corporate philosophy.

On the technology side, Amazon’s reliance on statistical, machine learning approaches means that the system makes crucial decisions based on keywords alone, without any deep understanding of the meaning of the sentence or of its context. Alexa is quite dumb.
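The failure mode Amazon describes can be sketched as a simple keyword-matching state machine. The code below is a hypothetical illustration, not Alexa's actual implementation: each step fires on surface keywords alone, so innocent background chatter can walk the system all the way through to sending a message.

```python
# Hypothetical sketch of a keyword-driven pipeline like the one Amazon
# describes. All names here are illustrative, not Alexa's real API.

def keyword_pipeline(utterances, contacts):
    """Walk through utterances, advancing on keyword matches alone."""
    state = "sleeping"
    recipient = None
    for text in utterances:
        words = text.lower()
        if state == "sleeping" and "alexa" in words:
            state = "awake"                        # wake word (or near-match) heard
        elif state == "awake" and "send message" in words:
            state = "awaiting_recipient"           # intent inferred from keywords
        elif state == "awaiting_recipient":
            for name in contacts:
                if name.lower() in words:
                    recipient = name               # any contact name in the chatter
                    state = "confirming"
                    break
        elif state == "confirming" and "right" in words:
            return f"message sent to {recipient}"  # "right" anywhere confirms
    return "no message sent"

# Background conversation never intended as commands still triggers every step:
chatter = [
    "...and Alexa said she'd be late...",          # misheard wake word
    "...could you send message you wanted...",     # misheard intent
    "...I ran into John at the store...",          # name matched against contacts
    "...yeah that's right, anyway...",             # misheard confirmation
]
print(keyword_pipeline(chatter, ["John", "Mary"]))  # → message sent to John
```

A system with even a shallow model of conversational context would notice that none of these utterances were addressed to it; a pure keyword matcher cannot.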


In contrast, a system based on more advanced ‘Third Wave’ technology, such as a proper cognitive architecture, would have understood that there was no intent to send a message to someone else.

Currently, all the major chatbots and so-called ‘personal assistants’ are designed to serve some mega-corporations’ agenda — and not primarily the user’s preferences and goals. The corporations own and control these systems, and your data!

This philosophy often leads to promiscuous and careless use of the data, and denies the individual fine-grained control over what they want to share, what to keep private, and what they want to delete.

A further issue, one that cuts across the technology and philosophy domains, is that current ‘Second Wave’ technology is inherently incapable of deep personalization — it is unable to learn the particular relationships, preferences, and constraints between individuals and groups. A smarter, more adaptive system could learn what kind of messages are appropriate for whom.
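One way to picture that kind of per-relationship learning is a private-by-default preference table the assistant builds up for its user. This is a minimal sketch under assumed names (no shipping assistant exposes such an API): nothing is shared with a contact unless the learned preferences explicitly allow that kind of message.

```python
# Illustrative sketch of a per-relationship sharing policy a deeply
# personalized assistant might learn. Names are hypothetical.

def may_send(message_kind, contact, learned_prefs):
    """Allow a message only if learned preferences permit this pairing."""
    allowed = learned_prefs.get(contact, set())  # unknown contacts: share nothing
    return message_kind in allowed

prefs = {
    "spouse": {"personal", "reminders"},
    "employee": {"work"},   # private conversations are never in this set
}

print(may_send("work", "employee", prefs))      # → True
print(may_send("personal", "employee", prefs))  # → False: the Alexa failure case
print(may_send("work", "stranger", prefs))      # → False: private by default
```

The design choice that matters here is the default: absent a learned, user-approved preference, the answer is always "don't share."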

A more intelligent personal assistant that is owned, controlled, and adapted to the individual user would alleviate many of the current legitimate privacy concerns, and also dramatically reduce the risk of such failures of intelligence.

I, for one, am looking forward to my own super-smart personal assistant.

Peter Voss is founder of SmartAction.ai and CEO and Chief Scientist of AGI Innovations Inc and Aigo.ai Inc

Please like and share — if you like :)