The voice space is heating up quickly, in case you didn’t notice. Over the last two years, the technology industry has adopted voice as a core component of functionality. With voice, it’s not only what you see, but what you say, that makes things happen.

The landscape of voice is also expanding rapidly as companies invest in more technology to empower machines to respond to voice. It started with Apple and Siri, quickly followed by Amazon and Alexa, along with a host of others. The reason voice is so important is simply that we’ve witnessed a tipping point in the technology world: previously, humans had to learn how to understand machines, but now machines are learning how to understand humans. The computing systems in the background have become so advanced and complex that you can’t expect the general consumer to understand them, but you can absolutely expect the computers to understand human language and direction. That tipping point is where AI comes in, but that’s a different story for a different column.

As technology adopts voice, there are clearly three pillars upon which companies are building value. These are:

Information-centric

Action-centric

Conversation-centric

INFORMATION-CENTRIC VOICE

This pillar is focused on the interaction between human and machine to gain insight and gather information. This is where Google focuses the majority of its efforts as it continues to aggregate the world’s information. Half of searches are now done via voice, and many of the capabilities of Google Home are focused on providing access to information: reading the news, getting the weather, and more. It does connect to a series of home devices, but for the most part, the use cases are focused on data and information.

Being information-centric means these providers are device-centric as well. Google Home works with Google’s physical devices as well as with apps embedded in phones and computers. Google’s voice service has to be activated through some kind of Google device; that isn’t necessarily limiting, but it does require that foundation in order to perform.

ACTION-CENTRIC VOICE

This second pillar is focused on the interaction between a human and a machine to drive actions, and it is typically dependent on a very specific device. Amazon exemplifies this with its Alexa assistant. Alexa can drive a host of physical actions, including turning on the lights, playing a song, or making purchases for the home. Alexa also delivers games and trivia, both of which create actions in the home and require you to interact with a device.

Amazon recently announced Alexa for Business, a logical extension of its home strategy: establishing a marketplace for apps that create action in the office. Booking a conference room or scheduling a call are the intended uses, but over time they will expand to include additional kinds of office actions.

CONVERSATION-CENTRIC VOICE

This third pillar, where we focus at Voicera, is about extracting value from person-to-person conversations. Rather than facilitating a conversation between a human and a device, we focus on the value created through the content of a conversation. Other players are in this space as well: some focus on recording for historical value, while others are intended for coaching. Our point of view is to focus on collaboration and on activating the value that comes from meetings, which are conversations between at least two people. Meetings are the most widely adopted form of collaboration, but they are commonly disconnected from the rest of the enterprise workflow.

Our goal is both to activate the content of the meeting and to connect that output to your workflow (e.g., CRM, communication tools). We care about activating the content of the meeting and are 100% device-agnostic (we have no device). Being conversation-centric means being where the conversation takes place and having permission to be involved. This is a huge differentiator from companies tethered to a device, where the microphone may be on but they do not have consent to be involved in the meeting.

MACHINES ARE LEARNING TO UNDERSTAND HUMANS

Machines are certainly getting smarter, and their role is to make our lives better. Interaction with machines is getting easier as a result of the advances in voice that drive engagement. You will use the tools created by these approaches to augment your own abilities and to go about your day more effectively. As these systems become more impactful, the way you interact with them will become simpler. You will talk to them rather than type. You will speak rather than be forced to learn new UIs. Your voice will become the key that unlocks all that potential and unifies all these different technology tools.

The next few years will be very exciting as more companies take advantage of these three approaches as the pillars of their own voice-based strategies.