Artificial intelligence is mostly a reaction-based technology. If the A.I. routines in a bot are not very smart, the bot doesn’t react appropriately (or very quickly). When a bot knows how to respond correctly, and with urgency, it’s because those reactions were carefully programmed into the interface.

Of course, there are two types of “intelligent” responses a bot can provide. In one, the bot simply uses data to give the correct answer to the user. The other is much more difficult and involves an emotional response — the bot understands the user and their emotional state. In a customer chat, a smart bot would know which information to relay but could also pick up on the fact that the person on the other end of the messaging app is not paying attention, has become hostile, doesn’t want to talk, or has a bad attitude.

Recently, I heard about a company called Cogito that is trying to build more emotional intelligence (or EQ) into their A.I. For now, the A.I. works by analyzing voice calls between human agents and customers. The goal is to understand how an A.I. can act more like a human and, eventually, help robotic agents.

“Our A.I. measures conversations by speech reaction patterns using real-time feedback, including how [people] talk, as a way to measure the EQ of humans,” said Josh Feast, the CEO of Cogito.

Feast explained that there is a difference between a “persona” and a “personality.” A persona is easier to build — you can teach a bot to mimic a real person using certain words and speech patterns. A personality is much harder — this requires a higher level of EQ and an understanding that any conversation — by voice or chat — has an underlying tone, narrative arc, and bias or slant. While it’s more difficult to achieve, there is something fresh and authentic about a bot that has personality, whether it’s in a messaging app or activated by voice.

Cogito analyzes a wealth of data during calls. The team looks for things like whether someone is talking quickly, if they are interrupting often, if their tone is uneven, or if they are taking long pauses. Feast says humans are amazingly good at picking up on these social signals — if someone sounds tense, for example, or if they are withdrawing from a conversation. Humans are good at adapting to tone.

One of the signals developers are looking at is when a tone goes flat. When we disengage from a conversation, we tend to speak in a boring, monotonous manner. We’re basically “done” talking. We want to finish up the conversation quickly and move on to something else. Our participation is reduced, our speech volume gets lower, and we take longer pauses in the conversation.
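The disengagement cues described above — longer pauses, lower volume, a flat monotone — can be sketched as simple features over timestamped speech segments. This is an illustrative toy, not Cogito’s actual model: the `Segment` fields, feature names, and thresholds are all my own assumptions.

```python
# Hypothetical sketch of disengagement signals from a voice call.
# The Segment fields, features, and thresholds are illustrative
# assumptions, not Cogito's model.
from dataclasses import dataclass

@dataclass
class Segment:
    start: float        # seconds into the call
    end: float
    mean_volume: float  # normalized 0..1
    mean_pitch: float   # Hz

def engagement_signals(segments):
    """Crude per-call signals: average pause, average volume, pitch variance."""
    pauses = [b.start - a.end for a, b in zip(segments, segments[1:])]
    volumes = [s.mean_volume for s in segments]
    pitches = [s.mean_pitch for s in segments]
    mean_pitch = sum(pitches) / len(pitches)
    # Low pitch variance is a rough proxy for a "flat," monotonous delivery.
    pitch_var = sum((p - mean_pitch) ** 2 for p in pitches) / len(pitches)
    return {
        "avg_pause": sum(pauses) / len(pauses) if pauses else 0.0,
        "avg_volume": sum(volumes) / len(volumes),
        "pitch_variance": pitch_var,
    }

def looks_disengaged(sig, pause_thresh=1.5, vol_thresh=0.3, pitch_var_thresh=100.0):
    """Flag the flat-tone pattern: long pauses, low volume, monotone pitch."""
    return (sig["avg_pause"] > pause_thresh
            and sig["avg_volume"] < vol_thresh
            and sig["pitch_variance"] < pitch_var_thresh)
```

In a real pipeline these segment-level volume and pitch values would come from an audio feature extractor; the point here is only that the signals Feast describes are each individually measurable and can be combined into a single flag.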

Cogito says this “signal score” is quantifiable, and the company uses that data to help human agents understand what’s happening during a call. Feast says that, in the future, these signals could be used for an A.I. that adapts to a voice call or adapts within a chatbot conversation. In fact, he agreed that messaging apps might have a better chance of detecting EQ than voice-enabled bots. A chatbot can detect when someone has not typed a message in a while, for example, or has started using more aggressive words.
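The chatbot cues mentioned above — long gaps between messages and increasingly aggressive wording — are even easier to quantify. As a minimal sketch (the word list, weights, and threshold are my own illustrative assumptions, not Cogito’s scoring):

```python
# Hypothetical "signal score" for a chat session. The word list and
# weights are illustrative assumptions, not Cogito's model.
AGGRESSIVE_WORDS = {"ridiculous", "useless", "terrible", "waste", "stupid"}

def chat_signal_score(messages, gap_threshold=60.0):
    """messages: list of (timestamp_seconds, text) tuples from the user.
    Higher score = more signs of frustration or disengagement."""
    score = 0.0
    for i, (t, text) in enumerate(messages):
        # Long silence since the user's previous message.
        if i > 0 and t - messages[i - 1][0] > gap_threshold:
            score += 1.0
        # Hostile word choices, weighted more heavily than silence.
        words = {w.strip(".,!?").lower() for w in text.split()}
        score += 2.0 * len(words & AGGRESSIVE_WORDS)
    return score
```

A bot could compare this running score against a threshold and change tactics — escalate to a human agent, apologize, or shorten its replies — when the score climbs.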

The fact that a chatbot is expected to be “less human” could even explain why the field has exploded lately. You can teach a bot to text people more easily than you can teach a bot to speak, and the data models are more widely available for text than for voice. Anyone who has chatted with Amazon Alexa or Apple Siri knows the conversation ends pretty quickly. These are powerful voice agents, but they have a limited vocabulary. Chatbots work with a more constrained set of variables when detecting user frustration and choosing an adequate response, but that constraint works in their favor: with fewer signals to juggle, they have more room to handle EQ.

For now, the Cogito A.I. is helping human agents. Soon, it could help bots become more human. That’s an ambitious goal, but one that will become incredibly important.