The basic results of our 2019 research are:

This is how the results are defined:

1. Answers Attempted. This means the personal assistant believes it understands the question and makes an overt effort to provide a response. It does not include results where the response was "I'm still learning" or "Sorry, I don't know that," nor cases where an answer was attempted but the query was heard incorrectly (the latter was classified separately, as it indicates a limitation in speech recognition, not knowledge). This definition is expanded upon below.

2. Fully & Correctly Answered. This means the precise question asked was answered directly and fully. For example, if the personal assistant was asked, "How old is Abraham Lincoln?" but answered with his birth date, that would not be considered fully and correctly answered.

A question that was answered only partially, in whatever fashion, does not count as fully and correctly answered either. To put it another way: did the user get 100% of the information they asked for, without requiring further thought or research?
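
To make the two metrics concrete, here is a minimal sketch in Python of how per-question results might be tallied. The labels and the score function are a hypothetical illustration of our own, not the study's actual coding scheme. Note how misheard queries are excluded from the attempt count, per the definition above.

```python
from collections import Counter

# Hypothetical labels for each graded response; these names are
# illustrative, not the study's actual coding scheme.
FULL_CORRECT = "full_correct"      # attempted AND 100% complete and correct
ATTEMPTED = "attempted"            # overt effort, but short of full credit
HEARD_WRONG = "heard_incorrectly"  # answered, but the query was misheard
NO_ATTEMPT = "no_attempt"          # "I'm still learning" / "I don't know"

def score(labels):
    """Compute the two study metrics from a list of per-question labels."""
    counts = Counter(labels)
    total = len(labels)
    # Misheard queries are not counted as attempts: they indicate a
    # speech-recognition limitation, not a knowledge limitation.
    attempted = counts[FULL_CORRECT] + counts[ATTEMPTED]
    return {
        "answers_attempted": attempted / total,
        "fully_correctly_answered": counts[FULL_CORRECT] / total,
    }

print(score([FULL_CORRECT, ATTEMPTED, HEARD_WRONG, NO_ATTEMPT]))
# -> {'answers_attempted': 0.5, 'fully_correctly_answered': 0.25}
```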

The following is a comparison of Answers Attempted from 2017 to 2019 (Note: we did not run Google Assistant on a Smartphone in 2017, which is why its results are shown only for 2018 and 2019):

Alexa, Cortana, and Google Home all attempted to answer more questions in 2019 than they did in 2018. Google Assistant (on a Smartphone) dropped slightly, and Siri remained the same.

Now, let's take a look at the completeness and accuracy comparison:

Interestingly, every personal assistant included in last year's study dropped in accuracy to some degree. This indicates that current technologies may be reaching their peak capabilities. The next big uptick will likely require a new generation of algorithms. This is something all the major players are surely working on.

Note that to count as 100% Fully & Correctly Answered, the question must be answered fully and directly. As it turns out, there are many different ways for a response to fall short of that bar:

- The query might have multiple possible answers, such as, "How fast does a jaguar go?" (the animal or the car).
- Instead of ignoring a query it doesn't understand, the personal assistant may map the query to something it considers topically "close" to what the user asked for.
- The assistant may provide a partially correct response.
- The assistant may respond with a joke.
- The assistant may simply answer the question flat-out wrong (this is rare).
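
For readers who like to see the taxonomy spelled out, the sketch below encodes these failure modes as a small Python enum. The category names are our own shorthand, not labels used in the study.

```python
from enum import Enum

class FailureMode(Enum):
    """Hypothetical shorthand for the failure modes listed above."""
    AMBIGUOUS_QUERY = "query has multiple possible answers"
    TOPIC_SUBSTITUTION = "mapped to a topically 'close' query instead"
    PARTIAL_ANSWER = "only part of the question was answered"
    JOKE = "responded with a joke"
    FLAT_WRONG = "answered flat-out incorrectly (rare)"

# Any response tagged with a FailureMode is excluded from the
# Fully & Correctly Answered count, even though an answer was attempted.
```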

A detailed analysis of the nature of these errors follows below.

There are a few summary observations from the 2019 update: