Due to the length and depth of this conversation, we have broken down the key takeaways into three blog posts. To get these posts straight to your inbox, sign up here.

Let’s get started with Part 2! If you missed it, read Part 1 here.

Q1: How do you set user expectations? How do you ensure the conversation is relevant?

Diane: As a company, we’re super clear from the beginning: Amy and Andrew are AI. But ultimately, it’s up to the customer in the initial hand-off to determine how to introduce Amy or Andrew into the email thread. We’ve seen some people introduce them as “scheduling assistants” and others introduce them as “AI tools”.

Tali: We’re very upfront that Wendy is a chatbot. When you first meet Wendy, and throughout each chat, there’s space for jokes where Wendy can be self-referential. So if someone asks a question that’s outside Wendy’s knowledge base, she might say something like, “Sorry, I didn’t catch that. I’m just a chatbot after all.” That’s just one example of how voice design can frame the user’s experience. When people have really clear expectations about an AI’s limitations, they’re likely to be more empathetic when it doesn’t understand something.
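The fallback behavior Tali describes can be sketched in a few lines. To be clear, this is a hypothetical illustration, not Wade & Wendy’s actual code: the `respond` function, its intent and confidence inputs, and the 0.6 threshold are all assumptions made for the example.

```python
import random
from typing import Optional

# Hypothetical sketch of a self-referential fallback reply, in the spirit
# of the out-of-scope answers described above. The canned lines, intent
# handling, and confidence threshold are illustrative assumptions, not
# Wade & Wendy's actual implementation.
FALLBACK_REPLIES = [
    "Sorry, I didn't catch that. I'm just a chatbot after all.",
    "Hmm, that one's beyond me. I'm only a chatbot!",
]

def respond(intent: Optional[str], confidence: float) -> str:
    """Fall back to a self-aware canned reply when the classifier is unsure."""
    if intent is None or confidence < 0.6:  # threshold is an assumption
        return random.choice(FALLBACK_REPLIES)
    return f"Handling intent: {intent}"
```

The design point is that the fallback copy itself restates the bot’s limitations, so even a failed turn reinforces the expectations set at the start of the chat.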

Diane: Setting clear expectations in a conversational interface is an interesting challenge. It’s not like a website or an app, where you only have five options or buttons to choose from, followed by a predictable flow of actions. Because the experience has been opened up as a dialogue, users can come back to us with essentially anything in response, which means there is an exponentially large number of ways the conversation could go.

Tali: We face the same issue. One thing we’ve been developing is a layer of context throughout each chat with Wendy. Context can help a person give a better answer, and it can also make the interview experience more informative. At a basic level, context helps frame a question. Like when Wendy asks candidates, “Can you tell me about yourself?”, context matters. Your response would change based on where you’re asked: in an interview, at the airport, or on a street corner. The fact that this is an interview is top of mind for the majority of our users, so we also use context throughout the chat to shed light on why each question is asked. Our product team positions Wendy to learn about a candidate while informing them about the job opportunity. For both of us, it seems like setting expectations sets people up to get the most out of these interactions.

Q2: How do you define a successful user experience? What do you measure?

Diane: The question here is, how do you take a qualitative interaction such as a conversation, and find a quantitative way to measure its success? There aren’t really standardized tools or research methods for conversational interfaces yet, so the metrics we use are hyper-specific to the product. Amy and Andrew’s goal is to get the meeting on the calendar efficiently and effectively, in as few emails back and forth as possible. Because this is a multi-faceted goal, we can break it down and try to measure how successful we were piece by piece. For instance, did the meeting end up getting scheduled? How many emails did it take to schedule it? We’re also very interested in measuring the qualitative elements of the conversation: “How did it feel interacting with Amy or Andrew to set up this meeting?” One way we can look at this is how often people express gratitude (or frustration) towards Amy or Andrew.
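Diane’s breakdown suggests per-thread metrics that are straightforward to compute. The sketch below is a hypothetical illustration, not x.ai’s pipeline: the `MeetingThread` structure and the keyword lists for gratitude and frustration are assumptions, and a production system would use something more robust than substring matching.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative cue lists -- assumptions for this sketch, not x.ai's actual
# sentiment signals. A real system would use a trained classifier.
GRATITUDE_CUES = ("thank", "appreciate")
FRUSTRATION_CUES = ("confused", "wrong", "frustrat")

@dataclass
class MeetingThread:
    """Hypothetical record of one scheduling thread handled by the assistant."""
    scheduled: bool
    emails: List[str] = field(default_factory=list)

def thread_metrics(thread: MeetingThread) -> Dict[str, object]:
    """Compute the piece-by-piece measures described above:
    outcome, effort (email count), and crude sentiment counts."""
    text = " ".join(thread.emails).lower()
    return {
        "scheduled": thread.scheduled,
        "email_count": len(thread.emails),
        "gratitude": sum(text.count(cue) for cue in GRATITUDE_CUES),
        "frustration": sum(text.count(cue) for cue in FRUSTRATION_CUES),
    }
```

Splitting the goal this way mirrors the interview: one metric for the outcome (was it scheduled?), one for efficiency (how many emails?), and one rough proxy for how the interaction felt.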

Tali: I think it’s interesting that you guys are thinking about all the different types of emails a user might send as a basis for your design. At Wade & Wendy, our design is based on the ultimate goal of either helping a user prove that they’re a fit for X job, or learning about them and trying to get them a job elsewhere. That’s why one way we measure the quality of a chat is whether the recruiter was able to make a decision with the information Wendy learned. That’s the baseline. In the end, having a very specific and desirable outcome for an AI interaction — like a new job — is a great way to engage people and set expectations for a purposeful interaction.