The first step in creating a conversation flow was to map out the conversation, with branches for each possible user selection. We avoided free-form user entry as much as we could and provided guardrails to minimize the margin for error. The bot has a specific goal, so we designed the interface more like a smart survey, reserving free-form entry for questions like “What’s your zip code?”
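The branching structure described above can be sketched as a simple tree: each node holds the bot's prompt plus either a fixed set of button choices (the guardrails) or a free-form flag for the rare open questions. This is an illustrative sketch, not the team's actual tooling, and all prompts and labels are invented.

```python
from dataclasses import dataclass, field

@dataclass
class FlowNode:
    """One step in the conversation map."""
    prompt: str
    # Button label -> next node; fixed choices act as guardrails.
    choices: dict[str, "FlowNode"] = field(default_factory=dict)
    # Free-form entry is reserved for questions like "What's your zip code?"
    free_form: bool = False

# Hypothetical flow: the leaves and copy are made up for illustration.
results = FlowNode("Here are apartments near you!")
zip_node = FlowNode("What's your zip code?", free_form=True)
zip_node.choices["__any__"] = results  # any free-form answer routes the same way

root = FlowNode(
    "Hi! What can I help you with?",
    choices={
        "Find an apartment": zip_node,
        "Talk to a representative": FlowNode("Connecting you with a rep..."),
    },
)

# Walking one branch with a scripted user selection:
node = root.choices["Find an apartment"]
assert node.free_form  # guardrails are relaxed only where free text is required
```

Keeping the map as explicit nodes like this makes it easy to see every branch a user could take before any copy or visual design exists.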

Once we had a skeleton for the conversation flow, we began adding some meat to the bones: the personality.

We had to give the bot a conversation style and persona. This is where we had to flex our creative muscles a bit. In our initial plan, we developed the persona of a 35-year-old educated female with a whole laundry list of attributes we wanted to see emulated in our bot’s conversation style.

There was just one problem, though. We quickly realized how hard it was to write the copy in the tone of our persona, since we just aren’t that person.

The solution? Interview someone who matched the persona we were creating for the bot. Once we found this person, we asked them to write copy for the bot naturally, in their own voice, as though they were speaking with a friend. The results far surpassed our own attempts to mimic the bot persona’s personality.

By finding a real-life human to be the bot’s persona, we were able to create natural conversations seamlessly.

Step 3: Building A Functional Prototype

TARS Bot Prototype, no coding needed

While researching tools for the project, we found a tool called TARS, which allowed us to create a fully functioning chatbot prototype without writing any code.

This phase began with the construction of different conversation flows that user testers could provide feedback for. We created a base conversation, duplicated it several times, and altered it slightly to match the different variables we wanted to test in the conversation.
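The duplicate-and-alter step above can be sketched programmatically: start from a base conversation, then override one variable at a time to produce each test variant. The field names and copy here are illustrative, not the project's actual data.

```python
import copy

# Hypothetical base conversation used for all variants.
base_flow = {
    "intro": "Hi there! I'm here to help you find an apartment.",
    "tone": "friendly",
    "handoff": "Let me connect you with a representative!",
}

# Each entry alters just the variable(s) under test.
variables_to_test = [
    {"tone": "terse", "intro": "Hi. What are you looking for?"},
    {"handoff": "Transferring you to a rep now."},
]

variants = []
for overrides in variables_to_test:
    variant = copy.deepcopy(base_flow)  # duplicate the base conversation
    variant.update(overrides)           # alter it slightly for this test
    variants.append(variant)
```

Generating variants from one base keeps everything except the tested variable identical, so tester feedback can be attributed to that one change.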

TARS made it incredibly simple to create a customized conversation flow that our users could actually interact and chat with. The TARS bot didn’t allow us to test elements of the visual design, but we were able to ask questions like, “How did the bot’s introduction make you feel?”

We saved the visual testing for InVision.

Step 4: Visual Design

Chatbot visual design (Sketch)

Once we had a solid base for the conversation map, had developed a personality, and constructed a functional prototype to use for testing, it was time to begin visualizing how the chatbot would look and feel.

We began by creating wireframes and jotting down the different chat instances that would be needed for the visual design.

We needed instances like:

Free-form user entry

User entry with buttons

Display of available housing locations

Hand-off to a representative

Chatbot closed

Chatbot location on page

With everything laid out nicely, it was time to render some high-fidelity mockups for the chatbot.

Although this project was mainly a user experience challenge, there was room here to use visual elements to improve the experience for users.

Of course, there were challenges with the visual design. How do we display so much information in a concise, easy-to-understand way? For the housing locations, for example, we found that a long string of information didn’t work as well as a carousel, which lets users view the closest location first and then scroll right to see additional locations.

It’s critical that users never feel stuck or confused when using a chatbot. We added a “help” button in the top right of the chat interface to give users an easy way to restart, speak with a representative, or visit the FAQ.

Step 5: User Testing

UserTesting.com testing

Once we had all of our ducks in a row, including a hypothesis for the chatbot and how users would prefer to use it, we created a test plan. The test plan consisted of all the aspects of the experience that we wanted to test. We leveraged UserTesting to gather testers, then sent them on guided experiences where they were asked to voice their opinions about the different components of the chatbot we wanted to test.

Our test plan was built around the following questions:

What kind of tone/conversation type do users prefer?

How do we make the chatbot easily discoverable?

Does the image or name of the bot affect people’s opinions of the interactions?

What is the best way to hand off customers to a rep?

Is a chatbot quicker than the current apartment search UI? Does it feel quicker?

Is a chatbot quicker to find and use than the current website?

What page should the chatbot be on?

We ran a test focused on each of these points, and as project lead, I reviewed all of the user-testing videos. It was an exhaustive process, but the results were extremely helpful and insightful. Some were exactly what we expected; others surprised us.

For example, we assumed that users would generally enjoy the bot’s personality (remember how meticulously it was constructed?). It was friendly, professional, and, most of all, informative. However, when we tested the friendly bot against a cut-and-dried, to-the-point bot with no fluff in the conversation, users preferred the terse bot to the friendly one. Reviewing the results, we learned that users didn’t want to waste time being friendly with a bot. They simply wanted the information as quickly as possible, and because they knew it wasn’t a human, they weren’t willing to entertain a conversation.

Messages like “That’s great, thank you for the information! Let me get that squared away for you.” added extra reading for users. Replacing that message with “Got it, here are the results!” meant fewer steps between users and their goal.

After reviewing the user-testing videos and building a consensus around the results, we had to improve the bot to match users’ feedback. This was the easy part: with clarity on how users wanted the bot to function, all we had to do was make the necessary changes.

With these changes implemented to the flow and design, we delivered the finalized chatbot recommendations and design to the client. Overall, we learned a lot from this project and felt that it was a smashing success. This project made us realize that no matter how sure we are about a certain experience, users may have a different opinion about how they’d like it to be.