And we’re at episode three of Product to Product’s second season, a podcast for and by product people!

Listen to the episode below:

As we continue to study the human side of product, this episode offers a unique spin on real-world stories of navigating the human aspects of the product space. Our featured guest is Sanette Tanaka Sloan, Senior Product Designer at Vox Media. (Our first product designer on the podcast! 🙌)

In 2016, Sanette joined Vox Media as the design lead for their first ever innovation team. One of the first big projects she was tasked with was designing a voice bot for the Amazon Echo. The project’s mandate: change the way Vox Media’s audience was finding and consuming their news stories.

Stepping into this “innovator” design role, Sanette picked up a whole lot while designing an “invisible” AI interface. She experienced first-hand how thoughtful AI product design can ultimately have an impact on “human” behaviours. So, Sanette gets real with our colleague Eleni about how to use product design to ultimately influence audience actions.

The episode can be listened to above, and we’ve also included a transcript below. You can subscribe to Product to Product on iTunes (here) and Google Play (here), or get the latest episodes delivered to your inbox by subscribing here.

Eleni: ​Hey, Sanette. How’s it going?

Sanette:​ It’s going well. How are you?

Eleni: I’m great. Can you start by telling us who you are and what you do at Vox?

Sanette: My name is Sanette Tanaka Sloan and I am a Senior Product Designer on our audience team here at Vox Media. My team is responsible for our design system and our audience-facing experiences across all of our editorial networks. So basically, we’re concerned with how you read, watch, or otherwise engage with our stories. And I work for Vox Media, which is the parent company of a ton of news brands like Vox, SB Nation, The Verge, Eater, Curbed, and Racked. That adds up to about 350 websites.

Eleni: I understand that you had a career in journalism before your current career in product design. Can you tell us why you made that transition and how did you make that transition?

Sanette: Yeah, you’re correct. I used to be a reporter at the Wall Street Journal. Ever since I was in high school, I was always interested in media and journalism as an industry. In college, I did a ton with our school paper; I was the editor my senior year, I also took photos, I did layout. So I did design and reporting and I really liked being involved in all of it. And then when I graduated and I became a reporter, I realized that my interests were more along the lines of communication and media as a larger entity and not necessarily strictly in reporting. I had that realization maybe a year in and I started doing some more tangential design-related things at the Wall Street Journal. I taught myself a ton of design things, I read a lot of books, I started teaching myself how to develop websites. But at some point I realized that I needed to take a little bit more time and do some more concentrated studies. So I decided to go back to school. I went to graduate school in Ireland and I earned my master’s in creative digital media with a concentration in interaction design.

​And so that’s what helped me fully pivot to design. And then while I was there, I did a lot of freelancing. Found out I really enjoyed UX and product design and thinking through a product’s life cycle. And so right after that, I joined Vox Media as a designer—which was kind of like the perfect combination of my background.

Eleni:​ You were brought in when you first joined Vox Media to be a part of their innovation team. Can you tell us why Vox created this team and what was the opportunity that they saw in the current media landscape?

Sanette: I joined the innovation team about two years ago when I first joined Vox Media. It was a very small team; there were four of us. And Vox has always been at the forefront of a lot of new media trends. But up until that point, most stories were still being presented in a very text-based way. They still are now, but even more so a few years ago, and that was just the norm at the time. What Vox wanted to do was set up a team that would have the space to explore different ways of telling stories and try to figure out what was working, what wasn’t, and where we could invest resources in the future. Even in the time I was on the team, and in the time since, there have been so many additional avenues for telling stories. You could do it through chat, through voice. There are just so many other ways beyond an article format.

​So that was our intent. We wanted to explore those spaces and figure out where we could grow potentially in the future.

Eleni:​ What was appealing to you personally about being in an innovator product role?

Sanette: When I was a reporter, I felt very confined to the templates that I had available to me. I could work within an article template. I could work through a feature template. There wasn’t a lot of space to reimagine what a story was at its core, and that was something that our innovation team talked about. What could a story be? How could it be told? Could it be told in a couple of words? Does it need to be told in a way that could be read? Could it be shown more visually? There were a lot of questions that I found personally really intriguing, and it hearkened back to what I wanted to do when I was a reporter. So that’s why I wanted to join the team. I wanted to be part of defining that. And being in a media company, there’s so much room for definition where there really aren’t that many precedents; where we are just making it up as we go, which is why I love being in media.

​This team was really well poised to actually act on some of those things and really set the tone of how we could operate in the future. And I know that’s a big mission—especially for four people in a large company—but that was my goal and that was something that I felt even if we could shift the company thinking a little bit or help influence the future product roadmap, that’s a really powerful thing. It was a really great position to be in for my first foray combining design and media.

Eleni: ​I know that one of the first projects you worked on was an Amazon Echo Bot. Can you walk us through what that project was?

Sanette: So this was a project that we did a little bit toward the latter end of my time on this team. And that project was a bot that we designed and developed in house in conjunction with the Northwestern University Knight Lab. They came to us with the initial idea for the project and they wanted to work with us to come up with a testing bot that we could learn from and potentially launch. And so the bot was pretty simple. It ran on the Amazon Echo and worked in a prompt-based way. You would ask the bot for the news, the bot would respond with the news, and you could either dive into the story or move on to another story. We tested it a ton and launched it internally. It was really interesting. We learned a lot from the project.

Eleni:​ Around this time, were there a lot of other examples of media companies doing similar things or were you innovating right in a black hole and everything was completely new?

Sanette: Honestly, there weren’t that many examples out there. There were a handful of projects, but they were very few and far between. At the time we created the bot, the Amazon Echo API had only been opened the fall prior, so it had only been available for a couple of months. But I think the biggest example was something called flash briefings, which are still available on the Amazon Echo now. It’s basically a feed where news companies record headlines and bits of stories and then the Echo just plays them. They’re definitely better now, but at the time they weren’t really optimized for voice. It was like someone reading the beginning of an article to you.

​And as far as other tools, not voice-related in this space, there were a few Slack bots here and there. And Google Home hadn’t been announced yet. Facebook Messenger hadn’t opened up their API yet for their chat interface. It was a really fresh arena and that was actually very tricky to find other competitors in the space because we just didn’t have a ton of examples to pull from.

Eleni:​ I feel like one or two years ago, there was a lot of hype around bots. A lot of articles that were basically like, “Bots are the way of the future.” And now that it’s more normal, those articles aren’t getting written as much anymore. At the time, did you sense that hype and how did you navigate that to focus on creating something that would be useful?

Sanette: That’s an excellent question and I completely agree. And I would say that the bot hype isn’t totally over yet; it’s just becoming more normal. That was totally true and it’s something we definitely felt too. I mean, that fall, a slew of additional bots were announced. The New York Times had their election bot come out. (We’re talking about 2016.) There was a lot of hype, and there was definitely a lot of feeling like, “Oh, this is the wave.” And I think part of that was because there were all of these developer tools that were suddenly released to media companies. We could get our bots on Slack and we could get our bots on Facebook Messenger. And the idea of having all of these avenues to reach people beyond just our dot-com websites was super appealing. To me, that was one of the biggest reasons there was a ton of hype around it.

And we definitely felt that too. When we were thinking of innovation, we thought a lot about how we could explore this space. How can we explore the voice space? The chat space? But one thing I do want to note, because it’s important to keep in mind: bots are not new. They’re not new at all. People have been interacting with bots for decades. Think about calling a cable company and going through a voice-automated system, having to say words to move along. That’s something we’ve been using for the past 40 or 50 years. Even though bots felt new in the media space, they aren’t new as a concept. So there are examples out there that we can pull from, and we should. And there’s a lot of space to improve upon them, too. That’s something I really wanted to do with our voice bot.

Eleni:​ Did you work from any hypotheses around firstly how people consume news currently and how their news consumption behaviour might change? What kind of hypotheses and assumptions were you working from or trying to achieve?

Sanette: One of the biggest assumptions we were making came just by the nature of picking a project based on the Amazon Echo: we were assuming that people would actually listen, that they’d go to this product for news. I think we’re still trying to figure out if this is the right platform for news. So I think that was the biggest assumption we made. That people would want to ask Alexa for the news and that they would be open and willing to hear the response.

For example, one thing we tried to do to get around that is we decided to partner with The Verge for the content at the beginning. The Verge is a technology brand and we used their content because their audience tends to be a little bit quicker to adapt to new technologies and to try out new products. We figured that if there was one brand out there that would be amenable to this, it would be The Verge.

​So we tried to counter it, but that was definitely one big assumption that we were making at the start of the project.

Eleni: ​How did you do user research when you’re dealing with users who are basically unfamiliar with the type of product that you’re designing? I know you used the example of voicemail systems where you can press numbers and work your way through the system. I don’t think people typically like that. So how did you approach the user research phase?

Sanette: So, you’re right. We looked at a lot of competitors out there. I read a lot of books. Google has put out some amazing resources on voice design and what makes good voice, and Amazon has as well. There’s a lot of documentation on how to set up a voice bot in terms of narrative and structure. Actually, a lot of our research happened through testing. The really amazing thing about working on a voice bot is that it’s actually insanely easy to test. There are so many ways you can test voice. We set up different experiments. We tried out different navigation structures. And then we tested first in a pretty lo-fi way, and eventually with the actual Echo prototype.

Eleni:​ Can you tell us about some of those tests?

Sanette: So Northwestern University had the first idea for how to do tests, and they recorded pieces of news. At the time, Amazon had a link where you could basically drop in text, hit play, and it would be read back the way the Echo would read it. So Northwestern had the idea to record a bunch of those clips. Then a user would say whatever they would naturally say in response, and you would just play the correct recording manually. It was a way to test the navigation, test the flow, test the different prompts a user might say, and react to that. That was one of the lo-fi ways. I would also read things out loud in my own voice, which negates some of the challenges of working with a product like the Echo, but it helps get some of those similar things like flow and content down. And then finally, we did testing with an actual prototype. Our developer built a first version. We fed in the words. We set up the Echo. And then we recruited people, mostly within our office and folks within our network, because we were a pretty scrappy team and had to test however we could.

​At the time, there were a good number of people we tested who hadn’t interacted much with an Echo, so we grouped people into two groups. Ones who either had an Echo or had interacted with one before, and ones who were totally new. We had a couple different exercises to try to get people at a baseline level so we could be sure that we were actually testing our bot and not testing the Echo as a product.

Eleni:​ What were some of the discoveries that came from your tests? How did users react and what were some of the learnings that came out of all those different experiments?

Sanette:​ One of the things we learned very quickly is people forget what they hear super fast. We originally had an introduction to explain what this bot was and how to use it. And what we found was people zoned out within about seven seconds of listening to it. So we shortened the introduction and we eventually just cut it, and launched right into the first story. And that seems very natural now, but a lot of the bots at the time that were being released had these lengthy introductions explaining what it was and how to use it. So just responding with the answer felt pretty novel to us at the time. And that was something that we found during testing.

Another interesting thing came about due to our own team constraints and technical constraints. We decided to go with a keyword-response type of navigation: you would say a word and we would play a certain recording after that. We talked about doing more of an open format, but at the time, this made the most sense for this product. When we tested the different prompts, we found that people also really quickly forgot the prompts. To give some context, the bot is a gadget bot: you would ask about a different gadget and the bot would respond with different reviews of that gadget. And this is something The Verge already does a lot; they review different gadgets and products.

So the three prompts were price, detail, and cool factor. In our testing, we found that people would immediately forget all three of those prompts, or they would commit a different set to memory, which I found super interesting. So what we did was shorten the time between when people heard the prompts and when they’d have to act on them. We placed the prompts at the end of the stories. We also repeated the prompts if there was silence. And then finally, although we specified very few prompts, we made the matching incredibly open, so you could say, “Give me the details,” or, “Tell me more,” or, “More facts.” We tried to anticipate a lot of other things someone might say in order to access more information, and we played the response for those as well.
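The open keyword matching Sanette describes can be sketched roughly like this. This is a minimal illustration, not Vox Media’s actual code, and the synonym lists are invented for the example:

```python
from typing import Optional

# Canonical prompts from the gadget bot, each matched loosely so a listener
# can phrase the request many ways ("Give me the details", "Tell me more",
# "More facts", ...). The phrase lists here are illustrative only.
PROMPT_SYNONYMS = {
    "price": ["price", "cost", "how much"],
    "detail": ["give me the details", "tell me more", "more facts", "detail"],
    "cool factor": ["cool factor", "what's cool", "why is it cool"],
}

def route_utterance(utterance: str) -> Optional[str]:
    """Map a free-form utterance to a canonical prompt, or None if no match."""
    text = utterance.lower().strip().strip(".!?")
    for prompt, phrases in PROMPT_SYNONYMS.items():
        # Match if any known phrase appears anywhere in the utterance.
        if any(phrase in text for phrase in phrases):
            return prompt
    return None
```

Testing then becomes a matter of feeding real user phrasings through the router and seeing which ones fall through to no match, which is roughly the loop the team ran in their lo-fi sessions.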

Eleni: It’s interesting because it seems like in some ways the experience, or the ultimate information that you get, is not unlike listening to the radio for ten minutes in the morning, for example, and getting your ten-minute debrief. But then there’s this interactive element and this element of a bot. How does it change the consumption experience for the end user?

Sanette:​ Yeah, and I’m going to treat this question a little bit more like our vision for what this bot could be. We started with the idea that we wanted this bot to be a resource for someone who was looking to know more about a new gadget or a new product that’s out there. That’s a reason our audience comes to The Verge. They want to know more about products. They want to know The Verge’s take on it. So our original idea was that by taking that information and putting it in the format of a bot, it would make it much more searchable. Like our eventual goal was to have someone come up and say, “Hey, tell me about the new iPhone,” and then be able to get relevant information that’s told in The Verge’s editorial voice, that had their stamp of approval, and that you could come away with a better understanding of what that product is.

So our version was the very, very basic level of what that could be. But our goal was really to get that content categorized in a way that would be searchable by voice and would be very relevant to an audience member who wanted to know more as they were going about doing their things, if they were preoccupied in other instances. So that was one route that we were thinking of. And honestly, we talked about a ton of other brands and other types of stories that we could tell through voice.

To me, the compelling thing about voice is multitasking, doing other things while getting info. And so we tried to be very cognizant of that and not treat it like just a shorter version of an article, or an article read out loud. We tried to think about what a person might be doing at any moment and why they might simultaneously want information from our brand. What is that information? How can we deliver it in a way that’s suitable to voice? And I think that’s something a lot of media companies are trying to get at, and it doesn’t always come across that way because the first MVPs never realize the full vision. But I think that is something to be cognizant of, because I do think there are a lot of products out there, products that we’ve made (even the prototype we made), that really are just similar to reading something out loud, and that’s definitely not where we want to be.

Eleni: I feel like obviously media has changed so much in the past couple of decades, and there’s maybe a shallow way of thinking that says that people’s attention spans can’t hold onto anything. And it’s interesting to hear how you’re designing for that and trying to accommodate that in the way the product is designed, and the way that people will be absorbing the information that it’s delivering.

Sanette: Yeah. I think it’s a really delicate balance, because we want to embrace that and help our information and stories fit into someone’s schedule, while at the same time maintaining quality and giving good information. And it’s really tough. It requires work on the product side, and it requires work on the editorial side too, to really make sure that we’re delivering the best parts. So using the Echo bot as our example, we narrowed down those three categories of what is most important about a gadget. We had dozens of things in our articles about each gadget, and we tried to distill it down to the most universal, basic pieces, which I’m sure, depending on the product, is also really hard, because in an article you can pick and choose. But in a lot of these bot paradigms, you have to commit to a structure. It’s interesting. We’re really rethinking how a story can get told.

Eleni: You wrote about this project and you said you didn’t want to just skin a voice interface as a human conversation. How did you balance robot versus human elements when designing the way somebody would interact with your interface?

Sanette:​ That was really tough. I wanted to make sure that our bot felt true to The Verge and felt true to Vox Media, but it’s challenging when you’re working with another product; when you have to weigh another product’s personality on top of your information. So I found that for instance when we were testing by reading the content out loud, it was perceived very differently than when we played it through the Alexa voice. And so one of the things that we decided to do was make a really purposeful decision to not try to bake additional personality into our bot. And the reason for that is because we felt that Alexa already has a lot of personality just innate in her voice and innate in the product, and we didn’t want to create a strange combo of The Verge voice and then the Echo voice. And so we made a decision to not try to be cutesy or have a name or do anything that would invoke a human type of feeling. What we wanted to do was to make sure that people knew that the content is being written by humans and the content has an editorial voice and the content comes from The Verge, and let The Verge name speak enough for its brand and personality.

​So that was the decision that we made and I really think it would depend on the platform if we were to make that decision again. Like later that year, Google opened up their assistant API and they released the Google Home product and that has a very different personality from the Amazon Echo. So I think we would have handled that differently if we were in that instance. At the time it just felt like we wanted to get people to the content and we didn’t want to distract them with a less great human experience.

Eleni: ​Were there any major assumptions or beliefs that you had before doing this project that got disproven as you got deeper into it?

Sanette: Yeah. Well, one of the things I assumed from the beginning was that people who had interacted with an Echo before, or owned one, would have a base level of behaviours that we could expect from them. One being that I thought people would know how to stop the bot and how to make it stop talking. So we didn’t initially build in anything beyond what the Amazon Echo offers by default with all of their bot products. We didn’t add anything in addition. And we found in testing that a lot of folks, even ones who owned an Echo, didn’t know how to make it stop. And the command is really simple: it’s, “Alexa, stop.” But they didn’t all realize that was what you needed to say, and I think it’s because at the time it was still a super, super new product and a lot of people, even though they had one, had only had it for a few weeks or months, or hadn’t interacted with it a ton.

And so behaviours were really all over the place, and I contrast that to something like mobile behaviours, where at this point the product is so saturated in its market that we have a level of behaviours that we know. We know that people use their thumbs and we know how they scroll. We can’t assume that with a nascent technology. It was even more pronounced in folks who had never interacted with an Echo before. There was one person in testing who waited for the Echo to finish something, and then at the end, she said, “I really didn’t want to hear that.” And I was like, “Why didn’t you say anything?” And she said, “I just felt like I’d be rude. It felt weird to, like, stop Alexa.” It was so interesting to see how people ascribe different behaviours to a new technology.

Yeah. I remember we ended up building in a lot of additional stop commands that our bot would recognize, just to cease talking, in order to counter that.
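In today’s Alexa Skills Kit, the fix Sanette describes, teaching the bot extra ways to stop, would live in the skill’s interaction model, where the built-in AMAZON.StopIntent can be extended with additional sample utterances. A hedged sketch (the invocation name and utterances are invented for illustration, and the 2016-era intent-schema format was different):

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "gadget bot",
      "intents": [
        {
          "name": "AMAZON.StopIntent",
          "samples": [
            "stop talking",
            "be quiet",
            "that's enough",
            "I don't want to hear this"
          ]
        }
      ]
    }
  }
}
```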

Eleni:​ I feel like another assumption that floats around is that people are generally passive about what they read in the news or passive about their news consumption. Do you think these technologies, like bot technology or chat technology, which by their very nature force you to interact with the platform, do they change how people interact with news or how they consume news? How does that interactivity change how people consume news?

Sanette: Yeah. It’s interesting to hear you say that it’s more active, because I think the hope for these technologies is that they will eventually be more in the background. It will be something that can fit into someone’s life. It could be something they could ask, “How is figure skating scored?” from a trusted news source. Right now that requires someone getting to a screen and then finding that information, versus with some of these technologies you would be able to just ask that question. And I’m sure you could ask that question now and the Echo would answer it for you. But I think what we’re hoping is that the strength of our editorial voices and the strength of our content will be at the forefront of people’s minds, and they’ll turn to these technologies and ask those questions of us. And I hope that it will be more baked into someone’s lifestyle and be less of an effort than it feels like right now.

I think a lot of the examples that we’ve put out there so far do require a lot of effort to learn how to use them and how to engage with them. My vision for new storytelling forms is that they’ll feel more seamless and helpful to a person’s life.

Eleni:​ I know that you’re no longer on the innovation team. How does being a product designer on an innovation team differ from being a product designer on a more traditional product team? What are the differences in terms of how you’re thinking every day and what you’re doing every day?

Sanette: Just for some context, I’m now on our audience team, which is the team that’s responsible for all the audience-facing experiences, for the ways that people read, watch and engage with our content. And the team I’m on now is much more concerned with how we can enhance the way that people are interacting with our stories now, in the places that they are, as opposed to trying to think about where we are going to be two or five years from now. And so the design process is a lot different, because now I have a lot more precedents to draw from, and more data on what’s working and what’s not working. We can identify quicker wins. We’re not thinking as much about a total reworking of the landscape, but the same types of principles still hold true.

So the work is similar, whether it was on a voice interface, a chat interface, or what I do now. I’m still doing research. I’m still doing a lot of mocks. Still doing a lot of brainstorming. Still doing testing. The actions are the same, and that was actually really helpful when I was getting onto the innovation team: just applying the same tried-and-true design practices to new interfaces I hadn’t worked with before. Good design is still good design and it goes through a process, so fortunately that seems to me, at least so far, fairly consistent.

Eleni: ​There’s an idea that good products should be habit-forming. When you’re building a technology or designing for a technology that’s trying to introduce new habits and break old habits, how do you do that? What are the challenges associated with that and how do you design for that?

Sanette: I think one of the challenges we face is that we’re still at the point where the way we mark engagement and success is that we just want more attention and more interactions with all of our stories and platforms across the board. And unfortunately, people just have limited time, so we haven’t come up with a really solid way yet of thinking about how engagement on one platform negatively impacts another.

And we do that in some respects in smaller silos: we want to see more page views, but we don’t want to sacrifice time spent on the page or scroll depth. But I think really the metric that we need to be aiming for, and that we’re starting to, is usefulness. Are people getting the value that they need out of our stories, regardless of where they’re finding them? And for some people, that might be only on our website. Some people might only be on our newsletters. It might be through a new platform like voice. And for most, it’s probably going to be some sort of combination. So I think it’s challenging, because we don’t want to necessarily move people from one habit to another habit, or one platform to another platform. I think we need to be better about thinking holistically: how does this all fit into somebody’s life and how can we make those interaction points as seamless as possible.

And then practically speaking, one of the ways that we try to do that is by cross-linking content within our platforms. So on our voice bot, we said, “Visit theverge.com/circuitbreaker if you want more information.” We tried to link back to our website. On Vox’s website right now, we just launched a new podcast; we have a prominent display of it and you can listen to it on the home page as you’re doing other things. So we try to cross-reference, just to build awareness. But ideally we want people to feel like the avenues through which they’re getting information are useful and work for them. Those are the habits that we want to focus on building.

Eleni:​ How did this project wrap up? Where is it now?

Sanette: So we launched an in-house prototype that was really fun to play with and taught us a ton about what we can do in the future. It didn’t launch publicly, but we have a lot of the ideas tucked away on the back burner, and a lot of documentation of what we want to do in the future. In the meantime, I did write a blog post with a lot of our takeaways and findings. And I’m curious to see what other news companies come up with.

Eleni:​ How did your time in this role make you a better product designer?

Sanette: I think it’s made me become a lot more cognizant of what behaviours I’m assuming of people, and also of the role of visuals. A lot of our projects were chat-based and voice-based; they were built around words, which feels very natural to me because my background is in writing and reporting. But I realized the reliance that we have on visual indicators to tell people where they are and what they can do next and to orient them, so I’m really cognizant about using those intentionally and not excessively. I think that’s helped me become a much more careful designer.

Eleni:​ Did it change how you see design itself?

Sanette: It made me feel like design really does have the potential to make a lot of significant change in people’s lives. I know that sounds like a huge statement, but I do think it’s very exciting to work on new interfaces. I find it very fulfilling and it’s something I want to do again in the future. And I think it’s because it breaks down a lot of paradigms about how you think design works. I felt like we were setting a new standard and a new way of how it could work, or at least of thinking about it. It made me excited to be in this field and able to work on projects like this.

Eleni:​ Yeah. I mean we talked already a little bit about how much news has changed and I think journalism is arguably more impacted by product people than journalists in terms of how it’s going to evolve. What role do you think product design is going to play in continuing to change how humans are consuming information and stories?

Sanette:​ As product design becomes more evolved as a field within media companies and as we start to do more research on our audience members and understand them better, I think we’ll be able to create better products that will better serve them in their lives and better connect them with the information that they’re interested in that our reporters are putting together. I think for a long time, product has been more of a, I don’t want to say, blocker, but it’s been something that reporters have had to work around in order to tell stories or try to use what they had available to them to do the best that they could. I think on the product team at Vox, we are much better poised to help streamline the work that’s time consuming or not as helpful for them and help provide new routes where they can better express their storytelling.

Eleni: ​Sanette, thanks for chatting with us. Where can people find you if they want to learn more about you?

Sanette: ​My website is sanettesloan.com, and my Twitter handle is @SSKTanaka.

Eleni: Thanks so much for doing this. We really appreciate it.

Sanette:​ Thank you. You too.

Subscribe to Product to Product on iTunes (here) and Google Play (here), or get the latest episodes delivered to your inbox by subscribing here.