2016 raised the stakes, didn’t it? Just a year ago, it seemed like a big deal that iOS was always bugging us about our iCloud storage. Now, designers are asking themselves questions like, “Wait, did my UI just give rise to a megalomaniac?”


[Photo: iDriss Fettoul via Unsplash]

We talked to designers from across the industry to find out what UX challenges await us in the coming year. From fighting hate crimes in the real world to bringing transparency to the algorithms that govern the virtual one, one theme emerged from the group: UX is no longer just about user experience; it’s about social justice.

Fixing Fake News

Fake news is an epidemic. Up to 75% of people believe the bogus headlines they read on sites like Facebook–a problem so great that it may have swayed the results of the 2016 election. Truthfully, fake news is a broad problem that lives at the core of how the internet works. It’s about more than Facebook headlines or Google search results. How can we curb the spread of misinformation when misinformation can spread worldwide, instantly?

“Truth in digital is an overall concern. Fake news is the tip of the spear,” says Mark Rolston, founder of Argodesign. “It’s certainly pressing and has potential for catastrophic damage to our ability to govern if we don’t take it seriously. But I think the issue also goes much deeper. Twitter essentially lets anyone stand on the same mountaintop and shout anything they want to the world. There’s no more filter. No more ‘grading’ of those doing the shouting. Nutjob or statesman? You decide.”

As for Facebook, the company has announced the first of many updates to curb the spread of fake news, but it’s not enough–most fake news will still be shareable, and much of it will not be flagged as fake. This year, we need to see every platform on the internet assess its culpability in this problem. Because Facebook isn’t writing the fake news–but it has certainly handed its purveyors a microphone.


It’s a problem even when things appear to be working correctly. But more importantly, a lack of transparency also means we have no way to fix things when they go wrong. In fact, at the code level, experts have told me that their own software will make decisions that they don’t understand. We need UX fixes for all of this–especially because machine learning is allowing these algorithms to teach themselves their own biases.

“Since the algorithms ‘learn’ from existing data, and aren’t ‘smart’ in the way humans are smart, mistakes and/or human biases can lead to mis-predicting something based on a person’s gender, race, religion, history, acquaintances, etc.,” says Mike Stebbins, senior mechanical designer at Teague. “Take the Google photo app accidentally classifying African-Americans as gorillas, or the Chicago Police Department warning people who were on their ‘heat list’ that they were being watched; the list was generated by looking at a person’s acquaintances, those people’s arrest histories, and whether those people had been shot in the past.”

Pope recently imagined how algorithms could break down their decision making into plain language. It’s a promising proposal. But in truth, getting transparency from these algorithms may require federal intervention. We may need laws mandating that we be able to see how our algorithms work, not unlike being able to see the terms and conditions on a credit card.

[Photo: Argodesign]

Chatbots That Have Something To Say, Both To Us And Each Other

We’re surrounded by AI assistants from Google, Amazon, Apple, and Microsoft. Yet while these personalities are quick with a joke, they’re not good for much else. Their creators have focused on being conversational for the sake of being conversational, a cloying reincarnation of skeuomorphism. Do I really need to have a five-minute fake texting conversation to get three news headlines that I could have skimmed in seconds?
The very tone of these assistants is a UX problem. Rolston suggests that “we learn to let the computer sound like what it is–a computer with limited context and personality.” We should tone down the forced social graces that we’ve ascribed to Siri and Cortana, which verbally tap dance with wit for our enjoyment, and just let them be boring, old, helpful machines instead.

On the technical side of things, these AIs aren’t ready to overcome the limits of their own fragmentation. This is a critical aspect of user experience–the various AI systems that populate our homes need to be able to acknowledge one another as interconnected systems at the software level, with the same ease that a few friends might around a dinner table. Or at least the same ease of an iPhone that can run Gmail.

“There will not be one ecosystem that rules them all,” says Charles Fulford, executive creative director at Elephant. “As AI drives Google, Amazon, and Apple . . . these will need to be able to speak with one another–not to mention the bizillion other AI devices and assistants. Creating a protocol to link all these systems together will be a big challenge.”


[Photo: Jewel Samad/AFP/Getty Images]

Bursting The Social Media Bubble

Fake news is only part of the problem with social media. We’re part of a society splitting in two, and social media is failing to bridge that divide. It’s letting us live in bubbles–or if you want another analogy, it’s letting us hang at a slow-burn party of like-minded people.

“The social media echo chamber is problematic because it typically aligns with what you already think because of the algorithms used by social networks,” says Matt McElvogue, associate creative director at Teague. “This leads to a lack of exposure that (maybe) existed more when we read/watched things that weren’t served to us by AI that aims to make us happy.”

In fact, our AIs aren’t always trained to make us happy–Facebook has admitted to experimentally showing us things specifically to make us upset. But as one Facebook designer put it to me years ago when I asked why there was no dislike button, “It’s because I’m trying to connect you with your family!” Even that very racist uncle.

Truly, neither painting an illusion of like-mindedness nor presenting users with a 24/7 partisan fight is the right way to burst the social media bubble of 2016. Which is exactly why it’s on our list of biggest UX challenges for 2017.

[Pattern: MaleWitch via Shutterstock. Photo: Fredrik Skold/Getty Images]

Ensuring Our Interfaces Work For Us, Not Against Us

They’re called “dark patterns.” And they’re not so much bad UI as they are evil UI.
Whether it’s an ad that pops up to shame you into subscribing to a newsletter when you leave a site (looking at you, FxW); or the way Airbnb displays how many other people are looking at a property to get your blood boiling, even when it doesn’t matter; or the way Uber has buried the ability to type in your pickup address so that it might move you half a block to optimize its own driver routes; or the way Amazon uses Dash buttons to sneak by deals that aren’t always the cheapest–in 2016, our user interfaces routinely conspired against us, often under the guise of user-friendly design.

The problem with dark patterns is that they’re built to protect corporate interests and improve bottom lines–and the best ones might work without the user ever realizing it. (Plus, Uber can make a superb argument, I’m sure, about how it can save everyone time if you’re willing to walk a few steps out of your way. In some cases, it’s feasible that dark UX might actually benefit the consumer.) But every UX designer should treat it as an ethical imperative to empower the user wherever possible, to alleviate the burden of unnecessary stress, and to offer them the best, clearest information possible. Ultimately, the customer is always right–and that needs to be true at every touchpoint of an experience.