From dark patterns that trick unsuspecting users to mass social experiments conducted by internet giants, we take a look at the murky, increasingly complicated rights and wrongs of design – and the moral questions you should ask yourself as a designer.

When we think of bad design, usually what comes to mind are mistakes, bodged jobs and confusion – not effective, efficient practice in the form of either tricking users (aka ‘dark patterns’) or conducting arguably harmful experiments on them without their consent.

But at what point do the normal, persuasive design tools used to make money become dark patterns? And when do the routine experiments, user research and A/B tests carried out every day become wrong? And what should you, as a designer, do about it?

Dark patterns

Whether it’s hidden costs revealing themselves on the final page of checkout without explanation or a mind-boggling double-negative opt-out for a newsletter subscription, internet users will be painfully familiar with dark patterns.

It’s in the name: ‘user experience’ (or UX) design should be tailored towards the user - and good UX design enhances accessibility, as well as user satisfaction and pleasure. But sometimes even the most intuitive, pleasurable-to-use interfaces just aren’t enough to persuade users to do what you want them to do when they don’t want to do it (which is usually paying money). And that’s where desperate, unethical dark patterns come in.

Dark patterns work so well because they are a clever inversion of good, ethical design. In 1995 - way before the concept of dark patterns even existed - Jakob Nielsen helpfully concocted 10 principles for interaction design. Do the opposite of any of these and you’re pretty likely to be committing a dark pattern.

For example, as that wonderful hub UX Booth points out, Microsoft Word helpfully highlights ‘save’ when asking you if you want to save changes to a document - a clever design trick known as a ‘smart default’, which a busy user who hits the Enter key before reading might be very thankful for. This is covered in Nielsen’s principles under ‘error prevention’.



Image: Microsoft Word's helpful trick to avoid hours of tearful work.

Likewise, when a user enters a new credit card on Amazon, ‘yes’ is highlighted when they are asked if they would like to make it their default card. Unlike Microsoft Word’s use of the interaction, this is neutral: it doesn’t hurt users, but it doesn’t necessarily help them.



Image: Amazon's neutral credit card default.

UX Booth then cites the newsletter sign-up for a Boston-based rock gym, which uses the same technique to highlight ‘yes’ when asking whether a user wants to receive a newsletter. This is a dark pattern: it prioritises the business’s needs over the user’s, and is the exact opposite of what Nielsen recommended in his principles.



Image: the admittedly less world-famous example of a Boston gym using a dark pattern - though the point still stands.
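The three examples above share one mechanic: a pre-selected choice waiting for a hasty click. A minimal sketch (with hypothetical names - none of these companies' actual code) shows why the same ‘smart default’ can be helpful or dark: the code is identical, and only the beneficiary of the default changes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prompt:
    """A dialog or form with one option pre-selected for a hasty Enter key."""
    question: str
    options: list
    default: str  # the pre-selected option - this is where the ethics live

    def answer(self, choice: Optional[str] = None) -> str:
        # A busy user who confirms without reading gets the default.
        return choice if choice is not None else self.default

# Error prevention (Nielsen): the default protects the user's work.
save_dialog = Prompt("Save changes to document?", ["save", "discard"], default="save")

# Dark pattern: the default serves the business, not the user.
newsletter = Prompt("Receive our newsletter?", ["yes", "no"], default="yes")

print(save_dialog.answer())  # "save" - hours of work kept
print(newsletter.answer())   # "yes" - a subscription the user never chose
```

The design question, in other words, is rarely whether to have a default at all, but who it quietly serves when the user isn’t paying attention.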

So what do we do about all this? Enter Dark Patterns, a website set up in 2010 by Harry Brignull and Mark Miquel to name and shame “user interfaces that are designed to trick people” – and to educate users and designers on what dark patterns are and how to spot them. Dark Patterns’ extensive library shows that the internet is teeming with examples of deception.

A particularly common pattern is ‘bait and switch’, where a “user sets out to do one thing, but a different, undesirable thing happens”. A familiar example is WordPress pre-checking the newsletter subscription box when you leave a comment, clogging your undoubtedly already groaning inbox (thanks to all the other dark patterns out there - sigh).



Image: WordPress's pre-checked newsletter subscription.

Other patterns include ‘misdirection’, which has been used by companies including Skype and Ryanair, where “the attention of the user is focused on one thing in order to distract its attention from another”. For example, Ryanair kindly offer travel insurance – but bury the free opt-out option within an alphabetised list of countries, where no one would ever think to look. Though I suppose it’s possible I’ve overlooked the great nation of Don’t Insure Me.



Image: Ryanair's tricky travel insurance opt-out.

Trickery this clearly is, but do Ryanair’s actions really come as a surprise? As anyone who has travelled with the blindingly yellow airline will know, it is the kind of airline about which – The Telegraph claims – “almost anyone” has a horror story.

In other words, Ryanair is budget and relies on add-on costs to continue as it is – to which some customers fall prey, while the better travelled (whether across the internet or the world) can dodge the costs and wander into the affordable sunset. Of course, that’s just not fair – internet wisdom should not dictate how much you pay, and UX design should cater to absolutely everyone.

Some awkward questions must be asked. Unless our society suddenly, radically changes, money will remain a driver of business and, so, design decisions, which might turn shifty as a result. Thankfully, websites like Dark Patterns do their job in exposing dodgy design to designers and users alike.

In the name of science?

Dark patterns tend to be morally clear-cut in comparison to mass user research and experimentation: dark patterns are hidden to trick you, whereas experimentation is often hidden by necessity, and users are oblivious beyond the check-box consent to be studied that they tick when they sign up. For many, this one-time, often unnoticed consent is hardly enough now that the nature of these experiments has been slowly revealed.

It is important to get straight that experiments happen all the time - in fact, if you don’t want to be experimented on, don’t touch the internet. With the inescapable shift of social interaction to the web, A/B testing is easy for companies that might not have the kind of ethical training encouraged in academia.
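Just how easy becomes clear when you sketch the core of an A/B test. One common approach - shown here with illustrative names, not any particular company’s API - is to hash each user ID into a stable bucket, show each bucket a different variant, and compare behaviour later:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list) -> str:
    """Deterministically map a user to a variant: same user and
    experiment always yield the same bucket, with no opt-in step."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Every visitor is silently enrolled the moment they load the page.
for uid in ["alice", "bob", "carol"]:
    print(uid, assign_variant(uid, "feed_ranking_v2", ["control", "treatment"]))
```

A handful of lines like these, plus some logging, is all it takes to run an experiment on every visitor - which is precisely why the ethics, not the engineering, are the hard part.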

When ex-Facebook data scientist and software engineer Andrew Ledvina told a Wall Street Journal reporter that there is probably no formal review board for running tests at Facebook - and that there certainly wasn’t one in 2012, during his tenure - he probably wasn’t expecting the resulting article to focus largely on that comment out of the 45-minute interview.

Andrew took to the web to defend Facebook in a post dryly entitled ’10 ways Facebook is actually the devil’, which triggered a further tumble of outraged headlines after he made clear he just doesn’t see what the big deal is. Every Facebook user has been part of an experiment at some point, he wrote, “whether that is seeing different size ad copy, or different marketing messages, or different call to action buttons, or having their feeds generated by different ranking algorithms”.

Facebook’s data scientists, Andrew continues, are “deeply passionate about making the lives of people using Facebook better, but with the pragmatic understanding that sometimes you need to hurt the experience for a small number of users to help make things better for 1+ billion others.”

In the very same, stressful week (for the Facebook PR team, that is) in which the Wall Street Journal article was published, the social media giant’s notorious mood manipulation experiment was revealed – where, for one week in January 2012, almost 700,000 users’ timelines were skewed to show either sad or happy content. The manipulated users were more likely to reflect these moods in their own posts.

Though Facebook only tested around 0.07% of its users in that period (the same year saw Facebook’s user base grow to over one billion), the shockwaves following the revelations left many more than that tiny fraction feeling exploited, casting doubt over Andrew’s assertion that Facebook carefully and pragmatically hurts a few to benefit the rest, and can justify doing so.

Not that Facebook would have wanted anyone to find out. If secrecy is the only thing stopping your users from being hurt (or from knowing they’ve been hurt, in the case of the 700,000 tested), can a practice really be called ethical?

Though the results of Facebook’s study were actually not that significant – and there is no definite link between key words in users' posts during the experiment and their moods – Facebook didn’t know the results would be fairly harmless when it began the study. In fact, though the study was published in a scientific journal, there are concerns over whether it followed standard academic ethics rules - and questions over whether it should have had to.

This kind of experimentation just isn’t necessary. Facebook, it seems, has learnt its lesson, as shown by its Compassion Research Day. The social media giant no longer tries to make people feel bad or conduct underhand experiments. Instead, it openly prompts people for feedback, such as through the emoticon faces on status updates.

Other controversial experiments include OkCupid’s go at improving its dating algorithms, in which it lied to couples about their suitability – for example, telling ‘bad’ matches that they were “exceptionally good for each other”. Like the ex-Facebook data scientist, the company responded to the uproar with a feisty blog post, entitled ‘We Experiment On Human Beings!’ – inevitably leading to a backlash, including an article from the Washington Post.

But, arguably, there were key differences between OkCupid’s and Facebook’s PR disasters – namely, the online matchmaker didn’t deliberately make people sad. Its experiment was an extreme-sounding version of what we sign up to when using large, social websites: the company using us to understand our behaviour - mainly, how to get more clicks and more time spent on the site.

OkCupid seems to be simply doing its job, which is improving user experience – whereas it’s hard to see how Facebook’s experiment could be seen as user-friendly. And guess what OkCupid found? That lying to inflate a couple’s match score sometimes filled the air with love: nearly one in five mismatched couples – told they were a 90% match when their real score was far lower – engaged in what OkCupid considers a meaningful ‘conversation’ by exchanging more than three messages.

And the findings went further than that: “OkCupid definitely works, but that’s not the whole story,” said OkCupid’s Christian Rudder in the company’s blog post. “And if you have to choose only one or the other, the mere myth of compatibility works just as well as the truth.”

So what do you do as a designer?

Unsurprisingly, there is no shortage of views on what we should do about the darker side of design - from UX strategist Gary Bunker, who proposes a code of conduct that would stop all UX professionals from working with dark patterns, to Chris Nodder, author of the book Evil by Design, who believes it’s okay to deceive people if it’s in their best interests. Some dark patterns are now actually illegal – such as ‘sneak into basket’, which drops extra items into your order without asking.

The ethics of big data is even more of a moral minefield, exploding with bonkers questions such as: should Google tell you if you have cancer? Just recently, Gizmodo reported that Facebook employees have openly discussed tilting the American election against Donald Trump. And conducting experiments on people’s everyday lives will only get easier with the rise of the Internet of Things and remarkably knowledgeable AI.

In a recent debate on design and ethics hosted by InVision, Louise Downe, Head of Design for the UK Government, offered some advice that should stop anyone going wrong in either user research or dark patterns: “There is no decision too small to push back on if you feel kind of icky about something and it doesn’t feel like the right thing to do. I’ve learnt to check what I’m doing at the tiniest level, not just to look at the broader picture of ethics and what I do”.

Bending the truth and manipulation are part of everyday life, relationships and business - and design is no exception. But, whatever the pressures, the responsibility is yours as a designer to be comfortable with the ethics of your work.