I kind of hate Twitter

So conservative writer Matt K. Lewis took to the pages of The Week this week to explain how he hates Twitter:

Twitter has become like high school, where the mean kids say something hurtful to boost their self-esteem and to see if others will laugh and join in. Aside from trolling for victims after some tragedy, Twitter isn’t used for reporting much anymore. But it is used for snark.

Which earned him a chorus of guffaws (and, yes, snark), like this response from Choire Sicha at The Awl:

Whenever someone writes one of these screeds, they have to ignore that Twitter is entirely self-selecting. You chose who to follow. You chose to behave like a jerk, or a needy child, or a boor. Twitter didn’t make you an ass.

Now, I’ve never met Mr. Lewis, and since he works at the Daily Caller (ugh) I would have to imagine we wouldn’t agree about much if you put us in a room together. But on this point, I think he is right and Mr. Sicha is wrong.

Which is why I sort of hate Twitter, too.

To establish my bona fides, I’ve been using Twitter on and off since 2008, as myself and as comic personas Fake John McCain (during the ’08 election) and Red, White and News. What I experienced there depressed me sufficiently that I eventually walked away from the service completely and stayed away for two years. Last year I got tired of people asking me why I wasn’t on Twitter, so I sighed and got back on hoping that something significant had changed. It hadn’t.

Here’s my complaint: Mr. Sicha’s statement that “Twitter [doesn’t] make you an ass” is just wrong. Twitter does make you an ass. In fact, its design makes it difficult for you to be anything else.

The medium is the message

To understand what I’m talking about, allow me to digress a bit into the thinking of one of the few people whose work changed my life: Marshall McLuhan.

Insofar as he’s generally remembered today, McLuhan is remembered as a gadfly, a provocateur with some half-baked ideas redeemed by a gift for phrasemaking. But he deserves to be engaged with more seriously than that. When I read his 1964 book Understanding Media: The Extensions of Man as a young man, it turned on a light bulb in my head that has never turned off since. It helped me see the world with new eyes.

To understand how McLuhan is relevant to Twitter, you need to delve into his most famous adage: “the medium is the message.” Lots of people know this quote, but not many of those seriously understand what McLuhan was getting at with it. What he meant was that the medium you use to send a message affects the way that message will be received by the recipient. There’s no such thing as a neutral medium — the way you choose to communicate a message changes the meaning of the message you communicate.

Consider, for example, a simple message from one person to another: “I love you.” Think of all the different channels over which that message could be transmitted from person A to person B, and how different it would feel to person B to receive it in each. Whispered into the ear, “I love you” can feel erotic. Stated over a candlelit dinner, it can feel romantic. Written on a piece of paper, it can feel formal. Read out on television, it can feel distant.

The words never change, but the message person B receives does. The medium shapes the message.

This means that the forms we choose to put our communications in are significant. They matter. Different media pull the message in different directions; each has its own particular english it imparts upon the ball. This is why an engrossing novel, picked up and used as a film script without any modification, makes for a terrible movie — idioms that work in print don’t work on the big screen, and vice versa. It takes the services of a talented screenwriter to translate the printed work into a filmed work of similar quality, in the same way it takes a talented translator to take a classic work of Russian literature and produce an English version of the same quality.

What’s fascinating about the Internet is that it’s one of very few communications channels over which more than one medium travels. Television is, well, television, but the Internet is a cornucopia of different media: text, audio, video; Web pages, e-mails, instant messages, Tweets. There is no medium called “the Internet”; the Internet is just the pipe through which lots of different media — increasingly, all media — reach us.

To understand McLuhan’s relevance to the digital age, you have to look at each online medium individually — words put on a Web page will be received and processed very differently than the same words spoken in a YouTube video. And the way you take that look is by examining the unique features that define the medium, that make it what it is.

So let’s take a look at Twitter as a medium.

Twitter is designed to embarrass you

Here are a few salient things that make Twitter Twitter:

Short messages. Twitter messages are limited to a maximum of 140 characters.

Low publishing barrier. Twitter is deliberately designed to make sending messages as easy as humanly possible — you don’t need to provide any metadata about the message (title, subject line, recipient list, etc.) as you do with other online publishing media such as Web pages, blogs or e-mail. You just type a message and hit “send.”

Public. Tweets are, by default, readable by anybody. You have to follow someone to get their Tweets delivered right to you (see “Push delivery” below), but even if you don’t follow a person you can see their Tweets just by viewing their profile.

Push delivery. You don’t have to go to a friend’s Twitter page to see what they’re saying; their messages, along with those of others you follow, come to you. This can be either in a feed or (on mobile devices) in the form of notifications. Similarly, messages others write about you come to you as well (as long as they refer to you by your @-username).

Near-real-time. Absent technical problems with the Twitter service, messages posted by a user are seen by that user’s followers effectively instantaneously.

Semi-ephemeral. While Tweets are public by default, and every public Tweet is archived, Twitter does not make those archives easy to access or search. To the user, they seem to just scroll away into oblivion as the feed updates.

Scorekeeping. Twitter provides several mechanisms by which users can “keep score” of their status relative to other users, the most obvious being follower count, which is public and prominently displayed when viewing information about a user.

Given Twitter’s success, it’s hard to argue with any of these choices from a business perspective. But from a McLuhanite perspective, in terms of designing a medium for discussion, these choices are disastrous. They all drive the user in the same direction — away from nuance and towards sharp messages that drive up the user’s “score.”

Let’s discuss exactly how.

Short messages encourage the user to strip out qualifiers. Qualifiers are words we insert into statements to either dial up or dial down their impact. They serve an important social function; they allow us to say something negative about a person while simultaneously indicating that the person is not all bad, sparing their ego and limiting how harsh the critique seems to others. “Ted is a little bit of a douchebag” stings Ted less than “Ted is a douchebag” does. But when you only have 140 characters to work with, modifiers like “a bit of” are the first things to go.

Short messages encourage the user to omit detail. The clarity of an argument can be improved by citing sources, identifying limitations, and otherwise fleshing it out. But doing so on Twitter is tedious and unwieldy; the 140-character limit means that anything more than a sentence or two has to be split across multiple Tweets, and there’s no guarantee that someone who sees Tweet 1 of 3 will see Tweets 2 and 3. One alternative is to link out to an external resource (like a Web page) for additional information, but since URLs count against your 140-character allocation too, the medium pushes back against even this limited level of additional detail. If you’re one character over 140, are you going to go back and rewrite your message, or are you just going to drop the URL that points to more information?

Low publishing barrier encourages users to publish without thinking. Twitter clearly wants you to use it casually, without a lot of “should I really post this online for the world to read?” deliberation, and from a usability perspective that’s laudable. But the flip side is that casual use makes it easy to say something dumb or poorly thought through that offends people. We’ve all had moments when we blurted out something that made us sound like an idiot, but in the past there were always some hurdles to overcome to do that online; you can embarrass yourself via e-mail, for instance, but to do so you have to compose not just your regrettable statement but a subject line and a list of recipients as well. Twitter removes those hurdles, so now people can embarrass themselves online as easily as they do off. But when you embarrass yourself offline, the only people who see it are the ones standing around you; when you embarrass yourself online, the world can see it, making the potential reputational stakes much higher.

Public viewability encourages the user towards posturing rather than candor. When people know their messages are open to public viewing, they frequently self-edit, stripping out information that they would be less hesitant to share in a private communication with one or more known others. This is not unique to Twitter; think of how much less likely you are to see a Facebook status update about a friend gaining five pounds than one about the same friend losing five pounds. While individually this is understandable, when the user is immersed in a community where this is the default behavior, it can lead to depression from the feeling it creates that “everyone’s life is perfect except mine.”

Push delivery makes it hard to ignore what people are saying about you. If someone’s talking about you on the Web, you have to go into Google and search to find that out. If someone’s talking about you on Twitter, though, it’s very likely right in your face. This can be flattering if people are saying nice things, but if they’re not, it can feel embarrassing and/or painful; and people who are embarrassed or wounded tend to do stupid things, like lashing back at the person who did the wounding, that they regret later when the pain has worn off.

Near-real-time delivery creates negative feedback loops within communities of users. Because information spreads so quickly through the network, assertions frequently circulate faster than they can be fact-checked. Incorrect statements (“Just heard Celebrity X died!!! #omg”) usually get corrected eventually (“Whoops, Celebrity X isn’t dead after all!!! #whew”), but with near-real-time interaction, by the time the correction comes the incorrect statement can have circulated far more widely than the correction ever will.

Semi-ephemeral archiving encourages the user to see Tweets as something they are not. Because of the ease of publishing a Tweet, its constrained size, and the way Tweets rapidly scroll off your feed, it’s easy to slip into the mindset that Tweets are less permanent or less public than, say, a blog post. But because the archives are all public and Google-able, a stupid Tweet can live forever, just like a stupid blog post. Combined with the “just say it” low publishing barrier, this can set people up for embarrassment in ways they don’t fully understand until it happens to them.

Scorekeeping mechanisms encourage the user to behave in ways that drive up their score. When the mark of a high-status Twitter user versus a low-status one is the number of followers and re-tweets they generate, users will gravitate towards creating messages that get re-tweeted and attract followers. And as in all communications, the best way to make a big impression on an audience isn’t to make a considered, nuanced argument; it’s to walk up to the person you’re arguing with and kick him in the nuts. So users gravitate towards snark, outrage, and other sharp forms of expression that grab attention, because that’s the behavior the system incentivizes.

Taken together, all of these factors create an environment where even reasonable, thoughtful people behave like douchebags. They don’t do so because they are douchebags, necessarily. They do so because Twitter as a medium is optimized for douchebaggery. Its design creates an array of pitfalls that can lead you to come off like a douchebag, even if you have no intention to.

Which is why I kind of hate it.

Do I expect this rant to change anything? Not really. Those people who like Twitter seem to really like it, as incomprehensible as that is to me. But then there are people who enjoy going to dive bars and getting in fights on Saturday night, too. At least if you walk into a new bar and someone comes up and punches you, though, nobody comes up later to tell you sanctimoniously that you used the bar the wrong way.