
Rebecca Rosen, "The QWERTY Effect: The Keyboards Are Changing Our Language!", The Atlantic:

It's long been thought that how a word sounds — its very phonemes — can be related in some ways to what that word means. But language is no longer solely oral. Much of our word production happens not in our throats and mouths but on our keyboards. Could that process shape a word's meaning as well?

That's the contention of an intriguing new paper by linguists Kyle Jasmin and Daniel Casasanto. They argue that because of the QWERTY keyboard's asymmetrical shape (more letters on the left than the right), words dominated by right-side letters "acquire more positive valences" — that is to say, they become more likable. Their argument is that because it's easier for your fingers to find the correct letters for typing right-side dominated words, the words subtly gain favor in your mind.

There's a lot of media uptake for this work: Rachel Zimmerman, "Typing and the meaning of words", Common Health; "QWERTY Keyboard Leads to Feelings about Words", Scientific American; Rob Waugh, "Why just typing 'LOL' makes you happy: People like words made of letters from the right-hand side of the QWERTY keyboard", Daily Mail; Alasdair Williams, "The 'QWERTY Effect' is changing what words mean to us", io9; "The right type of words", e! Science News; Dave Mosher "The QWERTY Effect: How Typing May Shape the Meaning of Words", Wired News; Rebecca Rosen "The QWERTY Effect: The Keyboards Are Changing Our Language", The Atlantic, etc.

From the paper — Kyle Jasmin and Daniel Casasanto, "The QWERTY Effect: How Typing Shapes the Meaning of Words", Psychonomic Bulletin & Review, 2012:

We analyzed valence-normed words from three corpora: the Affective Norms for English Words corpus (ANEW; Bradley & Lang, 1999), and two translation equivalents of ANEW in Spanish (SPANEW; Redondo, Fraga, Padrón, & Comesaña, 2007) and Dutch (DANEW). ANEW consists of 1,034 words. Participants used a pencil to rate valence on a 9-point scale composed of five self-assessment manikins (SAMs), which ranged from a smiling figure at the positive end of the scale to a frowning figure at the negative end. Participants were told to mark one of the manikins or a space between two adjacent manikins (see Bradley & Lang, 1999). In SPANEW, translations of the ANEW words were rated by native Spanish speakers using a similar procedure (see Redondo et al., 2007). […]

For each word in the corpus, we computed the difference of the number of left-side letters (q, w, e, r, t, a, s, d, f, g, z, x, c, y, b) and right-side letters (y, u, i, o, p, h, j, k, l, n, m), a measure we call the right-side advantage [RSA = (# right-side letters) − (# left-side letters)]. Overall, there was a significant positive relationship between RSA and valence in ANEW, SPANEW, and DANEW combined, according to a linear regression with items (ANEW words and their translation equivalents) as a repeated random factor using SPSS’s GLM function. Words with more right-side letters were rated to be more positive, on average, than words with more left-side letters. We call this relationship the QWERTY effect.
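The RSA computation they describe is easy to sketch. A minimal version is below — note that the letter sets here use the corrected QWERTY halves ('v' on the left, 'y' on the right only), not the lists as printed in the paper:

```python
# Right-side advantage (RSA) for a word, as defined in Jasmin & Casasanto:
#   RSA = (# right-side letters) - (# left-side letters)
# Letter sets use the corrected QWERTY halves ('v' left, 'y' right only).
LEFT = set("qwertasdfgzxcvb")
RIGHT = set("yuiophjklnm")

def rsa(word: str) -> int:
    """Count right-side letters minus left-side letters."""
    w = word.lower()
    return sum(c in RIGHT for c in w) - sum(c in LEFT for c in w)

print(rsa("lol"))  # all three letters are right-side: 3
print(rsa("sad"))  # all three letters are left-side: -3
```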

I don't have access to SPANEW or DANEW, but ANEW is given as an appendix to Bradley & Lang 1999, so I extracted the list from the .pdf and did my own linear regression for the English data alone:
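The regression itself is a garden-variety ordinary least squares fit of valence on RSA. A minimal sketch, with toy placeholder values standing in for the 1,034 extracted ANEW word scores (these numbers are illustrative, not the real norms):

```python
# OLS regression of valence on RSA, per word.
# The arrays below are placeholders, not the actual ANEW data.
import numpy as np

rsa = np.array([3, -3, 1, 0, -2, 2], dtype=float)   # right-side advantage per word
valence = np.array([6.1, 4.8, 5.5, 5.0, 4.9, 5.7])  # 1-9 SAM valence ratings

slope, intercept = np.polyfit(rsa, valence, 1)      # degree-1 fit
pred = intercept + slope * rsa
ss_res = np.sum((valence - pred) ** 2)
ss_tot = np.sum((valence - valence.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                            # proportion of variance explained
print(f"slope={slope:.3f}, r^2={r2:.4f}")
```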

This is pretty similar to the picture given in an earlier paper by the same authors, except that they show an amalgam of all three datasets ("The QWERTY Effect: How stereo-typing shapes the mental lexicon", CogSci 2011):

[And I note that there's a typo in both their CogSci 2011 paper and in the 2012 Psychonomic Bulletin & Review paper, namely that 'y' is substituted for 'v' in their list of left-hand letters, so that 'v' doesn't occur in the list for either side, while 'y' occurs on both sides. Here's a screenshot of the passage in the 2012 paper:

It's not clear whether this mistake is replicated in their code, or only in their explanation of it…]

Whether in the English data alone, or in the combined data for all languages, we can see that the effect is not a very strong one. And in my replication with the 1,034 ANEW words alone, it's not statistically significant. I'm not going to cite the (non-significant) p value, since I don't think it means much, but I'll mention that the adjusted multiple r2 (equal to the proportion of variation accounted for) is 0.0015.
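For readers who want the arithmetic: adjusted r2 relates to raw r2 by a simple degrees-of-freedom correction. With one predictor and 1,034 words, the two are nearly identical (the raw value plugged in below is assumed for illustration, not taken from my regression output):

```python
# Adjusted R^2 for a one-predictor regression:
#   adj = 1 - (1 - R^2) * (n - 1) / (n - k - 1)
# n = 1034 ANEW words, k = 1 predictor.
# r2_raw below is an assumed value for illustration only.
n, k = 1034, 1
r2_raw = 0.0025
adj = 1 - (1 - r2_raw) * (n - 1) / (n - k - 1)
print(f"adjusted R^2 = {adj:.4f}")
```

With n this large, the correction is tiny — the point is simply that either way, roughly 0.1–0.2% of the variance in valence is accounted for.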

As I said, I don't have access to the other two databases, but I did manage to get another comparable body of data, for which the results were similar:

It's comforting to see apparent confirmation — at least of the direction of the effect — in an independently-collected data set, with a similar adjusted multiple r2 of 0.0013. At least, it's comforting until we recognize that the source of this data was a random number generator. I created three sets of random data by pairing the "right-side advantage" values of the ANEW words with random re-samplings (with replacement) from the set of ANEW valence estimates. Of the three, this was the one that was most similar to the pattern in the original data. Of the other two, one showed a similar effect with a negative rather than positive slope, while the other had a flat regression line. (The results were comparable when I generated the random values by drawing in a different way from a distribution with a shape similar to the overall histogram of ANEW valence estimates.)
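The resampling procedure described above is straightforward to reproduce. A minimal sketch, with stand-in arrays in place of the real ANEW values: keep each word's RSA fixed, draw a fake valence for it by sampling with replacement from the pool of real valence estimates, and re-fit the regression:

```python
# Sanity check via resampling: pair real RSA values with valences
# re-sampled (with replacement) from the real valence pool, then re-fit.
# The rsa and valence arrays here are stand-ins, not the actual ANEW data.
import numpy as np

rng = np.random.default_rng(0)
rsa = rng.integers(-5, 6, size=1034).astype(float)  # stand-in RSA values
valence = rng.normal(5.0, 2.0, size=1034)           # stand-in valence norms

for trial in range(3):
    fake = rng.choice(valence, size=valence.size, replace=True)
    slope, _ = np.polyfit(rsa, fake, 1)
    print(f"trial {trial}: slope = {slope:+.4f}")
```

Since the fake valences are unrelated to RSA by construction, any slope (positive, negative, or flat) that emerges is pure sampling noise — which is the point of the comparison.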

Jasmin and Casasanto found some other confirming evidence as well, for example in valence estimates of invented pseudowords. But I wonder whether this work stands up to the tests suggested in Joseph Simmons et al., "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant", Psychological Science, 2011.

In any case, I feel that the strength of the effect, if it exists, is far too small to support the interpretation presented in the popular press — with some apparent encouragement from Jasmin & Casasanto — as in the conclusion of the e! article (which I think is just the Springer press release):

Linguists have long believed that the meanings of words are independent of their forms, an idea known as the “arbitrariness of the sign.” But the QWERTY effect suggests the written forms of words can influence their meanings, challenging this traditional view.

Should parents stick to the positive side of their keyboards when picking baby names – Molly instead of Sara? Jimmy instead of Fred? According to the authors, “People responsible for naming new products, brands, and companies might do well to consider the potential advantages of consulting their keyboards and choosing the 'right' name."

Update — thanks to Steve Kass, I found the DANEW list in .pdf form as an appendix to the online supplementary materials for the paper, extracted it, and plotted it:

Again, the effect is not statistically significant — and in any case is not large enough to be a concern for companies naming products or parents naming children, with 0.1% of variance in valence judgments accounted for by the "QWERTY effect".

Update #2 — more here.
