‘The dress’ is a peculiar photograph: by themselves the dress’ pixels are brown and blue, colors associated with natural illuminants [], but popular accounts (#TheDress) suggest the dress appears either white/gold or blue/black []. Could the purported categorical perception arise because the original social-media question was posed as a forced choice between alternatives? In a free-response survey (N = 1401), we found that most people, including those naïve to the image, reported white/gold or blue/black, but some said blue/brown. Reports of white/gold over blue/black were higher among older people and women. On re-test, some subjects reported a switch in perception, showing the image can be multistable. To obtain a language-independent measure of perception, we asked subjects to identify the dress’ colors from a complete color gamut. The results showed three peaks corresponding to the main descriptive categories, providing additional evidence that the brain resolves the image into one of three stable percepts. We hypothesize that these reflect different internal priors: some people favor a cool illuminant (blue sky), discount shorter wavelengths, and perceive white/gold; others favor a warm illuminant (incandescent light), discount longer wavelengths, and see blue/black. The remaining subjects may assume a neutral illuminant, and see blue/brown. We show that by introducing overt cues to the illumination, we can flip the dress color.

Main Text

2. Rogers, A. (2015). The science of why no one agrees on the color of this dress. Wired. http://www.wired.com/2015/02/science-one-agrees-color-dress/

Figure 1. Striking differences in color perception of the dress. (A) Original photograph. (B) Pixel chromaticities for the dress. (C) Histogram of color descriptions for naïve (N = 313) and non-naïve (N = 1088) subjects. Error bars are 95% C.I. (D) Of subjects who reported W/G or B/K (N = 1221), the odds of reporting W/G increased by a factor of 1.02 per unit age (p = 0.0035, 95% C.I. [1.01–1.03]; Table S1). Symbol size denotes number of subjects (largest dot = 76; smallest dot = 1). (E) Color matches for regions i, ii, iii, and iv (panel A), sorted by color description (B/K, left; W/G, right). Symbols show averages (upward triangles, regions i and ii; downward triangles, regions iii and iv) and contain 95% C.I.s of the mean. Grid provides a reference across the B/K and W/G panels. Insets depict color matches for individual subjects in each row, sorted by description. (F) Color matches for region (i) plotted against matches for region (ii) for all subjects (R = 0.59, p < 0.0001). Contours contain the highest density (25%) of respondents, obtained in separate plots (not shown) generated by sorting the data by description (B/K, W/G, B/B). The first principal component of the population matches to (i,iv) defined the y axis (gold/black, ‘GK’); the first PC of the population matches to (ii,iii) defined the x axis (white/blue, ‘WB’). Each subject’s (x,y) values are the PC weights for their matches (Supplemental Experimental Procedures). Color scale is number of subjects. (G) Among W/G or B/K respondents, the percentage of W/G responses increased with image size (N = 235, 10% of original image; N = 1223, 36%; N = 245, 100%; N = 215, 150%; p < 0.0001, OR = 1.004 [1.002–1.007]). The horizontal dimension of the image was about 2°, 7.2°, 20°, and 30° of visual angle, respectively. Blurring the image biased responses towards B/K (N = 1048, image was 41% of original size; Chi-square, p < 0.0001). Dress image reproduced with permission from Cecilia Bleasdale.

Popular accounts suggest that ‘the dress’ (Figure 1A,B) elicits large individual differences in color perception []. We confirmed this in a survey of 1,401 subjects (313 naïve; 53 tested in the laboratory; 28/53 re-tested). Subjects were asked to complete the sentence: “this is a _____ and _____ dress” (see Supplemental Experimental Procedures in the Supplemental Information).

Overall, 57% of subjects described the dress as blue/black (B/K); 30% as white/gold (W/G); 11% as blue/brown (B/B); and 2% as something else. Redundant descriptions, such as ‘white-golden’ or ‘white-goldish’, were binned together. Naïve and non-naïve populations showed similar distributions (Figure 1C), although non-naïve subjects used fewer unique descriptions (Figure S1A in the Supplemental Information). When country (Figure S1B) was removed from the logistic regression (Table S1), experience became a predictor: non-naïve subjects were more likely to choose B/K or W/G over B/B or other (p = 0.021, Wald chi-square; odds ratio (OR) = 1.53, 95% C.I. [1.06–2.17]). These results show that experience shaped the language used to describe the dress, and possibly the perception of it. Males were less likely than females to report W/G over B/K (p = 0.019, OR = 0.75 [0.58–0.95]). Moreover, the odds of reporting W/G increased with age (Figure 1D). Of the non-naïve subjects, 45% reported a switch in perception since first exposure. Three of 28 subjects re-tested in the laboratory reported a switch between sessions. Subjects whose perception switched were more likely to report B/K (p = 0.0003, OR = 0.60 [0.46–0.79], where W/G = success).
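A per-year odds ratio of 1.02 is easy to misread as negligible; because odds ratios compound multiplicatively, a small per-unit OR implies a sizable difference across a typical age gap. A minimal sketch of the arithmetic (the OR = 1.02 comes from the regression above; the 30-year gap and the 30% baseline probability are invented example values, not figures from the study):

```python
# Illustrative only: how a per-year odds ratio compounds across an age gap.
# OR = 1.02 is from the survey's logistic regression (Table S1); the
# baseline probability below is a made-up example value.

def compound_odds_ratio(or_per_unit, units):
    """Odds multiplier implied by a per-unit odds ratio over `units` units."""
    return or_per_unit ** units

def shift_probability(p_base, odds_multiplier):
    """Apply an odds multiplier to a baseline probability of reporting W/G."""
    odds = p_base / (1 - p_base) * odds_multiplier
    return odds / (1 + odds)

mult = compound_odds_ratio(1.02, 30)    # hypothetical 30-year age gap
print(round(mult, 2))                   # ~1.81x the odds of reporting W/G

p_older = shift_probability(0.30, mult) # hypothetical 30% baseline
print(round(p_older, 2))                # ~0.44
```

So under these assumed values, the same per-year effect nearly doubles the odds over three decades, moving a hypothetical 30% baseline to roughly 44%.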

Subjects were asked to match the dress’ colors. Blue pixels (regions ii and iii, Figure 1A) were consistently matched bluer by subjects reporting B/K and whiter by subjects reporting W/G, whereas brown pixels (regions i and iv) were matched blacker by subjects reporting B/K and more golden by subjects reporting W/G (Figure 1E; Figure S1C). For a given region, average color matches made by W/G perceivers differed in both lightness and hue from matches made by B/K perceivers (p values < 0.0001). Intra-subject reliability was significant (Figure S1D,E). Across all subjects, matches for region (i) were predictive of matches for region (ii); moreover, the density plot showed three peaks (Figure 1F; Figure S1F,G). These peaks correspond to the highest densities of W/G, B/K, and B/B responders (contours in Figure 1F), suggesting that the brain resolves the image into one of three stable percepts. Thus, ‘the dress’ appears to be analogous to multistable shape images, such as the Necker cube.
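The axes of the density plot (Figure 1F) are built from principal components of the population’s color matches, with each subject plotted by their weights on the first PC of each region pair. A minimal sketch of that projection, assuming each subject’s matches are summarized as a numeric feature vector (the arrays below are random stand-ins, not the study’s data; the actual feature construction is in the Supplemental Experimental Procedures):

```python
# A sketch of projecting subjects onto a first principal component,
# as described for the 'GK' and 'WB' axes of Figure 1F.
import numpy as np

def first_pc_weights(X):
    """Weight of each row of X on the first principal component.
    X: (n_subjects, n_features) array of color-match values."""
    Xc = X - X.mean(axis=0)                       # center each feature
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[0]                             # PC1 weight per subject

rng = np.random.default_rng(0)
matches_i_iv = rng.normal(size=(100, 6))    # stand-in: matches to regions i, iv
matches_ii_iii = rng.normal(size=(100, 6))  # stand-in: matches to regions ii, iii

y = first_pc_weights(matches_i_iv)    # 'GK' (gold/black) axis
x = first_pc_weights(matches_ii_iii)  # 'WB' (white/blue) axis
# each subject then contributes one (x, y) point to the density plot
```

The PC1 weights are zero-mean by construction (the data are centered first), and PC1 captures at least as much variance as any single original feature.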

3. Zaidi, Q.
4. Witzel, C., Valkova, H., Hansen, T., and Gegenfurtner, K.R. Object knowledge modulates colour appearance.
5. Bloj, M.G., Kersten, D., and Hurlbert, A.C. Perception of three-dimensional shape influences colour perception through mutual illumination.

We suspect that priors on both material properties [] and illumination [] are implicated in resolving the dress’ color. In the main experiment, the image was 36% of the original size so that the entire image could fit on most displays. In a follow-up experiment (N = 853 additional subjects), the fraction of W/G respondents rose with increasing image size (Figure 1G). This suggests that high-spatial-frequency information (a cue to the dress’ material), more evident at larger sizes, biases interpretation toward W/G. To test this further, we measured responses to a blurred image: the fraction of W/G respondents dropped. Subjects also rated the illumination for the dress and for two test images showing the dress under cool or warm illumination (Figure S2A). Judgment variance was higher for the original than for either test (cool, p = 10⁻⁵; warm, p = 10⁻⁷, F-test), but similar between the two tests (p = 0.08), suggesting that the illumination in ‘the dress’ is ambiguous. When the dress was embedded in a scene with unambiguous illumination cues, the majority of subjects gave the description predicted by that illumination (Figure S2B).
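The F-test above compares the spread (variance) of illumination ratings across conditions rather than their means. A minimal sketch of the variance-ratio statistic behind such a test, with invented rating samples in place of the study’s data (the p-value would come from the F distribution with n − 1 degrees of freedom per sample):

```python
# Illustrative sketch of a variance-ratio (F) comparison; the rating
# samples here are invented examples, not the study's data.
import numpy as np

def variance_ratio(a, b):
    """F statistic comparing the spread of two rating samples
    (larger sample variance over smaller, so F >= 1)."""
    va = np.var(a, ddof=1)
    vb = np.var(b, ddof=1)
    return max(va, vb) / min(va, vb)

rng = np.random.default_rng(1)
original = rng.normal(0, 2.0, size=50)   # broad spread: ambiguous illumination
cool_test = rng.normal(0, 0.8, size=50)  # narrow spread: unambiguous cue

f = variance_ratio(original, cool_test)
# A large F (judged against the F distribution, df = 49, 49) indicates
# that ratings of the original image are more variable than of the test.
```

A large F here would mirror the paper’s finding: subjects disagree far more about the illumination of the original photograph than about either disambiguated test image.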