Participants

Twenty healthy participants (10 females) took part in experiment 1 (mean age = 23.30 years, S.D. = 3.40), and twenty healthy participants (10 females) took part in experiment 2 (mean age = 23.90 years, S.D. = 3.81). Seventeen of the 20 participants in experiment 2 had also taken part in experiment 1, approximately 2 months earlier. All participants were right-handed according to the Edinburgh handedness inventory24 and had normal or corrected-to-normal visual acuity. By self-report, none of the participants was color-blind. All participants provided written informed consent. The experimental procedures were approved by the Committee for Human Research at the Toyohashi University of Technology, and the experiment was conducted strictly in accordance with the committee's approved guidelines.

Stimuli

For experiment 1, color images of emotional faces (1 female and 1 male model posing fearful, angry, sad, and happy expressions) were taken from the ATR Facial Expression Image Database (DB99) (ATR-Promotions, Kyoto, Japan, http://www.atr-p.com/face-db.html). The database contains 10 types of facial expressions from 4 female and 6 male Asian models, together with the results of an unpublished psychological evaluation experiment that tested its validity. External features (i.e., neck, ears, and hairline) were removed from the face images using Photoshop CS2 (Adobe Systems Inc., San Jose, CA, USA). We created differently colored versions of each face image by manipulating the CIELab a* (red-green) or b* (yellow-blue) values of the skin area. The CIELab color space is modeled on the human visual system and designed to be perceptually uniform; a change of a given size in any component value therefore produces approximately the same perceptible change in color anywhere in the three-dimensional space25. There were three facial color conditions: reddish-colored (+12 units of a*), bluish-colored (−12 units of b*), and natural-colored (not manipulated). Expression continua were created by morphing between two expressions of the same identity and facial color condition in 10 equal steps using SmartMorph software (MeeSoft, http://meesoft.logicnet.dk/). Two expression pairs (fear-to-anger and sadness-to-happiness) were selected for morphing because each pair can be justifiably associated with different facial colors: fear and sadness can be linked to a bluish face, and anger and happiness to a reddish face (e.g., ref. 26). In total, 132 images were used in this experiment (2 expression morph continua × 3 facial colors × 2 models (one female) × 11 morph increments). All face images were 219 × 243 pixels (11.0° × 12.2° of visual angle). Images were normalized for mean luminance and root mean square (RMS) contrast and presented at the center of a neutral gray background. Figure 1 shows examples of the morph continua.

Figure 1: Examples of the morph continua for the three facial color conditions used in experiment 1. (A) Fear-to-anger and (B) sadness-to-happiness. Color images of an emotional face were taken from the ATR Facial Expression Image Database (DB99) (ATR-Promotions, Kyoto, Japan, http://www.atr-p.com/face-db.html).
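For illustration, the skin-color manipulation can be sketched in R as follows. This is a minimal sketch, not the authors' pipeline (they performed this step in Photoshop CS2): face_rgb (an H × W × 3 sRGB image array with values in [0, 1]) and skin_mask (a logical H × W matrix marking the skin area) are hypothetical inputs.

```r
# Minimal sketch of the skin-color manipulation; 'face_rgb' and 'skin_mask'
# are hypothetical inputs (the authors used Photoshop CS2 for this step).
shift_skin_color <- function(face_rgb, skin_mask, da = 0, db = 0) {
  dims <- dim(face_rgb)
  px  <- matrix(face_rgb, ncol = 3)                  # N x 3 sRGB pixels
  lab <- convertColor(px, from = "sRGB", to = "Lab") # to CIELab
  idx <- as.vector(skin_mask)                        # skin pixels only
  lab[idx, 2] <- lab[idx, 2] + da                    # shift a* (red-green)
  lab[idx, 3] <- lab[idx, 3] + db                    # shift b* (yellow-blue)
  out <- convertColor(lab, from = "Lab", to = "sRGB")
  array(pmin(pmax(out, 0), 1), dims)                 # clip, restore shape
}

reddish <- shift_skin_color(face_rgb, skin_mask, da = +12)  # +12 units of a*
bluish  <- shift_skin_color(face_rgb, skin_mask, db = -12)  # -12 units of b*
```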

For experiment 2, the colored emotional face stimuli were the same as those used in experiment 1, except that a neutral expression was added. We created an 11-step (0 to 10) facial color continuum for each of the five expressions (fearful, angry, sad, happy, and neutral) by manipulating the CIELab a* and b* values of the skin area. Steps 0-4 were created by reducing b* from the step-5 value in increments of 2 units, so that step 0 was the most bluish face (−10 units of b*); steps 6-10 were created by increasing a* in increments of 2 units, so that step 10 was the most reddish face (+10 units of a*). Step 5 was thus the original face image color (no color manipulation). In total, 110 images were used in this experiment (5 expressions × 2 models (one female) × 11 facial color conditions). All face images were the same size as in experiment 1. Figure 2 shows examples of the facial color continua.

Figure 2: Examples of the facial color continua for the five expressions used in experiment 2. Color images of an emotional face were taken from the ATR Facial Expression Image Database (DB99) (ATR-Promotions, Kyoto, Japan, http://www.atr-p.com/face-db.html).
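The step-to-offset mapping of this continuum can be written out explicitly; the short R sketch below simply enumerates the a*/b* offsets implied by the description above, and reuses the hypothetical shift_skin_color() helper from the experiment 1 sketch.

```r
# The 11-step facial color continuum of experiment 2, expressed as CIELab
# offsets (da, db) relative to the original face color at step 5. Each pair
# could be applied with the hypothetical shift_skin_color() sketched above.
continuum <- data.frame(step = 0:10)
continuum$da <- pmax(continuum$step - 5, 0) * 2   # steps 6-10: +2 ... +10 a*
continuum$db <- pmin(continuum$step - 5, 0) * 2   # steps 0-4: -10 ... -2 b*
# step:  0   1   2   3   4   5   6   7   8   9  10
# da:    0   0   0   0   0   0   2   4   6   8  10
# db:  -10  -8  -6  -4  -2   0   0   0   0   0   0
```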

Procedure

Experiment 1 was performed in four blocks: (1) a fear-to-anger block with a male face; (2) a fear-to-anger block with a female face; (3) a sadness-to-happiness block with a male face; and (4) a sadness-to-happiness block with a female face. Each block comprised three morph continua differing in facial color, i.e., 33 images (3 facial colors × 11 morph increments). Each trial began with a 250-ms fixation, followed by a 250-ms blank interval, after which an expression-morphed face was presented for 300 ms. Participants were asked to identify the expression of the face, regardless of its color, as quickly and accurately as possible by pressing one of two buttons. The two alternative expressions were the endpoints of the morph continuum, i.e., fear or anger in blocks 1 and 2, and sadness or happiness in blocks 3 and 4. After the face, a white square was presented for 1,700 ms. Each face was presented 8 times in random order, giving 264 trials (3 facial colors × 11 morph increments × 8 repetitions) per block. The four blocks were run in a random order for each participant.
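The composition of one block follows directly from these numbers; the R sketch below builds a randomized 264-trial list under those assumptions. The text does not name the presentation software, so only the trial order is constructed here, with the per-trial timing noted in a comment.

```r
# Sketch of one block's randomized trial list in experiment 1: 3 facial
# colors x 11 morph increments x 8 repetitions = 264 trials.
trials <- expand.grid(color = c("reddish", "natural", "bluish"),
                      morph = 0:10,   # e.g., 0 = full fear ... 10 = full anger
                      rep   = 1:8)
trials <- trials[sample(nrow(trials)), ]  # random presentation order
nrow(trials)                              # 264
# Per trial: fixation 250 ms -> blank 250 ms -> face 300 ms ->
# white square 1,700 ms; two-alternative expression response by button press.
```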

In experiment 2, we defined three facial expression conditions based on the association between facial color and expression: reddish-associated (anger, happiness), bluish-associated (fear, sadness), and neutral. We used two expression sets, each consisting of three expression conditions: fear-neutral-anger and sadness-neutral-happiness. The experiment was performed in four blocks: (1) a fear-neutral-anger block with a male face; (2) a fear-neutral-anger block with a female face; (3) a sadness-neutral-happiness block with a male face; and (4) a sadness-neutral-happiness block with a female face. Each block comprised three facial color continua differing in facial expression. Each face was presented 8 times in random order, giving 264 trials (3 expressions × 11 facial color steps × 8 repetitions) per block. The procedure was identical to that of experiment 1, except for the participants' task: here, participants were asked to identify whether the facial color was 'reddish' or 'bluish', regardless of its expression, as quickly and accurately as possible by pressing one of two buttons.

Data analysis

The expression identification rate (experiment 1), facial color identification rate (experiment 2), and mean response times were computed for each face stimulus. The expression and facial color identification rates of each participant were fit with a psychometric function using a generalized linear model with a binomial distribution in Matlab software (MathWorks, Natick, MA, USA) (Fig. 3). To compare the effect of facial color on expression identification and the effect of expression on facial color identification, the point of subjective equality (PSE) was computed and analyzed with a repeated-measures analysis of variance for each morph condition (fear-to-anger and sadness-to-happiness) or each expression set (fear-neutral-anger and sadness-neutral-happiness). In experiment 1, the PSE was the level of the expression continuum at which the two endpoint expressions were identified with equal probability (Fig. 3A). In experiment 2, the PSE was the level of the facial color continuum at which the face was equally likely to be judged 'reddish' or 'bluish' (Fig. 3B). Facial color (reddish-colored, bluish-colored, and natural-colored) was the within-subject factor in experiment 1, and facial expression (reddish-associated (anger, happiness), bluish-associated (fear, sadness), and neutral) was the within-subject factor in experiment 2. Post-hoc analyses were performed using the Bonferroni method. These statistical analyses were performed using SPSS software (IBM, Armonk, NY, USA).

Figure 3: Psychometric function and PSE of a representative participant. We computed the PSE for each facial color/expression condition to measure the shift of the psychometric function along the x-axis induced by facial color/expression. (A) An example is shown for the fear-to-anger female face continua used in experiment 1. The x-axis shows the morph continuum and the y-axis shows the percentage of trials on which the participant (Subject H.N.) judged the facial expression as angry. (B) An example is shown for the fear-neutral-anger female face continua used in experiment 2. The x-axis shows the facial color continuum and the y-axis shows the percentage of trials on which the participant (Subject H.T.) judged the facial color as reddish. Color images of an emotional face were taken from the ATR Facial Expression Image Database (DB99) (ATR-Promotions, Kyoto, Japan, http://www.atr-p.com/face-db.html).
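The fitting and PSE computation were done in Matlab; an equivalent R sketch is shown below for concreteness. Here resp is a hypothetical per-condition data frame with one row per morph level, giving the number of 'anger' responses (n_anger) out of n_total presentations.

```r
# Equivalent R sketch of the psychometric fit (the authors used Matlab).
# 'resp' is a hypothetical data frame with one row per morph level:
#   morph   -- continuum step, 0 to 10
#   n_anger -- number of "anger" responses at that step
#   n_total -- number of presentations at that step
fit <- glm(cbind(n_anger, n_total - n_anger) ~ morph,
           family = binomial(link = "logit"), data = resp)

# With a logit link, P("anger") = 0.5 where intercept + slope * morph = 0,
# so the PSE is the zero crossing of the linear predictor.
pse <- -coef(fit)[["(Intercept)"]] / coef(fit)[["morph"]]
```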

In experiment 1, we analyzed response times (RTs) using a linear mixed-effects (LME) model with participants as random effects and facial color (three levels) and percent anger (or happiness) in the morph (0 to 10) as fixed effects. In experiment 2, the fixed effects were facial expression (three levels) and facial color level (0 to 10). Test statistics and degrees of freedom for the mixed models were estimated using the Kenward-Roger approximation with the package "lmerTest"27. Effect sizes were calculated as marginal and conditional coefficients of determination (R2m and R2c) using the package "MuMIn"28.
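A minimal R sketch of the experiment 1 RT analysis is shown below, under stated assumptions: rt_data is a hypothetical long-format data frame (columns rt, color, morph, participant), a by-participant random intercept stands in for "participants as random effects", and the color × morph interaction is an assumed fixed-effect structure that the text does not spell out.

```r
# Minimal sketch of the experiment 1 RT model. 'rt_data' is a hypothetical
# long-format data frame with columns rt, color (3 levels), morph (0-10),
# and participant; the color x morph interaction and the by-participant
# random intercept are assumptions.
library(lmerTest)  # lmer() plus tests of fixed effects
library(MuMIn)     # r.squaredGLMM() for R2m / R2c

m <- lmer(rt ~ color * morph + (1 | participant), data = rt_data)
anova(m, ddf = "Kenward-Roger")  # F tests with Kenward-Roger df
r.squaredGLMM(m)                 # marginal (R2m) and conditional (R2c) R2
```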