Each dog took part in a single trial in which it was presented with one sound stimulus, drawn either from one of eight conditions in which speech samples were resynthesized to vary the relative salience of segmental (phonemic) versus suprasegmental (speaker cues and intonation) information, or from one of two control conditions (Figure 1). Using the head-orienting paradigm, the sound was played simultaneously from both sides of the subject, and the direction of the subject’s initial orienting response (left or right) was recorded. We obtained head-orienting responses from 25 dogs in each condition. Because auditory information entering each ear is processed mainly in the contralateral hemisphere of the brain via the dominant contralateral auditory pathways [], it is assumed that if a dog turns with its left ear leading in response to the sound, the acoustic input is processed primarily by the right hemisphere (RH), whereas a right turn indicates primarily left hemisphere (LH) processing [].
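The contralateral-routing inference underlying the paradigm can be sketched as a simple mapping (an illustration of the logic described above; the function name is ours, not from the study):

```python
def inferred_hemisphere(head_turn: str) -> str:
    """Map an initial head-turn direction to the hemisphere assumed to
    dominate processing, via the contralateral auditory pathways:
    a left-ear-leading (left) turn implies right-hemisphere (RH)
    processing; a right turn implies left-hemisphere (LH) processing."""
    mapping = {"left": "RH", "right": "LH"}
    return mapping[head_turn]
```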

A binary logistic regression analysis identified a significant overall effect of auditory condition on head-turn direction [Wald(8) = 37.61, p < 0.001], indicating that the content of the acoustic signals affected the direction of hemispheric lateralization during perception (Figure 2). There were no significant effects of subject sex (p = 0.76), age (p = 0.15), breed type (p = 0.37), current residence (animal shelter or private home; p = 0.16), stimulus exemplar (p = 0.23), stimulus voice gender (where applicable; p = 0.70), or test location (p = 0.18) on responses.

In test 1, dogs were presented with a familiar learned command in which the original positive intonational cues were artificially degraded (“come on then” with a flat intonation; meaningful speech with neutralized intonation). They showed a significant right-head-turn response bias (binomial test: 80% right head turn, p = 0.004), suggesting that when suprasegmental intonation is neutralized and segmental phonemic cues become more salient, dogs display a LH advantage.

To verify that the LH response bias was specific to the phonemic content, in test 2 we further degraded the same command by replacing the first three formants with sine waves (meaningful sine-wave speech), strongly reducing suprasegmental (emotional and speaker-related) cues but retaining meaningful segmental phonemic information. Here, too, dogs showed a significant right-head-turn bias (binomial test: 76% right head turn, p = 0.015), reinforcing the interpretation that in dogs the LH is sensitive to segmental phonemic information independently of the nature and naturalness of the acoustic elements composing the signal.
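As a quick sanity check (ours, not part of the original analysis), the reported p-values for tests 1 and 2 follow from an exact two-sided binomial test against chance (p = 0.5) with 25 dogs per condition. A minimal stdlib sketch:

```python
from math import comb

def binomial_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k (with a small
    tolerance for floating-point ties)."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return min(1.0, sum(pr for pr in probs if pr <= probs[k] * (1 + 1e-9)))

# 80% of 25 dogs = 20 right turns (test 1); 76% = 19 right turns (test 2)
print(round(binomial_two_sided_p(20, 25), 3))  # 0.004
print(round(binomial_two_sided_p(19, 25), 3))  # 0.015
```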

Both speaker-related (indexical) and emotional (dynamic) cues are encoded in the suprasegmental content of speech signals. We first tested dogs’ responses to speaker-related indexical cues by exposing them to a comparable phrase with neutralized intonation, but spoken in an unfamiliar language (test 3: meaningless [foreign] speech with neutralized intonation). Here the phonemic cues were unfamiliar and the intonational prosodic cues were removed, whereas indexical speaker-related cues remained intact. Dogs in this condition showed a significant left-head-turn bias (binomial test: 24% right head turn, p = 0.015), demonstrating a RH advantage when processing salient speaker-related suprasegmental content in speech. Dogs are known to perceive speaker-related vocal cues such as identity [] and gender [], and the observed RH advantage is consistent with human RH lateralization when processing these features [].

We also tested dogs’ responses to emotional prosodic cues by presenting them with a version of the original command in which the phonemic components had been removed by extracting the formants and plosives, creating unintelligible speech-like vocal stimuli with reduced speaker cues but positive emotional prosody (test 4: meaningless voice with positive intonation). Here, too, dogs showed a significant left-head-turn bias (binomial test: 28% right head turn, p = 0.04), showing that when segmental phonemic cues are neutralized and suprasegmental emotional prosodic cues become more salient, dogs display a RH advantage. This result extends recent neuroimaging evidence that auditory regions in the dog’s RH are sensitive to emotional valence in both conspecific calls and human nonverbal vocalizations, with increased activation in response to calls with greater positive valence []. Similarly, humans show stronger RH activation not only in response to emotional speech prosody and vocalizations but also when exposed to animal vocalizations with strong affective content, independently of their familiarity with the species [], suggesting that the perception of emotional content in vocalizations, and its lateralization to the RH, may be conserved across mammals.

When, in test 5, dogs were exposed to intact meaningful speech containing both segmental phonemic and suprasegmental prosodic cues (“come on then” with happy intonation; meaningful speech with positive intonation), no significant head-turn bias was found (binomial test: 48% right head turn, p = 1.00). While directing dogs’ attention to either of these components using manipulated speech was found to produce opposite hemispheric biases in the previous tests, the simultaneous presence of salient segmental and suprasegmental cues that characterizes natural speech results in the absence of a bias at the population level [].
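For context (our illustration, not part of the original analysis), with 25 dogs per condition an exact two-sided binomial test against chance only becomes significant for fairly strong group biases, which is why proportions near chance such as 48% right head turns fall far short of significance:

```python
from math import comb

n = 25  # dogs per condition

def upper_tail(k: int) -> float:
    """P(X >= k) for X ~ Binomial(n, 0.5), i.e., chance-level turning."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# smallest right-turn count significant in a two-sided test at alpha = 0.05
k_crit = min(k for k in range(13, n + 1) if 2 * upper_tail(k) < 0.05)
print(k_crit)  # 18: biases of >= 72% (or, symmetrically, <= 28%) right turns
```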

Do Hemispheric Biases Relate to the Communicative Content of the Signal?

Two competing interpretations of hemispheric asymmetries [] can be applied to our observation that in dogs the LH is primarily sensitive to segmental phonemic content, whereas the RH is primarily sensitive to suprasegmental cues. Acoustic (cue-dependent) theories propose that in humans, auditory processing areas in the RH operate at a lower temporal resolution than those of the LH, resulting in a greater preference for processing slow acoustic modulation, including suprasegmental cues in speech, whereas the LH is more specialized in analyzing rapidly changing auditory information such as phonemic cues. To test whether the RH bias in response to suprasegmental cues could be explained by a general preference for slow acoustic modulation, we presented dogs with a sine-wave tone matching the intonation contour of the original command (test 6: sine-wave intonation). No orientation bias was found in response to this condition (binomial test: 56% right head turn, p = 0.69), indicating that the observed RH bias for suprasegmental cues in speech does not generalize to slow frequency modulation in acoustic signals in general. Furthermore, in our study, dogs expressed opposite response biases to speech signals with equivalent spectrotemporal complexity (meaningful and meaningless [foreign] speech with neutralized intonation), suggesting that the LH bias in dogs’ responses to meaningful phonemic cues was not purely dependent on the increased salience of the rapidly modulated components in the signal, but also on the functional relevance of these cues.

Our results therefore appear to be more consistent with the functional interpretation of lateralization, which proposes that hemispheric specialization depends on the communicative function of the acoustic content. Indeed, the observation that the LH is preferentially recruited when dogs process the phonemic cues of the highly familiar and learned command “come on then” is consistent with reports that the LH tends to respond to familiar or learned patterns across mammals []. To clarify whether the LH bias observed in response to meaningful speech with neutralized intonation was related to the subjects’ familiarity with the command (which could reflect familiarity with the speakers’ accents and/or familiarity with the phonemes independently of their meaning) or whether this bias depended on the learned functional relevance of the command itself, we carried out additional tests changing either the familiarity of the speaker’s accent or the familiarity of the phonemic content in the signal.

Based on the significant LH response bias obtained in the meaningful sine-wave speech condition, in which the speaker-related cues were degraded, we predicted that reducing the familiarity of the speaker’s accent would not influence responses. Dogs presented with the original command with degraded prosodic cues, but spoken by a nonnative British speaker (test 7: meaningful speech in an unfamiliar accent with neutralized intonation), also showed a significant right-head-turn bias (binomial test: 72% right head turn, p = 0.04), confirming that the LH response bias obtained in test 1 was not dependent on the familiarity of the speaker’s accent.