Abstract Neural substrates underlying the human-pet relationship are largely unknown. We examined fMRI brain activation patterns as mothers viewed images of their own child and dog and an unfamiliar child and dog. There was a common network of brain regions involved in emotion, reward, affiliation, visual processing and social cognition when mothers viewed images of both their child and dog. Viewing images of their child resulted in brain activity in the midbrain (ventral tegmental area/substantia nigra involved in reward/affiliation), while a more posterior cortical brain activation pattern involving fusiform gyrus (visual processing of faces and social cognition) characterized a mother's response to her dog. Mothers also rated images of their child and dog as eliciting similar levels of excitement (arousal) and pleasantness (valence), although the difference in the own vs. unfamiliar child comparison was larger than the own vs. unfamiliar dog comparison for arousal. Valence ratings of their dog were also positively correlated with ratings of the attachment to their dog. Although there are similarities in the perceived emotional experience and brain function associated with the mother-child and mother-dog bond, there are also key differences that may reflect variance in the evolutionary course and function of these relationships.

Citation: Stoeckel LE, Palley LS, Gollub RL, Niemi SM, Evins AE (2014) Patterns of Brain Activation when Mothers View Their Own Child and Dog: An fMRI Study. PLoS ONE 9(10): e107205. https://doi.org/10.1371/journal.pone.0107205

Editor: Marina Pavlova, University of Tuebingen Medical School, Germany

Received: March 3, 2014; Accepted: August 12, 2014; Published: October 3, 2014

Copyright: © 2014 Stoeckel et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This research was funded by the Massachusetts General Hospital Center for Comparative Medicine, with additional support from the Athinoula A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital, using resources provided by the Center for Functional Neuroimaging Technologies, P41RR14075, a P41 Regional Resource supported by the Biomedical Technology Program of the National Institutes of Health National Center for Research Resources (NCRR). This work was also conducted with support from Harvard Catalyst | The Harvard Clinical and Translational Science Center (NIH UL1 RR 025758) and financial contributions from Harvard University and its affiliated academic health care centers. Finally, the study was supported by the Charles A. King Trust (LES), by NIH K23DA032612 (LES) and by NIH K24 DA030443 (AEE). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction Humans began domesticating dogs to serve in a variety of roles, including as human companions or ‘pets’, 18,000–32,000 years ago [1]. The practice of adopting and nurturing members of other species (like dogs), or “alloparenting”, is a common human behavior across cultures and may have provided the evolutionary context in which domestication arose [2]. Approximately two-thirds of U.S. households have pets, and over $50 billion is spent annually on their care (http://www.americanpetproducts.org/press_industrytrends.asp). Many people have a strong emotional attachment to their pets. Pet owners have been termed ‘pet parents’ in the popular media, and half of pet owners consider their pet as much a part of the family as any member of the household (AP-Petside.com Poll 2009). Pets can be beneficial to the physical, social, and emotional well-being of humans [3]–[6], and animal-assisted therapy is widely used as a complementary medicine and adjunctive mental health intervention [7], [8]. Similarities between the owner-dog relationship and the human-infant relationship have been described within the framework of human attachment theory, which was developed to explain the role of the human infant-caregiver relationship in development and has been extended to adult-adult caregiver, peer, and romantic relationships [9]. Attachment usually refers to the bond formed between a child and caregiver (typically, a mother) to ensure safety, security, and, ultimately, survival [10]; this framework may also apply to the formation and maintenance of people's relationships with their pets [11]–[13]. On a well-established laboratory-based measure of infant-maternal attachment [14], [15], human infants and dogs have been described as behaving very similarly with their mother or owner under both high and low stress conditions [15]–[17]. Similar neurobiologic mechanisms of bonding have been implicated in human-human and owner-dog pairs.
Oxytocin, beta-endorphin, prolactin, beta-phenylethylamine, and dopamine are increased in pet owners and their dogs during [18] and after [19]–[21] a positive interaction. Functional magnetic resonance imaging (fMRI) has been used to investigate neural responses when humans view the faces of their romantic partner or child compared with other faces [22]–[24]. Some brain regions activated to objects of both maternal and romantic love overlap with the brain's reward system that is hypothesized to facilitate strong interpersonal attachments [22]. Some common regions of activation also have dense expression of oxytocin and vasopressin receptors implicated in pair-bonding and maternal attachment [23]. In this study, our aim was to directly compare the functional neuroanatomy of the human-pet bond with that of the maternal-child bond. To do so, we analyzed patterns of brain function when mothers viewed images of their own child and own dog, with the aim of discovering both distinct and common regions of activation. We focused our analyses on specific brain regions of interest (ROI) known to be involved in the formation and maintenance of social bonds.

Methods The study was approved by the Partners Human Research Committee. Participants provided full written informed consent prior to beginning study procedures. The individuals in this manuscript have given written informed consent (as outlined in the PLOS consent form) to publish the images of their child's and dog's face (Figure 1) and other case details.


Figure 1. Study Schematic of the Experimental Design. Illustration of the passive viewing paradigm of dog and child images used. Sixteen unique color photos of faces: 4 own child (OC), 4 own dog (OD), 4 unfamiliar child (UC), and 4 unfamiliar dog (UD), presented in 16 s blocks (4 images/block) over 6 fMRI runs. Each block of images was followed by a screen with a fixation cross (FX). https://doi.org/10.1371/journal.pone.0107205.g001 Participants Participants were recruited via advertisement in local media, veterinary clinics, dog parks, and the Massachusetts General Hospital Research Study Volunteer Program for Health Registry. Eligible participants were women, aged 22–45 years, who had at least one child (aged 2–10 years) and one pet dog (owned for at least 2 years), reported low to normal parenting stress (total score <90 on the Parenting Stress Index-Short Form (PSI-SF) [25]), reported normal affect (positive affect >12.5 and negative affect <29.1 on the Positive and Negative Affect Scale (PANAS) [26]), were right-handed, and had at least average estimated intellectual function (estimated Full Scale IQ >85 on the Wechsler Test of Adult Reading (WTAR) [27]). Exclusion criteria included any self-reported lifetime Axis I psychiatric disorder, current major medical illness, conditions that may impact brain reward function (e.g., obesity, substance use, pathological gambling), current or planned pregnancy, use of CNS-active medication in the prior six months, contraindication to MRI, and working in an animal-related field. Assessments Study Session 1 (home visit). Participants' child and dog were photographed in the participants' home, and participants completed the Edinburgh Handedness Inventory [28], PSI-SF [25], WTAR [27], PANAS [26], Lexington Attachment to Pets Scale (LAPS) [29], and a demographic and dog ownership questionnaire.
Participants were then shown a series of unfamiliar child and dog photographs, assembled from participants who consented to having photographs of their child and dog viewed by others in the study, and were asked, “Are you familiar with this child or dog?” to confirm that control images were truly unfamiliar. Visual stimuli preparation: Sixteen unique photographs of children and dogs were selected and edited for each participant in Adobe Photoshop Elements 8.0. The unfamiliar child and dog images were selected based on the familiarity assessment, and the unfamiliar child images were matched to the participant's child for gender and age. Photographs were cropped to 4×3 inches (to include the whole face with minimal neck and shoulders), resized to 800×600 pixels, outlined, and the selected area outside the image was shaded neutral grey. Images were converted to bitmap (*.bmp) format and modified for consistent luminance. Study Session 2 (imaging visit). Participants completed the PANAS and were then placed in the MRI scanner. They were instructed to relax as they passively viewed a variety of images of children and dogs (including some photographs taken during their home visit) as well as a fixation cross. Immediately following the scanning session, participants were given an eleven-question, multiple-choice recognition test of the images they viewed in the scanner to verify that they had been attentive during the study. Participants were asked about the content of the images, the hair color of the children and dogs, the number of images displayed, etc. Participants were then asked to rate 5 images per category, selected from those shown during the scanning session, on their emotional value (valence, or pleasantness, and arousal, or excitement [30]) using the Self-Assessment Manikin scale (SAM) [31]. MRI data acquisition and procedure: Brain imaging data were acquired on a 3 Tesla Siemens TIM Trio MRI scanner using a 32-channel head coil.
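The luminance-equalization step in the stimulus preparation above can be illustrated with a minimal sketch. This is not the Photoshop workflow the authors actually used; the function name, the flat 0–255 pixel representation, and the target value are assumptions for exposition. The idea is simply to rescale each image to a common mean intensity so that gross brightness differences between stimulus categories cannot drive differential visual responses.

```python
def match_mean_luminance(images, target_mean=128.0):
    """Rescale each image (a flat list of 0-255 pixel intensities) so its
    mean intensity equals target_mean, clipping to the valid range.
    Illustrative stand-in for the luminance-matching step described above."""
    matched = []
    for img in images:
        mean = sum(img) / len(img)
        if mean == 0:  # all-black image: nothing to rescale
            matched.append(list(img))
            continue
        scale = target_mean / mean
        matched.append([min(255.0, max(0.0, p * scale)) for p in img])
    return matched
```

Note that clipping at the ends of the range can shift the achieved mean slightly for very bright or very dark images, which is one reason dedicated image-editing tools were used in practice.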
Blood-oxygen-level-dependent (BOLD) functional MRI data were acquired using a gradient echo T2*-weighted pulse sequence (TR/TE = 2000/30 ms, flip angle = 90°, FOV = 200×200 mm, 32 axial oblique slices collected −30 degrees off the AC-PC line, slice thickness = 3.0 mm with 0.3 mm interslice gap, 136 image volumes per run (816 per session), matrix = 64×64). A high-resolution 3D MPRAGE sequence was collected for anatomic localization of the fMRI data. For the fMRI scans, visual stimuli (photographs) were presented to participants in a block design format, with six 4:32 min runs per imaging session. Each run consisted of two 16 s epochs for each image category. Within each 16 s epoch, four individual images were presented for 3.5 s each. A 0.5 s gap separated the images, and a pseudorandom gap of 14, 16, or 18 s separated the epochs. All gaps consisted of a gray blank screen with a fixation cross (Fig. 1). Each run consisted of 136 volumes for a total of 816 volumes across six runs, of which 96 volumes were acquired for each image category. The visual images were presented with a Windows XP laptop computer running PsychToolbox (http://psychtoolbox.org/HomePage) and a Matlab (Mathworks, Inc., 2000) toolbox. Images were projected onto a screen behind the participant's head at the back of the scanner and viewed via a 45° single-surface rear-projecting mirror attached to the head coil. Eye movements were not monitored during imaging, as emotional and neutral images have been reported to result in no differential eye movements [32], [33]. fMRI analysis: fMRI data analysis was conducted with Statistical Parametric Mapping, Version 8 (SPM8: http://www.fil.ion.ucl.ac.uk/spm/software/spm8/) and custom Matlab routines. Standard image preprocessing was performed, including motion and field map distortion correction, normalization to the Montreal Neurological Institute (MNI) standard brain template space, and spatial smoothing with a 6 mm FWHM Gaussian filter.
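The block-design timing just described is internally consistent, as a quick arithmetic check shows (plain Python; all quantities are taken from the text, and the variable names are ours):

```python
TR = 2.0                    # seconds per volume (TR/TE = 2000/30 ms)
run_seconds = 4 * 60 + 32   # each run lasts 4:32 = 272 s
n_runs = 6

volumes_per_run = run_seconds / TR        # 272 s / 2 s = 136 volumes
total_volumes = volumes_per_run * n_runs  # 136 * 6 = 816 volumes

# Each of the 4 image categories appears in two 16 s epochs per run.
epoch_seconds = 16
epochs_per_category = 2 * n_runs                             # 12 epochs
category_volumes = epochs_per_category * epoch_seconds / TR  # 96 volumes
```

This reproduces the 136 volumes per run, 816 volumes per session, and 96 volumes per image category stated in the Methods.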
Artifact detection and removal was performed using ART (http://web.mit.edu/swg/software.htm). Specifically, an image was defined as an outlier (artifact) if the head displacement in the x, y, or z direction was greater than 0.5 mm from the previous frame, if the rotational displacement was greater than 0.02 radians from the previous frame, or if the global mean intensity of the image was greater than 3 standard deviations from the mean image intensity for the entire scan. There were five outliers total across the 14 participants (2 during the own child images and 3 during the fixation period). Preprocessed block design BOLD fMRI data were analyzed in normalized (MNI) space within the context of the General Linear Model on a voxel-by-voxel basis as implemented in SPM8. The time course of brain activation was modeled with a boxcar function convolved with the canonical hemodynamic response function (HRF), including a temporal derivative function. Individual regressors included task conditions, six motion parameters (3 translational and 3 rotational directions), and outliers (one regressor per outlier image identified with ART). A two-stage procedure was used for the statistical analysis of a mixed-effects design in SPM8 [34]. We analyzed the data using a 2×2 repeated measures analysis of variance (ANOVA) to assess the main effects of species (child vs. dog) and relationship (own vs. unfamiliar) and the species × relationship interaction, using the flexible factorial approach in SPM8. We then generated statistical contrasts comparing brain activation in response to 1) own child vs. fixation, 2) own dog vs. fixation, 3) own child vs. own dog, 4) own child vs. unfamiliar child, and 5) own dog vs. unfamiliar dog using planned one-sample t-tests. To address our a priori hypotheses and to improve statistical power, we used an ROI approach and small volume correction (SVC) in SPM8 [35].
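The ART outlier criteria described above amount to three simple per-volume tests. A sketch of that logic follows (illustrative Python; ART itself is a Matlab toolbox, and this function name and signature are invented for exposition, with thresholds taken from the text):

```python
def is_artifact(dx, dy, dz, d_rot, intensity, scan_mean, scan_sd):
    """Flag a volume as an outlier using the thresholds described above:
    >0.5 mm frame-to-frame translation in any axis, >0.02 rad frame-to-frame
    rotation, or a global intensity more than 3 SD from the scan mean."""
    if max(abs(dx), abs(dy), abs(dz)) > 0.5:   # translation in mm
        return True
    if abs(d_rot) > 0.02:                      # rotation in radians
        return True
    if abs(intensity - scan_mean) > 3 * scan_sd:  # global intensity spike
        return True
    return False
```

Each flagged volume then receives its own nuisance regressor in the GLM, as the Methods note.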
Briefly, SVC is a voxelwise approach that controls the statistical threshold by correcting only for the number of voxels in the specified ROI(s). The ROI masks used in the present study ranged in size from 104 mm3, or 13 voxels (HYPO), to 16,984 mm3, or 2,123 voxels (insula). Given the range in size of our ROIs and the potential for functional heterogeneity within these masks, we chose the SVC approach because it allowed us to detect activation in a subset of voxels within each mask; averaging across an entire ROI mask, by contrast, could reduce sensitivity to detect activation and bias results toward the null [36]. Brain regions (ROIs): Our regions of interest were based on previous fMRI studies implicating these regions in the neurobiology of the maternal-child relationship and facial perception [23], [37]–[39]. These included regions of the classic mesocorticolimbic dopamine reward/motivation system (ventral tegmental area (VTA), ventral striatum/nucleus accumbens (NAcc), amygdala, and medial orbitofrontal cortex (mOFC)), midbrain structures with dense expression of oxytocin and vasopressin receptors (substantia nigra (SNi) and periaqueductal grey (PAG)), structures involved in social cognition and visual perception (superior temporal and fusiform gyri), and a structure involved in salience and interoceptive function (insula). Also included from these fMRI studies were the hippocampus (HIPPO), hypothalamus (HYPO), thalamus, and dorsal striatum (caudate and putamen). ROIs were defined using anatomical structures in MNI space selected within the WFU Pickatlas toolbox [40] and the Harvard-Oxford atlas (http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Atlases). Regions unavailable in these libraries (VTA/SNi and PAG) were drawn within the WFU Pickatlas as 3 mm volume-based spheres centered at voxel locations identified in previous studies (VTA/SNi: x = ±4, y = −14, z = −16 [23], [38]; PAG: x = ±2, y = −32, z = −24 [23]).
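The hand-drawn VTA/SNi and PAG ROIs are simply small spheres sampled on the scan's voxel grid. A minimal sketch of how such a sphere-based mask can be generated (Python stand-in for the WFU Pickatlas sphere tool; the 2 mm grid step is an assumption for illustration, not a parameter stated in the text):

```python
import math

def sphere_mask(center, radius_mm, grid_step=2.0):
    """Return MNI-space voxel coordinates within radius_mm of center,
    sampled on a regular grid with spacing grid_step (mm)."""
    cx, cy, cz = center
    n = int(math.ceil(radius_mm / grid_step))
    voxels = []
    # scan a bounding box around the center and keep in-sphere voxels
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                x = cx + i * grid_step
                y = cy + j * grid_step
                z = cz + k * grid_step
                if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius_mm ** 2:
                    voxels.append((x, y, z))
    return voxels

# e.g., the right-hemisphere VTA/SNi seed from the text:
# a 3 mm sphere centered at (x, y, z) = (4, -14, -16)
vta_right = sphere_mask((4, -14, -16), 3.0)
```

On a 2 mm grid a 3 mm sphere contains 19 voxels, which conveys how small these midbrain masks are relative to, say, the 2,123-voxel insula mask.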
Significance for these a priori ROIs was assessed with cluster thresholds of p < .01 at the voxel level (uncorrected) and a familywise error (FWE) correction (as implemented in SPM8, using Gaussian Random Field Theory) of p < .05 at the cluster level. For the own child and own dog vs. fixation contrasts, we performed a conjunction analysis using the minimum statistic for the conjunction null method [41], resulting in an overall alpha of p < .001, to determine whether shared brain regions were activated by both the own child and own dog images. Behavioral analyses: Valence and arousal ratings of the own and unfamiliar dog and child images were analyzed with a 2 (child vs. dog) × 2 (own vs. unfamiliar) repeated measures ANOVA. Pearson product-moment correlations were calculated to test the association between mean valence and arousal ratings for the own and unfamiliar dog images and the LAPS total score. Analyses were performed with SPSS (IBM SPSS Statistics for Mac, Version 21.0; IBM Corp., Armonk, NY).
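The correlational part of the behavioral analysis is a standard Pearson product-moment computation. A self-contained sketch (plain Python rather than the SPSS procedure actually used; the example data are hypothetical, invented only to show the call):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# e.g., correlating each mother's mean valence rating of her own dog's
# images with her LAPS total attachment score (values hypothetical):
valence_own_dog = [7.2, 6.8, 8.1, 7.9, 6.5]
laps_total = [61, 55, 70, 68, 50]
r = pearson_r(valence_own_dog, laps_total)
```

A positive r here would correspond to the reported finding that valence ratings of the own dog images tracked self-reported pet attachment.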

Discussion To our knowledge, this is the first report comparing fMRI brain activation patterns in women as they viewed images of their child and their dog. Here we report substantial overlap in brain activation patterns in regions involved in reward, emotion, and affiliation elicited by images of both a mother's own child and her own dog. These women also reported similar pleasantness (valence) and excitement (arousal) ratings for their child and dog, with a larger difference in the own vs. unfamiliar child comparison than in the own vs. unfamiliar dog comparison for arousal. Valence ratings of the own dog images were also positively correlated with self-reported pet attachment. Interestingly, images of their child activated the dopamine-, oxytocin-, and vasopressin-rich midbrain VTA/SNi, thought to be a critical brain region for reward and affiliation, which was not activated by images of their dog. When viewing images of their own child, there was less deactivation in another key reward region (NAcc/ventral striatum) than when viewing their own dog or an unfamiliar child. It is important to note that the ANOVA yielded a significant main effect of relationship (own vs. unfamiliar) but no main effect of species and no relationship × species interaction. However, the planned contrast of own child vs. own dog revealed significant differences in several regions, including bilateral fusiform gyrus, posterior insula, superior temporal gyrus, and NAcc/ventral striatum. The discrepancy between these two analyses may be explained by methodological differences between the ANOVA and the planned contrast (t-test) approaches: the planned contrast tests for a specific effect between two conditions (e.g., own child vs. own dog), while the ANOVA interaction tests whether the effect of relationship status (own vs. unfamiliar) differs across levels of species (child vs. dog).
Given that the primary aim of the current study was to test the difference in mothers' neural responses to their own child vs. their own dog (not to an unfamiliar child vs. an unfamiliar dog), the majority of the discussion focuses on these comparisons. This report extends the mapping of the functional neuroanatomy of human relationships to an important human-animal relationship. A strength of the study is its similarity in design to previous studies of brain responses to visual images of familiar and unfamiliar people [42], of friends and romantic partners viewed by adults in love [22], [23], [43]–[47], and of infants and children viewed by their mothers [23], [24], [37], [38], [48]–[50]; reviewed in [37], [51]. As observed in some of these prior studies of close human relationships, the amygdala, thought to be a critical region for bond formation, was activated by both the own child and own dog images. The amygdala may provide the emotional tone and incentive salience that directs attention to the needs of the child and dog, which is critical for the formation of these pair bonds [24]. Another brain region critical to bond formation, the SNi/VTA, was activated only when mothers viewed images of their child. The SNi/VTA has a high density of dopamine, oxytocin, and vasopressin receptors and plays a critical role in reward-mediated attachment and affiliation [52], [53]. This replicates previous reports of maternal SNi/VTA activation to stimuli related to their child [23], [38], [54]. While the SNi/VTA is also reported to have a critical function in other human-human relationships of evolutionary importance (romantic relationships; [22], [23]), this does not appear to extend to the human-pet bond [55], [56]. This could indicate that, in humans, the SNi/VTA is ‘central’ for the formation and maintenance of pair bonds that sustain and propagate our species. There was also overlap in the own child and own dog vs.
fixation contrasts in brain areas associated with reward (mOFC, putamen; [37], [51], [57]), memory (hippocampus, thalamus; [37], [54], [58]), and visual/facial processing and social cognition (fusiform gyrus; [39], [49], [59]), suggesting that these regions are important for both human-human and human-dog relationships. We did not observe ventral striatum/NAcc activation in response to any of the visual stimulus categories. This region is a critical node in the reward network, which may reinforce social interactions that lead to long-term pair bonds [24]. This finding is consistent with previous studies in which the ventral striatum/NAcc was not activated when mothers viewed images of their older children or romantic partners [22], [23] but was activated by images of their infants [24], [60]. It is possible that the ventral striatum/NAcc is critical to the formation of pair bonds, while dorsal aspects of the striatum may be more crucial for the maintenance of these bonds. A similar transition from ventral to dorsal striatum driving behavior has been observed in the transition from voluntary to habitual behavior [61]. As in prior studies, we observed activation in other aspects of the striatum (putamen). We observed less deactivation in the ventral striatum/NAcc when mothers viewed images of their own child vs. both an unfamiliar child and their own dog, which may reflect less habituation [62]. While the fusiform gyrus was activated by both the own child and own dog images, there was greater magnitude and extent of activation in response to the own dog images when compared directly with the own child images. This region is central to visual and face processing and social cognition [39], [63]–[65]. Given the primacy of language for human-human communication, facial cues may be a more central communication device for dog-human interaction [66].
Face perception may contribute to the human-dog bond by helping owners identify their dog, use gaze direction to communicate, and interpret emotional states [65], [66]. Caveats Strengths of the study include the within-subjects design, which allowed us to directly assess similarities and differences in response to the child and dog images with each participant serving as her own control, and a well-controlled image acquisition protocol that isolated the faces of dogs and children without including other features or contexts in the images. Such extraneous content could have complicated the interpretation of results had participants selected their own images from an existing set of photographs, as previous studies have done. However, due to the cross-sectional nature of the design, it is not possible to determine whether the observed results relate to the formation or the maintenance of the pair bonds tested in this study. While we only included mothers who reported a healthy parenting relationship with their child, we did not strictly assess parent-child ‘attachment’ as traditionally defined and measured. We also studied a somewhat homogeneous group of mothers/pet owners: all were women with young children (aged 2–10) and dogs that had been pets for 3–10.5 years. This homogeneity in ratings of attachment and emotional valence increased our power to detect effects of child vs. dog images on brain activation, but limited our ability to detect relationships between brain activation patterns and self-reported emotional ratings and attachment due to the restricted range of relationships. Due to scheduling constraints, we were unable to scan all women in the same menstrual phase, which has been shown to affect activation in reward-related brain areas [67]. Further research is needed to assess the generalizability of these findings to other relationships, such as those of fathers, parents of adopted children, and owners of other animal species, and to mothers with a broader range of attachment.

Summary and Conclusions Mothers reported similar emotional ratings for their child and dog, both of whom elicited greater positive emotional responses than unfamiliar children and dogs. While a common brain network involved in reward, emotion, and affiliation was activated when mothers viewed images of their child and dog, activation in the midbrain (VTA/SNi), a key brain region involved in reward and affiliation, characterized the response of mothers to images of their child and was not observed in response to images of their own dog. Mothers also had greater activation in the fusiform gyrus when viewing their own dog than when viewing their own child. These results demonstrate that the mother-child and mother-dog bonds share aspects of emotional experience and patterns of brain function, but there are also brain-behavior differences that may reflect the distinct evolutionary underpinnings of these relationships.

Acknowledgments We wish to acknowledge Dr. Margaret Pulsifer, Dr. Karlen Lyons-Ruth, Dr. Carl Schwartz and Dr. Joanne Morris for their thoughtful input on study design, Rosa Spaeth and Alexandra Cheetham for imaging study support, Drs. Satrajit Ghosh and Susan Whitfield-Gabrieli for their input on statistical methods and tools, Ms. Caroline Chan for assistance with data preparation and analysis, and Joseph Ferrara of the MGH Photography Department for providing training on digital photography.

Author Contributions Conceived and designed the experiments: LES LP RLG SMN AEE. Performed the experiments: LES LP. Analyzed the data: LES LP. Contributed reagents/materials/analysis tools: LES. Wrote the paper: LES LP. Manuscript reviews and revisions: LES LP RLG AEE SMN.