In this article, we explore a visually stimulated fMRI paradigm to image the hunger vs. satiety state in healthy lean male subjects. Our preliminary data set in this homogeneous cohort suggests that evolutionarily ancient limbic and paralimbic regions (anterior cingulate cortex and amygdala) may be involved in the interactive processing of food- vs. nonfood-related visual stimuli in the different states of hunger and satiety.

Porubska et al. (21) have very recently used fMRI to measure CNS activity during visual stimulation with food- and nonfood-related images in healthy lean individuals. Food stimuli were associated with greater CNS activity in the left orbitofrontal cortex and the left and right insular or opercular cortices.

Using positron emission tomography imaging in healthy individuals, Tataranni et al. (8) were the first to describe that hunger was associated with increased regional cerebral blood flow in specific brain regions, e.g., the hypothalamus, the anterior cingulate cortex, the insula, the parahippocampal gyrus, and the hippocampus. In contrast, satiation was associated with increased regional cerebral blood flow in, e.g., the ventromedial prefrontal cortex, the dorsolateral prefrontal cortex, and the inferior parietal lobe. Further positron emission tomography and fMRI studies have expanded on this (5,9,10,11,12) and have also provided first evidence that CNS activity in relation to eating behavior may differ between lean and obese individuals (13,14,15). Moreover, these studies have highlighted that the perception and processing of food by the CNS are multidimensional (16,17,18,19,20) and that, therefore, great care must be taken in defining a specific paradigm for the delineation of the neuroanatomical correlates of hunger, appetite, and satiety.

A better understanding of the integrative role of the central nervous system (CNS) in energy homeostasis becomes increasingly important as the prevalence of obesity and obesity-related diseases rises worldwide (1,2,3,4). From experimental studies in animals, it has long been established that certain brain areas are critical for the regulation of caloric intake, notably the prefrontal cortex, the limbic and paralimbic regions, the hypothalamus, and the brain stem. To unravel the neuroanatomical correlates of eating behavior in humans, two different neuroimaging techniques are increasingly explored (5,6). Positron emission tomography imaging uses positron-emitting radiotracers, such as ¹⁵O water, which, after intravenous injection, readily crosses the blood–brain barrier and can be used to detect changes in cerebral blood flow. Such changes accompany altered neuronal activity upon exposure to a defined stimulus. Superimposition of the functional data onto an anatomical MRI allows the mapping of the CNS regions with changes in cerebral blood flow/neuronal activity. A limitation of positron emission tomography, however, is its low spatial resolution (about 5 mm). Functional magnetic resonance imaging (fMRI) uses the blood oxygen level–dependent signal as a measure of neuronal activity and is based on the uncoupling of oxygen consumption and supply (decreased levels of deoxygenated hemoglobin and increased blood flow) with increased neuronal activity. fMRI has a spatial resolution as fine as 1 mm. Another important advantage of fMRI is its high temporal resolution, which is crucial for monitoring dynamic processes, such as the processing of visual stimuli (6,7).

The statistical evaluation was based on a least-squares estimation using the general linear model for serially autocorrelated observations (27,28,29,30). The design matrix was generated with a boxcar function (31), convolved with the hemodynamic response function, and included the individual trials and the resting baseline. Regressors for the food block, the nonfood block, and the two-back block were included in the model. The model equation, including the observation data, the design matrix, and the error term, was convolved with a Gaussian kernel of dispersion of 4 s full width at half maximum to account for the temporal autocorrelation (28). For each subject, the two sessions were modeled with a fixed-effects model and the β values for the regressors were estimated. Subsequently, the contrasts for food vs. nonfood, hunger vs. satiety, and the interaction were calculated. As the individual functional data sets were all aligned to the same stereotactic reference space, the single-participant contrast images were then entered into a second-level random-effects analysis for the relevant contrasts. The group analysis consisted of a one-sample t-test across the contrast images of all subjects that indicated whether observed differences were significantly distinct from 0 (27). Subsequently, t values were transformed into Z-scores. To protect against false-positive activations, only regions with a Z-score >3.09 (P < 0.001) and with a volume >216 mm³ (8 voxels) were considered (32,33). The additional cluster-size requirement ensured an overall imagewise false-positive rate of P < 0.05 (34).
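The construction of a block regressor as described above can be sketched as follows. This is an illustrative reconstruction, not the LIPSIA implementation: the double-gamma hemodynamic response function parameters and the example food-block onsets are assumptions, not values from the original analysis.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0       # repetition time in seconds
N_SCANS = 945  # time points per run, as in the acquisition protocol

def canonical_hrf(tr, duration=32.0):
    """Double-gamma HRF sampled at the scan rate (SPM-style defaults)."""
    t = np.arange(0, duration, tr)
    peak = gamma.pdf(t, 6)           # positive response peaking around 5 s
    undershoot = gamma.pdf(t, 16)    # late undershoot
    hrf = peak - 0.35 * undershoot
    return hrf / hrf.sum()

# Boxcar: 1 during 45-s food blocks, 0 elsewhere (illustrative onsets only).
boxcar = np.zeros(N_SCANS)
food_onsets_s = [45.0, 225.0, 405.0]           # hypothetical example onsets
for onset in food_onsets_s:
    start = int(onset / TR)
    boxcar[start:start + int(45 / TR)] = 1.0   # 45 s of stimulation

# Convolve with the HRF and trim to the run length: one design-matrix column.
regressor = np.convolve(boxcar, canonical_hrf(TR))[:N_SCANS]
```

Analogous columns for the nonfood and two-back blocks, plus the baseline, would complete the design matrix before the least-squares fit.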

To align the functional slices onto a 3D stereotactic coordinate reference system, a rigid linear registration with six degrees of freedom (three rotational, three translational) was performed. The rotational and translational parameters were acquired on the basis of the 3D-modified driven equilibrium Fourier transform and the echo-planar imaging T1 slices to achieve an optimal match between these slices and the individual 3D reference data set, which had been acquired for each subject during a previous scanning session. The 3D-modified driven equilibrium Fourier transform volume data set, with 160 slices and 1-mm slice thickness, was standardized to the Talairach stereotactic space (25). The same rotational and translational parameters were normalized, i.e., transformed by linear scaling to a standard size. The resulting parameters were then used to transform the functional slices using trilinear interpolation, so that the resulting functional slices were aligned with the stereotactic coordinate system. Subsequently, a nonlinear normalization was performed (26).
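A minimal sketch of applying such a rigid six-degree-of-freedom transform (three rotations, three translations) with trilinear interpolation might look as follows; this is a generic illustration, not the registration code actually used, and the function and parameter names are our own.

```python
import numpy as np
from scipy.ndimage import affine_transform

def rigid_transform(volume, angles_rad, translation_vox):
    """Apply a rigid transform (rotations about x, y, z plus a translation,
    in voxel units) to a 3D volume with trilinear interpolation (order=1)."""
    ax, ay, az = angles_rad
    rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    rotation = rz @ ry @ rx
    # affine_transform maps output coordinates back to input coordinates,
    # so pass the inverse rotation and the matching offset.
    inv = rotation.T
    offset = -inv @ np.asarray(translation_vox)
    return affine_transform(volume, inv, offset=offset, order=1)

# Sanity check: identity parameters leave the volume unchanged.
vol = np.random.default_rng(0).random((16, 16, 16))
same = rigid_transform(vol, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```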

The fMRI data were processed using LIPSIA software (24). This software package contains tools for preprocessing, registration, statistical evaluation, and presentation of fMRI data. Functional data were motion-corrected offline with the Siemens motion correction protocol (Siemens, Erlangen, Germany). To correct for the temporal offset between the slices acquired in one scan, a cubic-spline interpolation was applied. A temporal high-pass filter with a cutoff frequency of 1/360 Hz was used for baseline correction of the signal, and a spatial Gaussian filter with 7-mm full width at half maximum was applied.
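The two filtering steps can be sketched as follows, assuming a 3-mm isotropic voxel size after resampling (an assumption on our part) and using a simple running-mean drift removal to stand in for the high-pass filter; neither detail is taken from the LIPSIA implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter1d

FWHM_MM = 7.0     # spatial smoothing kernel, from the text
VOXEL_MM = 3.0    # assumed isotropic voxel size after resampling
TR_S = 2.0        # repetition time
CUTOFF_S = 360.0  # high-pass cutoff period (1/360 Hz), from the text

# FWHM -> Gaussian sigma (FWHM = sigma * 2 * sqrt(2 * ln 2)), in voxels.
sigma_vox = FWHM_MM / (2.0 * np.sqrt(2.0 * np.log(2.0))) / VOXEL_MM

def preprocess(data_4d):
    """data_4d: array of shape (time, x, y, z). Smooth each volume
    spatially, then remove slow baseline drift by subtracting a running
    mean spanning the cutoff period, adding back the temporal mean."""
    smoothed = np.stack([gaussian_filter(vol, sigma_vox) for vol in data_4d])
    window = int(CUTOFF_S / TR_S)  # 180 samples ~ 360 s
    baseline = uniform_filter1d(smoothed, size=window, axis=0)
    return smoothed - baseline + smoothed.mean(axis=0)

# Constant data pass through unchanged.
out = preprocess(np.ones((10, 4, 4, 4)))
```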

The experiment was carried out on a 3-T scanner (Siemens TRIO, Erlangen, Germany). In each session, 20 axial slices (19.2-cm field of view, 64 × 64 matrix, 4-mm thickness, 0.8-mm spacing), parallel to the anterior commissure–posterior commissure plane and covering the whole brain, were acquired using a single-shot, gradient-recalled echo-planar imaging sequence (repetition time 2,000 ms, echo time 30 ms, 90° flip angle). One functional run with 945 time points was acquired, with each time point sampling over the 20 slices. Prior to the functional run, 20 anatomical T1-weighted 3D-modified driven equilibrium Fourier transform (22,23) images (data matrix 256 × 256, repetition time 1.3 s, echo time 10 ms) and 20 T1-weighted echo-planar images with the same spatial orientation as the functional data were acquired.

In each session, two different experimental blocks were presented: one with pictures of ready-to-eat edible objects (food), and one with pictures of items that were clearly unrelated to food (nonfood). Fifty food and 50 nonfood images were selected and matched for proportionality and graspability (examples are shown in Figure 1a). The 100 images that we used are available for download from our website at http://innere.uniklinikum-leipzig.de/download/food.ppt (food items) and http://innere.uniklinikum-leipzig.de/download/non_food.ppt (nonfood items). None of the nonfood objects was related to a hand-to-mouth action. Each block consisted of 45 s of presentation of food or nonfood visual stimuli, followed by 30 s of an attentional task ("two-back task"), followed by a 15-s resting baseline, i.e., a black screen (liquid crystal display projector; see Figure 1b). Each block thus lasted 90 s. Ten blocks with food pictures and 10 blocks with nonfood pictures were presented in pseudorandom order. At the beginning of each session, the screen remained blank for 45 s. The total duration of each experimental session was 30 min and 45 s. During the food or nonfood blocks, subjects were asked to press a button (right index finger) whenever a "target picture" was presented (e.g., an image frame with no object). This allowed us to verify that the participants were actually paying attention to the pictures. During the attentional two-back task, a series of letters was presented, each lasting 1 s and followed by a black screen for 1.5 s. The subject was asked to press a button whenever the current letter matched the one shown two steps earlier ("two-back task"). The two-back task was presented to distract attention from the previously shown pictures. All stimuli were displayed with a liquid crystal display projector on a back-projection screen mounted in the bore of the magnet behind the participant's head. Participants viewed the screen through mirror glasses.
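The session timing above can be checked with a short script; the block structure and durations are taken from the text, while the seeded shuffle merely stands in for whatever pseudorandomization was actually used.

```python
import random

BLANK_S = 45                                # blank screen at session start
PICTURES_S, TWOBACK_S, REST_S = 45, 30, 15  # phases within each block
BLOCK_S = PICTURES_S + TWOBACK_S + REST_S   # 90 s per block

def build_session(seed=0):
    """Return (events, total_duration_s); each event is
    (label, onset_s, duration_s)."""
    rng = random.Random(seed)
    order = ["food"] * 10 + ["nonfood"] * 10
    rng.shuffle(order)                      # stand-in pseudorandom order
    events, t = [("blank", 0, BLANK_S)], BLANK_S
    for condition in order:
        events.append((condition, t, PICTURES_S))
        events.append(("two-back", t + PICTURES_S, TWOBACK_S))
        events.append(("rest", t + PICTURES_S + TWOBACK_S, REST_S))
        t += BLOCK_S
    return events, t

events, total = build_session()
# total = 45 + 20 * 90 = 1,845 s, i.e., the 30 min 45 s reported above.
```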

Twelve healthy, nonsmoking, nonvegetarian male subjects (mean age 26.42 years, range 21–29 years; BMI 18.4–24.7) participated in this study. All subjects had normal or corrected-to-normal vision and were native German speakers. None of the subjects was taking medication at the time of the study. Written informed consent had been obtained from all subjects prior to the study. Each subject was scanned twice, on separate occasions, once in the hunger and once in the satiety condition. For the hunger session, participants were not allowed to eat for at least 14 h (noncaloric beverages only). For the satiety session, the fMRI measurement was conducted 1 h after ad libitum ingestion of a mixed meal (large pizza). Subjects were instructed prior to the actual experimental session and were then positioned supine in the scanner.

We concentrated on brain areas that (i) showed a response to visual stimulation (food and nonfood images) and (ii) exhibited differential activity in hunger and satiety. Areas showing a significant interaction (P < 0.05) between the state of satiation and the quality of visual stimulation were the anterior cingulate cortex, the superior occipital sulcus, and the right amygdala (Table 3, Figure 2c). From these regions, the estimated β values were extracted and averaged across subjects for each condition and stimulus quality. The results are shown in Figure 3. With the exception of the left anterior cingulate cortex, these regions showed signal increases to the visual stimuli, to a greater extent for food-related than for nonfood-related pictures. This effect was more pronounced in hunger than in satiety.

When comparing the food vs. the nonfood condition, we found significant differences in the activation pattern across a widespread network. In the food condition, prominent increases in CNS activity were observed in the left and right insulae, the left striate and extrastriate cortex, the anterior midprefrontal cortex, the thalamus, and the left cerebellum. Smaller activity increases were found in the left inferior parietal lobe and the posterior and anterior cingulate cortex. For the nonfood condition, the main changes in activation were observed in the right parietal lobe and the left and right middle temporal gyri (Table 2, Figure 2b).

Significantly enhanced CNS activity during the hunger condition was found in the left striate and extrastriate cortex as well as in the right anterior lateral orbitofrontal cortex and the left orbitofrontal cortex (Table 1, Figure 2a). In the satiety state, increased activity was observed only in the left posterior middle temporal gyrus.

Accuracy levels (percentage of correct responses) in target recognition and two-back recognition were at ceiling (>90%). Accuracy and mean reaction time for target recognition did not differ significantly between hunger and satiety (P = 0.18 and P = 0.352, respectively). For the two-back task, the error rate and the mean reaction time were likewise not significantly different between hunger and satiety (P = 0.301 and P = 0.502, respectively).

Body weight of the subjects was 68.9 ± 2.4 kg in the hungry state vs. 69.8 ± 2.4 kg in the satiated state (P = 0.01). Hunger ratings on the visual analog scale (0–100 mm, "not hungry" to "very hungry") were 77 ± 10 mm in the hungry state and 19 ± 5 mm in the satiated state (P < 0.001).

Discussion

Neuroimaging is emerging as an important tool for the neuroanatomical dissection of the CNS regions of energy homeostasis. Here, the experimental design is of crucial relevance. States of hunger and satiety are relative terms that are affected by a variety of conditions. Thus, imaging results will most likely vary with the type of stimulus presented (food vs. nonfood; favorite vs. nonfavorite foods; low- vs. high-caloric, carbohydrate-rich, or fatty foods), how the food is presented (picture, smell, taste, talk), and the circumstances under which food presentation takes place (e.g., fasting, nonfasting, hypoglycemic, euglycemic).

In a first series of experiments, we created a hunger vs. satiety paradigm using visual stimuli (food-neutral and food-related objects) in healthy lean male subjects to explore brain areas that are differentially activated by food in different satiation states. Using fMRI, which allows high spatial and temporal resolution of changes in brain activity, we found significant differences between the conditions of hunger and satiety in the orbitofrontal cortices. This is in agreement with the landmark study by Tataranni et al. (8) and subsequent findings of other groups (16,17,35), suggesting a crucial role of the orbitofrontal cortex in the subjective feeling of hunger and the desire for food. Moreover, it has been proposed that the orbitofrontal cortex is relevant for "coding" the reward value of food and represents the subjective pleasantness of food (36,37).

Comparison of CNS activity related to visual stimulation in our setting showed activation in the cingulate cortex, both insulae, the striate and extrastriate cortex, and the anterior midprefrontal regions for food images, and in the right parietal lobe and the left and right posterior middle temporal gyri for nonfood images.

We are well aware of the limitations of our study, such as the relatively small cohort size, the inclusion of only young, lean, and healthy men, and a possibly too liberal statistical threshold. Nevertheless, bearing these caveats in mind, many of our findings fit well with emerging concepts of brain areas activated by hunger or satiety.

For example, the heterogeneous activation pattern may reflect the complexity of processing visual stimuli, which, in their wake, may also arouse other perceptual qualities (e.g., taste, smell, and reward in the case of food images). Although our results are not in full agreement with the findings of a recent fMRI study, which reported specific activation in the left orbitofrontal cortex only after visual stimulation with food- vs. nonfood-related objects (21), we also observed the specific activation of the right and left insular regions described in that study. In addition, significant changes in insular cortex activation after visual stimulation have previously been described by Killgore et al. (10) in a study examining the effect of high- and low-caloric food images on CNS activity.

Moreover, our finding of insular cortex activation fits well with the current notion of the "insula" as part of a circuitry linking the orbitofrontal cortex, the hypothalamus, and the limbic system. Besides exposure to real food and visual presentation of food images, activation of the insular cortex has also been demonstrated when food is presented or perceived "by imagination," smell, taste, or texture (9,10,16,17). This is in agreement with the putative role of the insula in integrating visual, olfactory, gustatory, and somatosensory as well as visceral inputs. Furthermore, and in the same context, the insula is thought to coordinate motivated cognition and emotional behavior, whereby different functional aspects may be attributed to different anatomical insular subregions (38,39).

The novel finding of our study is that specific CNS areas are involved in the interactive processing of food- vs. nonfood-related visual stimuli in the different states of hunger and satiety. These include, besides the left anterior cingulate cortex, the right superior occipital sulcus and the amygdala. Animal studies have shown that the amygdala plays a crucial role in the coordination of appetitive behaviors. This is in agreement with the reasoning that food (even when presented only as an image) will cause a larger CNS "hunger response" in evolutionarily conserved brain areas sustaining survival, in particular because visual presentation of food was possibly the first form of food contact. Notably, amygdalectomy in animals is associated with indiscriminate oral sampling of food and nonfood objects (1,41). Our findings, in part, confirm a previous fMRI investigation by LaBar et al. (42). However, in contrast to our study design, LaBar et al. investigated a more heterogeneous group of participants (female and male individuals) who were not matched for BMI. As functional neuroimaging data have been shown to differ considerably with gender and BMI (6,14), we included only lean male subjects with a defined BMI (19–24) in our study. In addition, LaBar et al. (42) explored a sequential hunger–satiety design, which may introduce a potential bias, whereas we used a randomized design with a defined fasting period of 14 h and imaging on separate, independent occasions. Nevertheless, a significant interaction between the state of satiety and the quality of visual stimulation in the vicinity of the right amygdala has been a consistent finding in both studies.

Imaging of hunger remains an unresolved issue. Before we can start to unravel the factors regulating specific areas of the brain, such as hormones, substrates, or diseases, some agreement on a well-defined experimental setting is desirable. Several studies are now available and point to overlapping brain regions activated by either hunger or satiety under a variety of experimental conditions (smelling, tasting, eating, imagining, and seeing food). A simple, uniform, and widely usable procedure, similar to the hyperinsulinemic–euglycemic clamp for measuring insulin sensitivity, should be developed. In view of its methodological simplicity, visual stimulation similar to that used here might meet this need. With the progress made in neuroanatomical imaging and the discovery and applicability of endocrine "tools" that modulate caloric intake, we believe that translating molecular and animal-based concepts of satiety control into human physiology and pathophysiology will be an elementary step toward the development of novel strategies for the prevention and treatment of obesity.