Evidence from both behavioral and neuropsychological studies suggests that different organizational principles govern the semantic representations of abstract and concrete words. The reviewed neuroimaging studies provide new evidence about the role of brain areas of the semantic network in encoding different types of information during the processing of abstract and concrete concepts, better characterizing the neural underpinnings and organizational principles of the semantic representation of these types of words.

INTRODUCTION Semantic representation—the knowledge we have of the world—is a fundamental component of our mind. Researchers across the cognitive sciences have reached an interdisciplinary consensus that human semantic representation may be based on similarity/relatedness measures—i.e., the degree to which concepts are close in meaning. On the one hand, embodied (or experiential) cognition theories treat meaning as based on perceptual, motor, and affective states arising from our direct sensory experience and actions. On the other hand, distributional theories treat meaning as based on the statistical distribution of words—or, more generally, any sort of regularity—across spoken and written language (Andrews et al. 2009). One similarity measure proposed in the embodied view can be inferred from the overlap between the semantic features of each concept pair (i.e., semantic feature-based similarity; Montefinese et al. 2018), often obtained by asking participants to list the features they consider relevant in describing the meaning of a concept (Montefinese et al. 2013). For example, the words mouse and rat can be considered semantically similar because they share a great number of semantic features, such as have a tail, have whiskers, are gray, and they belong to the same semantic category (i.e., rodents). Similarity measures proposed in the distributional view, instead, can be inferred from the distribution of lexical cooccurrence frequencies of each concept pair across word corpora or texts (i.e., language-corpus-based similarity; Landauer and Dumais 1997). In this view, concepts occurring in similar contexts can be considered semantically related, such as, for example, the words mouse and trap. Other relatedness measures, like association strength, may offer even further promise. This measure represents the conditional probability of generating a word in response to another in a free word association task.
For example, in word association tasks the word cheese is often produced as an associate in response to the word mouse. Semantic relatedness measures are also related to word concreteness: how directly the represented concept is related to sensory experience. Word concreteness is usually assessed by participants on Likert scales. On one side of the scale lie concrete words, that is, words referring to something that exists in reality and can be experienced immediately through the senses (smelling, tasting, touching, hearing, seeing). For example, when we think about the word cat, the properties that come to mind are usually has whiskers, meows, is furry, and so forth. Abstract words lie on the opposite side of the scale and refer to words whose meaning cannot be experienced directly but can be defined by other words and is grounded in internal sensory experience and linguistic information, such as the word love. While concrete words have direct sensory referents (Crutch and Warrington 2005) and greater availability of contextual information (i.e., they are easier to contextualize), abstract words tend to be more emotionally valenced (Kousta et al. 2011) and less imageable (i.e., they have low sensorimotor grounding). Different organizational principles have also been argued to govern the semantic representations of concrete and abstract words: concrete words are predominantly organized by featural similarity measures and abstract words by associative relations and cooccurrence (Crutch and Warrington 2005). A long tradition of neuroimaging studies has provided support for a distributed neural network coding the conceptual features learned from experience, corresponding to the sensory, motor, and affective cortical systems, in addition to brain convergence zones where information from multiple modalities is integrated (e.g., in the inferior parietal lobe) and a central amodal hub localized in the anterior temporal lobes (ATL).
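The three relatedness measures introduced above (semantic feature overlap, lexical cooccurrence, and association strength) can be made concrete with a minimal sketch. The feature sets, cooccurrence counts, and response frequencies below are invented for illustration and are not taken from the cited norms:

```python
# Toy illustration of three relatedness measures; all data are invented.
import math

def feature_similarity(features_a, features_b):
    """Semantic feature-based similarity: overlap between the feature
    sets of two concepts (here, the Jaccard index)."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b)

def corpus_similarity(vec_a, vec_b):
    """Language-corpus-based similarity: cosine between vectors of
    lexical cooccurrence counts across contexts."""
    dot = sum(x * y for x, y in zip(vec_a, vec_b))
    norm_a = math.sqrt(sum(x * x for x in vec_a))
    norm_b = math.sqrt(sum(y * y for y in vec_b))
    return dot / (norm_a * norm_b)

def association_strength(counts, cue, response):
    """Association strength: conditional probability of producing
    `response` given `cue` in a free word association task."""
    return counts[cue].get(response, 0) / sum(counts[cue].values())

mouse = {"has a tail", "has whiskers", "is gray", "is small"}
rat = {"has a tail", "has whiskers", "is gray", "is large"}
print(feature_similarity(mouse, rat))  # 0.6: 3 shared features of 5 total

# Hypothetical cooccurrence counts of "mouse" and "trap" in 3 contexts
print(corpus_similarity([5, 2, 0], [4, 1, 0]))  # high: similar contexts

# 100 hypothetical participants responding to the cue "mouse"
norms = {"mouse": {"cheese": 40, "cat": 35, "trap": 25}}
print(association_strength(norms, "mouse", "cheese"))  # 0.4
```

The specific choice of overlap and distance functions (Jaccard, cosine) is one common option among several; the cited studies each define their own variants.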
For example, with regard to differences in brain activity during semantic processing of abstract and concrete concepts, a meta-analysis of functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies (Wang et al. 2010) showed that abstract concepts elicited greater activity in the inferior frontal gyrus (IFG) and middle temporal gyrus (MTG) compared with concrete ones. Conversely, concrete concepts elicited greater activity in the posterior cingulate, precuneus, fusiform gyrus, and parahippocampal cortex (PHC), with similar but nonsignificant trends in temporal, occipital, and parietal regions. Together, these results indicate a greater engagement of the verbal system for abstract words and a greater engagement of the perceptual and mental image generation systems for concrete concepts. Although it is clear that different brain areas are involved in the semantic processing of abstract and concrete words, it is still a matter of debate which brain areas encode the different types of information underlying the meaning of abstract and concrete words. The aim of this Neuro Forum is to examine three neuroimaging studies (Della Rosa et al. 2018; Martin et al. 2018; Wang et al. 2018) that tackled this issue. By using a state-of-the-art technique (representational similarity analysis; Kriegeskorte et al. 2008), the first two studies (Martin et al. 2018; Wang et al. 2018) investigated the neural substrates associated with similarity measures for concrete and abstract word meaning, respectively. The third study examined the neural substrate associated with the imageability and contextual availability dimensions in abstract and concrete word meaning (Della Rosa et al. 2018).
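Representational similarity analysis, the technique used in the first two reviewed studies, tests whether the similarity structure of a model space (e.g., semantic feature norms) matches the similarity structure of neural activation patterns. The general logic can be sketched as follows; the random arrays below are placeholders standing in for real feature norms and fMRI patterns, and this sketch is not the authors' actual analysis pipeline:

```python
# Minimal sketch of representational similarity analysis (RSA) logic.
# Random placeholder data: 40 concepts x 10 model features, and
# 40 concepts x 200 voxels of (simulated) activation patterns.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
model_vectors = rng.normal(size=(40, 10))      # model space
neural_patterns = rng.normal(size=(40, 200))   # neural space

# Build representational dissimilarity matrices (RDMs): pairwise
# correlation distance between concepts within each space. pdist
# returns the condensed upper triangle (40*39/2 = 780 pairs).
model_rdm = pdist(model_vectors, metric="correlation")
neural_rdm = pdist(neural_patterns, metric="correlation")

# RSA statistic: rank correlation between the two RDMs. A significant
# positive correlation means concepts that are similar in the model
# space evoke similar neural patterns.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-neural RDM correlation: rho={rho:.3f}, p={p:.3f}")
```

In practice this comparison is run within anatomical regions of interest or in a searchlight over the whole brain, which is how the reviewed studies localize where a given similarity measure is encoded.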

SIMILARITY MEASURES IN ABSTRACT AND CONCRETE WORD MEANING Martin and colleagues (2018) investigated where visual and feature-based representations of concrete concepts are stored in the brain and whether and where these two kinds of features are integrated. To these aims, during fMRI scanning, 16 native English speakers performed two feature-verification tasks (i.e., deciding whether a feature is true of a given concept) on 40 concrete concepts visually presented in the form of words. The authors justified their choice of words rather than pictures as stimuli as ensuring that the processing of visual features was not driven by the physical input but by the extraction of this information from preexisting conceptual representations. The first task biased attention toward the processing of visual features of concepts (e.g., Is the object round?); the second biased attention toward the processing of more general semantic features of concepts (e.g., Is the object living?). Using representational similarity analysis, the authors showed for the first time that only the perirhinal cortex represented both visual and semantic conceptual features, regardless of whether the task biased attention toward visual or conceptual features. As pointed out by the authors, this result was predictable given the connectivity of the perirhinal cortex with the temporal pole, PHC, lateral occipital cortex, and other sensory regions of the neocortex, which makes it the best candidate for integrating both visual and semantic conceptual features of objects. In line with the idea that the ATL is a transmodal convergence zone abstracting conceptual information from featural cooccurrence, the authors found that this region represented only feature-based similarity, regardless of the task. Critically, the authors found that the PHC responded only to the conceptual features in the nonvisual task.
Consistent with previous literature on the role of the PHC in processing contextual associations, the authors speculated that this result could be due to the fact that objects occurring in the same contexts often share many conceptual features but few visual features. In sum, Martin and colleagues (2018) clarified the role of some areas of the semantic network in the processing of visual and semantic feature-based similarity for concrete concepts. Little is known about how abstract concepts are represented in the brain, because most prior research on semantic representation adopting representational similarity analysis has focused on the investigation of concrete objects. To fill this gap, Wang and colleagues (2018) investigated which information (feature-based similarity vs. language-corpus-based similarity) describes abstract concepts in the brain and where this information is encoded. To this end, six healthy native Chinese speakers were enrolled in a four-session fMRI study, in which they performed a familiarity judgment task (i.e., deciding whether a word is familiar or not) on visually presented words denoting 360 abstract concepts. Using the same analytical approach as Martin and colleagues (2018), Wang and colleagues (2018) found that for abstract concepts both language-corpus-based and feature-based similarities were associated with patterns of activation in a distributed brain network. In particular, results revealed that the pattern of language-corpus-based similarity across concepts significantly correlated with the neural pattern in the left lateral temporal, inferior parietal, and inferior frontal regions, which form the classical language system. However, this pattern of results was not significant in the homologous right regions, suggesting that, unlike the comprehension of natural speech, which recruits both hemispheres, word-word cooccurrence statistics in language are mostly encoded in the left hemisphere.
A more distributed pattern was found for feature-based similarity: results showed significant correlations with a distributed network including the left triangular portion of the IFG, intraparietal sulcus, inferior temporal gyrus, posterior MTG, and supramarginal gyrus. The authors posit that these regions could integrate and coordinate the various semantic features represented in segregated neural systems. Interestingly, some of these regions are involved in emotion processing, in line with the idea that affective content is particularly relevant to abstract concept representation (Kousta et al. 2011). Consistent with this idea, further analysis showed that the valence feature dimension explained more of the brain activation pattern than the other feature types. This result is in line with behavioral evidence revealing that abstract words are more emotionally valenced than concrete ones, even when variables such as imageability and context availability are held constant (Kousta et al. 2011). To summarize, this study suggested that abstract words are organized according to the principles of both distributional and experiential data and that these types of information are encoded in different brain areas of the semantic network.

IMAGEABILITY AND CONTEXTUAL AVAILABILITY IN ABSTRACT AND CONCRETE WORD MEANING In addition to the valence dimension and the language-corpus-based and semantic feature-based similarities, other types of information may serve as principles organizing the semantic representation of abstract and concrete concepts. While the two studies reviewed above each investigated the neural underpinnings and organizational principles of one type of word (concrete concepts in Martin et al. 2018 and abstract concepts in Wang et al. 2018), the third reviewed study included both abstract and concrete concepts. Della Rosa and colleagues (2018) aimed at unraveling the role of the left IFG in the semantic representation of abstract concepts by directly comparing abstract and concrete concepts on the basis of the imageability and contextual availability dimensions. Twenty-seven native Italian speakers performed a lexical decision task (i.e., deciding whether a string of letters is a word or not) on 70 words (35 abstract and 35 concrete) and 70 pseudowords. Results showed a crucial role of the left IFG for words with low imageability and contextual availability, as well as in the semantic representation of abstract concepts, corroborating the idea that abstract concepts are characterized by low imageability and contextual availability. Classically, the left IFG has been linked to verbal semantic processing supporting abstract rather than concrete representations (Wang et al. 2010). The authors proposed that this region may be the neural crossroads for differentiating abstract from concrete knowledge. Psychophysiological interaction and regression analyses showed that activity in the left MTG and angular gyrus, brain areas known to be involved in semantic processing (Wang et al. 2010), significantly predicted activity in the left IFG only for abstract concepts with low imageability (and not for those with low contextual availability).
These results are in line with the literature proposing a role of the angular gyrus in the processing of concrete concepts and the posterior MTG in the processing of abstract concepts (Wang et al. 2010). To sum up, this study suggested a role of the left IFG as a neural crossroads between different types of information necessary to differentiate between abstract and concrete concepts.

IMPLICATIONS AND FUTURE DIRECTIONS Together, these studies shed new light on how abstract and concrete concepts are represented in the brain. Across different languages (English, Chinese, and Italian) and tasks (semantic vs. lexical), they clarified the role of some brain areas, such as the ATL and IFG, belonging to the left-lateralized semantic network. Moreover, they have better characterized the word meaning structure of both abstract and concrete words. However, the measures tested in the reviewed studies may be partial and unable to fully capture word meaning. In this regard, for example, Bruni and colleagues (2011) demonstrated that distributional models are improved by augmenting them with perceptual information related to the referents of words. Moreover, Andrews and colleagues (2009) showed that semantic representations learned by combining experiential and distributional data were more similar to human representations than when the different data sources were treated independently. Further research could test these promising models by integrating experiential and distributional data and testing whether convergence zones of the semantic network are involved in encoding this kind of information. Using representational similarity analysis, future studies could also directly compare abstract and concrete conceptual representations and investigate whether the involvement of experiential and distributional similarity measures in their semantic representation is modulated by task demands (e.g., implicit vs. explicit semantic encoding), context (e.g., social vs. nonsocial), and presentation modality (e.g., spoken vs. written). This could provide information about the flexibility and dynamics of the semantic representation of abstract and concrete concepts.

GRANTS This work was supported by the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Actions (grant number 702655) and by the University of Padova (SID 2018).

DISCLOSURES No conflicts of interest, financial or otherwise, are declared by the author.