This post contains most of the contributions to the Round Table “Quo vadis linguistics in the 21st century”, held at the Societas Linguistica Europaea 2014 conference at the Faculty of English, Adam Mickiewicz University, in Poznań. Its aim was to discuss the future of linguistics as a discipline in Europe and the world. The motivation behind this topic was the conviction that, as linguists, we need to make a strong statement about the essential role of our field in facing global societal challenges, and in bringing together insights from the humanities and the sciences. We must provide a rationale and support for the development and enhancement of linguistic studies. Very often the role of linguistic studies is underestimated or misunderstood, and linguists are treated as those who “speak a lot of languages” or, alternatively, as those who “teach languages”. As much as the above may very well be true, there is much more to linguistics than that, and this missionary and explanatory message should be delivered both to researchers in other fields and to the public. The statements and discussion at the Round Table were intended to serve this purpose. The main speakers were Peter Hagoort and Martin Haspelmath; the discussants were Martin Hilpert, Katarzyna Bromberek-Dyzman and Eitan Grossman.

Peter Hagoort is director of the Max Planck Institute for Psycholinguistics, Nijmegen, and the founding director of the Donders Institute, Centre for Cognitive Neuroimaging, a cognitive neuroscience research centre at the Radboud University Nijmegen. He is also professor in cognitive neuroscience at the Radboud University Nijmegen. His research interests relate to the human language faculty and how it is instantiated in the brain. In his research he applies neuroimaging techniques such as ERP, MEG, PET and fMRI. At the Max Planck Institute he heads the department on the Neurobiology of Language. For his scientific contributions, he received the Hendrik Muller Prize from the Royal Netherlands Academy of Arts and Sciences (KNAW), the “Knighthood of the Dutch Lion” from the Dutch Queen, the NWO Spinoza Prize, an honorary doctorate in science from the University of Glasgow, the Heymans Prize, and the Academy Professorship Prize from the KNAW. Peter Hagoort is a member of the Royal Netherlands Academy of Arts and Sciences (KNAW) and of the Academia Europaea.
Martin Haspelmath is Professor at the Max Planck Institute for Evolutionary Anthropology, Leipzig. He studied linguistics in Vienna, Cologne, Buffalo and Moscow, and received his Ph.D. and habilitation degrees from the Freie Universität Berlin. Before moving to Leipzig in 1998, he worked at the Otto-Friedrich-Universität Bamberg and the Università degli Studi di Pavia. His research interests are primarily in broadly comparative and historical morphology and syntax, as well as language contact. He started out with a focus on European languages, in particular Lezgian, but more recently he sees himself as a generalist whose primary concern is the discovery and explanation of linguistic universals (he is co-editor of The World Atlas of Language Structures, 2005). Together with several colleagues he has pioneered the approach of language comparison via specialist consortia (Loanword Typology, Atlas of Pidgin and Creole Language Structures, The Leipzig Valency Classes Project). He is the editor of the scholarly blog Diversity Linguistics Comment, in which the present report appears, and a well-known advocate of Open Access scholarly publishing. Martin Hilpert is Assistant Professor of English Linguistics at the University of Neuchâtel. He holds a PhD from Rice University and did postdoctoral research at the International Computer Science Institute in Berkeley and at the Freiburg Institute for Advanced Studies.
He is interested in cognitive linguistics, language change, construction grammar, and corpus linguistics. Katarzyna Bromberek-Dyzman is Assistant Professor in the Department of English Pragmatics at the Faculty of English, Adam Mickiewicz University, Poznań. She is also head of the Language and Communication Laboratory at the Faculty of English. Her research is in experimental pragmatics, affective pragmatics, affective neuroscience, pragmatic inferencing and ‘mind-reading’, and the language-affect interface. She is a member of the Steering Committee of Euro-XPRAG (Experimental Pragmatics in Europe), a Research Networking Programme funded by the European Science Foundation. Eitan Grossman works at the Hebrew University of Jerusalem. His fields of interest are functional, typological and sociolinguistic approaches to language; diachronic linguistics, especially grammaticalization studies; language contact; syntax and text-linguistics; Coptic and Egyptian; Yiddish; and Celtic.

Below you will find the speakers’ short reports on their contributions to the Round Table.

Peter Hagoort “Linguistics quo vadis?”

Martin Haspelmath: “The future of linguistics: Two trends and two hopes”

Martin Hilpert: “The big to-do list: Defining challenges for 21st century linguistics” (video report)

Katarzyna Bromberek-Dyzman: “Linguistics needs to twist: Go experimental and interdisciplinary”

Eitan Grossman: “Linguistics across its (internal and external) borders”

**********************

Peter Hagoort: “Linguistics quo vadis?”

Let me start with an observation. Some forty years ago linguistics played a central role in cognitive science. For instance, the well-known report of the Sloan Foundation on Cognitive Science (1978) argued that the major fields contributing to the newly established area of cognitive science included philosophy, psychology, computer science, anthropology, neuroscience and linguistics. In answering the question of what cognitive science stands for, the report presents language as the prime example of a cognitive system (“What is a cognitive system? This report concentrates on language as a prime example.”; Sloan Foundation report, p. viii). In that era, linguistics was seen as a key player in cognitive science. Although today language is still an important topic at cognitive science and cognitive neuroscience meetings, studies on language-relevant topics are no longer strongly influenced by developments in linguistics. I consider this an unfortunate situation. First, many current studies on language in cognitive neuroscience could profit from more linguistic sophistication. Second, linguists could help cognitive (neuro)scientists to be more advanced in their thinking about representational structures in the human mind. Even if these structures are not necessarily linguaform in nature, linguists have built up a lot of expertise in thinking about representational issues that could be tremendously useful for other fields of cognitive (neuro)science.

In the remainder I will first present my diagnosis: why did linguistics get marginalized in cognitive science? Then I will give some suggestions for improvement. Part of the diagnosis is related to changes in our views on the nature of the human mind. Some forty years ago the commonly held belief was that the relevant mental representations are propositional or linguaform in character. Hence, studying linguistic structures was vital for our understanding of the human mind. This is, however, no longer the prevailing view. Representations are these days often seen as “high-dimensional geometric manifolds. They are maps.” (Paul Churchland, Plato’s Camera, 2012). On this view, language-like structures do not embody the basic machinery of cognition: “the human neuronal machinery differs from that of other animals in various small degrees, but not in fundamental kind” (Churchland). An example of this shift in thinking is the so-called imagery debate. Influential cognitive scientists (e.g. Stephen Kosslyn) have argued that visual mental imagery is an internal, visual experience. On this account, visual mental images are structurally analogous to visual representations, and are caused, at least in part, by psychological processes shared with the visual system. Others (e.g. Zenon Pylyshyn) argue that all thoughts, including mental images, are propositional. Based on many empirical studies of mental imagery, the conclusion is that Kosslyn’s position is strongly supported, at the expense of a propositional view of mental imagery. In short, the idea that the language of thought consists mainly of mental objects with a sentence-like structure is no longer the prevailing view in cognitive (neuro)science.

The other component of the diagnosis is related to developments internal to the field of linguistics. The field as a whole has become internally oriented, partly due to the wars between different linguistic schools. With exceptions, linguists have turned their backs on the developments in cognitive (neuro)science, and alienated themselves from what is going on in adjacent fields of research. The huge walls around the different linguistic schools have prevented the creation of a common body of knowledge that the outside world can recognize as the shared space of problems and insights of the field of linguistics as a whole. When even at major linguistics conferences the program contains presentations such as “A short tour through the minefield of linguistic terminology” (SLE, 2014), one should realize that this state of affairs is a serious threat to the influence that linguists exert on the research agenda of cognitive science broadly. In the absence of an agreed-upon taxonomy of the central linguistic phenomena, it will be nearly impossible for a researcher from another field to relate to, or exploit, linguistic knowledge successfully. Compare this to a related domain in cognitive science: memory research. Although there are different theories of memory, there is substantial agreement among memory researchers about the basic memory taxonomy (e.g. procedural memory, semantic memory, episodic memory) and the phenomena that are covered by this taxonomy. A similar situation is strongly needed in linguistics. The major linguistic societies could do us a great favour if they were able to organize a coherent taxonomy of the central linguistic phenomena.

Another reason why linguistics has lost some of its credibility in its scientific Umwelt is disagreement about the methodological standards one should adhere to. It is not universally accepted among linguists, in contrast to researchers in most other fields of cognitive (neuro)science, that linguistics should adhere to the same quantitative standards (including the proper statistics) as the rest of cognitive science. For instance, Gibson & Fedorenko’s 2010 paper in TICS, “Weak quantitative standards in linguistics research”, triggered a host of often unfriendly replies from linguists. One of the counterarguments that I read went as follows: “When linguists evaluate contrasts between two (or more) sentence types, they normally run several different examples in their heads, they look for potential confounds, and consult other colleagues (and sometimes naive participants), who evaluate the sentence types in the same fashion. The fact that this whole set of procedures (aka, experiments) is conducted informally does not mean it is not conducted carefully and systematically.” Running sentences in your head and consulting a colleague is fine for discovering interesting phenomena and possible explanations (for the context of discovery anything goes), but it does not suffice as the context of justification. We are all open to confirmation bias. The fallibility of introspection is equally well known; it is a method that fell out of grace in psychology a long time ago. Thus, to justify one’s theory, empirical data have to be acquired and analyzed according to the quantitative standards of the other fields of cognitive science. In many circumstances, claims by an expert linguist of the form “sentence A is grammatical and sentence B is ungrammatical” will not suffice as a valid empirical data point in support of a specific linguistic theory.

The remedy: How could linguistics increase its impact and visibility within cognitive (neuro)science? Here are some tentative suggestions:

(i) Exploit the current availability of large corpora, and new analytical tools (e.g. graph-theoretical network analysis; analysis tools from evolutionary biology), to investigate the structure of linguistic knowledge. The increasing availability of large corpora puts linguists in a historically unprecedented position. It is nowadays possible to use quantitative tools effectively to characterize linguistic phenomena in a way that is more representative of their distribution in the communities of language users than was possible before (see the contribution of Martin Haspelmath below for similar arguments).
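As a toy illustration of the kind of corpus-based, graph-theoretical network analysis suggested above, the sketch below builds a word co-occurrence network from a token list and ranks word types by their degree (the number of distinct words they co-occur with). The `cooccurrence_graph` helper and the miniature corpus are hypothetical, for illustration only; real studies would of course use large corpora and dedicated network-analysis libraries.

```python
from collections import Counter, defaultdict

def cooccurrence_graph(tokens, window=2):
    """Build an undirected co-occurrence graph: nodes are word types,
    and an edge links two words that appear within `window` tokens
    of each other in the corpus."""
    edges = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            if w != tokens[j]:
                edges[frozenset((w, tokens[j]))] += 1
    graph = defaultdict(set)
    for pair in edges:
        a, b = sorted(pair)
        graph[a].add(b)
        graph[b].add(a)
    return graph, edges

# A miniature corpus, standing in for real corpus data.
corpus = "the cat sat on the mat the dog sat on the rug".split()
graph, edges = cooccurrence_graph(corpus, window=2)

# Degree per word type: a simple graph-theoretic measure of how
# widely a word connects to others in usage.
degrees = {w: len(nbrs) for w, nbrs in graph.items()}
print(sorted(degrees.items(), key=lambda kv: -kv[1])[:3])
```

On real corpora, such degree and edge-weight distributions are exactly the kind of quantitative characterization of linguistic knowledge that the suggestion above has in mind.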

(ii) Do proper experimental research (including the use of inferential statistics) according to the quality standards of the rest of cognitive science. For the branches of linguistics in which it is claimed that linguistics contributes to understanding a central mental faculty of the human mind, linguists should exploit the full range of experimental tools to investigate language as a mental phenomenon (in addition to its cultural, sociological and historical characteristics). One should not expect linguists to be experts in all the experimental methods currently available. But as in many other branches of cognitive science, this expertise should come from interdisciplinary collaborations, in which linguists should engage themselves to a much larger extent than is currently the case.

(iii) Embed linguistic theory in a broader framework of human communication (including gesture, dialogue, sociolinguistic variation, etcetera). Students of linguistics today are used to the fact that communication is inherently multimodal in nature. A well-known computational linguist in the Netherlands told me recently that it gets harder and harder to attract students to classical courses in linguistics, since analyzing sentences in isolation from their multimodal environment deviates from the ways in which the younger generation is used to exercising its language skills.

(iv) Maximize interdisciplinary contributions in a cognitive (neuro)science environment. The history of linguistics shows that it has had some of its most fruitful periods when it was embedded in a multidisciplinary enterprise. A case in point is the era of information science during the war and shortly thereafter (cf. Levelt, A history of psycholinguistics: The pre-Chomskyan era, 2012). Often this also goes together with a more favourable funding situation for linguistic research.

(v) Provide language-specific information (instead of mostly top-heavy theory), which is the unique selling point of linguistics.

In my experience, many of the recommendations made above have already been taken on board by the younger generation of linguists. I am therefore optimistic about the prospects of linguistics as, once again, a central domain of scholarship in all of cognitive (neuro)science.

**********************

Martin Haspelmath: “The future of linguistics: Two trends and two hopes”

Nobody knows what the future will bring, but sometimes one can extrapolate from current trends, and it may be of some interest to say what I hope the future will bring. So I will talk about two important trends, and two hopes that I have. The first trend is that linguistics is getting more quantitative, and the second trend is that the world is getting flatter. I expect these trends to become even stronger in the future. The first hope is that linguists will find a balance between uniqueness of individual languages and shared features of languages around the world. The second hope is that linguists will make better use of contemporary technology to connect people and ideas.

First trend: More and more work in syntax, morphology and phonology is taking a quantitative approach. “Corpus linguistics”, once a subfield, is becoming a normal observational approach in linguistics, and experimental work is getting more prominent in all areas of grammar, from phonology to pragmatics (cf. initiatives such as EURO-XPRAG, and Katarzyna Bromberek-Dyzman’s contribution to this round table). Language typology, too, is getting more quantitative, with the use of sophisticated statistics becoming more important (e.g. Bickel 2015). This can easily be verified by simply looking at the pages of Folia Linguistica: While in the 1990s, not more than a quarter of the papers included tables with numbers, nowadays at least half of the papers tend to have such tables. Another trend that is difficult to overlook is that historical linguistics is getting more quantitative. With increasing sophistication of quantitative modeling, there is increasing interest in this kind of work also in high-profile journals like Nature and Science (e.g. Bouckaert et al. 2012). As a result, there is now a new Max Planck department devoted to historical research of this kind, headed by Russell Gray (in Jena).

This trend toward quantitative methods has clear positive aspects: by considering numbers rather than just utterances, linguists will learn to think in multicausal terms, less categorically and more in terms of probabilities. In addition, linguists will be able to publish in high-prestige journals. But there are also reasons for caution: if linguists can in principle publish in such journals, then the pressure on them to publish in such journals will rise. And they should not lose sight of the qualitative issues that we have been grappling with over the last century: these will not go away, and we should try to keep them in mind and elucidate them with the new methods.

Second trend: Linguistics is no longer restricted to a few major centres of research (“the world is getting flatter”). Linguistics is (and will continue to be) a cheap science, so it is increasingly possible also in less wealthy countries to make significant contributions. Due to modern communication technology, distances play much less of a role than they did two decades ago (let alone a century ago). This was brought home to me when we looked at the applications for the doctoral positions in Leipzig University’s Graduiertenkolleg IGRA: they came from 27 countries on four continents (including six African and four Asian countries). Journals from less wealthy countries are now easily accessible and have a chance to be perceived as important at some point (e.g. Nepalese Linguistics, Revista Virtual de Estudos da Linguagem (Brazil), Language & Linguistics in Melanesia (Papua New Guinea), Semantics-Syntax Interface (Iran)). With scientific social networks like Academia.edu and ResearchGate growing quickly, the exchange of papers and ideas is becoming easier and easier.

This trend is very positive: linguists in less wealthy countries are increasingly able to contribute significantly to the development of the field. And linguists in the wealthy Western countries will increasingly have to take a wider perspective and take an interest in languages other than the well-known European ones. Of course, the increasing connectedness of people around the world also means that small languages are becoming less and less useful and will die out at a fast pace, so linguists should make an effort to document them as quickly as possible.

First hope: I hope that in the future, linguists will find a better balance between the uniqueness of languages and cross-linguistic trends. In the early part of the 20th century, the view was widespread that languages are unique, and that each language has its own categories. The particularism of Franz Boas and European structuralists such as Hjelmslev and Martinet is well known, but this view was also characteristic of Russian linguists such as Lev V. Ščerba, who in his last paper (1945: 186) demanded that underdescribed languages should be studied “concretely, without seeing them through the prism of the researcher’s native language, or another language with a traditional grammar, which distorts the grammatical reality…” In the second half of the 20th century, by contrast, the Chomskyan view came to prevail that all languages basically have the same categories. Thus, even languages lacking articles and initial complementizers have been described in terms of DP and CP, and movement operations are assumed even in languages that do not have salient word order alternations. This goes as far as using English-derived terms for non-English languages, as when people talk about “wh-movement in Russian”, or the “to-dative construction in Japanese”.

The balance that we need is that on the one hand, languages are recognized as different not only in their words and morphemes, but also in their categories. But the fascinating similarities between languages should not be neglected – it is only by comparing languages that we can find out what is truly general about human language. But since the descriptive categories differ across languages, we need to compare them with a different set of concepts (comparative concepts, Haspelmath 2010). If comparison and description are recognized as two different enterprises (even if they are carried out by the same people), then we can both respect the particularity of individual languages and investigate the general properties of all human languages.

Second hope: I hope that linguists will make better use of modern technology to disseminate their work. All too often, interesting work is available online, but hidden behind a toll barrier (“Sorry, you do not have access to this article – Purchase options: Article purchase EUR 35.00…”).

Why is this? Do the commercial publishers really add so much value to our papers and books that each user has to pay such a high amount? I suspect that the price is so high not because publishing a scientific paper costs so much, but because there is no functioning market for scientific publication. In the past (until the end of the 20th century), scientific publishing served the purpose of disseminating our ideas. But with ubiquitous internet access, dissemination has become very cheap (cf. the services Academia.edu and ResearchGate, which are extremely useful and presumably do not cost much). The reason why we still go for a particular journal or a particular book imprint is not that otherwise our colleagues wouldn’t have access to our work, but that uploading a paper without selection gives us no prestige. But to advance our careers, we need prestigious publications. This is why we publish with well-known imprints, and the publishers continue their expensive business as usual.

Instead of the traditional overpriced model, linguists should find a way to publish their work both for free AND with prestige. Since it is our colleagues’ recognition that gives us prestige, it is sufficient if we organize ourselves better – we do not need commercial publishers to give us prestige. Just as we can organize conferences on the university’s premises with our own resources, without involving an expensive commercial company, we can edit journals and books without external companies. There are dozens of free linguistics journals (as can be seen in the Directory of Open-Access Journals) and these do not charge the authors, so there cannot be any good reason why we should pay such a large amount of money to the traditional commercial publishers. The main value that they have is the prestige of their labels. Creating a prestigious label for a free journal is not easy, but it is certainly possible. In semantics, the journal “Semantics & Pragmatics” is very prestigious, and neither authors nor editors pay anything. For books, we recently founded the imprint “Language Science Press”, which has a distinguished Advisory Board, and which is attracting more and more authors. I would hope that universities and other research organizations will keep supporting such free journals and book publishers, because scholar-owned publication is much cheaper than the traditional commercial publishers, who often take big profits out of the academic system. I would not be optimistic about author-pays open access, because it leads publishers to move away from selective publication to more profitable mega-journals, and these would not be good for science.

References

Bickel, Balthasar. 2015. Distributional typology: Statistical inquiries into the dynamics of linguistic diversity. In Heine, Bernd & Narrog, Heiko (eds.), The Oxford handbook of linguistic analysis. Oxford: Oxford University Press, to appear.

Bouckaert, Remco, Philippe Lemey, Michael Dunn, Simon J. Greenhill, Alexander V. Alekseyenko, Alexei J. Drummond, Russell D. Gray, Marc A. Suchard & Quentin D. Atkinson. 2012. Mapping the origins and expansion of the Indo-European language family. Science 337(6097). 957–960. doi:10.1126/science.1219669

Haspelmath, Martin. 2010. Comparative concepts and descriptive categories in crosslinguistic studies. Language 86(3). 663–687.

Ščerba, Lev V. 1945. Očerednye problemy jazykovedenija. Izvestija Akademii Nauk SSSR 1945.4(5). 173–186.

**********************

Martin Hilpert: “The big to-do list: Defining challenges for 21st century linguistics” (video report)

Martin Hilpert: Challenges for 21st century linguistics

**********************

Katarzyna Bromberek-Dyzman: “Linguistics needs to twist: Go experimental and interdisciplinary”

To meet the challenges posed by an unprecedented boost of neuroscience, linguists need to twist away from the well-established mono-disciplinary, autonomous approach to language and get interdisciplinary. On the one hand, this involves adopting adjacent fields’ research methods suitable for exploring and explaining the mechanisms and processes underpinning language and verbal communication. On the other hand, it also means exploring language and its use in new dimensions: not merely as linguistic structures, but also as processes underpinning language production, perception and comprehension. Such a multi-disciplinary perspective, rather than a mono-disciplinary one, seems to be in demand at the moment. Neuroscience has challenged the traditional division of research into the humanities and the sciences, and has set a new research agenda for almost all branches of science, including linguistics. Linguists need to employ their language-related expertise to answer bigger-scale questions about the nature of language systems in connection with other systems of meaning involved in communicative sense-making. Neuroimaging research shows that language processing is not computed in a mental and neurophysiological vacuum. Quite the contrary: the co-occurring physiological, cognitive and affective contexts dynamically frame and modulate the processing of linguistic content.

Language studies explore language in terms of linguistic signs and structures. In verbal interactions, people use linguistic signs and structures as a means of communicating the content of their minds: thoughts and feelings. Linguistics should be able to explore and explain language as both (i) a linguistic system of signs and structures, and (ii) a property of the mind and brain, in order to account for how linguistic signs and structures are processed by the mind/brain in the course of verbal communication. Linguistic pragmatics studies language alongside non-linguistic communicative systems of meaning, in terms of the communicative effects that give rise to contextualized meaning. Contextualized meaning is an emergent communicative quality that no longer belongs solely to a linguistic system, and is no longer studied only in terms of sign/structure meaning. Non-linguistic systems of meaning (situational, relational, emotional) are employed to communicate meanings on a par with the linguistic systems. To account for how language and linguistic content combine with situational, relational and emotional content, an interdisciplinary perspective is needed, since to explain human verbal communication is to explain the totality of communicative effects.

In verbal communication, people convey communicative import in multiple modalities. An important issue is to explore how this multimodal content combines with the linguistic content. Rather than focusing on the big picture involving the entirety of linguistic and non-linguistic communicative effects, let us take a look at the language-emotion interface. Emotions underpin human communication. Although psycholinguistic and linguistic research have mainly explored affectively neutral knowledge, a large part of our world knowledge consists of experiences in which linguistic and emotional events are so neatly combined that it is hard to tease them apart. Verbal interactions devoid of emotional coloring are the exception rather than the rule in everyday communication. Yet emotions accompanying verbal discourse have not traditionally been considered a proper research topic for the language sciences. In order to meet the challenges of neuroscience and explain human verbal communication from a mental and neural perspective, alongside the linguistic-systems perspective, theoretical and experimental research needs to factor in the emotional effects in communication and comprehension. Emotions and their behavioral, mental and neural signatures do not pertain to a traditional mono-disciplinary view of language. Yet verbal communication is hardly ever devoid of emotional content. The question is how to contain and constrain emotional significance in the realm of language studies.

These concerns seem especially relevant in the face of recent affective neuroscience research showing that the human brain detects and computes the emotional value of perceived input (verbal, nonverbal) prior to accessing and computing its cognitive value (e.g. semantic meaning). Neuroimaging research dedicated to the temporal chronometry of emotional content processing has focused on emotional content pertaining to kinesic, vocal and lexical valence. In order to tell how facial expressions ‘speak’ in verbal interactions, kinesic valence (facial mimicry underpinning smiling and frowning) has been researched. Studies addressing the question of when, in the processing of facial emotional signals, these signals start influencing mind/brain processing patterns show that kinesic valence modulates visual processing within the first 100 milliseconds after a facial emotional cue has been registered (e.g. Nathalie 2013; Vuilleumier & Pourtois 2007). Similar results in terms of temporal dynamics have been reported for vocal valence – the nonverbal emotional information carried by the human voice (i.e. a happy vs. a sad voice). Vocal valence research shows that even when we cannot understand the semantic content of a verbal message, as when it is spoken in a foreign language or dialect, we can still infer the emotional state of the speaker just by listening to the tone of voice (e.g. Pell et al. 2009). Like visual emotional signals, vocal valence provides a powerful means of affective communication. The computation of the emotional value of an auditory input, similarly to kinesic valence input, takes place within the first 100 milliseconds after stimulus onset in the case of implicit emotional value decoding (e.g. Schirmer & Kotz 2006; Woods 1995) and around 360 milliseconds post stimulus onset in the case of explicit, attentional emotional content decoding (e.g. Wambacq et al. 2004).
In everyday discourse vocal signals are usually complemented by facial expressions that provide further information as to what others think and feel. Integrating visual and acoustic emotional signals facilitates emotion recognition (e.g. Dolan et al. 2001; Kreifelts et al. 2007). Seeing an emotional facial expression while at the same time hearing an emotional voice appears to trigger an involuntary process of audiovisual binding that shapes the comprehension of affectively tinged content (e.g. Pourtois et al. 2000).

In neuroimaging research on lexical valence – the emotional content of language itself – the question of interest is the temporal dynamics of the integration of lexico-semantic and emotional information in the processing stream: is a word’s or a sentence’s emotional connotation activated only after full lexico-semantic analysis, with emotional significance subordinate to other lexical attributes, or are the two systems of meaning computed in parallel? Valenced words show distinct processing patterns as early as 200 milliseconds after onset, which might be taken to indicate that emotional valence can enhance early lexico-semantic analysis during word processing (e.g. Kissler et al. 2006; Hinojosa et al. 2004; Trauer et al. 2012). A number of studies report emotion-dependent ERP differences in word processing even before 150 milliseconds (e.g. Hoffman et al. 2009; Scott et al. 2009; Skrandies 1998), which suggests that the processing of emotional content can sometimes precede, or bypass, full semantic analysis. Neuroimaging research on sentence valence, focusing on the incremental computation of emotional meaning over the course of a sentence, shows that emotional effects operate additively during an early period of perceptual encoding, within 200–350 milliseconds after stimulus onset, while synergistic effects of implicit affect and explicit attention have been observed to emerge at a later processing stage, around 400–600 milliseconds after stimulus onset (e.g. Chwilla et al. 2011; Moreno & Vázquez 2011; Pratt & Kelly 2008; van Berkum et al. 2013). These findings point to an incremental interaction of attention and affect across distinct processing stages: affectively competent stimuli not only mark their value in the early processing stages, but keep permeating the ensuing stages.
Thus, it seems that emotional content and linguistic content are equally central and reciprocally linked, at least in terms of the temporal dynamics of their computation, so as to guarantee rapid, on-the-fly comprehension of communicative content (van Berkum et al. 2009).

Summing up, language is not processed in a mental/neural vacuum. In processing communicative content, multiple systems of meaning (words, syntax, semantics, voice, mimicry) are simultaneously engaged to facilitate comprehension. Since emotional content impacts perception and comprehension so rapidly, and so systematically exerts its influence on the co-occurring linguistic content, language theories need to be able to account for the emotional components of meaning in verbal interaction. Emotions accompany human verbal interaction, and impact its processing and comprehension in significant ways. Studying language as if our communication systems were not perfectly orchestrated in yielding communicative meaning through different modalities – voice, face, gesture, semantic meaning – no longer seems justified, if linguists are to meet the challenges posed by neuroscience. The paradigm shift that neuroscience has initiated makes it clear that it is no longer enough to explain meaning in terms of linguistic signs and structures. Linguists need to be able to explain meaning in terms of the mind (the mental processes underpinning communication and comprehension) and the brain (the electricity and chemistry of neuronal communication). Speech and communication start at the neuronal level, so what is at stake is how language originates in any single act of communication. Keeping this perspective in mind, and respecting the multimodal nature of verbal communication, will help to reestablish the central role of linguistics in the current research agenda.

References

Chwilla, Dorothee J., Daniele Virgillito, and Constance T. Visser. 2011. “The Relationship of Language and Emotion: N400 Support for an Embodied View of Language Comprehension”. Journal of Cognitive Neuroscience 23,9: 2400–2414.
Dolan, Raymond J., John S. Morris, and Beatrice de Gelder. 2001. “Cross-modal binding of fear in voice and face”. Proceedings of the National Academy of Sciences of the United States of America 98,17: 10006-10010.
George, Nathalie. 2013. “The facial expression of emotions”. In: Jorge Armony and Patrik Vuilleumier (eds.), The Cambridge Handbook of Human Affective Neuroscience. Cambridge: Cambridge University Press, 171-197.
Hinojosa, Jose A., Manuel Martin-Loeches, Francisco Munoz, Pilar Casado, and Miguel A. Pozo. 2004. “Electrophysiological evidence of automatic early semantic processing”. Brain and Language 88,1: 39-46.
Hoffman, Marcus J., Lars Kuchinke, Sacha Tamm, Melissa L. Vo, and Arthur M. Jacobs. 2009. “Affective processing within 1/10th of a second: High arousal is necessary for early facultative processing of negative but not positive words”. Cognitive, Affective and Behavioral Neuroscience 9,4: 389-397.
Kissler, Johanna, Ramin Assadollahi, and Cornelia Herbert. 2006. “Emotional and semantic networks in visual word processing: Insights from ERP studies”. Progress in Brain Research 156: 147-183.
Kreifelts, Benjamin, Thomas Ethofer, Wolfgang Grodd, Michael Erb, and Dirk Wildgruber. 2007. “Audiovisual integration of emotional signals in voice and face: An event-related fMRI study”. Neuroimage 37,4: 1445-1456.
Moreno, Eva M., and Carmelo Vázquez. 2011. “Will the glass be half full or half empty? Brain potentials and emotional expectations”. Biological Psychology 88: 131-140.
Pell, Marc D., Laura Monetta, Silke Paulmann, and Sonja A. Kotz. 2009. “Recognizing emotions in a foreign language”. Journal of Nonverbal Behavior 33,2: 107-120.
Pourtois, Gilles, Beatrice de Gelder, Jean Vroomen, Bruno Rossion, and Marc Crommelinck. 2000. “The time-course of intermodal binding between seeing and hearing affective information”. Neuroreport 11,6: 1329-1333.
Pratt, Nikki L., and Spencer D. Kelly. 2008. “Emotional states influence the neural processing of affective language”. Social Neuroscience 3,3-4: 434-442.
Schirmer, Annett, and Sonja A. Kotz. 2006. “Beyond the right hemisphere: Brain mechanisms mediating vocal emotional processing”. Trends in Cognitive Sciences 10,1: 24-30.
Scott, Sophie K., Disa A. Sauter, and Carolyn McGettigan. 2009. “Brain mechanisms for processing perceived emotional vocalizations in humans”. In: Stefan M. Brudzynski (ed.), Handbook of Mammalian Vocalizations: An Integrative Neuroscience Approach. Oxford: Academic Press, 187-198.
Skrandies, Wolfgang. 1998. “Evoked potential correlates of semantic meaning – A brain mapping study”. Brain Research: Cognitive Brain Research 6,3: 173-183.
Trauer, Sophie M., Soren K. Andersen, Sonja A. Kotz, and Matthias M. Müller. 2012. “Capture of lexical but not visual resources by task-irrelevant emotional words: A combined ERP and steady-state visual evoked potential study”. Neuroimage 60,1: 130-138.
van Berkum, Jos, Dieuwke de Goede, Petra M. van Alphen, Emma R. Mulder, and José H. Kerstholt. 2013. “How robust is the language architecture? The case of mood”. Frontiers in Psychology 4: 1-19.
van Berkum, Jos A., Bregje Holleman, Mante Nieuwland, Marte Otten, and Jaap Murre. 2009. “Right or wrong? The brain’s fast response to morally objectionable statements”. Psychological Science 20: 1092-1099.
Vuilleumier, Patrik, and Gilles Pourtois. 2007. “Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging”. Neuropsychologia 45,1: 174-194.
Wambacq, Ilse, Kelly Shea-Miller, and Abuhuziefa Abubakr. 2004. “Non-voluntary and voluntary processing of emotional prosody: An event-related potentials study”. Neuroreport 15,3: 555-559.
Woods, David L. 1995. “The component structure of the N1 wave of the human auditory evoked potential”. Electroencephalography and Clinical Neurophysiology, Supplement 44: 102-109.

**********************

Eitan Grossman: “Linguistics across its (internal and external) borders”

I was asked only very recently to participate in this round table discussion, replacing Caterina Mauri from Pavia. Moreover, I was asked to talk from my own research perspective, so what I’ll say isn’t some objective diagnosis, but rather the point of view of someone who works on language description, language change, and cross-linguistic comparison.

From the mid 20th century on, two big questions have guided much of linguistic thinking. The first, which Chomsky called ‘Plato’s Problem,’ is basically “What does it mean to know a language?” The second, which Martin Haspelmath has recently called ‘Greenberg’s Question,’ is “Why are languages the way they are?” These questions, even if we’re usually occupied with antipassives, velar nasals, or scrambling, are what inform our everyday work. Today, I would like to talk about some ways in which these questions have come to intersect in contemporary linguistics, and to point to some emerging horizons of research.

I’d like to start with “Greenberg’s Question,” and go back to Greenberg himself. Greenberg argued that in order to explain why languages are the way they are, the synchronic description of individual languages is the first and absolutely necessary step. Synchronic typology tells us about the cross-linguistic distribution of language structures, and possibly about universals of various sorts, whether absolute, statistical, unconditioned or implicational. Interestingly, and crucially, the next stage in explanation is diachrony, under the assumption that a good answer to the question ‘why are languages the way they are’ is a historical one: languages are the way they are because they became that way through processes of language change. However, processes of language change themselves require explanations, so linguists have made a lot of effort to understand and explain language change, both from language-specific and cross-linguistic perspectives. Now, it has become pretty clear that language change, whether we are talking about innovation or diffusion, is ultimately rooted in online synchronic interaction between speakers and listeners, or usage.

Understanding language change can be approached in a lot of ways, including the comparative method, internal reconstruction, corpus-based studies, and even computational modelling. One avenue of research that both betters our understanding of language change and fosters communication within linguistics and between linguistics and other disciplines is that of modelling language change in experimental settings. This has been done, with a lot of success, in the domain of sound change, by phoneticians and laboratory phonologists, working in what has come to be called ‘usage-based approaches.’ These have made an enormous amount of progress in proposing plausible phonetic accounts of observed sound changes. Moreover, many lab phonologists have paid careful attention to research in experimental psychology, and have framed their work in terms of explicit models of memory and processing, like exemplar-based approaches. And I think that there has been a lot of mutual feedback between the work of usage-based linguists like Joan Bybee and parts of the cognitive science field. So here we see one type of convergence between Plato’s Problem and Greenberg’s Problem: ultimately, the explanation for language structures takes us through language change and back to knowledge of language.

But what about grammar, which is after all what interests a lot of linguists? The field of grammaticalization studies has turned up a massive amount of data on the regularities of change that result in grammatical structures. It has also turned up counterexamples and rarer types of change that can result in grammatical structures. Of course, cross-linguistic study and research on historical change in languages with real documented historical corpora can help us to evaluate the hypotheses of grammaticalization research, but I would like to point to another avenue of research that bridges disciplinary boundaries, forging a link between the work of ‘unhyphenated’ linguists and experimentalists.

Most theories of grammaticalization end up based largely on pragmatic explanations. For example, the notion of ‘bridging context,’ as framed by Bernd Heine and others, is essentially a pragmatic notion, in which both the older meaning of a construction and the newer meaning are possible readings of an utterance, but the newer meaning is more plausible in a given context. However, since it is defeasible or cancellable, it still isn’t a coded meaning. Other theories of grammaticalization use explicitly Gricean or Relevance Theory accounts to understand how inferred meanings ultimately become coded meanings. This is what has been called the ‘linking problem’ of language change – how do performance factors end up encoded in grammars and lexicons?

Now, what’s interesting about this in the present context is that there is an emerging field of experimental pragmatics. Experimental pragmatics uses a range of psycholinguistic and neurolinguistic methods to evaluate hypotheses raised in unhyphenated pragmatics, whose data is mostly linguists’ introspection. There was recently a workshop called Historical Linguistics Meets Experimental Pragmatics, and the experimentalists were excited to have new ideas, data, and hypotheses to work on (beyond scalar implicatures), and the historical linguists were also happy to see whether there is some experimental way to evaluate their hypotheses. Of course, as I said, experimental pragmatics is still an emerging field, so only time will tell if this whole endeavour will bear fruit. But I think it’s a good example of how interdisciplinary dialogue and cooperation can take our field in new directions. It also really highlights another challenge facing us today – the need for linguists to formulate hypotheses that can be tested by experimentalists.

I should also talk about corpus-based approaches to language change, and the need for free and easily accessible historical corpora for languages beyond those of western Europe, but instead, I’ll turn to another perspective, namely that of language contact research in the 21st century, and in doing so, turn to another view of linguistics in the 21st century.

It’s becoming increasingly clear, thanks to work like that of Johanna Nichols and Balthasar Bickel, that Greenberg’s question is actually “What’s where why?” In other words, linguistics is concerned with explaining linguistic diversity. As Nichols and Bickel have stressed, this question ultimately makes interdisciplinary cooperation absolutely necessary, because language structures can be the result not only of universal preferences but also ‘geographical or genealogical skewings,’ and ‘asking where’ often results in probabilistic theories stated over sampled distributions. Typological distributions usually show what has been called ‘universal areality’ – almost no linguistic feature is distributed evenly in the world.

In many parts of the world, geographical biases are often the result of shared descent but can also be the result of language contact. Now, since most of the world’s languages don’t have documented histories, and linguists have to make inferences about whether a given structure is the result of contact, we would like to know if there are cross-linguistic generalizations about what can be borrowed.

The literature on language contact is full of proposals for such generalizations or ‘universals,’ often framed in terms of ‘borrowing hierarchies,’ but they are mainly based on aggregated anecdotal evidence, since it is only recently that linguists have been working on large-scale and really worldwide cross-linguistic samples and databases of borrowing in different domains. We already have Jan Wohlgemuth’s pioneering work on verb borrowing, and recently Frank Seifart’s Database of Affix Borrowing has gone online, and of course, there’s the relatively new Atlas of Pidgin and Creole Language Structures.

Now, the point here is that in some cases, it turns out that the outcome of borrowing doesn’t allow us to make nice predictions based on plugging in the respective structures of a donor language and a target language. Rather, the outcomes of borrowing also tend to show strong areality – based on a worldwide sample, we see that the borrowing of adpositions, for example, works quite differently in South America and in Central Asia. This implies that actual sociohistorical situations of contact, actual historical events and processes, determine to some extent the outcomes of language contact. This means that linguists who want to explain language structures are going to end up working with historians, prehistorians, archaeologists, and other scientists outside our field. However, since borrowing often begins with code-switching, or in any event the behavior of multilingual speakers, we have to pay attention to online synchronic knowledge – and usage – of language.

I’d like to finish with an issue that is less fraught with theoretical baggage, and could be seen as trivial, but which is nonetheless the main challenge for linguistics in the 21st century. Since most of the world’s languages, and many of the world’s language families, are still poorly described or totally undescribed, the main challenge for linguistics is to document and describe these languages. In order to meet this challenge, however, every linguistics program should have a field methods course, which means that linguistics departments should give jobs to field linguists, alongside the more usual calls for syntax, semantics, and phonology. Linguists should describe a language, or at least a part of one. And linguists should think and care deeply about the ongoing loss of linguistic diversity, since we often learn really new things about the possibilities of language from newly described languages, whether from colloquial spoken varieties of well-known languages, from endangered languages faltering in the face of shift to sociopolitically dominant languages, or from languages from the more inaccessible parts of the world. In other words, we still have a long way to go before we have the ‘what’ part of the ‘what’s where why’ question. This seems to me to be the major challenge of linguistics in the present and in the foreseeable future.