One prominent way of expressing the goal of what is often called “grammatical theory” (or “linguistic theory”) is to say that it aims to establish an innate architecture and a set of features and categories that are rich enough to account for everything we find in the world’s languages, but restrictive enough to explain the gaps in what we see and to explain why we can acquire languages despite the poverty of the stimulus. I always found the first goal absolutely compelling (of course each language must be cognitively representable), and the second and third goals at least coherent: Yes, it could be that the limitations on diversity that we find are due to innate representational constraints, and yes, it could be that language acquisition is guided by the same kinds of constraints.

For example, it could be that a property of the innate UG (a lexicalist architecture) specifies that syntactic rules cannot “look inside” words, and that this not only explains why word parts are never accessed by syntactic rules, but also why we can readily learn the morphological and syntactic systems of our languages.

But how do we make progress in understanding cross-linguistic diversity and the possibility of language acquisition? It seems that progress would consist in finding more and more constraints of UG and finding more and more evidence for them, converging from different domains. So are we making progress? Strangely, few people seem to be asking this question.

I do not see much evidence for more and more restrictiveness in the generative literature – on the contrary, I keep seeing papers that argue for a richer UG that is less restrictive. Here are a few cases in point:

Baker (2015) argues that a case feature can either be assigned via agreement (as in traditional Chomskyan syntax since the 1980s) or through a novel powerful mechanism called “dependent case” that considers only the configuration of nominals (see Haspelmath 2018 for a critical review and more discussion of restrictiveness).

Bruening (2018) argues that the Lexicalist Hypothesis, according to which the grammar of words and the grammar of sentences are separate, is wrong, thus removing one possible source of restrictiveness. (Of course, many other authors have argued the same point over the last two decades, but few of them so directly; my 2011 paper makes the same point but does not assume an innate UG to begin with.)

Citko (2005) argues that in addition to external Merge and internal Merge, there should be a third type, Parallel Merge.

G. Müller (2007) argues that Distributed Morphology should include not only a mechanism of impoverishment, but also the opposite mechanism of enrichment.

G. Müller (2017) argues that generative syntax should include not only a mechanism of Merge, but also the opposite mechanism of Structure Removal. (I have not looked into the details, but it seems that this is quite similar to David Pesetsky’s mechanism of exfoliation.)

In phonology, there have been various proposals for gradient constraint effects, especially Boersma & Hayes’s Stochastic OT and, more recently, Smolensky & Goldrick’s Gradient Symbolic Representations.

Jenks & Rose (2015) argue that the Kordofanian language Moro shows that phonological constraints can indeed precede morphological placement rules, thus making the phonology-morphology interaction less restricted.

More generally, van Oostendorp (2018) found in his overview of the last 25 years of Optimality Theory that most proposals for modifying OT have amounted to making the system less restrictive.

I already noted in an earlier paper (Haspelmath 2008: §2.4) that the idea of macroparameters, which was prevalent in the 1980s and 1990s, was basically given up around 2000.

Nobody can blame scientists for rarely admitting openly that an approach they have followed for a long time is not working. Things are rarely so clear-cut, and who knows, maybe there are successes around the corner after all.

But I would still like to see more discussion of the central point: What does it mean that generative grammatical theory is (apparently) getting less and less restrictive? In what sense does this constitute progress, if at all? Isn’t this rather a gradual retreat from the idea of an innate UG that explains acquisition and limits on cross-linguistic variation?

I have always been open to the idea of innate UG constraints (most recently, I have discussed three constraint types explaining cross-linguistic patterns, among them innate representational constraints, although the paper primarily focuses on distinguishing between functional-adaptive and mutational constraints). I also understand that linguistics works in terms of communities, and that in some subcommunities, formalisms of the generative sort are an unquestioned part of the community standard.

But we also want to do objective science and move closer to the truth as a discipline. I don’t see how this can work if we don’t say clearly what the goal of (framework-bound) “grammatical theory” is. A very large part of the discipline seems to be doing business as usual, even though the foundations of the enterprise are less and less clear (at least to me).

Postscript: After sketching a draft of this paper, I had a conversation with two prominent generative syntacticians of the younger generation whose work I had criticized. They told me that they were not committed to the idea of an innate UG, or that at least they wanted to separate their work on grammar from innateness claims. But without the restrictiveness claim (and without restrictions coming from innate knowledge), I don’t see why one needs all the specificities of universal frameworks. I am truly puzzled.

References

Baker, Mark C. 2015. Case. Cambridge: Cambridge University Press.

Citko, Barbara. 2005. On the Nature of Merge: External Merge, Internal Merge, and Parallel Merge. Linguistic Inquiry 36(4). 475–496. doi:10.1162/002438905774464331.

Haspelmath, Martin. 2008. Parametric versus functional explanations of syntactic universals. In Theresa Biberauer (ed.), The limits of syntactic variation. Amsterdam: Benjamins.

Jenks, Peter & Sharon Rose. 2015. Mobile object markers in Moro: The role of tone. Language 91(2). 269–307. doi:10.1353/lan.2015.0022.

Müller, Gereon. 2017. Structure removal: An argument for feature-driven Merge. Glossa: a journal of general linguistics 2(1). doi:10.5334/gjgl.193. http://www.glossa-journal.org//articles/abstract/10.5334/gjgl.193/

van Oostendorp, Marc. 2018. History of Phonology: Optimality Theory. http://ling.auf.net/lingbuzz/003827