Recent research suggests that language evolution is a process of cultural change, in which linguistic structures are shaped through repeated cycles of learning and use by domain‐general mechanisms. This paper draws out the implications of this viewpoint for understanding the problem of language acquisition, which is cast in a new, and much more tractable, form. In essence, the child faces a problem of induction, where the objective is to coordinate with others (C‐induction), rather than to model the structure of the natural world (N‐induction). We argue that, of the two, C‐induction is dramatically easier. More broadly, we argue that understanding the acquisition of any cultural form, whether linguistic or otherwise, during development, requires considering the corresponding question of how that cultural form arose through processes of cultural evolution. This perspective helps resolve the “logical” problem of language acquisition and has far‐reaching implications for evolutionary psychology.

1. Introduction

In typical circumstances, language changes too slowly to have any substantial effect on language acquisition. Vocabulary and minor pronunciation shifts aside, the linguistic environment is typically fairly stable during the period of primary linguistic development. Thus, researchers have treated language as, in essence, fixed, for the purposes of understanding language acquisition. Our argument, instead, attempts to throw light on the problem of language acquisition by taking an evolutionary perspective, both concerning the biological evolution of putative innate domain‐specific constraints and, more importantly, the cultural evolution of human linguistic communication. We argue that understanding how language changes over time provides important constraints on theories of language acquisition; and recasts, and substantially simplifies, the problem of induction relevant to language acquisition. Our evolutionary perspective casts many apparently intractable problems of induction in a new light. When the child aims to learn an aspect of human culture (rather than an aspect of the natural world), the learning problem is dramatically simplified—because culture (including language) is the product of past learning by previous generations. Thus, in learning about the cultural world, we are learning to “follow in each other’s footsteps”—so that our wild guesses are likely to be right, because the right guess is the most popular guess made by previous generations of learners. Hence, considerations from language evolution dramatically shift our understanding of the problem of language acquisition; and we suggest that an evolutionary perspective may also require rethinking theories of the acquisition of other aspects of culture. In particular, in the context of learning about culture, rather than about the natural world, we suggest that a conventional nativist picture, stressing domain‐specific, innately specified modules, cannot be sustained.
The structure of the paper is as follows. In the next section, Language as shaped by the brain, we describe the logical problem of language evolution that confronts traditional nativist approaches, which propose that the brain has been adapted to language. Instead, we argue that language evolution is better understood in terms of cultural evolution, in which language has been adapted to the brain. This perspective results in a radically different way of looking at induction in the context of cultural evolution. In C‐induction and N‐induction, we outline the fundamental difference between inductive problems in which we must learn to coordinate with one another (C‐induction), and those in which we learn aspects of the noncultural, natural world (N‐induction). Crucially, language acquisition is, on this account, a paradigm example of C‐induction. Implications for learning and adaptation shows: (a) that C‐induction is dramatically easier than N‐induction; and (b) that while innate domain‐specific modules may have arisen through biological adaptation to deal with problems of N‐induction, this is much less likely for C‐induction. Thus, while Darwinian selection may have led to dedicated cognitive mechanisms for vision or motor control, it is highly implausible that narrowly domain‐specific mechanisms could have evolved for language, music, mathematics, or morality. The next section, The emergence of binding constraints, provides a brief illustration of our arguments, using a key case study from language acquisition. Finally, in Discussion and implications, we draw parallels with related work in other aspects of human development and consider the implications of our arguments for evolutionary psychology.

5. The emergence of binding constraints

The problem of binding, especially between reflexive and nonreflexive pronouns and noun phrases, has long been a theoretically central topic in generative linguistics (Chomsky, 1981); and the principles of binding appear both complex and arbitrary. Binding theory is thus a paradigm case of the type of information that has been proposed to be part of an innate UG (e.g., Crain & Lillo‐Martin, 1999; Reuland, 2008), and it provides a challenge for theorists who do not assume UG. As we illustrate, however, there is a range of alternative approaches that provide a promising starting point for understanding binding as arising from domain‐general factors. If such approaches can make substantial inroads into the explanation of key binding principles, then the assumption that binding constraints are arbitrary language universals and must arise from an innate UG is undermined. Indeed, according to the latter explanation, apparent links between syntactic binding principles and pragmatic factors must presumably be viewed as mere coincidences—rather than as originating from the “fossilization” of pragmatic principles into syntactic patterns by processes such as grammaticalization (Hopper & Traugott, 1993). The principles of binding capture patterns of use of, among other things, reflexive pronouns (e.g., himself, themselves) and accusative pronouns (e.g., him, them). Consider the following examples, where subscripts indicate co‐reference and asterisks indicate ungrammaticality:

(1) That John_i enjoyed himself_i/*him_i amazed him_i/*himself_i.

(2) John_i saw himself_i/*him_i/*John_i.

(3) *He_i/he_j said John_i won.

Why is it possible for the first, but not the second, pronoun to be reflexive in (1)? According to generative grammar, the key concept here is binding. Roughly, a noun phrase binds a pronoun if it c‐commands that pronoun and they are co‐referring. In an analogy between linguistic and family trees, an element c‐commands its siblings and all their descendants. A noun phrase, NP, A‐binds a pronoun if it binds it and, roughly, if the NP is in either subject or object position. Now we can state simplified versions of Chomsky's (1981) three binding principles:

Principle A. Reflexives must be A‐bound by an NP.
Principle B. Pronouns must not be A‐bound by an NP.
Principle C. Full NPs must not be A‐bound.

Informally, Principle A says that a reflexive pronoun (e.g., herself) must be used if co‐referring to a “structurally nearby” item (defined by c‐command) in subject or object position. Principle B says that a nonreflexive pronoun (e.g., her) must be used otherwise. These principles explain the pattern in (1) and (2). Principle C rules out co‐reference such as (3): John cannot be bound by he. For the same reason, John likes John, or the man likes John, do not allow co‐reference between subject and object. Need the apparently complex and arbitrary principles of binding theory be part of the child’s innate UG? Or can these constraints be explained as a product of more basic perceptual, cognitive, or communicative constraints? One suggestion, due to O’Grady (2005), considers the possibility that binding constraints may in part emerge from processing constraints (see Section 2.2.2).
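The tree‐configurational definitions above are concrete enough to sketch in code. The following Python fragment is an illustrative toy of our own devising (the tree encoding and labels are assumptions, not drawn from any linguistic formalism): it implements the “siblings and all their descendants” definition of c‐command over a small parse tree for example (2), identifying each node by its path of child indices from the root.

```python
# Toy sketch of c-command over a parse tree.
# A node is (label, children); leaves are (label, word).
# Per the family-tree analogy: X c-commands Y iff Y sits under
# one of X's siblings (i.e., under X's parent, but not under X).

def build_paths(tree, path=(), paths=None):
    """Map each node's path (tuple of child indices) to its subtree."""
    if paths is None:
        paths = {}
    paths[path] = tree
    label, children = tree
    if isinstance(children, list):
        for i, child in enumerate(children):
            build_paths(child, path + (i,), paths)
    return paths

def c_commands(path_x, path_y):
    """True iff the node at path_x c-commands the node at path_y."""
    if not path_x:                      # the root has no siblings
        return False
    parent = path_x[:-1]
    # Y is dominated by X's parent, but is not the parent itself...
    below_parent = path_y[:len(parent)] == parent and path_y != parent
    # ...and Y is not X itself nor dominated by X.
    below_x = path_y[:len(path_x)] == path_x
    return below_parent and not below_x

# Parse tree for (2): [S [NP John] [VP [V saw] [NP himself]]]
tree = ('S', [('NP', 'John'),
              ('VP', [('V', 'saw'),
                      ('NP', 'himself')])])

paths = build_paths(tree)
john = next(p for p, (lab, kids) in paths.items() if kids == 'John')
himself = next(p for p, (lab, kids) in paths.items() if kids == 'himself')

# Principle A: the reflexive must be c-commanded by a co-referring NP.
print(c_commands(john, himself))    # True: John c-commands himself
print(c_commands(himself, john))    # False: himself is buried in the VP
```

The asymmetry the two print statements expose is exactly what licenses (2) while ruling out a reflexive in subject position (*Himself saw John): the subject NP c‐commands into the verb phrase, but not vice versa.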
Specifically, he suggests that the language processing system seeks to resolve linguistic dependencies (e.g., between verbs and their arguments) at the first opportunity—a tendency that might not be specific to syntax, but which might be an instance of a general cognitive tendency to resolve ambiguities rapidly in linguistic (Clark, 1975) and perceptual input (Pomerantz & Kubovy, 1986). The use of a reflexive is assumed to signal that the pronoun co‐refers with an available NP, given a local dependency structure. Thus, in parsing (1), the processor reaches That John enjoyed himself… and makes the first available dependency relationship between enjoyed, John, and himself. The use of the reflexive, himself, signals that co‐reference with the available NP, John, is intended (cf. Principle A). With the dependencies now resolved, the internal structure of the resulting clause is “closed off” and the parser moves on: [That [John enjoyed himself]] amazed him/*himself. The latter himself is not possible because there is no appropriate NP available to connect with: the only NP is [that John enjoyed himself], which is used as an argument of amazed, but which clearly cannot co‐refer with himself. But in John enjoyed himself, John is available as an NP when himself is encountered. By contrast, plain pronouns, such as him, are used in roughly complementary distribution to reflexive pronouns (cf. Principle B). It has been argued that this complementarity arises pragmatically (Levinson, 1987; Reinhart, 1983); that is, given that the use of reflexives is highly restrictive, they are, where appropriate, more informative. Hence, by not using them, the speaker signals that co‐reference is not appropriate. Thus, we can draw on the additional influence of pragmatic constraints (Section 2.2.4). Finally, simple cases of Principle C can be explained by similar pragmatic arguments.
Using John sees John (see [2] above), where the object can, in principle, refer to any individual named John, would be pragmatically infelicitous if co‐reference were intended—because the speaker should instead have chosen the more informative himself in object position. O’Grady (2005) and Reinhart (1983) consider more complex cases related to Principle C, in terms of a processing bias toward so‐called upward feature‐passing, though we do not consider this here. The linguistic phenomena involved in binding are extremely complex and not fully captured by any theoretical account (indeed, the minimalist program [Chomsky, 1995] has no direct account of binding but relies on the hope that the principles and parameters framework, in which binding phenomena have been described, can eventually be reconstructed from a minimalist point of view). We do not aim here to argue for any specific account of binding phenomena, but rather to indicate that many aspects of binding may arise from general processing or pragmatic constraints—such apparent relations to processing and pragmatics are, presumably, viewed as entirely coincidental according to a classical account in which binding constraints are communicatively arbitrary and expressions of an innate UG. Note, in particular, that it is quite possible that the complexity of the binding constraints arises from the interaction of multiple constraints. For example, Culicover and Jackendoff (2005) have recently argued that many aspects of binding may be semantic in origin. Thus, John painted a portrait of himself is presumed to be justified by semantic principles concerning representation (the portrait is a representation of John), rather than by any syntactic factors. Indeed, note too that we can say: Looking up, Tiger was delighted to see himself at the top of the leaderboard, where the reflexive refers to the name “Tiger,” not Tiger himself.
And violations appear to go beyond mere representation—for example, After a wild tee‐shot, Ernie found himself in a deep bunker, where the reflexive refers to his golfball. More complex cases, involving both pronouns and reflexives, are also natural in this type of context, for example, Despite Tiger_i’s mis‐cued drive, Angel_j still found himself_j (j’s golfball) 10 yards behind him_i (i’s golfball). There can, of course, be no purely syntactic rules connecting golfers and their golfballs; and presumably no general semantic rules either, unless such rules are presumed to be sensitive to the rules of golf (among other things, that each player has exactly one ball). Rather, the reference of reflexives appears to be determined by pragmatics and general knowledge—for example, we know from context that a golfball is being referred to; that golfballs and players stand in one‐to‐one correspondence; and hence that picking out an individual could be used to signal the corresponding golfball. The very multiplicity of constraints involved in the shaping of language structure, which arises naturally from the present account, may be one reason why binding is so difficult to characterize in traditional linguistic theory. But these constraints do not pose any challenges for the child—because these constraints are the very constraints with which the child is equipped. If learning the binding constraints were a problem of N‐induction (e.g., if the linguistic patterns were drawn from the language of intelligent aliens, or deliberately created as a challenging abstract puzzle), then learning would be extraordinarily hard. But it is not: it is a problem of C‐induction. To the extent that binding can be understood as emerging from a complex of processing, pragmatic, or other constraints operating on past generations of learners, binding will be readily learned by new generations of learners, who will necessarily embody those very constraints.
It might be argued that, if binding constraints arise from the interaction of a multiplicity of constraints, binding principles across historically unrelated languages should show strong family resemblances (as they would, in essence, be products of cultural co‐evolution), rather than being strictly identical, as is implicit in the claim that binding principles are universal across human languages. Yet it turns out that the binding constraints, like other putatively “strict” language universals, may not be universal at all when a suitably broad range of languages is considered (e.g., Evans & Levinson, 2008). Thus, Levinson (2000) notes that, even in Old English, the equivalent of He saw him can optionally allow co‐reference (apparently violating Principle B). Putative counterexamples to binding constraints, including the semantic/pragmatic cases outlined above, can potentially be fended off by introducing further theoretical distinctions—but such moves run the real risk of stripping the claim of universality of real empirical bite (Evans & Levinson, 2008). If we take cross‐linguistic data at face value, the pattern of data seems, if anything, more compatible with the present account, according to which binding phenomena result from the operation of multiple constraints during the cultural evolution of language, than with the classical assumption that binding constraints are a rigid part of a fixed UG, ultimately rooted in biology. To sum up: binding has been seen as paradigmatically arbitrary and specific to language; and the learnability of binding constraints has been viewed as requiring a language‐specific UG. If the problem of language learning were a matter of N‐induction—that is, if the binding constraints were merely a human‐independent aspect of the natural world—then this viewpoint would potentially be persuasive. But language learning is a problem of C‐induction—people have to learn the same linguistic system as each other.
Hence, the patterns of linguistic structure will themselves have adapted, through processes of cultural evolution, to be easy to learn and process—or, more broadly, to fit with the multiple perceptual, cognitive, and communicative constraints governing the adaptation of language. From this perspective, binding is, in part, determined by innate constraints—but those constraints predate the emergence of language (de Ruiter & Levinson, 2008). In the domain of binding, as elsewhere in linguistics, this type of cultural evolutionary story is, of course, incomplete—though to no greater degree, arguably, than is typical of genetic evolutionary explanations in the biological sciences. We suggest, though, that viewing language as a cultural adaptation provides a powerful and fruitful framework within which to explore the evolution of linguistic structure and its consequences for language acquisition.

Acknowledgments Nick Chater was supported by a Major Research Fellowship from the Leverhulme Trust and by ESRC grant number RES‐000‐22‐2768. Morten H. Christiansen was supported by a Charles A. Ryskamp Fellowship from the American Council of Learned Societies. We are grateful to Kenny Smith and two anonymous reviewers for their feedback on a previous version of this paper.