In the Introduction to his book “R for SAS and SPSS Users” (Springer 2009) Robert Muenchen offers ten reasons for learning R if you already know SAS or SPSS. All ten reasons say something important about R. However, his fourth reason is fundamental: “R’s language is more powerful than SAS or SPSS. R developers write most of their analytic methods using the R language; SAS and SPSS developers do not use their own languages to develop their procedures.” To me, this expresses something about R that should speak to anyone who does statistical modeling, no matter what tools she or he may be using.

What is so compelling about R’s powerful language? I think that there is a direct analogy here with natural language. Every language enables thoughts peculiar to the culture in which it developed. If you speak more than one language, how many times have you labored to say something in another language that just comes so naturally in your mother tongue? Whether by design or historical accident, some languages are just better than others for saying certain things.

I propose that in the same way that English is the language of business, and that French may still be the language of diplomacy, R is the language of Statistics. I don’t just mean that R “is spoken” by many or even most statisticians. R’s superiority for statistics is deeper than that. R is a language with syntax and structure that have been explicitly designed to formulate expressions about statistical objects. At this time, it may be la première langue for statistical thinking: it enables the formulation of ideas and notions about statistical models and data that are difficult to express succinctly in other languages, including mathematical notation.

For example, suppose you want to discuss multiple regression. A mathematical exposition might begin with the equation:

(1) Y = Xβ + ε

A statistician will naturally interpret this as an expression of the regression model, but (1) is primarily a statement about the relationship of random variables, abstract mathematical entities, not statistical models. In contrast, the R expression

(2) model <- lm(y ~ x)

is a statement about the linear model that relates the data structures x and y. For a person who “speaks” some R, (2) “means” the model object (coefficients, residuals, etc.) that results from fitting a linear model to the data structures x and y. Moreover, (2) actually makes “model” an object, packed with information that describes the regression, which can be thought about as a whole and “talked about” with other R expressions. There is certainly some overlap, but expressions (1) and (2) are really about different concepts.

As another example of the expressive power of the R language, consider how difficult it is to formulate multi-level hierarchical models in standard mathematical notation. With the aid of the notation “j[i]”, which is used to encode group membership (j[10] = 2 means the tenth element in the data indexed by i belongs to group 2), Gelman and Hill (Cambridge 2007: a must read for anyone new at multi-level modeling) write the simple varying-intercept model with one additional predictor as yᵢ = α_j[i] + βxᵢ + εᵢ. This nonstandard notation gets messy quickly as complexity increases, and as Gelman and Hill point out, it doesn’t enable the unique specification of the model. (They discuss five ways to write the same model.) By way of contrast, here is their R code using the lmer function (from the lme4 package):

model <- lmer(y ~ x + (1 | group))
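
To make the contrast concrete, here is a small sketch using simulated data (the variable names x, y, and group, and all numeric values, are illustrative assumptions, not from the text). It shows how the fitted model in (2) is an object whose pieces can be extracted and “talked about” with further R expressions, and how the lmer call expresses the varying-intercept model in one line:

```r
# Simulated data for illustration: 5 groups of 20 observations each.
set.seed(42)
group <- rep(1:5, each = 20)           # j[i]: group membership of each observation
alpha <- rnorm(5, mean = 2, sd = 1)    # varying intercepts, one per group
x     <- rnorm(100)
y     <- alpha[group] + 0.5 * x + rnorm(100, sd = 0.3)

# Expression (2): the fitted linear model becomes an object.
model <- lm(y ~ x)
coef(model)            # estimated intercept and slope
head(residuals(model)) # first few residuals
summary(model)         # the whole fit, inspected as a single thing

# The varying-intercept model y_i = alpha_j[i] + beta*x_i + e_i,
# using lmer from the lme4 package (install.packages("lme4") if needed).
library(lme4)
mlm <- lmer(y ~ x + (1 | group))
fixef(mlm)  # fixed effects: overall intercept and the slope on x
ranef(mlm)  # estimated group-level intercept deviations
```

The point is not the particular estimates, which depend on the simulated data, but that both fits yield objects that can be queried, compared, and passed to other functions, exactly the sense in which (2) says more than (1).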