Is the left over-represented within academia?

Data from Understanding Society provides a better basis than self-selecting Times Higher Education surveys

[Note: this post developed from conversations with Siobhan McAndrew, who helped me to understand the Understanding Society data, but who bears no responsibility for any mistakes in here, and who has produced a much better blog post which discusses how to respond to “over-representation compared to the general population”]

At the start of March, the Adam Smith Institute published a report entitled “Lackademia: Why Do Academics Lean Left?”.

The report was based, to an uncomfortable extent, on the findings of two self-selecting surveys run by the Times Higher Education: one survey from before the 2015 general election and one survey from before the EU membership referendum.

Because self-selecting surveys are generally not a reliable way of ascertaining public or group-specific opinion on an issue, criticism of the report focused on the use of the THE surveys.

A much stronger source of data is the Understanding Society (US) survey, which contains information both on closeness to political parties and on occupation. This data also shows that left-wing opinions are over-represented within academia, compared to the general population. Here I discuss some of the information held on each of these variables.

Which respondents work in academia?

Understanding Society records information on respondents’ current job, and codes responses using the International Standard Classification of Occupations 1988 (ISCO-88).

This allows me to identify all respondents whose current job fell under the heading “College, University, and Higher Education teaching professionals”. The number of respondents in this category varies over successive waves of the Understanding Society survey, but is never lower than 178.
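The filtering step can be sketched as follows. This is a minimal illustration, not the actual analysis code: the variable name `jbisco88_cc` and the toy data are assumptions, though 2310 is the genuine ISCO-88 unit group for "College, university and higher education teaching professionals".

```python
import pandas as pd

# Hypothetical mini-sample of respondents. "jbisco88_cc" is an assumed
# name for the ISCO-88 code of the respondent's current job; the real
# Understanding Society variable name may differ.
respondents = pd.DataFrame({
    "pidp":        [1, 2, 3, 4, 5],
    "jbisco88_cc": [2310, 5122, 2310, 9999, 2446],
})

# ISCO-88 unit group 2310: "College, university and higher education
# teaching professionals".
HE_TEACHING = 2310
academics = respondents[respondents["jbisco88_cc"] == HE_TEACHING]
```

Here `academics` retains only the two respondents coded 2310; in the real data the same filter would be applied within each wave.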

Which parties are respondents close to?

Understanding Society asks a number of questions about respondents’ party affiliation:

Respondents are first asked whether they support a particular political party.

If they are not a supporter of a particular political party, they are asked whether there is one political party to which they feel closer than all others.

If they cannot identify their closest party, they are asked which party they would vote for in a hypothetical general election to be held tomorrow.

I've collapsed all of these responses into a single party preference, and dropped all responses for which no party was indicated.
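The collapsing step amounts to taking the first non-missing answer in question order. A sketch, with column names (`supports`, `closest`, `would_vote`) invented for illustration rather than taken from the Understanding Society codebook:

```python
import pandas as pd

# Hypothetical responses to the three party questions, in the order
# they are asked. None marks a question that was skipped or gave no
# party. These column names are illustrative, not the survey's own.
df = pd.DataFrame({
    "supports":   ["Labour", None,    None,           None],
    "closest":    [None,     "Green", None,           None],
    "would_vote": [None,     None,    "Conservative", None],
})

# Take the first non-missing answer, in question order, then drop
# respondents who named no party at any stage.
df["party"] = df["supports"].fillna(df["closest"]).fillna(df["would_vote"])
df = df.dropna(subset=["party"])
```

The fourth respondent, who named no party at any stage, is dropped; the other three each contribute one party preference.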

How should respondents be weighted?

Understanding Society provides a large number of survey weights. In the analyses that follow, I’ve used their cross-sectional weights, so that estimates from each wave should represent the general population at that time.

For subgroups — that is, for the respondents who work in higher education — I've experimented with (a) no weighting and (b) the cross-sectional weights. One would use no weighting if the ways in which the sample was generally unrepresentative were no longer applicable when focusing on a subgroup. One would use cross-sectional weighting if the ways in which the subgroup was unrepresentative were similar to the ways in which the sample as a whole was unrepresentative. I’ve used the cross-sectional weights, but the proportions using no weights are very similar, and do not affect the overall picture. Weights specific to higher education could be constructed on the basis of HESA staffing returns, but I don't have the time to do this.
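Once a weight is chosen, the proportions themselves are just weighted shares. A sketch, assuming a column `xw` of cross-sectional weights alongside the collapsed party variable (both names invented for illustration):

```python
import pandas as pd

# Toy data: each respondent has a collapsed party preference and an
# assumed cross-sectional weight "xw".
df = pd.DataFrame({
    "party": ["Labour", "Labour", "Conservative", "Green"],
    "xw":    [1.5, 0.5, 1.0, 1.0],
})

# Weighted proportion for each party: sum of weights in the party
# divided by the total weight.
weighted = df.groupby("party")["xw"].sum() / df["xw"].sum()
```

Setting every weight to 1.0 recovers the unweighted proportions, which is the comparison described above.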

What do the data show?

Figure 1 shows the proportions of respondents from Wave 6 of Understanding Society who indicated closeness to the six largest parties in the UK. Wave 6 was fielded between 2014 and 2016, encompassing the period of the general election.

The figures for the general population are quite different to the results of the 2015 general election. This may be a result of

the dates of fieldwork,

the weighting scheme used,

the treatment of those who don’t know, or

differential turnout.

These figures can still be useful, however, if we are interested in the relative position of academics rather than their absolute levels of support for particular parties.

The figure shows that academics support the Conservative party and the UK Independence Party at lower rates than the general population. The reverse is true for Labour, the Liberal Democrats, and the Green Party.

The figures are in certain respects similar to those from the THE survey. 11% of respondents to both surveys said that they were closest to the Conservatives. Whilst Labour does better in Understanding Society than in the THE, the Greens do worse.

We can extend this to multiple waves of Understanding Society. This is shown in Figure 2, which shows that under-representation of those closest to the Conservative party has been a constant feature of the data, even if the degree to which Labour is over-represented has changed over time. Note that UKIP identifiers are included under "Other parties" in the first four waves of the data, a coding decision which now appears rather eccentric.

There is, therefore, good evidence — which is not derived from a self-selecting sample — to suggest that left-wing opinions are over-represented in academia when compared to the general population.

If this data is reliable — if I haven't made a gross mistake in my interpretation of Understanding Society data, or a gross mistake in my code — then the question becomes, "how (if at all) ought we to behave differently given differences in the rates at which academics and the general population vote for different parties?". That's a question to which I hope to return, but only after Siobhan has had a go.