Science and the Left

Once upon a time, the relation between science and socialism was direct and self-explanatory. It is not for nothing that we Marxists call ourselves scientific socialists, when we can pause our unceasing and largely fruitless factional struggles, that is. Unfortunately, this relation has become more problematic over time, and here I think it fair to lay the blame at the feet of the so-called New Left and the influence of the Frankfurt School. The uptake of certain attitudes from the counter-cultural milieu, together with an ill-conceived and reflexive regard for spontaneity and its corresponding distrust of institutions and organisation, received from anarchist and ultra-left circles often hiding behind Rosa Luxemburg’s corpse, has led many in the movement to disdain the “pharmaceutical establishment”, particle physics, or biotechnology with an animus hitherto reserved for reactionary institutions such as the military or the churches.

It’s undeniable that, both in the capitalist countries and under really-existing socialism, science has been instrumentalised for the pursuit of profit and the production of ever more destructive weapons. Nonetheless, an inability to separate the epistemological grounds of scientific knowledge from the particular conditions under which research takes place is as misguided as attributing the working conditions of industrial capitalism to machinery and the efficient organisation of labour. We can no more set aside the discoveries of biology or the potential of nuclear power than we could smash our way into revolution, one cog at a time.

Of course, the bourgeois consensus has itself turned towards science–what else could it do?–but in its inevitably deficient, commonsensical, empiricist, positivist way. These are such bywords for liberalism that it would scarcely contest them. From a schematic and inadequate understanding of science, and from the urge to arrogate its aura of incontrovertible factuality, arise such disciplines as neoclassical economics, Taylorism, or the science of public administration: adorned with superficially scientific attributes–a mathematical formalism, a reductionist approach, or a statistical foundation, respectively–but lacking a truly scientific commitment to explore the hypothesis space and follow the conclusions of research wherever they lead, even at the expense of initial assumptions. So the problem of ignoring science has its just-as-evil twin in scientism: pretending non-scientific conclusions have the weight of science behind them, as in the cases above, or disregarding all extra-scientific considerations when evaluating a decision. (There is an argument to be made that nothing should be extra-scientific, but the fact is that we do not, here and now, have a developed science of aesthetics, for instance.)

It’s true that there are elements on the hard right, whether moved by religious motivations or otherwise, far more averse to science than left movements. Of course, right and left being somewhat nebulous terms, particular deviations vary by region. Still, a school of thought calling itself scientific socialist, whose foundational theoretical work–Capital–Marx sent to Darwin as a token of admiration, should have something better to say about its record on science than “the insane conservative people of the world are worse”. We need to change, and fortunately are changing, the general attitude we bring to scientific concerns, break away from a pervasive and sterile methodological anarchism, and start taking ourselves and the world a little more seriously. A good example of this necessary and ongoing recalibration is the resolution passed by the 10th Assembly of United Left against public funding of “alternative medicine”.

When we give cover to so-called alternative therapies; to a reflexive, Aristotelian, anti-Darwinist horror of transgenic organisms, rooted in an outdated view of species that treats them as rigid categories; or to an alarmist, groundless, and frankly dangerous rejection of nuclear power, we hand intellectual ammunition to those who claim the working class lacks the discernment to govern itself, quite apart from the real problems caused by policies arising from those views. Reflexology, acupuncture, homeopathy, or the whole cluster of pseudo-science linked to Anthroposophy (including biodynamic or so-called organic farming, Steiner schools, and so on) may not in themselves cause much harm, but the legitimation of such positions within the norms of socialist discourse sets us epistemologically adrift and allows any kind of nonsense to be proclaimed as though it were a proven theorem. Not that bad medicine, suboptimal agriculture, or foot massages with pretensions have no bad consequences of their own: accurate treatment requires clarity about what works and what doesn’t, the efficient production and distribution of food is a duty while people are hungry, and so on.

At the same time, we can’t retreat to meaningless noises about evidence-based or scientific approaches if we don’t have a clear notion of what science is and is not. References to specific experimental procedures are unlikely to be universal, and suffer from being too bound up with the existing state of scientific knowledge. For example, demanding that only medical practices grounded in double-blind randomised studies be used would have led us to oppose efficacious procedures for which evidence existed, though not of that rigour, such as variolation. Hence, we need to examine what we’re referring to when we talk about science.

So what is science anyway?

Here lies the truly thorny problem, of course. An answer as naïve as it is questionable would be that science is a body of knowledge which has been obtained by following the scientific method. Of course, this position forces us to determine what the scientific method would be, and if you ask a sufficiently diligent schoolboy, he may be able to produce something somewhat like this, probably under the name of the hypothetico-deductive model:

1. Observe a phenomenon

2. Formulate a hypothesis

3. Design and perform experiments to test it

4. Infer a law

5. Construct a theory

There are many specific problems with this model: the role of corroborating evidence, the matter of statistical or probabilistic theories, confirmation holism, unobservables, and so on, though perhaps its primary difficulty lies in the assertion that there is a scientific method. It’s often used as a model to explain science, but like much taught in school it is hardly more than a toy.

Beyond the realm of high school, philosophers of science have been trying to answer this question for a long time. Given that, as we have seen, science can’t be said to be defined by a single method, how can we tell if something is, or is not, science? This is often referred to as the demarcation problem: how and where to set the boundary between science and non-science, and, particularly, between science and pseudo-science. This problem runs together with central questions in the philosophy of science, such as whether statements about unobservables are true–or even meaningful–or whether observations can force us to believe or disbelieve a theory. I say they run together because, to a certain extent, the way these questions are answered sets up certain ontological and epistemological commitments (assertions about what exists and about what is justifiable to believe, respectively) which form a whole and need to be examined together.

There have been many controversies in history on the question of the relation between what’s in the mind, commonly expressed in language, and reality. This question can present itself as ontological (a question about what exists) or epistemological (a question about what we can know), although I would say the distinction cannot always be drawn with perfect clarity. Examples of such controversies include nominalism versus realism, rationalism versus empiricism, and, most on-point, logical positivism versus scientific realism. A detailed review of all of them is beyond the scope of this article. The position I take to be most accurate in describing science is scientific realism, which is characterised by claiming that statements about unobservables are meaningful and can be true or false, that unobservables exist (or do not) independently of belief, and that it is reasonable to believe scientific statements about unobservables are accurate, or at least approximately true in the case of ideal scientific theories.

A first stab at the demarcation problem was made by logical positivism. This position held that scientific theories are characterised by asserting meaningful statements–which, under this view, are logical predicates whose terms must refer to sense data–capable of being true or false. Such a theory can be compared to reality by performing experiments, collecting sense data, and checking it against the predicates the theory contains. Corroborating evidence would verify the theory, and contrary evidence would falsify it. This whole approach to science had serious failings: the project of reducing all scientific theories to observables failed, and it was deeply counter-intuitive to most practicing scientists, but it also could not succeed on its own terms.

The first major criticism of logical positivism came from Popper. He held that, since science necessarily rests on induction and must contain synthetic statements, a theory could never be verified by corroborating evidence: nothing could assure us that a future observation won’t contradict it. This position is commonly called falsificationism, and it is very popular in so-called sceptic circles (new atheists and their ilk), among many lay people, and with some practicing scientists. It’s popular enough that it’s common to hear that “x is not scientific because it contains no falsifiable statements”. While a better approach than logical positivism in its verificationist flavour, it, too, has serious defects.

Quine shattered falsificationism in his seminal article “Two Dogmas of Empiricism”. I strongly recommend reading it, especially if you hold the common falsificationist view that scientific theories are made of logical predicates whose terms refer to sense data and which make testable predictions, falsifiable through experiment. The two fundamental problems Quine raises are, on the one hand, the synthetic–analytic distinction (how to determine which statements are logically necessary and which are true as a matter of fact), and, on the other, the far more important question of confirmation holism. The problem here is that, even given a theory which, as falsificationists would have it, makes testable predictions, no observation or series of observations can ever force us to consider it falsified, because those observations can always be explained away by additional predicates: there can be experimental or measurement error, a calculation error in determining what the theory predicts, or outright hallucination. Hence, Quine asserts that we can’t speak of specific predicates of a theory being falsified independently; the theory must stand or fall as a whole. This criticism might seem academic, but for the fact that all scientific theories have their failures of prediction. Some are, so to speak, legitimate problems with observation (an error in experimental design, faulty machinery, human error, etc.) but others are simply due to the fact that an otherwise useful theory may not predict everything, nor with total accuracy.

Lakatos was inspired by Popper and Quine, but his criticism of falsificationism went deeper. He rejected the notion that scientific theories must be reducible to statements about sense data, and admitted that the process of observation is itself theory-laden. Hence, we can speak of devising a new procedure or machine to measure something better or more accurately, which was highly problematic under the positivist-inspired limits. Furthermore, Lakatos held that a single contradictory datum is not sufficient to assert that a theory has been falsified. He came up with the notion of research programmes, which are characterised by certain core statements that cannot be abandoned without giving up on the programme altogether, and which are held as first principles by researchers within the programme. Data not predicted by those core statements can be dealt with by auxiliary hypotheses, so long as these don’t contradict the core statements themselves. For a research programme to be scientific, it must make predictions about novel facts not known at the time. A programme is progressive when these predictions are verified, and degenerate when they are not. This corresponds much better with actual scientific practice: when a well-established and corroborated theory exists, contradictory data is often suspect. Many possibilities can be explored–improper experimental design, failure in the apparatus, and so on–and there are cases when, in a conflict between a very powerful theory and a single contradicting datum, the rational thing is to disregard the datum.

Another aspect Lakatos had to deal with was Kuhn’s critique concerning paradigm changes, which holds that science operates in a normal mode most of the time, correcting theories only gradually, until enough evidence accumulates against a theory that a complete change becomes necessary. A new paradigm then appears, often through generational replacement, to substitute the old one, able to explain the new phenomena that contradicted the old theory. Kuhn’s account presented paradigm shifts as irrational events, which Lakatos tried to address. Having a research programme that can explain and predict given facts is valuable in itself, even if it cannot deal with reality as a whole, so Lakatos took a much more nuanced view of when such a programme should be rejected. Data points contradicting the auxiliary hypotheses can be dealt with by adjusting the latter, but even problems with the core statements can and should be ignored, so long as there is no better research programme in terms of predictive power. For Lakatos, a research programme should continue, even when it is known to be fundamentally wrong, until there is something better to substitute it, and scientific progress can happen even under those conditions by improving the auxiliary hypotheses and making the theory fit reality better. This approach allows for what Kuhn called revolutionary science, or paradigm shifts, only when a better competing research programme is available.

Regarding demarcation, the fundamental difference between a scientific research programme and pseudo-science would, according to Lakatos, lie in the fact that a scientific research programme predicts novel facts. Whether it does so correctly determines its progressive or degenerate status, but the relevant distinction is that it has generative power. Pseudo-sciences, on the contrary, limit themselves to explaining or describing previously known facts, and are incapable of generating new predictions. Amusingly, Lakatos (like me) regarded neoclassical economics as a pseudo-science, though he thought the same of Soviet Marxism.

In conclusion, demarcation is a difficult problem. It can’t be done mechanically, by asking nature about a given statement and observing whether it says no, as falsificationists often put it. Yet there are certain criteria which are highly indicative of a research programme being unscientific, or at least degenerate:

Epicycles: the number and complexity of auxiliary hypotheses keeps rising to explain novel data

Stagnation: the scope of facts the theory can predict or explain remains constant

Self-centrism: the theory avoids comparing itself to competing ones on the criteria of predictivity or effectiveness

Indifference: the parts of reality which don’t fit the theory well are ignored or their importance minimised

There is no simple way to distinguish good from bad science, but these heuristics make it possible to reach reasonable conclusions. Dialectics is sometimes invoked in obscure ways by Marxists, but I believe this is a case where it actually applies. We cannot simply assess theories by examining their logical statements, finding contradictions, and running tests. We must compare theories as wholes–their explanatory power, their predictive scope, their degree of generativity–and reach conclusions on which is preferable. Thus we can dispose of such notions as homeopathy, neoclassical economics, or the linear no-threshold model of radiation harm. True open-mindedness and scepticism should not lead us to accept any random flavour of statement circulating in our social milieu, but neither should they lead us to drop any theory that is the least bit methodologically suspect.