
Remember the stunt pulled by Peter Boghossian and James Lindsay, that embarrassment to the skeptic community which shall forever be known as the conceptual penis hoax? (See also this panel discussion about the incident.) Well, this post isn’t about that, but it is about how to do serious (as opposed to farcical) criticism of feminist philosophy.

You may recall that Boghossian and Lindsay set out to prove that gender studies is a hopeless field of inquiry, condemned to nonsense by way of a priori ideological commitments. But B&L also proudly declared that they had actually never read a paper in gender studies. So, in order to show them and others how this ought properly to be done, I asked a close friend of mine, who works on feminist epistemology and gender studies (the two are closely allied fields), to provide me with a short list of her favorite papers to submit to my critical reading as an outsider. The remainder of this post is an analysis of one such paper, Elizabeth Anderson’s “Use of value judgments in science: a general argument, with lessons from a case study of feminist research on divorce,” published in the Winter 2004 issue of Hypatia, the leading feminist journal. The full paper can be downloaded here.

[Yes, this is a case of n=1, just like B&L’s hoax. But, unlike them, I’m not saying that Anderson’s paper is representative of the field. It would take too much work on my part to even approximate the level of readings necessary to make such a claim. However, while B&L targeted the lowest of the lowest journals in the field, or close to it, I am picking on a leading journal, and a leading author. Also, unlike B&L, I actually put several hours into this and read the thing from top to bottom, annotating it furiously. Enjoy the result.]

Briefly, here is what Anderson set out to do: “The underdetermination argument establishes that scientists may use political values to guide inquiry, without providing criteria for distinguishing legitimate from illegitimate guidance. This paper supplies such criteria. Analysis of the confused arguments against value-laden science reveals the fundamental criterion of illegitimate guidance: when value judgments operate to drive inquiry to a predetermined conclusion. A case study of feminist research on divorce reveals numerous legitimate ways that values can guide science without violating this standard.”

In other words, Anderson, typically for a feminist epistemologist, rejects the entrenched idea (among both scientists and most philosophers of science) that science ought to be conducted in a value-neutral fashion. She acknowledges that this is problematic in some cases, but thinks she has a general answer to the issue, which clearly separates such problematic cases from those where values legitimately guide scientific research and in fact enhance its results. Notice that Anderson is not talking about values internal to science, such as truth, objectivity, etc. Rather, she is referring to political values, attempting to defend the claim that, for instance, medical and psychological research carried out from a feminist perspective is better than (allegedly) value-neutral science.

While broadly speaking I actually reject Anderson’s position, no knee-jerk reactions along the lines of “this is sheer nonsense” will do. Anderson is a serious scholar, and some of her arguments are difficult to rebut, though I maintain that this can and should be done. So let’s get into a bit more detail.

Instead of following the progression of the rather substantial paper (which interested readers should consult on their own), let me jump straight to Anderson’s case study, which makes her case rather clearly. The topic is the effects of divorce on the wellbeing of the affected parties (parents as well as children), and she contrasts research carried out by a team led by Abigail Stewart, who published a report back in 1997 entitled “Separating together: how divorce transforms families,” with a few other papers reporting research carried out in what Anderson calls a “traditionalist” framework about the nature of the family (examples of these authors include Barbara Whitehead, George Gilder, and James Wilson).

Anderson provides a detailed comparison between what henceforth I will simply refer to as the feminist vs the traditionalist perspectives, in terms of eight criteria (pp. 12-18 of the paper):

Orientation to background interests

Framing the research questions

Conceiving of the object of inquiry

Deciding what types of data to collect

Data sampling

Data analysis

Deciding when to end an analysis

Drawing conclusions

Let’s take a brief look at each in turn. In terms of background interests, Anderson claims, reasonably, that traditionalists frame things in terms of their own view of what a family ought to be: “The wife’s role is to be mother to her husband’s children; the father’s role is to be the husband of his children’s mother. According to its proponents, this arrangement is in the best interest of the children, and probably also the parents. Alternative family arrangements are judged progressively worse the further they depart from this ideal.”

The problem with this, as we shall see, is that it automatically orients the researcher toward certain lines of inquiry, mostly in terms of psychologically negative effects on the children, while ignoring or downplaying any positive aspects, not only on the children themselves, but on the mother, for instance. By contrast, “feminists approach divorce with greater ambivalence. Although feminists are critical of the patriarchal family, Stewart’s team was initially unsure how to assess divorce from the standpoint of opposition to sexism.”

Which leads us to the second point: framing the research questions. “Traditionalists, viewing married parents as the ideal, are apt to ask: does divorce have negative effects on children and their parents?” By contrast, “Stewart’s team was skeptical of this approach, on both methodological and normative grounds. Methodologically, it is virtually impossible to distinguish the effects of divorce from the effects of the problems in the marriage that led to divorce … Even when families with divorce are compared with families without divorce, but experiencing similar problems (for example, high spousal conflict), the two types of families always differ in other respects. … Stewart also had normative objections to the traditional research question. Focusing on negative outcomes reduces the possibility of finding positive outcomes from divorce.”

Consider the above carefully. Even if I will ultimately reject Anderson’s (and hence Stewart’s) approach, it is hard to argue that she (they) make very good points. If we simply read a technical paper, published in a psychology journal, on the negative effects of divorce on children, without being aware of the ideological biases of the authors, we are prone to take the results on board while simply assuming that the research has been done objectively. In reality, though, when it comes to research on politically and socially relevant human issues, there simply is no such thing as ideology-free and “objective” science. In this sense, then, to clearly state that one is carrying out the research from a particular standpoint (feminist, traditionalist, or whateverist) is helpful to the reader in order to better evaluate the results. A disclosure of ideological bias does not, by the way, automatically license the knee-jerk rejection of the findings, precisely because we all have biases, especially when it comes to these sorts of issues.

Third, the conception of the object of inquiry: “The conception of divorce drawn from a clinical perspective focuses on the individual’s problems with an event in the past, stressing its negative aspects. Divorce is conceived in terms of ‘trauma’ and ‘loss’; it is seen as a ‘life stress’ that puts children ‘at risk’ for problems later in life. The phrases in quotations use what is known as ‘thick evaluative concepts’ — concepts that simultaneously express factual and value judgments.”

This is clearly problematic, not just from a feminist perspective. It may very well be that there are negative consequences to divorce, but if one sets up one’s entire inquiry in those terms, then one is guaranteed to find nothing but negative effects. By contrast, Anderson points out that Stewart’s team also had in place a “thick” conception of divorce, but this was a conception that was open to the possibility of positive, and not just negative, effects on the children and the mother. (Presumably, also the father, actually, though we are talking about a feminist approach here.)

How does this work? For instance, in the following way: “from the point of view of at least one spouse [and hence not necessarily the woman], the marriage has typically been failing for years before divorce. To them, divorce is not an event, but a long process of coming to grips with that failure. The conception of divorce as a ‘loss’ represents the post-divorce condition as lacking some good that was present prior to the divorce. It fixes attention on the significance of divorce in relation to the past.” Again, it seems to me hard to argue against this broadened perspective on the conception and effects of divorce. The perspective adopted by Stewart’s team was one in which divorce was not conceived as the breaking up of a family, but rather as a transformation that ends up separating the parental from the spousal roles. This is of course a perfectly reasonable alternative to the traditionalist view.

Fourth, given the above, what type of data should researchers collect? “Stewart’s team gathered data on subjects’ post-divorce feelings and interpretations of changes they underwent, in addition to reports of more objective phenomena. This provided crucial data confirming the conception of divorce as an opportunity for personal growth. Women especially found this to be so, with 70 percent judging that their personalities had improved since divorce.”

Here, one could reasonably object to the inclusion of subjective first-person reports like the one just described, as they are not as quantifiable and “objective” as, say, statistics about school grades comparing the children of divorced vs non-divorced parents. Fair enough, but as I learned as a biologist, skewing things toward the quantitative often simply means that one ends up measuring what is easily measured, as opposed to what is really interesting or important. Moreover, again, we are talking about human experiences here, so a degree of subjective judgment simply comes along with the subject matter (unlike, say, my research as a biologist on weedy and invasive plants).

In terms of data sampling, Anderson again makes an interesting point: “A sample drawn from psychological clinics [as is standard in traditionalist approaches] will be biased toward those experiencing great difficulties coping with divorce, or misattributing their difficulties to divorce, and against those who find divorce liberating. Wallerstein’s work on divorce has been criticized on this ground. Her error lies not in adopting a value-laden conception of divorce, but in failing to draw a random sample of cases. Stewart’s team, by contrast, drew a less biased sample of cases from the divorce dockets.”

I find this contrast rather illuminating, and it is easy to see how an assumption of objectivity on the part of the researchers would keep anyone from questioning whether research data ought to be drawn from psychological clinics in the first place, a choice in turn subtly informed precisely by a conception of divorce as a negative event that must lead to bad psychological outcomes.
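The sampling point can be made concrete with a toy simulation. Everything below is invented for illustration (the distribution, the clinic-selection weighting, the sample sizes are my assumptions, not figures from Stewart’s or Wallerstein’s studies): if people who are coping worst are more likely to end up in a clinic, a clinic-based sample will yield a gloomier picture than a random draw from the divorce dockets, even when the underlying population is, on average, doing fine.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical population of divorced people, each with a "well-being
# change" score after divorce (positive = improvement). Numbers invented:
# on average people improve slightly (mean +0.5).
population = [random.gauss(0.5, 1.0) for _ in range(100_000)]

def clinic_prob(score):
    # Crude logistic weighting: the worse someone is doing, the more
    # likely they are to show up in a psychological clinic.
    return 1.0 / (1.0 + math.exp(3 * score))

# Clinic sample: self-selected according to the weighting above.
clinic_sample = [s for s in population if random.random() < clinic_prob(s)]

# Docket sample: a simple random draw from the divorce records.
docket_sample = random.sample(population, 2_000)

print(f"true mean change:  {statistics.mean(population):+.2f}")
print(f"clinic-based mean: {statistics.mean(clinic_sample):+.2f}")
print(f"docket-based mean: {statistics.mean(docket_sample):+.2f}")
```

The docket-based estimate lands near the true population mean, while the clinic-based one comes out markedly negative, which is exactly Anderson’s point about Wallerstein: the bias lies in the sampling frame, not in holding a value-laden conception of divorce.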

Moving on to data analysis, Anderson distinguishes between what in statistics are called main and interaction effects. In my experience, and I find this unfortunate, lots of researchers focus on the “main” effects (notice what they are called!), meaning the average, across-the-board effects of whatever variables they have been studying. This is because the so-called interaction effects (variable 1 x variable 2; variable 1 x variable 3; and so on, to include third and higher order interactions, when feasible) are more difficult to interpret, and require very large sample sizes in order to be properly studied in terms of statistical significance.

But Anderson adds an ideological twist to this general problem: “The decision to focus on main effects, or to look for interaction effects, reflects background values. A main effects analysis accepts the average outcome as representative of the group, discounting individual variation. This makes sense if one believes that a single way of life is best for everyone. But for researchers who doubt this, attention to within-group heterogeneity is imperative.” Indeed: the way you do statistics may reflect your personal ideological biases about the subject matter you are allegedly objectively studying.

Next to last: when do we stop with our analysis? “The great temptation is to stop an analysis as soon as it reaches findings pleasing to the researchers, but to continue analyzing displeasing findings in the hope of explaining them away. To be sure, it is almost impossible to accept unwelcome findings at face value. Stewart’s team [for instance] found that some children appeared to suffer from regular visitation by their noncustodial fathers. Unhappy with this result, the team engaged in further analysis and discovered that high levels of post-divorce parental conflict interacted with regular father visitation to produce their finding. For parents still fighting after the divorce, regular visits were the occasion for regular arguments, which the children presumably anticipated with anxiety.”

This is an interesting and novel finding, but it came about precisely because of the ideological biases of the researchers, who were “unhappy” with the prima facie results. Of course Anderson is well aware that this is a slippery slope, but at the outset of her paper she clearly states that one’s own values ought to inform various aspects of one’s research, except the results themselves. To put it differently, one can be as unhappy as one likes, but if a reasonable alternative explanation cannot be found one still has to accept the verdict of the evidence. This, naturally, is harder to pull off in practice than it sounds in theory, which is one reason I will, below, end up disagreeing with Anderson and, by implication, the whole feminist epistemological approach.

Finally, what conclusions should researchers draw from their studies? “The main point of divorce research, as of much other research in the social sciences, is to answer evaluative questions on the basis of empirical evidence. Are children better off if parents who want a divorce stay together? What coping strategies make divorce go better or worse for the affected parties? The enterprise of answering these questions on the basis of evidence would make no sense if science were value-neutral in implication — that is, if ethics were science-free. It is not.”

This is a crucial point, and again one on which I find myself in broad agreement with Anderson. While other types of scientific research may be value neutral (though I do think this is a continuum), research on human subjects on issues of import to our social policies and moral choices is inextricably evaluative. There just is no way to do the research without having ideological biases. Our only options are to hide them and pretend they are not there, or to wear them on our sleeves and make them clear to the world.

After having given Anderson her due, let me explain why I still disagree with the feminist approach to epistemology, and yet I do not fall back on the more classical idea that science is value-neutral and ought to be carried out in an unbiased fashion.

Throughout much of the paper, Anderson makes extensive use of Helen Longino’s work on epistemology and the nature of science, particularly her 1990 book, Science as Social Knowledge, which I also highly recommend. (She wrote a more recent one, also worth checking out: The Fate of Knowledge. A shorter, accessible overview of her take on the nature of science is her article for the Stanford Encyclopedia of Philosophy: “The social dimensions of scientific knowledge.”)

But Longino’s view of science isn’t quite feminist in the sense advocated by Anderson. Indeed, it is much closer to a school of thought often referred to as “perspectivism” in philosophy of science. Longino takes seriously the above-mentioned idea that scientists are never objective and value-free, for the simple reason that they are human beings, and modern cognitive science shows us that we are all (some more, some less) biased, consciously as well as unconsciously. But the answer provided by Longino is a bit more nuanced than simply “let’s do overtly feminist science and be done with it.” Rather, the idea is that quasi-objectivity is a property not of individual scientists (or even groups of scientists) but of science as a dynamic process.

This means not that scientists should approach their research in expressly biased terms, but rather that we should guarantee that scientific research is carried out by the broadest possible set of individuals, making sure that we include as many personal, ideological, political, and even religious perspectives as possible. Why? Because they will tend, in the long run, and on average, to cancel each other out.

We can’t have research on divorce done by feminists only, because that would bias things toward one particular conception of family and divorce. But it can’t be done just by researchers who embrace a more traditionalist view of those topics either. Instead, let’s have those as well as many other perspectives represented by different researchers and schools of thought, which will then correct each other’s biases during the pre- and especially post-publication process of peer review. That’s how quasi-objectivity emerges: as the outcome of a process of social construction of science (in the benign, not post-modernist, sense of the word). This is not ideal, but it is by far the most realistic solution, given that science is done by human beings.

As you can see, then, it took me several hours of study of a single paper, and about 3,000 words of explanation here, in order to properly assess one particular study in feminist epistemology. That’s why the penis hoax thing is a joke, and it’s a joke on those skeptics who embraced it, not on feminist or gender studies. If you want to criticize academic scholarship you have to engage with it, seriously and charitably. And if you want to go from the critique of a single paper to that of an entire field, then you ought (ethically!) to devote hundreds or thousands of hours to it. Or the joke is on you.