Posted on August 27, 2011

Some things are so predictable that you can very nearly set your clock by them. High on the list of predictable happenings in my own life is the regular and repeated receipt of e-mails or Facebook messages whose authors insist that in my passion for the elimination of racial inequities, I am wasting my time, having sadly overlooked certain immutable laws of nature.

In particular, these missives insist upon the iron truth of genetically endowed intelligence, under which black people are simply inferior beings, cursed with substandard IQ, and thus, even the best of efforts to make things more equitable will fail.

Sometimes these messages emanate from sources whose own command of the English language leaves more than a little something to be desired, and whose complement of grammatical and spelling errors suggests that their certitude about white racial superiority is more about projecting feelings of inadequacy onto others than it is about genuine confidence in the claim being made. These are the persons for whom cries of “white power” rise in amplitude exactly so much as is needed to drown out the screams of self-doubt and frustration that would otherwise overwhelm them. In short, most of the people who speak the loudest about white supremacy are, themselves, losers of the first order whose accomplishments are essentially non-existent.

But occasionally those who proclaim the scientific connection between racial identity and native intelligence demonstrate a keen — if jaded and incomplete — command of the literature on the subject, and are able to proffer their arguments in a sophisticated and even erudite manner. They have read all the right books and scholarly journal articles (at least if by “right” we mean those books and articles whose authors share their hereditarian and/or racialist views), and can quote them chapter and verse, much as an evangelical Christian might cite Scripture. Rather than calling themselves racists, they have latched onto the term “racial realists,” so as to suggest that those of us who reject the notion of race as a valid category are as unrealistic as those who cling to the idea of a flat Earth.

Race is Real, No it’s Not, Yes it Is!: Reflections on Science and its Discontents

The primary arguments made by the so-called racial “realists” are as follows:

1. Race is a scientifically valid category of human difference;
2. Racial differences are not only real but meaningfully connected at the biological and/or genetic level with important human traits, most notably, intelligence;
3. Intelligence is measurable using standardized IQ batteries and other mechanisms; and,
4. Blacks are generally less intelligent than whites and Asians, and this is due to biological and/or genetic differences between the races.

Flowing from these premises, the racial “realists” argue that social policy should take these “truths” into account. This means that we should cease all efforts to create greater social or economic equity between the races, since they are inherently unequal in their abilities. It also means that personal biases on the basis of these truths — even those that perpetuate deep racial inequities in the society — are not unfair or unjust but rational. It is rational, for instance, for employers to favor white job applicants for high-level jobs, since they are more likely to possess the talents necessary to do those jobs well. So, in this sense, discrimination should not be prohibited. It should be tolerated, and seen as a logical choice given the science of racial difference. And certainly we should not take racial disparities in income, wealth, occupational status, or educational outcomes to suggest the presence of racism; rather, these gaps merely reflect the persistent human inequalities that cluster along racial lines.

Typically, when confronted with these kinds of arguments, those of us on the left have answered with the same basic retorts for the better part of the past half-century. Those arguments are, roughly as follows:

1. Race is a social construct, not a scientifically valid category of human differentiation;
2. Whatever differences do exist between so-called racial groups are so minimal as to be essentially meaningless;
3. Intelligence is a vague and culturally contingent concept with no clear meaning that we can easily test, and those tests we have developed for the purpose (such as IQ batteries) are inherently biased and flawed; and,
4. Whatever differences in academic performance and/or observed “intelligence” exist between different so-called racial groups are the result of environmental factors, not genes.

Flowing from these premises, we on the left have typically held that public policy must correct for these environmental disadvantages to which persons of color have been subjected, so as to create a fair and just society.

Needless to say, I believe the four premises of the anti-hereditarian, anti-racialist position are far and away closer to the truth than those of the so-called racial realists. Biologists like Joseph Graves, for instance, have utterly eviscerated the work of hereditarian racialists like J. Philippe Rushton (who has no biology or genetics training whatsoever), with scientific evidence and reasoning that the latter has never seen fit to rebut. Others, like Richard Nisbett, have presented further evidence indicating the malleability of intelligence and its tenuous link to racial heritage. Although the hereditarians have replied to Nisbett (unlike Graves, whom they have simply ignored), they have done so selectively at best. Other research — such as that of Joel Myerson, Mark Rank, Fredric Raines and Mark Schnitzler at Washington University in St. Louis, which found that black students’ IQ scores rise four times as quickly as whites’ during college, such that a mere four years of college can cut the racial IQ gap in half — has been altogether ignored by the racial realists, and for obvious reasons: if IQ gaps can be halved in less than half a decade of intensive educational instruction, genes cannot possibly be the culprit.

Additional evidence indicates spectrum-wide IQ gains among African Americans relative to whites, on the order of about 5-6 points in just the past 30 years, irrespective of college education. Such evidence indicates that the claims of Rushton and his contemporaries — that racial IQ gaps have been steady for a century — are utterly nonsensical. Rushton, of course, says little or nothing about this most recent evidence of black IQ gains; rather, he merely repeats the same arguments he has made for two decades: arguments that, in the end, are barely more scientifically sound than when he used to go into Toronto shopping malls and ask passersby how far they could ejaculate because he believed ejaculatory distance (and for that matter, penis size) to be inversely related to intelligence. Since one can only assume Phil Rushton considers himself quite intelligent, one can then glean fairly easily what he has told us about his manhood, given his own quackish theories.

Likewise, the intrinsic flaws of IQ testing and of the hereditarian claims regarding intelligence are laid bare in further scholarly work, such as that of Claude Fischer, et al., in their 1996 rejoinder to the best-selling 1994 volume, The Bell Curve.

All that said, however, I have come to the conclusion that arguing for racial equity on the grounds that race is non-scientific and unrelated to intelligence, or that the notion of intelligence itself is culturally biased and subjective, is the wrong approach for egalitarians to take. By resting our position on those premises, we allow the opponents of equity and the believers in racism to frame the discussion in their own terms. But there is no need to allow such framing. The fact is, the moral imperative of racial equity should not (and ethically speaking does not) rely on whether or not race is a fiction, or whether or not intelligence is related to so-called racial identity.

Indeed, I would suggest that resting the claim for racial equity and just treatment upon the contemporary understanding of race and intelligence produced by scientists is a dangerous and ultimately unethical thing to do, simply because morality and ethics cannot be determined solely on the basis of science. Would it be ethical, after all, to mistreat individuals simply because they belonged to groups that we discovered were fundamentally different and in some regards less “capable,” on average, than other groups? Of course not. The moral claim to be treated ethically and justly, as an individual, rests on certain principles that transcend the genome and whatever we may know about it. This is why it has always been dangerous to rest the claim for LGBT equality on the argument that homosexuality is genetic or biological. It may well be, but what if it were proven not to be so? Would that now mean that it would be ethical to discriminate against LGBT folks, simply because it wasn’t something encoded in their biology, and perhaps was something over which they had more “control?”

The point is, scientific knowledge is never complete; and as we learn more about the human genome it is entirely possible that scientists will discover larger group-based differences than we previously recognized. Indeed, this has already happened. Several years ago, for instance, geneticists discovered clear patterns of DNA markers that correlated with different ancestral groups and the geographic regions within which they mostly developed. Although the longstanding position of those of us who reject race as a scientific category has been that population groups are not the same as “races,” and thus, this research still doesn’t justify elevating race to a category of scientific fact, the population groups identified by the DNA markers did in fact bear a remarkable relationship to the self-identified racial groups of the persons whose genes were tested. In other words, differences were found, they were reasonably large in terms of the distinctiveness of the patterns discovered, and they did correlate with so-called racial categories, however those categories may themselves have been socially constructed.

Does this mean that race is real? Well, it depends on what you’re looking for. It certainly means that there are persistent and real genetic differences that cluster within so-called racial groups, and more so than many have heretofore believed. Yet these differences still fall far short of indicating sub-speciation, which is the normal standard used by biologists to designate different “races” or breeds within a larger species. And this research, however interesting, suggested that the DNA differences discovered (which were distinct based on ancestry or “race”) were found mostly if not entirely in genetic “microsatellites” and in what is known colloquially as “junk DNA.” This is potentially important, since microsatellites (short tandem repeat sequences of DNA base pairs) are unrelated to any known human attributes, abilities or characteristics like intelligence, and junk DNA does not code for any known behavior, trait or human tendency. So, at most we can say that there are real and definable differences between the so-called races, which allow us to look at DNA and determine with fair accuracy the origin of a person’s ancestry; but these differences are to be found in a largely irrelevant part of the genome. This brings us back to the longstanding critique of hereditarianism: the ways in which human races differ are not important, and the important ways in which humans differ are not racial.

In the end, whether or not you think the research is sufficient to prove that race is real as a category of human difference will likely depend on your larger worldview, and the assumptions you bring to the conversation ahead of time. This is another reason why it is so dangerous and absurd to rest a moral claim about how a society should be structured upon the claims of scientists. Simply put, there are always different scientists with different evidence, evidence that can be interpreted in several different ways by people within the same discipline, let alone by the lay public. If you think science can settle ethical and moral questions, or public policy disputes, you need only look at the debate over climate change, or over what were long considered settled matters — like evolution — to see how wrong such an assumption can be.

If we rest our ethical and moral claims on scientific knowledge, and then the scientific knowledge changes, we are duty bound to change our ethics and morality — not a very good recipe for the creation of a lasting or just society. So while science can inform our ethical considerations, it cannot be the end-all, be-all for such considerations, any more so than theological faith.

Science Versus Ethics: The Irrelevance of Racial “Realism”

I think it is vitally important, and ethically incumbent upon those of us who reject racism and seek racial equity, to sever our position from the longstanding arguments about race being an illegitimate category of human difference. Even if race were, or were at some point proven to be, a truly scientific notion, this would say nothing at all about how we should treat one another or fashion a society. Likewise, whether or not there are relationships — biological or genetic — between race and something like intelligence is utterly irrelevant (or at least should be) to the questions of how we should treat one another, or how society should be structured. In other words, from a philosophical perspective, the scientific claims of the racial realists would be irrelevant even if they were entirely true.

There are a number of reasons for this position, which I will now explore in depth.

1. Genetic inheritance is morally arbitrary; thus to base public policy on matters of genetic inheritance is morally indefensible.

With regard to the issue of intelligence: If IQ is genetically determined in large measure, to reward those with high IQ for a trait they merely inherited (perhaps by providing them with better educational or job opportunities) would make no more moral sense than to reward persons with blond hair, green eyes, freckles or lactose intolerance. And to punish those with lower IQ (a condition they couldn’t have helped but inherit, if the determinists are correct) by withholding enriched opportunities from them would be no more justifiable than to punish those with blood type O, or those with phenylketonuria. In all cases, these would amount to inherited traits, none of which says anything about how a person should be treated. They are morally arbitrary conditions, and to reward or withhold opportunity on the basis of any of them would be a monstrous and profoundly unfair departure from anything remotely approaching justice and decency.

To this, the hereditarians would respond that intelligence is different from the other heritable traits mentioned above. Whereas eye color has no correlation with one’s ability to contribute to a society’s net worth — no link, for instance, to one’s ability to discover cures for disease, manage successful businesses, or create great works of art — intelligence is so correlated. So, on this view, structuring the society to provide enhanced opportunities to persons with higher intelligence (and perhaps to groups with higher average IQ) makes sense. Indeed, to attempt to provide significant opportunities (especially, God forbid, equal ones) to those with lower IQ would make little sense at all, given the amount of effort required to raise such persons merely to the level of average, let alone to place them on a par with the cognitive elite. As one person who wrote to me recently argued, there are “diminishing returns” with such attempts at equalizing outcomes, and thus they are hardly worth the time, money or effort involved.

But if the goal is merely to enhance people’s cognitive strength and abilities up to an average level, somewhere around the mean for the society, there is little reason to doubt that such an outcome is possible for millions with the proper environmental interventions. Indeed, the copious research in Nisbett’s book makes this exceedingly clear. So even if we were to accept the notion that intelligence is largely influenced by genetic inheritance, the ability to influence the part that is conditioned and developed environmentally could still prove significant. More than that, to not seek to improve the conditions to which persons were subjected, so as to boost that portion of ability that we can control (or at least influence) would stand as a uniquely evil oversight. It would amount to allowing biology to become destiny without regard for any notion of social responsibility or sense of fair play. Although not all interventions would work, to decide ahead of time to forego making any substantial efforts at such interventions, just because they might prove inadequate, would be to shirk our moral responsibility to provide to each person in the society as much opportunity as possible to develop to their fullest potential.

For instance, if we know (and we do) that cognitive ability is directly impacted by exposure to lead and other toxic chemicals, and that children of color are disproportionately exposed to these dangers, would it be moral to forego efforts to improve housing and community environmental quality in black and brown neighborhoods, simply because such persons might have had somewhat lower cognitive ability anyway? If we know (and again, the evidence is clear) that there is a direct link between racial discrimination and negative health outcomes that can affect childhood cognitive development — like disproportionate low-birthweight for black and brown children, or emotional stress that affects the limbic system, and can thereby depress what is known as “fluid intelligence” later in life — what difference does it make that the persons negatively impacted by these things might have “naturally” had IQs that were a certain number of points lower than others? What moral weight should such science have in that situation? Would that scientific “fact” of different native ability mean that we ought not seek to reduce the discrimination that brings about that independent result? Should we not intervene where we can, when we can? To conclude as such makes no sense whatsoever.

This is especially true because of a second point:

2. A logical value-added analysis would suggest that interventions should focus on raising the average or below-average in society, rather than favoring those at the top.

It could be argued, and I would indeed argue, that social intervention on behalf of the intellectually gifted is actually highly inefficient and wasteful, and far more so than interventions on behalf of the average or even below-average. After all, if those with high IQ are actually more talented and capable of creating, producing and developing things of value, then presumably their talents would shine through and manifest such outcomes even without encouragement or assistance from government. In fact, the marginal gains produced by such interventions on their behalf (orchestrated as they would be by fairly average policy planners and bureaucrats, presumably) would likely be too small to amount to much in the way of added value. On the other hand, for those of average intelligence but significant character and determination, or for those who were cognitively impaired but imbued with substantial perseverance and drive, interventions by the state could be the difference between academic failure and success, between employment that can support oneself and one’s family and work that cannot, between a fulfilling, autonomous life and one of dependence and hardship. Given this choice, can anyone rationally argue we would be better off subsidizing the intellectual have-mores — for whom such interventions would be redundant and wasteful at best, and at worst would only deepen pre-existing social inequities — while ignoring or eschewing such interventions on behalf of the have-lessers?

At the top of the IQ pyramid, interventions and subsidies can make very little difference — if the theories of the IQ-ocracy are accurate — because at such a cognitive pinnacle the ability to nudge up one’s intelligence, output or productivity is limited. But in the middle and lower end of the IQ scale, slight improvements in opportunity or education can make a profound difference in someone’s ability to contribute to the society in which they live. Consider the SAT, for example: added educational enhancements for the so-called gifted could only improve their scores so much — after all, if one already has a 2360, there are only an additional 40 points theoretically obtainable on the test — while enhancements for quite average or even below-average students could provide a significant boost, raising their scores sufficiently to allow for college enrollment, or enrollment at a more selective school than otherwise would have been possible. Simply put, there is always more room for improvement in the middle and at the bottom than at the top. If any arrangement produces “diminishing returns,” it is not intervention on behalf of the poor or people of color (thought by the IQ-ocracy to be inferior in regard to intellect), but rather intervention on behalf of the already advantaged.

Applying a moral and philosophical standard that prioritizes the larger society’s well-being, there is little doubt but that the position of the racial realists makes no sense. Not only would paring back interventions on behalf of those in the middle and the bottom of the social and economic structure result in punishing them for their presumed genetic destiny, it would also do considerable damage to the larger polity by overlooking untapped talent. So, for instance, even if we could say with certainty that group x on average is more “intelligent” than group y, this would not obviate the fact that there would be millions of people in group x who were far less intelligent than millions of others in group y. The only way to do justice by each individual — and also to maximize the well-being of the society, which needs the talents and creativity of as many people as possible in order to prosper — is to level the playing field to the greatest extent possible, so that individuals can prove themselves without the unfair and arbitrary advantages or disadvantages provided either by nature or environment.

As an example of how this works, and why it is so important to make the effort, consider the results of educational policy changes at South Side High School in Rockville Centre, Long Island. A little over a decade ago, less than a third of black and Latino students at the school were graduating with full New York Regents Diplomas, compared to 98 percent of whites and Asians. After the school eliminated its rigid “ability tracking” policies (which tended to relegate black and Latino students to lower-level classes) and replaced them with heterogeneous grouping, in which all students were exposed to high-level material, the racial disparities in full graduation rates virtually disappeared. Now, roughly 95 percent of students in every racial and ethnic group graduate from the school with full diplomas. Obviously, black and Latino genes didn’t change in the course of ten years; neither, for that matter, did their cultures (another culprit sometimes identified by the right as explaining achievement gaps between whites and Asians on the one hand and blacks and Latinos on the other). What changed were the assumptions about what black and brown students could handle, and the policies governing that to which they would be exposed. Under a racial realist regime, tracking would have been maintained, under the assumption that the students who were doing worse were simply, on balance, incapable of doing any better. As a result, thousands of real kids over the past ten years — at just that one school — would have seen their life chances dimmed, not because of genes, but because of racism, plain and simple.

As a final note here, the hereditarians would also invert the incentive structure for hard work and determination, to the detriment of the larger society, by rewarding native talent and ability, even though persons possessed with such talents may not have worked very hard to develop them; meanwhile, they would say to those who had had to work especially hard to achieve — because they were not, perhaps, naturally gifted at a particular thing — that they would just have to settle for their station in life, or hope for some lucky break, because to provide them with the kind of targeted opportunities that might allow them to flourish was simply wasteful. Surely this would amount to a profound injustice, regardless of whatever science may have to say about genes, race and intelligence.

But there is more:

3. Intelligence as formally measured has no necessary correlation to the traits we hope to maximize in society, and may well be highly correlated with certain dysfunctional traits and tendencies; as such, we should not fetishize raw intelligence, nor seek to favor those in society (either individuals or groups) who happen to possess more of it, for whatever reason.

Although the hereditarians and racial “realists” take as a given the notion that society should maximize its raw intelligence, and that doing so will make that society better and more productive — thus it should provide educational enrichment mostly to those individuals or groups that possess more of it, and do less for those individuals or groups who lack the same — this thinking does not follow. After all, although there is little doubt that aggregate intelligence is beneficial to societal well-being, there is little reason to believe that intelligence, defined in the strict and typical way, is necessarily correlated with other human traits that are equally (or perhaps more) important to social well-being. Among these would be character, compassion, kindness, perseverance, empathy, generosity, humility, and the ability to cooperate and collaborate with others.

How many of us have known people (perhaps even in our own families) who were of rather average accomplishment, with little formal education, but whose insights and true wisdom about life and how to live it, far surpassed anything we learned in a formal classroom? From simple country folk, to desperately poor residents of public housing in New Orleans, some of the most morally developed and decent people I’ve ever met failed to even graduate from high school; and were they tested on a formal IQ battery, or given the SAT, I have little doubt but that they wouldn’t do all that well. Yet, I would venture to guess that when it comes to those traits listed above, they would likely demonstrate an abundance of them, relative to even the most highly educated persons in the culture.

Indeed, one might even say that there is a tipping point, beyond which too much formal intelligence may actually be inversely related to some of those other traits. Take the ability to cooperate and collaborate with others, for instance: Most human resource specialists would argue that among the most important skill sets in the 21st century economy, the ability to work collaboratively, to rethink one’s assumptions and to approach a problem from multiple perspectives would rank near the very top. Yet there is no known, or even logical correlation between these skills, and formal, testable IQ. Indeed, some research suggests that higher-IQ individuals are often less flexible in their approach to problems — in part, perhaps because their feelings of superior intelligence lead them to doubt those they view as mental inferiors — and less likely to manifest the teamwork-related talents so desired by virtually any company or institution for which one might be working in years to come.

Likewise, however important IQ may be to scientific, artistic or industrial innovation (as claimed by the “racial realists” and hereditarians), there is little doubt but that it could also be of benefit to those seeking to engage in fraud, subterfuge, deceitful dealings with others, or effective criminality. Those with extremely high IQs are likely to be “intelligent enough” to effectively deceive and manipulate others, and to cover up their misdeeds for longer periods of time. Whether cheating on one’s spouse or engaging in financial wrongdoing at one’s company, those with high IQ would be the very persons for whom getting away with their actions might prove much easier. But is this something toward which we should aim our society? Corporate criminals, after all, are usually highly educated, and would probably score highly on just about any standardized test you chose to give them. And what of it? Virtually all the stock manipulators, unethical derivatives traders and shady money managers on Wall Street, whose actions have brought the economy to its knees of late — and who, it might be worth noting, are pretty much all white men — would likely do well on the Stanford-Binet or the Wonderlic test. They were probably above-average students. But what are we to make of these facts? Clearly they say little about the value of such persons to the nation or the world. The Unabomber was a certified genius, and Ted Bundy was of well-above-average intelligence, as were, no doubt, virtually all the men who invented napalm for Dow Chemical, or who killed thousands thanks to their malfeasance at Bhopal, or who have been responsible for most of the ecological damage done to the land base upon which they and others depend. But I’m having a hard time discerning what we should conclude from these truths, in terms of how much emphasis we place on intelligence as opposed to other human traits. If narcissism and predatory sociopathy may also be correlated with intelligence, then perhaps we need less of it, and not more.

Interestingly, and as a side note, there have been few if any studies that have sought to determine the potential connections between high intelligence and certain pathological tendencies. And the reasons for this oversight are not hard to discern: after all, why would researchers into the so-called “science” of intelligence, who are themselves well-educated and likely of above-average IQ, wish to explore the potential downsides of people like them? In other words, the research that gets done in the first place on this subject tells you all you need to know about the inadequacy of scientific inquiry and the biases implicit to the same. While educated people love to study the inadequacies of the less-educated, they rarely turn the lens upon themselves.

Yet, we do have some information to suggest that those whom the racial “realists” and hereditarians would credit with greater intelligence and ability, are perhaps lacking in the kind of ethical integrity and civic-mindedness that as a society we should seek to maximize. So, for instance, consider the findings reported by sociologist Judith Blau, in her book Race in the Schools. Therein, Blau presents survey data showing that black students are more likely than white students to report working on various social projects with church groups, helping disadvantaged families, working with homeless people, or helping illiterate adults learn to read or write.

Using a particularly strong research method known as Multiple Classification Analysis, so as to determine personal and group “integrity” — as measured by students’ feelings about cheating, skipping class, being disrespectful to teachers or disobeying school rules — Blau discovered that black and Latino students exhibited far greater integrity than their white counterparts; and low income students exhibited far higher scores on the integrity scale than the affluent. Indeed, affluent white students demonstrated the least academic integrity of any group: they were far more likely to endorse cheating and corner-cutting so as to get ahead, as well as flouting rules and regulations. As Blau explained:

“Being white, and therefore advantaged in the United States, promotes a casual indifference about rules and conventions. White teens, I suggest, tend to consider themselves exempt from the rules that apply to others.”

Blau also notes that white students have higher rates of discipline problems than students of color, once other factors like economic status are held constant — so that students being compared are coming from similar backgrounds — and that white students are more likely than their counterparts of color to abuse various drugs or alcohol. So, whatever advantage such students may possess in terms of heritable IQ seems largely irrelevant if we value other important traits like integrity. In the end, the question the hereditarians and racial “realists” refuse to engage is the most important one of all: namely, what kind of society do we want? One in which collaboration and cooperation, empathy, compassion and integrity are paramount? Or one in which people are really good at standardized tests and abstract reasoning, and where those imbued with advantages in these categories feel themselves entitled to the best life has to offer, to hell with everyone else? To ask the question is to answer it.

Ultimately then, if you’re happy with the way mostly rich white men are running the United States, and particularly its corporations and its banks, then by all means, you should probably embrace the worldview and vision for the future endorsed by the racialists and hereditarians who fetishize IQ. If, as is more likely, you find their leadership and direction to be just a tad more problematic than that, then perhaps it is time to consign this nonsense about formal intelligence to the waste bin of scientific history, where it rightly belongs.

The Irrelevance of Racial Science to Civil Rights Law and Theory

Yet with all this said, the racial “realists” would no doubt insist that at the very least, if their theories are correct about race and intelligence, it would require a major rethink of civil rights law and antiracist theory. First, we would have to relinquish the belief that evidence of disparate outcomes between whites and blacks on various indicia of social well-being proved that the culprit for said gaps was racism. Then, we would have to jettison disparate impact jurisprudence in the legal system, which views significant disparities as prima facie evidence of discrimination, and forces employers to justify the policies, practices or procedures that bring about said disparities. Finally, we would most assuredly need to end policies like affirmative action, since attempts at racial balancing within companies or colleges rest on the assumption of equal capacity between the races. If capacities are not equal, then racial balancing would mean that scientifically less capable people were being given opportunities for which they were not qualified, at the expense of the more talented. Not only would such an arrangement violate the principle of colorblindness (a more traditional objection to such efforts), but even worse, it would contravene the very laws of nature!

Indeed, the racial “realists” would no doubt argue that my own claim above, where I note that genetic endowment is morally arbitrary, and thus, we ought not reward or punish people on the basis of that genetic endowment, is itself an admission that things like affirmative action are unjust. After all, such policies are based on the genetic endowment of racial identity or color. How can I (and others on the left) rationalize using race and its connection to a history of discrimination, so as to expand opportunity for people of color, while opposing the use of race and its connection to raw intelligence so as to expand opportunity for whites? Isn’t it all or nothing? If we can consider race in one direction, shouldn’t we consider it in both directions?

While at first glance these arguments may sound reasonable, even intuitive, upon closer reflection they suggest a fundamental misunderstanding of left antiracist theory, modern legal jurisprudence, the rationales behind affirmative action, and the actual real-world implementation of civil rights policies. Let’s examine these arguments one at a time.

1. There is direct evidence of racism and discrimination in the job market: Antiracist theory does not simply rely on aggregate gaps in various indicators of well-being.

While it is certainly true that those of us on the left view significant racial disparities in income, wealth, occupational status or educational attainment as the result of a long history of unequal opportunity, such an assumption is not ipso facto necessary to the antiracist position. Indeed, when I have discussed these matters, I have steered clear of using simple aggregate data (on white versus black family income, for instance), in favor of data that compares only like with like.

So, for instance, it is one thing to present data that says “black income is only 69% of white income, on average,” and quite another to demonstrate (as recent labor department data does) that even when whites and blacks are doing the same kinds of jobs (even when those jobs are in management and financial categories, requiring similar levels of high education), whites earn as much as 30 percent more than their black counterparts. Likewise, even highly educated Chinese Americans working in professional and managerial positions earn only about 56% as much as their white counterparts. These kinds of data, by comparing similarly qualified and capable persons, tend to control for things like differential merit. It is simply not likely that the IQs of whites and blacks doing the same kind of jobs, with the same educational backgrounds, are all that different (especially if, as the hereditarians have long argued, certain IQs are actually needed to be able to perform certain types of jobs in the first place). Even if we accepted the position of the racial “realists” that black IQ and actual intelligence were, in the aggregate, lower, this could not explain why blacks working in the same kinds of jobs, with the same kinds of education, were still earning far less, or suffering higher rates of unemployment.

And of course, even better than raw data and the inferences we might draw from it, are direct tests for discrimination, which have been performed in dozens of industries and over many years. These tests send out prospective employees who are matched for credentials, demeanor, communication style, age and experience, and whose only real difference is color. In test after test, whites are treated more favorably, as I describe in depth in my book, Colorblind. There is literally no way to interpret this research except as evidence of racism and discrimination.

2. Disparate impact jurisprudence would remain legitimate, even if the racial realists were correct.

Recently, I received an email from a so-called racial realist, in which he crowed about a new study, just a few weeks old now, which purports to demonstrate that variations in intelligence are roughly 50 percent due to genetic factors, and that there are multiple genes associated with intelligence, across the human genome, which can be identified. Though his approach to me was typically smarmy and self-assured, I responded with interest and asked him to send the supporting information for the full study, rather than merely the abstract he had first sent along. He did so, and I read it. We then engaged in a couple of rounds of e-mails in which we discussed the implications of the study’s findings for civil rights law, and particularly disparate impact jurisprudence.

Before explaining why his position on disparate impact was incorrect, I should note that there were several problems with the way he was seeking to use this study, not the least of which was his attempt to read a racialist argument into a study that had nothing to do with race at all. First, the study was done only on whites of almost entirely “Norwegian origin.” This matters, because any biologist can tell you that within-group heritability of any trait (intelligence, height, weight, or anything else) says nothing about the heritability of the differences between groups. In other words, the fact that intelligence might be 50 percent heritable and traceable to genetic causes, for humans generally, and within the white group being studied (and perhaps the same would hold for blacks), does not mean that 50 percent of the difference in measured IQ between whites and blacks is due to genetic causes. To understand why in-group heritability says nothing about inter-group heritability, consider the following example.

Let’s say that intelligence is, as the authors of this study suggest, 50 percent due to heritable factors like genes. And let’s say this is true for all groups or races of humans. So far so good. But then let’s say that one group — let’s call them the Klingons — were subjected to overt oppression on the basis of their identity. They were denied schooling, regularly subjected to violence and forced to eat lead paint chips twice a week. Since we know that lack of schooling, exposure to violence and lead paint can all impact cognitive function, we would expect that the Klingons would, over time, manifest significantly diminished intelligence, however we might choose to measure that concept. Even if the genes of both groups were substantively the same, and intelligence were 50 percent due to genetics, can anyone doubt that the Klingons’ intelligence deficit, relative to the dominant group, would be largely due to environment?

Likewise, imagine that you took two random handfuls of perfectly mixed seeds from a bag of wheat, and proceeded to sow one handful in barren ground and another in highly fertile soil. Even though the seeds were mixed perfectly, there may still be some variation in the size and health of the plants that emerged, since the seeds (though perfectly mixed) are still not identical. Given the environmental consistency to which each set of seeds was subjected (be it good or bad), any differences in the emergent plants would be entirely heritable within that set. However, since the genetic stock from which the seeds came was similar, and since both sets were perfectly mixed and thus, would have a similar proportion of healthy and unhealthy seeds, any differences between the two sets would have nothing to do with heritability or genetics; rather, they would owe entirely to the different soil conditions. In other words, in-group differences can be 100 percent genetic while inter-group differences can be 100 percent environmental.
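The seed analogy can be made concrete with a small simulation. This is a minimal sketch, not a model of any real population: the mean of 100, standard deviation of 15, and a fixed 10-point environmental penalty are arbitrary numbers chosen purely for illustration.

```python
import random
import statistics

random.seed(42)
N = 10_000

# Two handfuls drawn from the same well-mixed bag: both groups have an
# identical distribution of genetic potential.
genes_a = [random.gauss(100, 15) for _ in range(N)]
genes_b = [random.gauss(100, 15) for _ in range(N)]

# Each handful grows in a uniform environment, but the environments
# differ: the "barren soil" subtracts a fixed amount from every plant.
ENV_PENALTY = 10.0
pheno_a = genes_a                             # fertile soil: no penalty
pheno_b = [g - ENV_PENALTY for g in genes_b]  # barren soil: uniform penalty

# Within each group the environment is constant, so ALL within-group
# variation in outcomes is genetic: within-set heritability is 1.0.
h2_a = statistics.pvariance(genes_a) / statistics.pvariance(pheno_a)
h2_b = statistics.pvariance(genes_b) / statistics.pvariance(pheno_b)

# Yet the entire gap BETWEEN the groups is environmental: it tracks the
# soil penalty, not any genetic difference between the handfuls.
gap = statistics.mean(pheno_a) - statistics.mean(pheno_b)

print(round(h2_a, 3), round(h2_b, 3))  # both 1.0
print(round(gap, 1))                   # close to the 10-point soil penalty
```

The point the simulation makes is exactly the one in the paragraph above: heritability estimated inside each group is 100 percent, while the gap between the groups owes nothing to genes at all.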

So the study in question says absolutely nothing about racial differences in intelligence, their causes, their malleability, or their implications for anything. Indeed, there is nothing in the study to suggest that whatever gene frequencies the authors were identifying as being related to intelligence were any more or less to be found in whites as opposed to non-whites. This is why the authors never mentioned it, and would likely be appalled at the way their research is being twisted by racists.

That said, the individual who sent the study my way felt otherwise, and noted that at the very least this kind of finding would require a rethink of disparate impact jurisprudence and government guidelines for so-called “adverse impact” in hiring. After all, current guidelines provide that if a particular company hires blacks, for instance, at a rate that is less than 80 percent the rate at which it hires whites, discrimination may be occurring and an investigation may be warranted. With such a gap discovered, plaintiffs can identify a cause of the disparity — such as a standardized test score or other required credential — and flip the burden of proof to the employer, who then must demonstrate a non-racial, job-related justification for the offending criteria. Such a burden, according to my electronic interlocutor, is unfair, especially given what we “know” about genes and intelligence.
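The 80 percent trigger described above is simple arithmetic. Here is a minimal sketch of the computation; the applicant and hire counts are hypothetical, invented purely for illustration:

```python
def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of a group's applicants who were hired."""
    return hired / applicants

# Hypothetical employer: 50 of 100 white applicants hired,
# but only 30 of 100 black applicants.
white_rate = selection_rate(50, 100)   # 0.5
black_rate = selection_rate(30, 100)   # 0.3

# Under the four-fifths rule, a selection rate below 80% of the highest
# group's rate is treated as possible evidence of adverse impact -- a
# trigger for closer scrutiny, not proof of discrimination.
ratio = black_rate / white_rate        # 0.6
adverse_impact_flag = ratio < 0.8      # True: warrants a closer look

print(ratio, adverse_impact_flag)
```

Note that crossing the threshold only shifts attention to the employer’s practices; as the discussion below of burden-shifting makes clear, it decides nothing by itself.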

But this argument is mistaken on several levels. First, the adverse impact standard is merely a potential trigger for looking more deeply into an employer’s practices to ensure that discrimination is not the culprit. It does not prompt an automatic investigation, let alone a lawsuit, let alone a finding for the plaintiff. In fact, the way that disparate impact has been adjudicated in the courts has been quite balanced. On the one hand, yes, the employer may have to identify a job-related rationale for a particular qualification requirement — the so-called “business necessity” defense — but is that really a burden? If a test or other credential is truly related to doing the job in question, why shouldn’t the employer be able to demonstrate that? One would think that employers would want to validate their selection instruments ahead of time, so as to ensure that they were really getting the best employees. If they haven’t done this, it not only might well perpetuate injustice against certain persons and groups, it might actually work to the detriment of the offending employer. And of course, if the employer offers a non-racial, job-related reason for a particular credential (and the courts have been very lenient on what constitutes “business necessity”), the burden then flips back to the plaintiff to show that the job-related rationale is bogus, and usually to demonstrate that there are other, equally valid measures of merit, which would have a less disparate impact.

In other words, those alleging discrimination still have to make a strong showing of fact and demonstrate by the preponderance of the evidence that the disparate impact is unjustifiable. It is not as if the mere fact of disparity is enough to prove unlawful discrimination in court. In fact, it is rare for disparate impact cases to prevail at all. Not to mention, those of us on the left have long relied on stringent “availability analysis” to make the case for occupational discrimination in given companies or industries. These analyses, which compare the percentages of persons of color hired to the numbers of available and qualified persons of color in a given community — and which often find substantial evidence of discrimination — militate against the kind of haphazard and unfair burdens that the racial realists accuse the left of supporting.

So there is no reason why the claims of racial realists — even were they proven to be true beyond any doubt — should require a rethinking of the existing legal rules. For instance, if having a certain IQ is really critical for performing a particular task, that should be something that an employer could prove in court. Although the hereditarians make this claim by simply pointing to research about the “average” IQ of people doing certain jobs, this would not demonstrate that such an IQ was actually needed to do the job in question. It would merely show that certain people tend to cluster in certain jobs for reasons that may include IQ, but also might include network connections, nepotism, and other non-IQ related factors. Let the racial realists propose the nation-wide and perhaps world-wide IQ testing of all persons, in all jobs, and then the longitudinal assessment of those persons over, say, a twenty year period to evaluate their respective performances, and then the rigid controlling for all other factors that might have impacted performance (including racial bias, health factors, age, living conditions, gender bias, family size, geographic location, level of education, inherited wealth and status among others), and then and only then — assuming they could actually prove anything with this research — would it be fair to say that the disparate impact standards might be unfair. Something tells me that they would neither propose such a thing, nor gain the compliance of most corporate executives were they to do so.

3. Affirmative Action is legitimate even if the racial realists are correct.

Contrary to the claims of the racial “realists,” affirmative action programs and policies would not be invalidated, even were it proven that some of the racial gaps in academic performance and intelligence were due to genetic or biological differences between racial groups. Although many presume that affirmative action is simply a matter of plugging some percentage of blacks or Latinos (or whatever) into particular jobs or college slots, that is not, in fact, how such efforts operate. Typically, affirmative action entails deliberate outreach to otherwise underrepresented communities, out of a concern that persons in those communities might be overlooked, as well as a more holistic assessment of credentials, which takes into consideration the specific obstacles (based on race) that individuals may have experienced. In other words, affirmative action, properly understood, is simply a policy that seeks to look more deeply at college applicants or job applicants, so as to make sure that talent is not being passed over. It does not force the hiring or admission of anyone, and indeed, research indicates that persons hired with the help of affirmative action actually perform as well as or better than their white male counterparts, on average.

What this suggests is that other than strict proportional quotas — which are already outlawed in almost all instances — the evidence on race and intelligence would be irrelevant to affirmative action efforts. Doing targeted outreach and considering the impact that racial identity may have had on a job or college applicant’s on-paper credentials (making them appear more or less qualified than they really were) would still be valid exercises.

Morally speaking, making group-based considerations under affirmative action would be far different than the group-based considerations that the racial realists would have us undertake. To consider one’s group status for the purpose of expanding access — by noting that people of color have been collectively prevented from opportunities available to whites (which is inarguable), and thus, deserve targeted efforts to allow them to compete more equally — is fundamentally different than considering one’s group status to limit access, as with those who would make generalizations based on average group IQ. In the former case, policymakers would be seeking to ensure that talent is not overlooked, as it must be under a system of racial subordination. In the latter case, policymakers would be saying “tough” to those who were unlucky enough to belong to a supposedly inferior group. They would be foreclosing the ability of individuals to live up to their individual potential, whereas policies like affirmative action do not limit the ability of those of us who are white to reach ours: they may make our success less guaranteed, by requiring a broader competition than that to which we otherwise would have been subjected. But there is no injustice in that.

In order for people to live up to their potential as individuals — a potential that we cannot know for certain based on group averages — they must be afforded equal opportunities, bereft of barriers based on their status as members of historically subordinated groups. As such, even if the IQ-ocracy were correct, it would be supremely unjust to assume that because blacks as a group have lower average “intelligence,” individual blacks should therefore be limited to remedial education, presumed less capable, and steered into lower level jobs. Highly capable persons — whites, mostly, in the eyes of the white supremacists — would be expected to rise to the top in any society where effort and talent were rewarded, even if there were countervailing programs like affirmative action in place. But in a system without such efforts, members of subordinated groups would face long odds indeed. It is more likely that black and brown talent would be overlooked in an IQ-driven system, than it is that whites would be overlooked in an affirmative action regime.

In other words, in order to uphold the notion that people should be treated like the individuals they are — not merely as individuals in the abstract — considering the way that racial identity may have limited opportunities for job or college applicants (and thus, taking affirmative action to look more deeply at what goes into an applicant’s presumed and visible “merit”) would be morally requisite. And yet, making assumptions about individual IQ based on group averages, and then doling out the goodies accordingly would be morally repugnant. Both look at group identity, but for very different reasons, with very different levels of ethical justification, and with very different practical results.

Conclusion: Making Policy as If People Matter

The IQ religionists are advocating (in their minds at least) a society of excellence, in which superiority is rewarded. Yet this formulation still raises the question: what kind of excellence, and what kind of superiority? Are we to limit our understanding of excellence to a notion of abstract testable intellect, and superiority to one of an ability to perform more highly on said tests? Or do we want to promote excellence in various human traits that show no necessary connection to the kinds of traits for which psychometricians can produce a standardized battery: compassion, caring, empathy, thoughtfulness, perseverance, moral judgment, and humility among them?

And should we construct a society around the notion of rewarding people for the traits they merely inherited, and which, even if taken for granted, they could not help but possess? Or should we place a higher premium on effort and perseverance, such that those who begin with a skill deficit and yet put forth maximum exertion to achieve their goals become those we reward and seek to emulate? These are not easy questions for which obvious and pat answers can suffice. But one thing is for sure: the answers to these questions are principally derived from philosophical and ethical considerations, not from the laboratory results of scientists, whatever their findings ultimately prove to be.