Raymond Wolters, American Renaissance, May 2011

Steven Farron, The Affirmative Action Hoax: Diversity, the Importance of Character, and Other Lies, New Century Books, 2010, 349 pp., $22.95.

Steven Farron, author of this relentless critique of racial preferences, was a professor of Classics at the University of the Witwatersrand in Johannesburg, South Africa, until 2001. That year he resigned his academic position in order to study American affirmative action and other policies that grew out of the unequal success of different groups. Prof. Farron is a man of strong views; this is reflected in the title of his book: The Affirmative Action Hoax.

There is much debate about the real motives for reverse racial discrimination. In his book Equality Transformed, one of the best-informed observers, historian Herman Belz, has written that during the 1970s, against a background of race riots in many American cities and the “fragging” of white Army officers in Vietnam, American elites redefined “discrimination” as “disparate impact.” They then implemented affirmative discrimination as “the price society had to pay to prevent further violence in the black community.” Rather than explain their rationale candidly, however, America’s leaders proffered one falsehood after another. The Affirmative Action Hoax relentlessly exposes these falsehoods, and Prof. Farron argues that dissembling has been so extensive it amounts to a deliberate hoax.

Prof. Farron concentrates on affirmative action in American higher education, and does not hesitate to name the guilty. He demonstrates, for example, that an article by Eugene Garcia, the dean of Berkeley’s School of Education, was full of “blatant lies.” He shows that one of the best known defenses of affirmative action, The Shape of the River (1998), by Derek Bok and William Bowen, was filled with so many falsehoods that the distinguished authors — one a former president of Harvard and the other a former president of Princeton — deserve the appellation Prof. Farron bestows on them: “liars.” Prof. Farron also shows that justices of the US Supreme Court have endorsed egregious sophistries.

Prof. Farron takes particular pains to expose the pioneering misrepresentations about “diversity” that Justice Lewis H. Powell included in his opinion in an especially important case, Regents of the University of California v. Bakke (1978). Justice Powell wrote that the US Constitution prohibits government agencies and the recipients of government grants from discriminating on the basis of race. However, he added that the Constitution allows colleges and universities to foster intellectual debate by seeking a “diverse” student body and faculty that include “a wide variety of interests, talents, backgrounds, and career goals.”

Justice Powell took the unusual step of discussing and applauding what he called Harvard’s “illuminating example.” He accepted that Harvard was in good faith when it claimed it considered each student as an individual, adding that “the race of an applicant may tip the balance in his favor just as geographic origin or a life spent on a farm may tip the balance in other candidates’ cases . . . [but] the [Admissions] Committee does not set target-quotas.” “Tipping the balance” suggested only a slight edge for individuals from underrepresented groups. In fact, race was a tremendous advantage for black and Hispanic applicants, and the consistent admission, year after year, of approximately the same number of poorly qualified minorities showed that Harvard was clearly filling quotas.

Justice Powell’s comments on “diversity” served as the rationale for many universities, and for the Supreme Court majority in Grutter v. Bollinger (2003), in justifying affirmative racial preferences in academe.

Jews and gentiles

Prof. Farron provides an especially interesting account of the origins of “diversity” and other non-academic considerations for university admissions. He writes that before 1920, Ivy League institutions “admitted students almost entirely on the basis of academic criteria.” By 1919, however, “the proportion of Jews at elite American colleges was several times the proportion of Jews in the American population: for example, 20 percent at Brown and Harvard, nearly 25 percent at the University of Pennsylvania, and 40 percent at Columbia” (in 1920, Jews were 3.4 percent of the US population).

In response, Ivy League schools began to use scholarships to attract gentile students, even if they did not have the most outstanding academic qualifications. To boost the proportion of gentiles further, the elite colleges also considered applicants’ participation in music, athletics, debating, school publications, and student government. Some schools proposed a new goal: creating a “student body [that] will be properly representative of all groups in our national life” by “building up a new group of men from the West and South and, in general, from good high schools in towns and small cities.” Other schools emphasized the importance of “character” and “personality.” In the 1930s, Stanford assigned a 40 percent weighting to these attributes.

Prof. Farron shows that the purpose of promoting “diversity” — as an alternative to strict academic qualifications — was to limit the enrollment of Jews. At Columbia, administrators wanted Jews to be no more than 20 percent; at Harvard, 15 percent; at Yale, 10 percent; at Stanford, 3 percent. The leaders of these institutions, however, came to recognize that quotas were at odds with widespread opposition to explicit discrimination. “My [original] plan [quotas] was crude, and its method . . . unwise,” the president of Harvard wrote to the president of Amherst in 1923. In 1945, an administrator at Yale confided, “[T]he Jewish problem continues to call for the utmost care and tact.” The solution was indirect discrimination under the guise of “diversity” or “character” rather than open quotas.

Prof. Farron writes that by embracing “diversity” these schools “saved themselves from Jewish inundation.” “During the 1930s, the proportion of Jews at Harvard varied between 14 and 16 percent (five times the proportion of Jews in the American population), which nearly perfectly matched [the] original proposed quota of 15 percent.” Beginning in the 1920s and for four decades, the Jewish proportion of undergraduates at Yale amounted to no more than 12 percent, “just marginally more than [the original] goal of 10 percent.”

The dean of Yale medical school explained in 1934 that “the number of Hebrews admitted . . . has never been more than 10 percent,” although “from 50 to 60 percent of the applicants . . . each year are Hebrews.” At Cornell Medical School, the proportion of Jewish students was reduced from 40 percent to 10, while Columbia reduced its proportion of Jewish medical students from 50 percent to 20. The proportion of Jews at Columbia Law School was reduced to 11 percent, while the proportion of Jews in engineering, dental, pharmacy, and veterinary schools declined by 24 percent, 35 percent, 45 percent, and 70 percent, respectively.

By recounting this history, Prof. Farron demolishes Justice Powell’s contention that the Ivies had not sought quotas but were fostering intellectual diversity. However, Prof. Farron does not explore what might have been lost by removing all barriers to Jewish admission. Many Ivy administrators believed that Jewish students would not assimilate the values of the Anglo-American mainstream unless the proportion of Jews was limited. The Jewish students were said to live at home, eat their lunches from brown paper bags, and retain cliquish loyalties they had formed in ethnic neighborhoods. They were said to remain only half assimilated. Summarizing this argument, the New Republic declared in 1922, “Five Jews to the hundred will necessarily undergo prompt assimilation. Ten Jews to the hundred might assimilate. But twenty or thirty — no. They would form a state within a state.”

By the 1960s, significant discrimination against Jews was a thing of the past, but some questioned the extent to which Jews had assimilated. Carl Bridenbaugh touched on this in his 1962 presidential address to the American Historical Association. Bridenbaugh began by noting that modern historians had lost “the priceless asset of a shared culture.” He noted that by the 1960s “many of the younger practitioners of our craft, and those who are still apprentices, are products of lower middle-class or foreign origin . . . They find themselves in a very real sense outsiders in our past and feel themselves shut out.”

Bridenbaugh wondered if the rising generation of alienated young scholars would appreciate the values of those who had led America in the past. Or would a new generation of self-consciously ethnic historians transform academic American history into a critique of the nation’s shortcomings? It is possible to argue that Bridenbaugh was on to something, and that it was these initial inroads that led to the present trend of viewing history from the cramped perspective of “race, class, and gender” rather than as the story of a nation.

Prof. Farron also neglects to make a crucial comparison between the earlier discrimination against Jews and today’s “affirmative action.” Admissions officers in the Ivy League were discriminating against a group they considered alien and unassimilated in favor of applicants who were gentile, like themselves. This was a classic case of in-group favoritism (though it still allowed Jews access to America’s top universities in numbers far disproportionate to their percentage of the population).

The “affirmative action” that followed was completely different: White administrators discriminated in favor of racial minorities and against whites like themselves. The public justification — the promotion of “diversity” — may have been the same, but the effect was to punish gentile whites rather than advantage them. Prof. Farron does not even take notice of this crucial difference, much less offer an explanation for what motivated white admissions officers, in effect, to discriminate against their own children.

The problem of IQ

Prof. Farron is what might be called “an IQ absolutist.” Early in his book, he quotes Arthur Jensen: “If there is any unquestioned fact in applied psychometrics, it is that IQ tests have a high degree of predictive validity . . .” He also emphasizes that “scores on standardized tests are the best measures of knowledge and aptitude,” and that “innumerable extensive studies have demonstrated without exception the predictive accuracy of grades, the SAT, LSAT, etc.”

Prof. Farron shows that in modern times the “magnitude of preference” for black and Hispanic candidates is enormous: generally in excess of one standard deviation. To mention just two of Prof. Farron’s many, many examples: in 1995 the law school at Berkeley accepted every black applicant with an undergraduate grade-point average between 3.25 and 3.49 and an LSAT score between the 70th and 75th percentiles, while rejecting every white and Asian in the same GPA and LSAT range. At the same time, the average MCAT (Medical College Admission Test) scores of black and Hispanic students enrolled at Harvard Medical School were 100 points (approximately one standard deviation) below the average score of whites who were rejected by all American medical schools.

In 1963, at the beginning of the era of desegregation, a psychology professor at the University of Georgia, Robert Osborne, predicted that double standards eventually would lead to “differential marking and evaluation systems [for] the two groups.” Prof. Farron shows that this has come to pass. America’s colleges and universities have accommodated non-Asian minority students with a much-publicized “grade inflation.” Many press reports have called attention to the increase in the proportion of “A” grades, but Prof. Farron maintains that “the most important effect has been a dramatic decrease in the failure rate.”

Thus, even as preferences have widened the test-score and class-rank gaps between black and white students, the disparity in graduation rates has narrowed. This does not, however, eliminate differences in competence. In 1983, in four states (California, Texas, Florida, and Arizona) about 75 percent of white candidate teachers passed teacher competence tests on the first try, as compared to about 25 percent of blacks. Among all medical school graduates who took the National Board Examination for the first time in 1988, the pass rate was about 87 percent for whites, 83 percent for Asians, 64 percent for Hispanics, and 49 percent for blacks.

Many people were angry to learn of the extent and the effects of affirmative discrimination. In California, Michigan, and Washington, the state constitutions were amended to forbid racial discrimination by state agencies, and other states enacted statutes to the same effect. By and large, however, America’s colleges and universities continued their discriminatory policies. This persistence was so determined as to amount to a second era of “massive resistance” (which usually refers to white resistance to public school integration in the 1950s and 1960s), although the mainstream media rarely labeled it as such. In 1995, when asked what Berkeley would do about the impending prohibition of racial discrimination, Chancellor Chang-Lin Tien candidly replied, “We can come up with some tricks.”

The Affirmative Action Hoax describes the many “tricks” American universities have used. One of the simplest was to stop using standard admission tests. The number of four-year colleges that stopped requiring applicants to take either the SAT or the ACT increased from 100 (out of some 2,000 four-year colleges and universities) in 1994 to 730 in 2005. Another “trick” was to assign twice as much weight to achievement tests as to aptitude tests — and then allow students who were reared speaking a foreign language to take an achievement test in their native language, say, Chinese, Korean, or Spanish.

Since the language tests did not benefit blacks, many colleges also turned to “holistic” assessments. This usually involved hiring additional admissions officers to scrutinize applications and give extra credit to black and Hispanic candidates who had been reared by a single parent and done fairly well (or sometimes just managed to survive) in a high-crime, gang-infested neighborhood.

Another “trick” was to admit all students who graduated in the top 10 percent of their class. This allowed the University of Texas to admit non-Asian minority graduates of predominantly black and Hispanic high schools, even if their combined Verbal and Math SAT scores were in the 800s (out of 1,600), while rejecting white and Asian applicants with SAT scores in the 1400s.

Prof. Farron reports, however, that holistic admissions officers rarely took account of poverty, since this did not boost the number of blacks or Hispanics. Prof. Farron explains that poverty did not “help” because “poor whites and Asians are much more academically able than poor (and even rich) blacks and Hispanics.”

The Affirmative Action Hoax is the most thorough and outspoken of the many books that have criticized affirmative action. It is, in fact, a lawyer’s brief against reverse racial discrimination. If our society welcomed dissenting points of view, these qualities would ensure publication by a major trade press; as one literary agent recently reminded me, a book about public policy must be argumentative, since “today all books about policy are argument books.”

What the agent said is generally true, but readers of American Renaissance know that there are limits to argument. The agent’s wisdom does not apply to works that lie outside the boundaries of conventional discourse. Since The Affirmative Action Hoax is such a work, readers are indebted to Seven Locks Press for publishing the original edition in 2005 and to the New Century Foundation for publishing this newly revised edition in 2010.