By ProAsian Editorial Staff (Visit our new site: ProAsian Voice)

This article is written in response to a data chart created by Karthick Ramakrishnan and Janelle Wong titled, “Asian American Support for Affirmative Action,” which attempts to attribute an alleged decline in support for affirmative action to Chinese Americans. (See also: http://aapidata.com/blog/asianam-affirmative-action-surveys/).

This data chart is problematic for a number of reasons. First, amid major media coverage of Asian American attitudes towards affirmative action in regard to the SHSAT test in New York City and the Harvard lawsuit, this data chart was widely used to blatantly demonize the Chinese population. Take, for example, New York Times Deputy National Editor Jia Lynn Yang’s recent tweet. Though the data chart makes no mention of differentiating between recently-naturalized Chinese immigrants and those who have been in the United States for a longer period of time (or even Chinese immigrants from Mainland China as opposed to Chinese immigrants from other parts of the world), she told her 7,000+ followers, “The loudest Asian voices against affirmative action are often recent immigrants from Mainland China. Most Asian Americans actually support affirmative action.” In tweeting this, she effectively contributed to the overall sinophobic atmosphere in recent months perpetuated both by progressive liberals and the Trump Administration. Directly under her tweet, Professor Jennifer Lee of Columbia University strategically posted this data chart, which was also shared on her own page. She wrote, “The MAJORITY of Asian Americans SUPPORT affirmative action. The only exception are Chinese Americans.”

But does the data support such a simplistic conclusion?

The data chart is based on three separate surveys conducted in 2012, 2014, and 2016. Although the chart and the accompanying blog post state that the source is “registered voters” for all three years, this is inaccurate: the 2012 survey specifically stated, “The listed samples include those not registered as well as those who are registered.” A closer look at all three studies reveals that the blog post contains further inaccuracies and raises concerns about whether any of the data is reliable at all.

Issue No. 1: The sample sources varied with each survey.

To measure changes over time, consistency in survey methods and sample sources is crucial. Neither was maintained here. We first discuss the differing makeup of the sample sources in each survey.

In simple terms, “Asian American” meant one thing in 2012 but another in 2014 and 2016.

Common sense dictates that sampling the attitudes of one population in Year 1 and a different population in Year 5 does little to measure changes in attitude over time. It may be, after all, that the groups had diverging opinions to begin with. One major problem with the AAPI report is that the sample sources were not even made up of the same ethnic groups. In 2012, 10 ethnic groups were surveyed: Chinese, Indian, Filipino, Vietnamese, Korean, Japanese, Cambodian, and Hmong, along with Native Hawaiians and Samoans, who were categorized as Pacific Islander rather than Asian American. Of the 3,034 Asian American individuals surveyed in this study, 428, or 14%, belonged to ethnic groups (specifically, Hmong and Cambodian) that were subsequently omitted from the 2014 and 2016 sample populations. Furthermore, respondents in the 2012 survey were interviewed in English, Mandarin, Cantonese, Hindi, Hmong, Japanese, Khmer, Korean, Thai, Tagalog, and Vietnamese. However, 6 of these 11 language-speaking groups (Hindi, Hmong, Japanese, Khmer, Thai, and Tagalog) were likewise omitted from the 2014 and 2016 sample populations. Taken together, the latter two surveys omitted 2 ethnic groups and 6 language-speaking groups that were included in the first survey.

Additionally, the surveys did not control for registered vs. non-registered voters. While the 2012 survey measured the attitudes of both registered voters and those not registered, the 2014 and 2016 surveys measured only registered voters.

Since “Asian American” was made up of different groups in each survey, it is misleading to state that the report analyzed changes in the attitudes of “Asian Americans” over time.

Issue No. 2: The sample sizes for the surveys were too small.

In measuring the attitudes of any group, the sample size is significant. The larger the sample size, the more closely the sample can reflect the larger population. In 2012, as previously discussed, 3,034 individuals were interviewed. In 2014 and 2016, however, despite the rapid population growth of Asian Americans, fewer than half that number were interviewed.

In 2014, 1,337 individuals were interviewed.

In 2016, only 1,112 individuals were interviewed.

By 2014, the estimated population of Asian Americans had exceeded 20 million. A reliable sample size should have been at least 2,401, the number needed for a ±2% margin of error at 95% confidence. AAPI Data did not come close to that number. Statistics aside, would you rely on a sample of 1,337 or 1,112 individuals to determine the attitudes of 20,000,000 Asian Americans? It is unclear to us why the original 2012 survey was not simply replicated and why the sample population continued to shrink with each survey.
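The sample-size arithmetic above can be checked directly. The sketch below uses the standard worst-case formula for a simple random sample (z = 1.96 for 95% confidence, p = 0.5); it reproduces the 2,401 figure and shows how the margin of error widens as the sample shrinks from 3,034 to 1,112. These are full-sample margins — for any single ethnic subgroup, the effective sample is far smaller and the margin correspondingly larger.

```python
import math

def required_sample_size(margin, z=1.96, p=0.5):
    """Sample size needed for a given margin of error at ~95% confidence.
    Uses the worst case p = 0.5; the finite-population correction is omitted,
    since a population of ~20 million is effectively infinite here."""
    return round((z / margin) ** 2 * p * (1 - p))

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

print(required_sample_size(0.02))  # sample needed for a +/-2% margin
for n in (3034, 1337, 1112):
    # full-sample margin of error, in percentage points
    print(n, round(margin_of_error(n) * 100, 1))
```

Note that the margin of error shrinks only with the square root of n, which is why halving the sample noticeably loosens the estimates for every subgroup in the chart.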

Issue No. 3: The surveys failed to control for important variables such as socioeconomic status and area of residence.

The purpose of this data chart was quite obviously to draw some correlation between support for affirmative action and ethnicity. The most effective way to do this would have been to control for other variables that might affect the findings, including but not limited to age, socioeconomic status, and area of residence.

It appears that little to no research or consideration was put into what factors outside of one’s ethnic background might influence an individual’s position on affirmative action. All three surveys (2012, 2014, 2016) appear to have failed to control for a number of important variables. Although the surveys asked about socioeconomic background and whether the individual resided at the address where he or she was registered to vote, neither of these factors was considered in the final analyses. As laid out in the methodology sections of the survey reports for 2012, 2014, and 2016, only the following factors were weighted statistically to account for demographic differences: size of group within state, educational attainment, gender, and nativity.
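For readers unfamiliar with what this kind of statistical weighting does, here is a toy post-stratification sketch: each stratum’s respondents are re-weighted so the weighted sample matches known population shares. The strata and numbers below are invented for illustration only — they are not taken from the AAPI reports — but the mechanism is the same one the reports describe applying to education, gender, and nativity.

```python
# Toy post-stratification: rescale each stratum so the weighted sample
# matches the population. All figures here are hypothetical.
population_share = {"college_degree": 0.49, "no_degree": 0.51}
sample_counts = {"college_degree": 700, "no_degree": 300}
n = sum(sample_counts.values())

# weight = (population share) / (sample share)
weights = {g: population_share[g] / (sample_counts[g] / n)
           for g in sample_counts}

# Respondents in an over-represented stratum get weight < 1,
# those in an under-represented stratum get weight > 1.
print(weights)
```

The point of the article’s critique stands out here: weighting can only correct for the variables you choose to weight on. If socioeconomic status and area of residence are never entered into the scheme, no amount of re-weighting on education or gender will remove their confounding effect.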

Let’s flesh this out further. Consider the following scenario: Group A consists primarily of upper-middle-class individuals living in the suburbs. Group B consists primarily of individuals living in a rough part of a city. Any possible differences in attitudes on affirmative action may be due to their socioeconomic status and, correspondingly, where they live, and not ethnic background. Individuals of the same ethnic background but residing in another area or of a different socioeconomic status might feel very differently. Absent these controls, no correlations should be drawn at all.

Issue No. 4: The analyses are silent on quality assurance for the translations and interpretations.

Over 40% of the individuals in the 2012, 2014, and 2016 surveys were interviewed in a language other than English. The 2012 survey, in which individuals were interviewed in 11 different languages, indicated that the interview translations were completed by a company called Accent on Languages of Berkeley, California and audited by bilingual staff in partner organizations. The 2014 and 2016 surveys, in which individuals were interviewed in only 5 different languages, are silent on how the interviews were translated and/or interpreted.

Most notably, all three surveys are silent on the following:

whether the questions, once translated, held the same meaning in every language

who conducted the interviews in the Asian languages

how these interviewers were selected and screened for competency in the Asian languages

how these interviewers were trained

how the interviewers and interpreters were trained to handle clarifying questions.

All of the above are crucial to ensuring that the answers were not an artifact of variation in how the questions were asked. As the survey reports noted, individuals often gave different answers when the same questions were asked in a different way. So how can we know that, once translated and/or interpreted, the questions on affirmative action were not modified further?

Most challenging is that the concept of “affirmative action” is largely a Western construction created to address societal racism. Not all Asian languages have a direct translation for it and, even where one now exists, it is not part of the common vernacular. Affirmative action may have been translated or interpreted one way in Mandarin and a very different way in Vietnamese or Korean.

To exacerbate the issue, Cantonese, a Chinese dialect, is a primarily spoken language. Therefore, even if the questions were properly translated into Chinese, a Cantonese interviewer — if performing his duties correctly — would not be reading those words verbatim. How, then, did the surveys conduct quality assurance for the questions when they were given orally in Cantonese?

Finally, it would be important to determine how the interviewers were trained and whether and how they were permitted to answer clarifying questions. For example: Were the interviewers trained to use a neutral nonjudgmental tone in asking the questions? Were the interviewers trained to not ask the questions in a leading manner? Were the interviewers trained to “stick to the script” when asked a clarifying question? Or were they given some leeway to provide guidance to the interviewee? How many individual interviewers were assigned to each ethnic and language-speaking group? Answers to those questions would significantly change the validity and reliability of the data.

Issue No. 5: All three surveys asked close-ended questions and referenced the concept “affirmative action” with no definition of what affirmative action means.

This, perhaps, is the most problematic issue.

The 2012 survey asked the following question regarding affirmative action: “In order to overcome past discrimination, do you favor or oppose affirmative action programs designed to help blacks, women, and other minorities get better jobs and education?”

The 2014 survey and 2016 survey asked the questions in a slightly different but similar way.

In 2014, the survey asked three yes/no questions on affirmative action: 1) Do you favor or oppose affirmative action programs designed to help blacks, women, and other minorities get better jobs and education? 2) Do you favor or oppose affirmative action programs designed to help blacks, women, and other minorities get jobs and business contracts? and 3) Do you favor or oppose affirmative action programs designed to help blacks, women, and other minorities get better access to higher education?

In 2016, the survey included two similarly-worded questions on affirmative action: 1) In general, do you think affirmative action programs designed to increase the number of black and minority students on college campuses are a good thing or a bad thing? and 2) Next, do you favor or oppose affirmative action programs designed to help blacks, women, and other minorities get better access to higher education?

All three surveys attempted to assess the individual’s attitude on affirmative action without giving any definition of what affirmative action is.

And, of course, when the questions were asked in even a slightly different way, the results were significantly different.

Consider the following results in the 2016 survey. When Chinese Americans were asked whether affirmative action programs…on a college campus are a good thing or a bad thing, only 23% answered that it would be a good thing. When the exact same question was asked in a slightly different way, replacing “college campuses” with “higher education,” the percentage of Chinese Americans in favor jumped to 43%. This should be a huge indicator that the questions are not measuring what they were intended to measure. This undermines the validity of any conclusion.
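To put that 20-point swing in perspective, the rough sketch below compares it against sampling error. The subgroup size n = 300 is a hypothetical figure (the exact number of Chinese American respondents in 2016 is not given here), and treating the two wordings as independent samples is a simplification — the same respondents answered both questions, so a paired analysis would be more appropriate — but even this rough calculation shows the gap dwarfs what chance alone would produce.

```python
import math

# Hypothetical subgroup size: an assumption for illustration, not a
# figure from the AAPI reports.
n = 300
p1, p2 = 0.23, 0.43  # "college campuses" vs. "higher education" wording

# Two-proportion z-statistic, treating the wordings as independent
# samples of equal size (a simplification; see lead-in above).
se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
z = (p2 - p1) / se
print(f"gap = {p2 - p1:.0%}, standard error = {se:.3f}, z = {z:.1f}")
```

A z-statistic this far beyond the conventional 1.96 threshold means the difference cannot be dismissed as noise: the question wording itself, not respondents’ underlying attitudes, is driving the result.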

And herein lies the fundamental problem with the entire affirmative action debate. There is no universal understanding of affirmative action. There is no universal way that it is applied. The answers given by the various groups may very well correspond with each group’s own understanding of affirmative action but may not at all objectively capture whether one group actually favors affirmative action over the next. Each group, very likely, may be working from a different concept of affirmative action. This raises the greater question of why an organization like AAPI Data did not conduct a more in-depth analysis, asking qualitative questions about beliefs and understandings surrounding affirmative action and the circumstances that would influence one’s support for or opposition to it.

Finally, compared with the 2013 Gallup survey, it appears that more Chinese Americans favor race-based affirmative action than other Americans. This variance in results only signifies that greater research must be done before drawing any conclusions.

All of the above calls into question the reliability and validity of the surveys and the resulting data chart. Until these concerns are addressed, nobody should cite this chart as a reliable source.

As a clarification, we actually support race-based affirmative action and disaggregating data. However, we don’t support anti-Asian racism or the use of intellectual dishonesty to demonize any one Asian American group.

ProAsian Editorial Staff

Sources:

The Policy Priorities and Issue Preferences of Asian Americans and Pacific Islanders, 2012

Agenda for Justice and Contours of Public Opinion Among Asian Americans, 2014

Inclusion, Not Exclusion, Asian American Voter Survey, 2016