High-stakes standardized testing must be the most resilient phenomenon ever to exist on the planet. Joining it in that (dis)honor is the persistent but misleading claim that test scores primarily measure achievement (and a growing candidate for the same honor is the claim that students’ test scores, labeled “achievement,” are also credible metrics of “teacher quality”).

Let’s start with a couple of statistical breakdowns of what test scores actually represent:

• From Di Carlo:

But in the big picture, roughly 60 percent of achievement outcomes is explained by student and family background characteristics (most are unobserved, but likely pertain to income/poverty). Observable and unobservable schooling factors explain roughly 20 percent, most of this (10-15 percent) being teacher effects. The rest of the variation (about 20 percent) is unexplained (error). In other words, though precise estimates vary, the preponderance of evidence shows that achievement differences between students are overwhelmingly attributable to factors outside of schools and classrooms (see Hanushek et al. 1998; Rockoff 2003; Goldhaber et al. 1999; Rowan et al. 2002; Nye et al. 2004).

• And now from the Joseph Rowntree Foundation:

Just 14 per cent of variation in individuals’ performance is accounted for by school quality. Most variation is explained by other factors, underlining the need to look at the range of children’s experiences, inside and outside school, when seeking to raise achievement.

Next, consider this from the UK:

Differences in children’s exam results at secondary school owe more to genetics than teachers, schools or the family environment, according to a study published yesterday. The research drew on the exam scores of more than 11,000 16-year-olds who sat GCSEs at the end of their secondary school education. In the compulsory core subjects of English, maths and science, genetics accounted for on average 58% of the differences in scores that children achieved.

While the genetics claim is potentially dangerous, and certainly controversial, the article offers some important clarifications:

The findings do not mean that children’s performance at school is determined by their genes, or that schools and the child’s environment have no influence. The overall effect of a child’s environment – including their home and school life – accounted for 36% of the variation seen in students’ exam scores across all subjects, the study found…. Writing in the journal, the authors point out that genetics emerges as such a strong influence on exam scores because the schooling system aims to give all children the same education. The more school and other factors are made equal, the more genetic differences come to the fore in children’s performance. The same situation would happen if everyone had a healthy diet: differences in bodyweight would be more down to genetic variation, instead of being dominated by lifestyle. Plomin said one message from the study was that differences in children’s performance were not merely down to effort. “Some children find it easier to learn than others do, and I think it’s appetite as much as aptitude,” he said. “There is a motivation, maybe because you like to do what you are good at.” Genetics, he said, caused people to create, select and modify their environment, and so nature drives nurture, which in turn reinforces nature. A child with a gift for maths seeks friends who like maths. A child who learns to read easily might join a book club, and work through books on the shelves at home.

Additional points drawn from this research offer strong cautions about continued reliance not only on standardized tests but also on uniform national standards:

“Education is still focused on a one-size-fits-all approach and if genetics tells us anything it’s that children are different in how easily they learn and what they like to learn. Forcing them into this one academic approach is going to make some children confront failure a lot and it doesn’t seem a wise approach. It ought to be more personalised,” he said. “These things are as heritable as anything in behaviour, and yet when you look in education or in educational textbooks for teachers there is nothing on genetics. It cannot be right that there’s this complete disconnect between what we know and what we do.”

Finally, consider this research on the disconnect between test scores and student abilities:

To evaluate school quality, states require students to take standardized tests; in many cases, passing those tests is necessary to receive a high-school diploma. These high-stakes tests have also been shown to predict students’ future educational attainment and adult employment and income. Such tests are designed to measure the knowledge and skills that students have acquired in school — what psychologists call “crystallized intelligence.” However, schools whose students have the highest gains on test scores do not produce similar gains in “fluid intelligence” — the ability to analyze abstract problems and think logically — according to a new study from MIT neuroscientists working with education researchers at Harvard University and Brown University. In a study of nearly 1,400 eighth-graders in the Boston public school system, the researchers found that some schools have successfully raised their students’ scores on the Massachusetts Comprehensive Assessment System (MCAS). However, those schools had almost no effect on students’ performance on tests of fluid intelligence skills, such as working memory capacity, speed of information processing, and ability to solve abstract problems…. Instead, the researchers found that educational practices designed to raise knowledge and boost test scores do not improve fluid intelligence. “It doesn’t seem like you get these skills for free in the way that you might hope, just by doing a lot of studying and being a good student,” says Gabrieli, who is also a member of MIT’s McGovern Institute for Brain Research.

So should we be shocked when students who pass high-stakes reading tests in Texas admit they cannot read?:

A female classmate of Tony’s says she can’t get through the stories she reads in school unless someone explains them to her. She’s passed all her state tests, too. How? She says she uses classroom-taught “strategies” on her English reading test and that if she underlines and highlights enough and narrows down her options, she has a better chance of guessing right by playing the odds. She failed her math state test because of the word problems, so she employed her English strategies there on the retry attempt and passed.

Or should we be shocked that the most recent analysis of the teaching of writing in middle and high schools has found that accountability and high-stakes testing have kept best practice in writing from taking hold?:

Overall, in comparison to the 1979–80 study, students in our study were writing more in all subjects, but that writing tended to be short and often did not provide students with opportunities to use composing as a way to think through the issues, to show the depth or breadth of their knowledge, or to make new connections or raise new issues…. The responses make it clear that relatively little writing was required even in English…. [W]riting on average mattered less than multiple-choice or short-answer questions in assessing performance in English…. Some teachers and administrators, in fact, were quite explicit about aligning their own testing with the high-stakes exams their students would face. (Applebee & Langer, 2013, pp. 15-17)

Our educational world has been turned over wholesale to testing, despite ample evidence that test scores are many things (markers of privilege, markers of genetic predispositions, markers of teaching to the test), among the least of which are student achievement and teacher quality.

If we don’t have the political will to de-test our schools, the evidence is clear that the stakes associated with testing must be greatly lessened and that the amount of time spent teaching to the tests and administering the tests must also be reduced dramatically.