"[Some] assert than an individual's intelligence is a fixed quantity which cannot be increased. We must protest and react against this brutal pessimism."





Last week, I argued that our 21st-century understanding of genetics invalidates the idea of fixed, innate abilities. Genes influence everything but determine almost nothing on their own.





What, then, is IQ? Conventional wisdom says that IQ scores reveal our native intelligence. According to this view, IQ tests are different from school grades, different from SAT scores, different from any other test you will ever take, because they somehow reveal the core, innate abilities of each person's brain: your clock speed, your RAM, your absolute limit.





That's what Stanford psychologist Lewis Terman wanted us to believe when he introduced the American version of the IQ test in 1916. (This was quite the opposite of what the test's original co-inventor, Alfred Binet, had intended. But that's a history lesson we'll return to another time.)





What Terman had actually come up with was a deceptively simple system for ranking academic progress. His Stanford-Binet tests measured many different skills, and then scored the results so that the median was always 100. If you had an IQ score of 100, it simply meant that half of the test-takers your age had done better and half had done worse.
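To make that concrete, here is a rough sketch of age-normed scoring, written in Python with made-up raw scores. It is not Terman's actual formula (which relied on a ratio of "mental age" to chronological age), and it is not how any modern test is scored; it simply shows how a rank within your age group can be converted to a scale where the middle of the pack sits at 100.

```python
# Illustration only: rank-based, age-normed scoring in the spirit of IQ scales.
# The cohort scores below are invented, and real tests use carefully built
# norming samples; this just shows how "100 = the middle of the pack" works.
from statistics import NormalDist

def iq_style_score(raw_score, cohort_raw_scores, sd=15):
    """Convert a raw test score to a 100-centered score for one age cohort."""
    # Percentile rank among same-age test-takers, counting ties as half.
    below = sum(s < raw_score for s in cohort_raw_scores)
    ties = sum(s == raw_score for s in cohort_raw_scores)
    percentile = (below + 0.5 * ties) / len(cohort_raw_scores)
    # Spread the percentiles along a bell curve centered at 100,
    # so the 50th percentile maps to exactly 100.
    return 100 + sd * NormalDist().inv_cdf(percentile)

# Hypothetical raw scores for eleven ten-year-olds taking the same test:
cohort = [34, 41, 45, 45, 47, 50, 52, 55, 58, 63, 66]

print(round(iq_style_score(50, cohort)))  # 100: the cohort's median raw score
print(round(iq_style_score(66, cohort)))  # well above 100: the cohort's best score
```

The point of the sketch is only this: a score of 100 is a statement about where you stand relative to other test-takers your age, not a direct reading of anything inside your head.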





These tests were impressively stable, which meant that, over time, most people ended up in roughly the same place in the pack. If you had tested in the 60th percentile at age 10, chances were pretty good that you'd test close to the 60th percentile at age 12 and age 14.





But did this stability prove that the tests revealed innate intelligence?





Far from it. The reality is that students performing at the top of the class in 4th grade tend to be the same students performing at the top of the class in 12th grade, because so many of the factors shaping their lives remain stable: family, lifestyle, resources, and so on.



