This week, hundreds of thousands of students across New Jersey and the US took the PARCC exam, but Nathan Fallin wasn't one of them. Perhaps you’ve heard of PARCC. It's one of two major standardized tests that are being rolled out across the country this year, intended to measure whether students, classes, schools, and districts are meeting the ambitious new Common Core standards.

Like everything else about the Common Core, it has proven controversial. Defenders say that the tests provide the data necessary to determine whether kids are learning all that we want them to—a corrective to a system that too often lets failing students and teachers slide by. Critics argue that the test is too long, too difficult, too confusing; that the test's focus on math and reading leaves less time for science, art, and other valuable subjects that aren't part of the new testing regime; and that the pressure to perform leaves kids stressed-out and miserable.

Hope Fallin, Nathan's mom, wasn't concerned about her son's performance. He aced all his practice tests, and his high-achieving school in tony Ridgewood, New Jersey, was in no danger of losing funding. Still, she resented the notion that a standardized test might play such a large role in his education. “I did really poorly on my SATs, but I graduated top of my class in college and top 2 percent at law school,” she says. “Obviously that wasn't a great indicator of my intelligence or how I'd perform.” And so, on general principle, she decided to opt her son out of taking the PARCC, joining a growing movement that threatens to derail one of the fundamental underpinnings of the Common Core standards.

This kind of argument exasperates school reformers, who see the new tests as a fount of data that can give parents and teachers new insight into student performance. "To really understand why a student is progressing or not, you need a lot of information," says Alyssa Van Camp, policy director at SCORE, a Tennessee-based pro-reform nonprofit group. "End-of-year assessments can provide really great information for teachers about how their students collectively progress."

It's hard to argue that data shouldn't play a larger role in education. Big data is already a cliché in the business world, where it powers everything from Google search results to Netflix recommendations. It seems only reasonable that applying that same logic to schools would result in a more powerful and supple education system—one that gives teachers and parents as fine-tuned a look at their kids' academic profiles as Netflix has of each subscriber's idiosyncratic tastes. The problem is that, as of right now, our best method for collecting that data is standardized testing. And, as a data-collection technology, standardized testing sucks.

Fuzzy Measurement

As a society, we've gotten really good at collecting data. We offgas tons of it every day—our phones track our location, our browsers track our surfing, Facebook tracks which stories we read and which we ignore, Twitter tracks the content of our updates. All of that collection happens as a natural consequence of our behavior. We don't fire up Google at the end of the day to submit a list of sites we've visited or search terms we've entered. Google collects all that data as we produce it, without our having to think about it.

But when it comes to education, our data-collection methods haven't fundamentally changed since the ACT was introduced in 1959. Unlike our modern methods, tests are decoupled from the activity they are measuring. They don't track the process of learning, but the ability to demonstrate it at a later date. That can be valuable—what good is learning something if you can't later show that you've learned it?—but it also creates fuzziness in what's being measured.

Take the familiar problem of "teaching to the test." Tests like PARCC aren't just dipstick-like check-ins; teachers and administrators actively adjust their lesson plans specifically to boost test performance. So the tests end up measuring a teacher's ability to teach not general knowledge but that specific test.

Testing advocates respond that if a test can measure the process of arriving at the correct answer, and not just the ability to fill in the right bubble, then teaching to the test is no longer a problem. If you can't score well without understanding fundamental concepts, then teaching to the test means teaching those fundamental concepts. It ceases to be a problem and becomes, simply, teaching.

But even if you accept that argument, you've still got the problem that tests don't just assess a student's facility with material, but the act of test-taking itself. If you've got a cold or your computer conks out or you freeze up under pressure or you just generally suck at taking tests, the test will measure that. And of course some parents might hire tutors or buy extra materials to further boost performance, in which case tests are measuring a family's ability to afford such extracurricular aids. (They are also measuring how many of those students a teacher might happen to have in her classroom.)

None of this is news. In fact, the critique is so well established as to have been codified into Campbell's Law. Named after social scientist Donald Campbell, it states that "the more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." By that standard, tests like PARCC stand to become corrupting influences indeed. New York governor Andrew Cuomo recently announced a proposal to have test scores account for 50 percent of a teacher's evaluation. (New York doesn't use the PARCC, but a different standardized test.)

More Data, Not Less

If Campbell's Law states that a test's corrupting influence is proportional to its social power, then one way to fix the problem might be to make test scores less powerful—in other words, to make test performance a smaller factor in student, teacher, and school evaluations.

But ultimately, the solution isn't to rely less on data. It's to rely on more of it. Imagine if the process of data collection weren't decoupled from the act of learning—if tracking and measurement were a natural part of the learning process, rather than an artificial adjunct tacked on at the end of the year. Imagine if every learning activity were automatically recorded—each homework assignment, class discussion, group project. Over time, all those points would come together to paint a full picture of a student's intellectual life. Because that picture would be composed of so many data points, no one set would have outsized influence. And because it would be a record of actual learning, as it happens, it wouldn't be as gameable with fancy test prep. Parents wouldn't have to worry that their kid would be penalized because they couldn't sleep the night before the big test. And there wouldn't be teaching to the test, because the teaching would be the test.

To put it another way, tests like the PARCC are the equivalent of an annual medical check-up—a measurement, taken at one particular moment, that becomes an imperfect proxy for our overall physical condition. With more data, we could build something more like an always-on fitness tracker, which compiles all of our activities into a complete picture of our health in real time.

This isn't an original idea. Khan Academy has been pursuing something like this vision, recording students' activities as they complete its online courses so that parents and teachers can adjust their instruction accordingly. The University of Texas System's new TEx product also tracks students' work to "provide customized, just-in-time support and services." So far, these are generally seen as steps along the path to personalized learning, but they are also building out a data-rich profile of a student's learning activities—one that could eventually make testing irrelevant.

That won't happen quickly or easily. Many parents resist the idea of data collection in the classroom, and their concerns deserve to be taken seriously. But until we work our way through those issues, the data-driven reform movement will be shackled to an outdated technology that isn't up to the demands of the moment. Big data has revolutionized practically every other aspect of our lives. Shouldn't schools be able to benefit from it as well?