I WAS IN PHILADELPHIA WHEN the protests in Istanbul exploded, at a gathering called Data-Crunched Democracy, hosted by the Annenberg School for Communication at the University of Pennsylvania. It was supposed to be exciting, and a little contentious. But I'm a scholar of social movements and new technologies: I'd visited Tahrir, the heart of the Egyptian uprising, and Zuccotti Square, the birthplace of the Occupy movement. And now new technology was helping to power protests in Istanbul, my hometown. The epicenter, Gezi Park, is just a few blocks from the hospital where I was born.

So there I was, at a conference I had been looking forward to for months, sitting in the back row, tweeting about tear gas in Istanbul.

A number of high-level staff from the data teams of the Obama and Romney campaigns were there, which meant that a lot of people who probably did not like me very much were in the room. A few months earlier, in an op-ed in the New York Times, I'd argued that richer data for the campaigns could mean poorer democracy for the rest of us. Political campaigns now know an awful lot about American voters, and they will use that knowledge to tailor the messages we see—to tell us the things we want to hear about their policies and politicians, while obscuring messages we may dislike.

Of course, these tactics are as old as politics. But the digital era has brought new ways of implementing them. Pointing this out had earned me little love from the campaigns. The former data director on the Obama campaign, writing later in the Times, caricatured and then dismissed my concerns. He claimed that people thought he was “sifting through their garbage for discarded pages from their diaries”—a notion he described as a “bunch of malarkey.” He’s right: Political campaigns don’t rummage through trashcans. They don’t have to. The information they want is online, and they most certainly sift through it.

What we do know about their use of “big data”—the common shorthand for the massive amounts of data now available on everyone—is worrisome. In 2012, again in the Times, reporter Charles Duhigg revealed that Target can often predict that a female customer is pregnant, sometimes within the first 20 weeks of her pregnancy, and occasionally even before she has told anyone. This is valuable information, because childbirth is a time of big change, including changes in consumption patterns. It’s an opportunity for brands to get a hook into you—a hook that may last decades, as overworked parents tend to return to the same brands out of habit. Duhigg recounted how one outraged father, upset at the pregnancy- and baby-related coupons Target had mailed to his teenage daughter, visited his local store and demanded to see the manager. He got an apology, but later apologized himself: His daughter, it turned out, was pregnant. By analyzing changes in her shopping—which could be as subtle as a change in her choice of moisturizers, or the purchase of certain supplements—Target had learned that she was expecting before he did.

Personalized marketing is not new. But so much more can be done with the data now available to corporations and governments. In one recent study, published in the Proceedings of the National Academy of Sciences, researchers showed that mere knowledge of the things that a person has “liked” on Facebook can be used to build a highly accurate profile of that person, including their “sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender.” In a separate study, another group of researchers was able to infer reasonably reliable scores on certain traits—psychopathy, narcissism, and Machiavellianism—from Facebook status updates. A third team showed that social media data, when analyzed the right way, contains evidence of the onset of depression.

Remember, these researchers did not ask the people they profiled a single question. It was all done by modeling. All they had to do was parse the crumbs of data that we drop during our online activities. And the studies that get published are likely the tip of the iceberg: The data is almost always proprietary, and the companies that hold it do not generally tell us what they do with it.

When the time for my panel arrived, I highlighted a recent study in Nature on voting behavior. By altering a message designed to encourage people to vote so that it came with affirmation from a person’s social network, rather than being impersonal, the researchers had shown that they could persuade more people to participate in an election. Combine such nudges with psychological profiles, drawn from our online data, and a political campaign could achieve a level of manipulation that exceeds that possible via blunt television adverts.

How might they do it in practice? Consider that some people are prone to voting conservative when confronted with fearful scenarios. If your psychological profile puts you in that group, a campaign could send you a message that ignites your fears in just the right way. And for your neighbor who gets mad at scaremongering? To her, they’ll present a commitment to a minor policy that the campaign knows she’s interested in—and make it sound like it’s a major commitment. It’s all individualized. It’s all opaque. You don’t see what she sees, and she doesn’t see what you see.

Given the small margins by which elections get decided—a fact well understood by the political operatives who filled the room—I argued that it was possible that minor adjustments to Facebook or Google’s algorithms could tilt an election.

I’m not sure if the operatives were as excited by this possibility as I was afraid of it.

During a break, I cornered the chief scientist on Obama’s data analytics team, who in a previous job ran data analytics for supermarkets. I asked him if what he does now—marketing politicians the way grocery stores market products on their shelves—ever worried him. It’s not about Obama or Romney, I said. This technology won’t always be used by your team. In the long run, the advantage will go to the highest bidder, the richer campaign.

He shrugged, and retreated to the most common cliché used to deflect the impact of technology: “It’s just a tool,” he said. “You can use it for good; you can use it for bad.” (The scientist says he does not recall the conversation.)

“It’s just a tool.” I had heard this many times before. It contains a modicum of truth, but it buries technology’s impacts on our lives, which are never neutral. I often asked people who said it whether they thought nuclear weapons were “just a tool.” Humans have always fought, but few would say it doesn’t matter whether we fight with sticks, knives, guns, or nuclear weapons.

This time, I sighed and let it go. I wanted to get back to Twitter. I wanted to get back to my hometown.