It turns out that OkCupid has been performing some of the same psychological experiments on its users that landed Facebook in hot water recently.

In a lengthy blog post, OkCupid cofounder Christian Rudder explains that OkCupid has on occasion played around with removing text from people's profiles, removing photos, and even telling some users they were an excellent match when in fact they were only a 30 percent match according to the company's systems. Just to see what would happen.

OkCupid defends this behavior as something that any self-respecting Web site would do.

"OkCupid doesn’t really know what it’s doing. Neither does any other Web site," Rudder wrote. "But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work."

Rudder's defense of A/B testing — not to mention his disclosure of the experiments themselves — is already creating a stir.

OKCupid intentionally sent people on bad dates and lied to them about it in the name of "science." http://t.co/6T0l4CJ9nu — Justin Brookman (@JustinBrookman) July 28, 2014

Cue the moral outrage! But this time, it seems, there's also some willingness to give OkCupid the benefit of the doubt.

.@okcupid's blog is back! This is what made me want to become a data scientist. #bigdata #datascience http://t.co/1lMgaLuyJP — Jessica Kirkpatrick (@berkeleyjess) July 28, 2014

Here's what OkCupid found: When profile photos were removed, people were more likely to respond to messages, more likely to carry on conversations beyond just a few exchanges and quicker to exchange contact information. (As an aside, what really seems creepy here is that OkCupid can tell when you're trading contact information with a potential partner — because that means, presumably, it can read your chats.)
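It's worth noting that "can tell" needn't mean a person reads the messages. A plausible (and purely hypothetical) mechanism is automated pattern matching: flag a chat when a message looks like it contains a phone number or email address. The patterns and function name below are illustrative assumptions, not OkCupid's actual code.

```python
import re

# Illustrative patterns for contact information in chat text.
# These are simplistic by design; real systems would be more robust.
PHONE = re.compile(r"\b(?:\+?1[\s.-]?)?(?:\(\d{3}\)|\d{3})[\s.-]?\d{3}[\s.-]?\d{4}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def contains_contact_info(message: str) -> bool:
    """True if the message appears to include a phone number or email."""
    return bool(PHONE.search(message) or EMAIL.search(message))

print(contains_contact_info("text me at 555-867-5309"))    # → True
print(contains_contact_info("what's your favorite book?")) # → False
```

A detector like this lets a service log the *event* of an exchange without any human ever reading the conversation — though from a privacy standpoint, the messages are still being machine-scanned.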

Another experiment found that profile pictures, when they're present, make a huge difference to viewers. Other profile content has almost no effect at all.

"Your picture is worth that fabled thousand words, but your actual words are worth... almost nothing," Rudder wrote.

Then OkCupid tried telling users who were poorly matched that in fact they were great matches, on the theory that perhaps couples wound up together simply because OkCupid said so. The service also told good matches that they were terrible for each other. Lying to users, it turns out, sometimes sparked meaningful online chats. Nearly one in five couples who were a 30 percent match but were told they were a 90 percent match wound up exchanging four messages or more — what OkCupid deems a meaningful "conversation."
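The analysis behind a result like "one in five 30-percent matches told they were 90-percent matches had a conversation" can be sketched as a standard A/B comparison: deterministically bucket each pair, then compute the fraction of each bucket that crossed the four-message threshold. Everything here — the bucketing rule, names, and simulated data — is an illustrative assumption, not OkCupid's methodology.

```python
import random

CONVERSATION_THRESHOLD = 4  # messages that count as a "conversation"

def assign_bucket(pair_id: int) -> str:
    """Deterministically split pairs: even IDs see a fake 90% score,
    odd IDs see their true 30% score."""
    return "shown_90" if pair_id % 2 == 0 else "shown_30"

def conversation_rate(message_counts: dict[int, int], bucket: str) -> float:
    """Fraction of pairs in a bucket that exchanged 4+ messages."""
    in_bucket = [n for pid, n in message_counts.items()
                 if assign_bucket(pid) == bucket]
    if not in_bucket:
        return 0.0
    return sum(n >= CONVERSATION_THRESHOLD for n in in_bucket) / len(in_bucket)

# Simulated message counts per pair (pair_id -> messages exchanged).
random.seed(0)
counts = {pid: random.randint(0, 10) for pid in range(1000)}
print(conversation_rate(counts, "shown_90"))
print(conversation_rate(counts, "shown_30"))
```

Comparing the two rates is the whole experiment: if the "shown_90" bucket talks more, the displayed score — not the underlying compatibility — is doing the work.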

"OkCupid definitely works, but that’s not the whole story," wrote Rudder. "The mere myth of compatibility works just as well as the truth."

If you found all that fascinating, we have a bigger problem on our hands: how to reconcile the sometimes valuable lessons of data science with the creep factor — particularly when you aren't notified that you're being studied. But as I've written before, these kinds of studies happen all the time; it's just rare that the public is presented with the results.

Short of banning the practice altogether, which seems totally unrealistic, corporate data science seems like an opportunity on a number of levels, particularly if it's disclosed to the public. First, it helps us understand how human beings tend to behave at Internet scale. Second, it tells us more about how Internet companies work. And third, it helps consumers make better decisions about which services they're comfortable using.

I suspect that what bothers us most of all is not that the research took place, but that we're slowly coming to grips with how easily we ceded control over our own information — and how the machines that collect all this data may know more about us than we know ourselves. We had no idea we were even in a rabbit hole, and now we've discovered we're 10 feet deep. As many as 62.5 percent of Facebook users don't know the news feed is generated by a company algorithm, according to a recent study conducted by Christian Sandvig, an associate professor at the University of Michigan, and Karrie Karahalios, an associate professor at the University of Illinois.

Algorithms mediate your social connections, could help tilt elections, influence whether you find a partner. This is real power. — Zeynep Tufekci (@zeynep) July 28, 2014

OkCupid's blog post is distinct in several ways from Facebook's psychological experiment. OkCupid didn't try to publish its findings in a scientific journal. It isn't even claiming that what it did was science. Moreover, OkCupid's research is legitimately useful to users of the service — in ways that Facebook's research is arguably not.

People join OkCupid for a very specific reason, and that's to find dates. To the extent that the research shows users how profile pictures affect their likelihood of getting said dates, it furthers their own objectives. I find it fascinating, for instance, that black women reply to users at nearly the same rate regardless of the sender's race — whereas non-black women show an almost universal preference for white men.

That study, by the way, was performed all the way back in 2009. It shouldn't surprise anyone that OkCupid looks at the behavioral data of its users, although this is the first time we've heard of OkCupid actually intervening in users' experience to see how they respond to artificially created conditions.

But in any case, there's no such motivating factor when it comes to Facebook. Unless you're a page administrator or a news organization, understanding how the news feed works doesn't really help the average user in the way that understanding how OkCupid works does. That's because people use Facebook for all kinds of reasons that have nothing to do with Facebook's commercial motives. But people would stop using OkCupid if they discovered it didn't "work."

If you're lying to your users in an attempt to improve your service, what's the line between A/B testing and fraud?