Vox Slack on a Friday is probably like a lot of office Slacks on Fridays: The chatter there can get a bit … unproductive. And one Friday a while back, it entailed a discussion of what Twitter thinks we’re into. A colleague had stumbled upon Twitter’s list of her “inferred interests” — basically, the things it believes she likes and who she is.

Twitter describes her as an “affluent baby boomer” and “corporate mom” with multiple kids. (She’s a 27-year-old single woman without children.) It lists dozens and dozens of car-related interests. (She doesn’t have a car — or even a driver’s license.)

She commented that though internet companies seemingly track her every move, Twitter, at least in her case, has a “hilariously misguided sense of who I am.”

Her discovery, naturally, sent a lot of other people — including me — to check out what Twitter thinks they’re into. My inferred interests weren’t so off-base. Twitter knows I’m a millennial, though it thinks I make more money than I do and have somehow managed to buy a home in New York City. It knows I like The Bachelor. But it also thinks I’m into stamps and coins, which, what?

As for Recode co-founder and prolific tweeter Kara Swisher, Twitter lists among her inferred interests “Maggie Haberman” and “Men’s Pants.”

Twitter has been letting users get a look at what it thinks they’re into since 2017, when it rolled out a series of privacy updates, including some improvements to transparency. Users can see what Twitter thinks they’re interested in as well as what Twitter’s partners — i.e., advertisers — think they like.

Seeing what Twitter thinks you like can be a fun activity — but it can also be an odd experience to see what the company infers about you from your online moves. The psychology around targeted advertising is complex. On the one hand, if we have to see ads, it’s probably better that they’re in line with our interests. On the other, knowing how much advertisers know can feel a bit, well, creepy. And what can be an even weirder experience is when we see an ad that doesn’t feel quite right but that isn’t unfathomably wrong, either — like a man in his 20s suddenly getting ads for hair loss products, or a woman in her 30s seeing ads to freeze her eggs.


“Our brain is able to process things that are relevant to us,” Saleem Alhabash, a professor at Michigan State University and co-director of its Media and Advertising Psychology Lab, said. “But what happens when the ads are suggesting things that are not relevant but that are slightly plausible?”

How to figure out what Twitter thinks your interests are

To find out what Twitter thinks about you, go to “Settings and privacy” > “Your Twitter data” > “Interest and ads data.”

There, you can see your “inferred interests from Twitter” — the interests Twitter has matched to you based on your profile and activity — and your “inferred interests from partners,” or what Twitter’s ad partners think about your hobbies, income, shopping interests, etc. That’s based on information collected both on and off Twitter.

The ad partners basically build “audiences” for advertisers to help them reach customers. The example Twitter gives on its website is that a pet food company might use an audience to find dog owners to try to sell them dog food.

Twitter’s ad partners have 15 interests for me. They really think I’m into juice and ice cream, which, not so much, but they’re right on mustard and non-dairy milk. They also think I’ve got a pretty sick house.

As far as my Twitter interests go, it lists 190. I should probably spend less time looking up stuff on The Bachelor.

It’s important to note that you can opt out of being shown interest-based ads. You can shut them off in your Twitter settings, or go to the Digital Advertising Alliance’s consumer choice tool to opt out there as well. And you can deselect individual interests if they’re not for you.

It’s weird to know what Twitter thinks you like

Vox Slack chatter exemplified how thought-provoking a tool like this “inferred interests” one can be. Multiple colleagues weighed in about their own discoveries — one found that Twitter listed a series of Bens (namely, Shapiro and Sasse) among his interests; another said she has more than a dozen boxes for Broad City. And some interests were oddly specific — multiple boxes for Michael Cohen saying President Trump used racist language, or for a Rolling Stone article about Johnny Depp. It’s not clear what advertisers would do with information that granular, but it could matter to the algorithm Twitter uses to surface tweets.

We don’t really know exactly how the algorithms that try to figure out our interests work. Companies gather a ton of data about us all the time, and how they interpret and use that data isn’t entirely clear. “It’s a big mystery box,” Alhabash told me.

And the endeavor, as Twitter’s “inferred interests” shows, isn’t always a fruitful one: It gets some things right, but it gets a lot of things wrong.

People don’t necessarily mind ad targeting, but they do get kind of freaked out when it gets too creepy. Harvard Business School research published in 2018 found that transparency around ad targeting can be good for platforms such as Twitter, but that users become more wary when they think the targeting goes too far. Wired wrote up the research last year:

The researchers say their findings mimic social truths in the real world. Tracking users across websites is viewed as an inappropriate flow of information, like talking behind a friend’s back. Similarly, making inferences is often seen as unacceptable, even if you’re drawing a conclusion the other person would freely disclose. For example, you might tell a friend that you’re trying to lose weight, but find it inappropriate for him to ask if you want to shed some pounds. The same sort of rules apply to the online world, according to the study.

And it’s not just when ads are right that they make us nervous; it’s also when they’re wrong, or at least when we perceive them as such. As Alhabash pointed out, being shown an irrelevant ad can sometimes be as thought-inducing as being shown a relevant one, especially with ad targeting as personalized as much of what we experience online. He called those ads “selectively irrelevant.”

They’re ads that aren’t applicable now but could be in the future, or ads that make you wonder whether tech platforms or advertisers know something about you that you don’t. Is your hairline about to start receding? Should you talk to your doctor about freezing your eggs?

This is a tool that retailers have long used. Target in 2012, for example, raised eyebrows when the New York Times published a story about it sending pregnancy-related offers to a teenage girl before her family knew she was pregnant. But thanks to the power of the internet and tech conglomerates such as Google and Facebook, companies now have a lot more data about us than before.

Facebook’s practices for gathering information about users have come under fierce scrutiny in recent years, and both Facebook and Google know more about us than we’d probably like to think. CNBC last year laid out what sort of data Facebook tracks and how it does it:

By now you’ve probably gathered that Facebook uses things like your interest, age and other demographic and geographic information to help advertisers reach you. Then there’s the stuff your friends do and like — the idea being that it’s a good indicator for what you might do and like. So, if you have a friend who has liked the New Yorker’s Facebook page, you might see ads for the magazine on your Facebook feed. But that’s just the tip of the iceberg. Facebook and advertisers can also infer stuff about you based on things you share willingly. For example, Facebook categorizes users into an “ethnic affinity” based on what it thinks might be their ethnicity or ethnic influence. It might guess this through TV shows or music you’ve liked. Often, Facebook is wrong — and while it’s possible to remove it, you can’t change it. There is also no “ethnic affinity” option for whites.

The fact that the platforms sometimes get the targeting data wrong probably doesn’t please advertisers. In Twitter’s case, the number of inferred interests that miss the mark could help explain some of its trouble monetizing its business.

Business issues aside, seeing what Twitter knows you like — or thinks you like — can have some awkward implications that we still don’t completely understand.

“[Researchers] are trying to understand, how does the notion of relevance make people feel? How does it make them feel about themselves, about the advertisers and the product?” Alhabash said.

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.