A study by the Pew Research Center suggests most Facebook users are still in the dark about how the company tracks and profiles them for ad-targeting purposes.

Pew found three-quarters (74%) of Facebook users did not know the social networking behemoth maintains a list of their interests and traits to target them with ads, only discovering this when researchers directed them to view their Facebook ad preferences page.

A majority (51%) of Facebook users also told Pew they were uncomfortable with Facebook compiling the information.

Meanwhile, more than a quarter (27%) said the ad preferences listing Facebook had generated did not describe them very accurately, or at all accurately.

The researchers also found that 88% of polled users had some material generated for them on the ad preferences page. Pew’s findings come from a survey of a nationally representative sample of 963 U.S. Facebook users ages 18 and older, conducted between September 4 and October 1, 2018, using GfK’s KnowledgePanel.

In a Senate hearing last year, Facebook founder Mark Zuckerberg claimed users have “complete control” over both information they actively choose to upload to Facebook and data about them the company collects in order to target ads.

But the key question remains how Facebook users can be in complete control when most of them don’t know what the company is doing. This is something U.S. policymakers should have front of mind as they work on drafting a comprehensive federal privacy law.

Pew’s findings suggest Facebook’s greatest ‘defence’ against users exercising what little control it affords them over information its algorithms link to their identity is a lack of awareness about how the Facebook adtech business functions.

After all, the company markets the platform as a social communications service for staying in touch with people you know, not a mass surveillance people-profiling ad-delivery machine. So unless you’re deep in the weeds of the adtech industry there’s little chance for the average Facebook user to understand what Mark Zuckerberg has described as “all the nuances of how these services work”.

Having a creepy feeling that ads are stalking you around the Internet hardly counts.

At the same time, users being in the dark about the information dossiers Facebook maintains on them is not a bug but a feature for the company’s business — which directly benefits by being able to minimize the proportion of people who opt out of having their interests categorized for ad targeting because they have no idea it’s happening. (And relevant ads are likely more clickable and thus more lucrative for Facebook.)

Hence Zuckerberg’s plea to policymakers last April for “a simple and practical set of — of ways that you explain what you are doing with data… that’s not overly restrictive on — on providing the services”.

(Or, to put it another way: If you must regulate privacy let us simplify explanations using cartoon-y abstraction that allows for continued obfuscation of exactly how, where and why data flows.)

From the user point of view, even if you know Facebook offers ad management settings it’s still not simple to locate and understand them, requiring navigating through several menus that are not prominently sited on the platform, and which are also complex, with multiple interactions possible. (Such as having to delete every inferred interest individually.)

The average Facebook user is unlikely to look past the latest few posts in their newsfeed, let alone go proactively hunting for a boring-sounding ‘ad management’ setting and spend time figuring out what each click and toggle does (in some cases users are required to hover over an interest in order to view a cross that indicates they can in fact remove it, so there’s plenty of dark pattern design at work here too).

And all the while Facebook is putting a heavy sell on, in the self-serving ad ‘explanations’ it does offer, spinning the line that ad targeting is useful for users. What’s not spelt out is the huge privacy trade off it entails — aka Facebook’s pervasive background surveillance of users and non-users.

Nor does it offer a complete opt-out of being tracked and profiled; rather its partial ad settings let users “influence what ads you see”.

But influencing is not the same as controlling, whatever Zuckerberg claimed in Congress. So, as it stands, there is no simple way for Facebook users to understand their ad options because the company only lets them twiddle a few knobs rather than shut down the entire surveillance system.

The company’s algorithmic people profiling also extends to labelling users as having particular political views, and/or having racial and ethnic/multicultural affinities.

Pew researchers asked about these two specific classifications too — and found that around half (51%) of polled users had been assigned a political affinity by Facebook; and around a fifth (21%) were badged as having a “multicultural affinity”.

Of those users who Facebook had put into a particular political bucket, a majority (73%) said the platform’s categorization of their politics was very or somewhat accurate; but more than a quarter (27%) said it was not very or not at all an accurate description of them.

“Put differently, 37% of Facebook users are both assigned a political affinity and say that affinity describes them well, while 14% are both assigned a category and say it does not represent them accurately,” Pew writes.

Use of people’s personal data for political purposes has triggered some major scandals for Facebook’s business in recent years. Such as the Cambridge Analytica data misuse scandal — when user data was shown to have been extracted from the platform en masse, and without proper consents, for campaign purposes.

In other instances Facebook ads have also been used to circumvent campaign spending rules in elections. Such as during the UK’s 2016 EU referendum vote when large numbers of ads were non-transparently targeted with the help of social media platforms.

And indeed to target masses of political disinformation to carry out election interference. Such as the Kremlin-backed propaganda campaign during the 2016 US presidential election.

Last year the UK data watchdog called for an ethical pause on use of social media data for political campaigning, such is the scale of its concern about data practices uncovered during a lengthy investigation.

Yet the fact that Facebook’s own platform natively badges users’ political affinities frequently gets overlooked in the discussion around this issue.

For all the outrage generated by revelations that Cambridge Analytica had tried to use Facebook data to apply political labels on people to target ads, such labels remain a core feature of the Facebook platform — allowing any advertiser, large or small, to pay Facebook to target people based on where its algorithms have determined they sit on the political spectrum, and do so without obtaining their explicit consent. (Yet under European data protection law political beliefs are deemed sensitive information, and Facebook is facing increasing scrutiny in the region over how it processes this type of data.)

Of those users who Pew found had been badged by Facebook as having a “multicultural affinity” — another algorithmically inferred sensitive data category — 60% told it they do in fact have a very or somewhat strong affinity for the group to which they are assigned; while more than a third (37%) said their affinity for that group is not particularly strong.

“Some 57% of those who are assigned to this category say they do in fact consider themselves to be a member of the racial or ethnic group to which Facebook assigned them,” Pew adds.

It found that 43% of those given an affinity designation are said by Facebook’s algorithm to have an interest in African American culture, with the same share (43%) assigned an affinity with Hispanic culture, while one-in-ten are assigned an affinity with Asian American culture.

(Facebook’s targeting tool for ads does not offer affinity classifications for any other cultures in the U.S., including Caucasian or white culture, Pew also notes, thereby underlining one inherent bias of its system.)

In recent years the ethnic affinity label that Facebook’s algorithm sticks to users has caused specific controversy after it was revealed to have been enabling the delivery of discriminatory ads.

As a result, in late 2016, Facebook said it would disable ad targeting using the ethnic affinity label for protected categories of housing, employment and credit-related ads. But a year later its ad review systems were found to be failing to block potentially discriminatory ads.

The act of Facebook sticking labels on people clearly creates plenty of risk — be that from election interference or discriminatory ads (or, indeed, both).

Risk that a majority of users don’t appear comfortable with once they realize it’s happening.

And therefore also future risk for Facebook’s business as more regulators turn their attention to crafting privacy laws that can effectively safeguard consumers from having their personal data exploited in ways they don’t like. (And which might disadvantage them or generate wider societal harms.)

Commenting about Facebook’s data practices, Michael Veale, a researcher in data rights and machine learning at University College London, told us: “Many of Facebook’s data processing practices appear to violate user expectations, and the way they interpret the law in Europe is indicative of their concern around this. If Facebook agreed with regulators that inferred political opinions or ‘ethnic affinities’ were just the same as collecting that information explicitly, they’d have to ask for separate, explicit consent to do so — and users would have to be able to say no to it.

“Similarly, Facebook argues it is ‘manifestly excessive’ for users to ask to see the extensive web and app tracking data they collect and hold next to your ID to generate these profiles — something I triggered a statutory investigation into with the Irish Data Protection Commissioner. You can’t help but suspect that it’s because they’re afraid of how creepy users would find seeing a glimpse of the true breadth of their invasive user and non-user data collection.”

In a second survey, conducted between May 29 and June 11, 2018, using Pew’s American Trends Panel with a representative sample of all U.S. adults who use social media (including Facebook and other platforms like Twitter and Instagram), Pew researchers found social media users generally believe it would be relatively easy for the social media platforms they use to determine key traits about them based on the data they have amassed about their behaviors.

“Majorities of social media users say it would be very or somewhat easy for these platforms to determine their race or ethnicity (84%), their hobbies and interests (79%), their political affiliation (71%) or their religious beliefs (65%),” Pew writes.

Meanwhile, fewer than a third (28%) believe it would be difficult for the platforms to figure out their political views, it adds.

So even while most people do not understand exactly what social media platforms are doing with information collected and inferred about them, once they’re asked to think about the issue most believe it would be easy for tech firms to join data dots around their social activity and make sensitive inferences about them.

Commenting generally on the research, Pew’s director of internet and technology research, Lee Rainie, said its aim was to try to bring some data to debates about consumer privacy, the role of micro-targeting of advertisements in commerce and political activity, and how algorithms are shaping news and information systems.

Update: Responding to Pew’s research, Facebook sent us the following statement: