The Interface is a daily column and newsletter about the intersection of social media and democracy. Subscribe here.

One popular criticism of Facebook and other tech platforms is that they never compensate users for their time, their data, or their contributions. Facebook is one of the richest companies in the world because of the data we hand over to it for free, the argument goes. Why doesn’t it pay up?

Today we learned that Facebook has heard these criticisms — and if you’re aged 13 to 35, it would like to give you a $20 gift card.

In exchange, all you have to give up is total access to all the data on your phone — and maybe also screenshot your Amazon purchases and fork those over, too. Josh Constine has the scoop in TechCrunch:

Since 2016, Facebook has been paying users ages 13 to 35 up to $20 per month plus referral fees to sell their privacy by installing the iOS or Android “Facebook Research” app. Facebook even asked users to screenshot their Amazon order history page. The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook’s involvement, and is referred to in some documentation as “Project Atlas” — a fitting name for Facebook’s effort to map new trends and rivals around the globe.

We asked Guardian Mobile Firewall’s security expert Will Strafach to dig into the Facebook Research app, and he told us that “If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats from instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location tracking apps you may have installed.”

It’s unclear exactly what data Facebook is concerned with, but it gets nearly limitless access to a user’s device once they install the app.

As I wrote in my quick gloss on the story, Facebook was previously collecting some of this data through Onavo Protect, a VPN service that it acquired in 2013. The data has proven extremely valuable to Facebook in identifying up-and-coming competitors, then acquiring or cloning them. Facebook removed the app from the App Store last summer after Apple complained that it violated the App Store’s guidelines on data collection.

The Research app requires users to install a custom root certificate, which gives Facebook the ability to see their private messages, emails, web searches, and browsing activity. It also asks users to take screenshots of their Amazon order history and send them back to Facebook.

And as Constine reports, Facebook is using these enterprise certificates in ways that almost certainly violate Apple’s policies — at a time when tensions between Apple and Facebook are running at an all-time high:

Facebook’s claim that it doesn’t violate Apple’s Enterprise Certificate policy is directly contradicted by the terms of that policy. Those include that developers “Distribute Provisioning Profiles only to Your Employees and only in conjunction with Your Internal Use Applications for the purpose of developing and testing”. The policy also states that “You may not use, distribute or otherwise make Your Internal Use Applications available to Your Customers” unless under direct supervision of employees or on company premises. Given Facebook’s customers are using the Enterprise Certificate-powered app without supervision, it appears Facebook is in violation.

Will Strafach, who consulted on the TechCrunch story, said in a tweet that Facebook Research represented “the most defiant behavior I have EVER seen by an App Store developer. It’s mind blowing. ... I still don’t know how to best articulate how absolutely floored I am by Facebook thinking they can get away with this.”

Facebook told Constine that this was just a garden-variety focus group program like those run by Nielsen or ComScore — neither of which installs root certificates on focus group members’ phones. But by the end of the evening, the company — while objecting to the characterization of the program as a secret spying campaign — agreed to end the program on iOS.

A generous reading of Facebook Research could be that the company is at least starting to realize the value of the data that users provide it, and is offering to compensate some of those users in exchange for very little work on the user’s part.

And yet when you consider the value of Onavo to Facebook, those $20 gift cards hardly seem adequate. Onavo was an early warning system about competitors put to great use by a company that embraced the mantra of Only The Paranoid Survive. It informed the decision to acquire WhatsApp and clone Snapchat stories. Following the initial success of Periscope and Meerkat, it spurred the company to launch a live video feature.

All of which would lead me to feel better if Facebook were offering its research subjects thousands of dollars a month, rather than a $20 gift card. Certainly the company can afford it. But once again we find the company operating by its most time-tested growth strategy — doing whatever it can get away with.

Democracy

Facebook’s messaging merger leaves lawmakers questioning the company’s power

Lawmakers are asking questions about Facebook’s plans to merge the technical infrastructure underpinning its three core messaging apps, which I discussed here Friday. Makena Kelly reports:

“Good for encryption but bad for competition and privacy,” Sen. Brian Schatz (D-HI), the ranking member of the commerce subcommittee’s panel on technology, tweeted yesterday. “Once again, Mark Zuckerberg appears eager to breach his commitments in favor of consolidating control over people and their data,” Sen. Richard Blumenthal (D-CT) said in a statement to The Verge.

Ireland is questioning Facebook’s plan to merge Messenger, Instagram, and WhatsApp

The Irish Data Protection Commission is also asking questions about the infrastructure merger:

The commission, which regulates Facebook in the European Union, says it understands that the company’s plans are still in initial development and haven’t materialized yet. Still, the commission says it will be seeking “early assurances” that the plans will comply with the GDPR, the European Union’s far-reaching privacy regulation. In 2016, Facebook attempted to share personal user data gathered by WhatsApp with the larger business, but the plan was canceled after an investigation by the UK’s data protection watchdog.

Facebook to create ‘war room’ to fight fake news, Nick Clegg says

I missed this yesterday: Facebook’s plan to protect the platform during the 2019 European Parliament elections will once again involve setting aside a conference room, Alex Hern reports:

In his first speech as Facebook’s top public face, [Nick] Clegg said the company would be setting up an “operations centre focused on elections integrity, based in Dublin, this spring”. The centre will build on the company’s previous experience running an “elections war room” in its US office, where it coordinated efforts to police the platform during the US midterm and Brazilian presidential elections.

Rep. Ocasio-Cortez rips into Facebook, Google, and Microsoft on climate

Democratic representatives in Congress are mad that tech giants are sponsoring events that promote climate change denialism:

On Monday, Reps. Alexandria Ocasio-Cortez (D-NY) and Chellie Pingree (D-ME) asked the heads of Facebook, Google, and Microsoft to pledge their companies’ support for the science suggesting that climate change has made a significant, negative impact on the environment. The lawmakers penned the letter after news broke last week that the companies sponsored a conference, LibertyCon, which promoted climate change denialism. At the libertarian-focused conference, a speaker from the CO2 Coalition gave a talk arguing that the environmental impact of climate change has been exaggerated.

I Cut Google Out Of My Life. It Screwed Up Everything

In week three of her fantastic series “Goodbye Big Five,” Kashmir Hill attempts to give up Google and hates it. The series is a master class in showing rather than telling: by attempting to cut these five companies out of her life, one per week, Hill has shown that most Americans simply have no viable alternative to using their services while remaining online:

In some cases, the Google block means apps won’t work at all, like Lyft and Uber, or Spotify, whose music is hosted in Google Cloud. The more frequent effect of the Google block though is that the internet itself slows down dramatically for me. Most of the websites I visit have frustratingly long load times because so many of them rely on resources from Google and get confused when my computer won’t let them talk to the company’s servers. On Airbnb, photos won’t load. New York Times articles won’t appear until the site has tried (and failed) to load Google Analytics, Google Pay, Google News, Google ads, and a Doubleclick tracker.

How Google’s Jigsaw Is Trying to Detoxify the Internet

Rob Marvin profiles Perspective, a project from the Alphabet subsidiary Jigsaw that is building content-moderation software powered by artificial intelligence. Jigsaw is training its models on years of respectful debate from Reddit’s Change My View forum, which hosts some of the most nuanced discussions on the internet:

For the past six years, Turnbull and the other mods have been doing all of this manually from the queue of AutoModerator reports (flagged keywords) and user reports. Jigsaw used years of rule-violation notes from moderators, which they tracked through a browser extension, and built Perspective models based on that data combined with some of Perspective’s existing toxicity models. Throughout 2018, the CMV mods gave feedback on issues such as excess false positives, and Jigsaw tweaked the scoring thresholds while continuing to model more of CMV’s rules. […] Change My View is the only subreddit actively using Perspective ML models for moderation at the moment, although Adams said the team has received access requests from several others. The specific rule set of CMV made it an ideal test case, but Perspective models are malleable; individual subreddits can customize the scoring algorithm to match their community guidelines.

Deepfake videos: Inside the Pentagon’s race against disinformation

CNN has a slick package up letting you see the state of the art in deepfakes, along with interviews from experts on the potential threats they pose.

The Pentagon, through the Defense Advanced Research Projects Agency (DARPA), is working with several of the country’s biggest research institutions to get ahead of deepfakes. But in order to learn how to spot deepfakes, you first have to make them. That takes place at the University of Colorado in Denver, where researchers working on DARPA’s program are trying to create convincing deepfake videos. These will later be used by other researchers who are developing technology to detect what’s real and what’s fake.

Elsewhere

Facebook’s Messenger Kids: child advocates call for shutdown of app

Advocacy groups are once again calling for Facebook to shut down Messenger Kids, this time over (basically unrelated?) revelations that some employees encouraged children to spend all their parents’ money on Facebook games back in the day, Queenie Wong reports:

“The documents appear to demonstrate that Facebook is willing to cause actual harm to children and families in its quest for profit,” the advocacy groups said in a Tuesday letter to Facebook CEO Mark Zuckerberg. “As such, Facebook is unfit to make any platform or product for children, especially one like Messenger Kids.”

These YouTubers are owed $1.7 million, and they’re probably never going to get it

Julia Alexander has a dispatch from the ongoing collapse of multi-channel networks on YouTube, focused on Defy Media:

Defy Media shut down in November, but stories from YouTube creators affected by the closure — including popular channels like Smosh and Adams — have continued to come out in the past few months. There are also multiple lawsuits currently filed against Defy Media, from both employees and investors, for numerous reasons, including deception.

TikTok is quietly testing ads

ByteDance is reportedly struggling to hit its revenue targets, and now Kerry Flynn reports that the company is already experimenting with app-install ads inside its six-month-old US app, TikTok:

The ad appeared shortly after a user launched the app and lasted about 5 seconds, according to Chris Harihar, a TikTok user and partner at Crenshaw Communications. Users could instantly skip the ad via a button at the top right of the screen, which is visible in this screenshot provided by Harihar.

“Learn to Code”: The Twitter Meme Attacking Media

Molly McHugh explores the coordinated campaign of Twitter death threats against recently laid-off journalists, which is a real thing that is happening in this broken world:

It’s not only the timing of the obnoxious unsolicited advice that takes it to a place of abuse—it’s also the targeting. “It’s just straight up spamming them,” he says, in a way meant to be “cruel and hurtful.” Through this lens, tweeting “learn to code” can be viewed as similar to the alt-right use of parentheses to label Jewish people, or how racists turned Pepe the Frog into a hate symbol—a way to covertly harass someone in a manner that is difficult for Twitter to detect. Before writing off “learn to code” as a harmless joke, it might be important to remember that it’s being hurled at a profession the president of the United States has at best belittled and at worst supported violence against. “Learn to code” is not a viral phrase that’s being spammed to out-of-work journalists; it’s a targeted attack disguised as a meme.

Telegram turns go-to platform for test-prep in India but has a piracy problem

Telegram is surging in India partly due to its reputation as a good place to cram for exams, Nilesh Christopher reports:

‘IBPSGuide’ is among thousands of community-driven exam preparation groups and channels that have sprung up on Telegram. Their memberships might seem insignificant when compared with the hundreds of thousands of subscribers for entertainment- and movies-focused groups and channels, but the most popular ones are almost always abuzz with activity. ‘India Bhai Channel,’ which focuses on preparing for civil services exams, has only 2,000 subscribers but the ‘Niti Aayog Strategy’ document that was shared on it had notched up more than 37,000 views. A ridiculously high subscriber-to-impression ratio.

Launches

CrowdTangle for Academics and Researchers

Yesterday I wrote about the balancing act Facebook has to do when researchers come seeking access to user data. I neglected to mention that Facebook-owned CrowdTangle, which makes social analytics tools, said this week that it will begin working with a handful of academic partners, with the hope of expanding access in the future. This is good news:

The University of California, Berkeley is using CrowdTangle to investigate the spread of misinformation in Myanmar and other countries. Duke University is measuring the impact of Facebook groups in North Carolina during recent emergencies. The University of Münster is tracking misinformation and elections integrity. The Atlantic Council utilized CrowdTangle to track claims of electronic voting fraud circulating in Brazil. Additionally, Pew Research has used CrowdTangle to track and research media trends. The Institute for Strategic Dialogue used CrowdTangle to track where misinformation came from, as well as identifying peaks in media coverage during the recent Swedish and German local elections. Core to their research was real-time tracking, a key feature of CrowdTangle.

Takes

Mark Zuckerberg’s Delusion of Consumer Consent

People are somehow still talking about Zuckerberg’s Wall Street Journal op-ed, in which he explained that Facebook is an advertising business on the internet. Here, two professors say that their own research indicates that people hate personalized ads:

In one of our surveys, we asked 1,503 Americans four different questions: whether or not they wanted “the websites you visit” to show them (1) tailored ads for products and services, (2) tailored discounts, (3) tailored news and (4) tailored political ads. If a respondent answered yes to any of the above questions, we went deeper, asking whether the tailoring to their interests would be acceptable if based on the user’s behavior on the website the user was visiting, on the user’s browsing on other websites and on offline activities, such as store shopping or magazine subscriptions.

Sixty-one percent of respondents said no, they did not want tailored ads for products and services, 56 percent said no to tailored news, 86 percent said no to tailored political ads, and 46 percent said no to tailored discounts. But when we added in the results of the second set of questions about tracking people on that firm’s website, other websites and offline, the percentage that in the end decided they didn’t want tailoring ranged from 89 percent to 93 percent with political ads, 68 percent to 84 percent for commercial ads, 53 percent to 77 percent for discounts, and 64 percent to 83 percent for news.

And finally ...

Billionaire Starbucks founder Howard Schultz is exploring a run for the presidency, and recently started a Twitter account to gauge interest in his campaign. Unfortunately for Schultz, people have been interested primarily in roasting him — and as HuffPo’s Ashley Feinberg noted, he may be the first Twitter user to be ratioed in every single one of his tweets. That is, everything he says generates more replies — most of which are negative — than hearts or retweets.

Here’s to healthy conversation on Twitter dot com!

does Howard Schultz have the first account to consist of nothing but ratios pic.twitter.com/NLftCj1Uv2 — Ashley Feinberg (@ashleyfeinberg) January 28, 2019

Talk to me

Send me tips, comments, questions, and relevant passages from Zucked: casey@theverge.com.