Apple says it’s building a new tool it calls Real User Indicator that could cut down on the number of bots secretly signing up for new accounts with mobile, desktop, and web services.

The feature, announced during the company’s Platforms State of the Union event at its Worldwide Developers Conference (WWDC), is designed to check for traits more consistent with bots than people. It then informs the app’s developer, who can take further action to verify the authenticity of the new account.

“It uses on-device intelligence to determine if the originating device is behaving in a normal way. The device generates a value without sending any specifics to Apple,” an Apple spokesperson explained onstage during the event. The value is then “boiled down to a single value shared with your app at account setup time,” and “depending on the value you receive, you can be confident your user is a real user or get a notice you should take a second look.”
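Apple had not published API details at the time of the announcement, so the following is only an illustrative sketch of how a developer might act on such a signal. The status names and return strings here are hypothetical, not Apple’s:

```python
from enum import Enum

# Hypothetical status values for the signal described onstage.
# These names are illustrative; Apple had not published the API.
class RealUserStatus(Enum):
    LIKELY_REAL = "likely_real"  # high confidence the signup is human
    UNKNOWN = "unknown"          # signal inconclusive; verify further

def handle_signup(status: RealUserStatus) -> str:
    """Decide what to do with a new account based on the device's signal."""
    if status is RealUserStatus.LIKELY_REAL:
        return "create_account"           # trust the signup
    return "require_extra_verification"   # e.g. show a CAPTCHA or email check
```

The key point is that the developer never sees the underlying device behavior, only a single value to branch on at account-setup time.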

Apple says its Real User Indicator can detect fishy on-device behavior

It’s not clear how Apple generates this value, especially considering it’s not doing so by cross-checking any of the entered information — like an email address or phone number — against online data, out of concern for user privacy. But there are a few possibilities. On-device intelligence could mean Apple is monitoring how fast certain fields are filled during the new account sign-up process, since inhuman speed would suggest an automated process is behind the new account. Apple could also be looking at factors like the speed of account verification via email, or even analyzing the text typed into the name or address fields of the sign-up process.
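One of those speculated heuristics — flagging form fields filled faster than a human could plausibly type — can be sketched as a toy check. To be clear, this is not Apple’s actual method, which the company has not disclosed; the threshold and function are invented for illustration:

```python
def looks_automated(field_fill_times: list[float], threshold: float = 0.3) -> bool:
    """Flag a signup as bot-like if every form field was completed
    faster than the (hypothetical) human-typing threshold, in seconds.

    A toy illustration of one speculated heuristic, not Apple's
    undisclosed on-device method.
    """
    return bool(field_fill_times) and all(t < threshold for t in field_fill_times)
```

A real system would presumably combine many such signals into the single value shared with the app, rather than rely on any one timing check.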

Regardless of how exactly it works, the Real User Indicator could have a big impact on the number of bot accounts proliferating across the internet. If it functions as designed and is in fact a strong indicator of fraudulent or suspicious digital behavior, we could see the internet begin to strike back against the fake activity pervading every corner of online life. We’re already seeing CAPTCHAs get better and better, thanks in part to the progress of artificial intelligence that’s rendered simpler bot tests obsolete. But if a developer never knows to put those more sophisticated tests in front of fishy new accounts, bots will get through undetected.

New York Magazine’s Max Read grappled with the problem in an article published last December titled, “How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually.” In it, Read makes the convincing argument that we’re approaching, or have already passed, an event known as “The Inversion,” in which the largest tech companies’ fraud detection systems will mistake bot activity for the real deal and consider genuine activity from real human beings as fake.

That will be because the majority of activity — page views, video views, new accounts, comments, likes, retweets, and so on — will indeed come from bots, bot farms, and botnets rather than people. And without the ability to properly differentiate between real users and bots, and because vast swaths of the internet economy depend on advertisers not caring about the distinction anyway, the level of fake behavior will only intensify, Read theorizes.

The internet is overrun by fake activity and bots masquerading as people

“The internet has always played host in its dark corners to schools of catfish and embassies of Nigerian princes,” he wrote, “but that darkness now pervades its every aspect: Everything that once seemed definitively and unquestionably real now seems slightly fake; everything that once seemed slightly fake now has the power and presence of the real.”

Granted, a lot of big companies, like Facebook and Twitter, already have internal, AI-powered systems designed for squashing bots and stopping automated account creation, and those systems are improving. Yet in one example of the astounding magnitude of the issue, Facebook said earlier this month it had removed 2.2 billion fake accounts in just the three-month period between January and March of this year. A vast majority of those accounts were caught by the company’s automated fraud systems before they went live.

Facebook’s data suggests the bot problem is only getting worse, due to a number of factors ranging from the frequency and effectiveness of election interference and misinformation campaigns to the scores of illicit bot farms designed to juice analytics and sell fake marketing success.

It’s not clear Apple’s new developer feature will have a significant effect on this worrying trend. The company says it’s designing Real User Indicator not just for iOS apps, but also for apps built for watchOS, tvOS, and macOS, and even for the web, which Apple says means the feature will extend to Android and Windows devices.