Online fakery runs wide and deep, but you don’t need me to tell you that. New species of digital fraud and deception come to light almost every week, if not every day: Russian bots that pretend to be American humans. American bots that pretend to be human trolls. Even humans that pretend to be bots. Yep, some “intelligent assistants,” promoted as advanced conversational AIs, have turned out to be little more than digital puppets operated by poorly paid people.

The internet was supposed to not only democratize information but also rationalize it—to create markets where impartial metrics would automatically surface the truest ideas and best products, at a vast and incorruptible scale. But deception and corruption, as we’ve all seen by now, scale pretty fantastically too.

According to ReviewMeta, an independent site that tracks the veracity of online feedback, there’s recently been a tremendous increase in Amazon reviews written by users who have not made a verified purchase of the item they’re reviewing.

Surprise, surprise: Almost all of these unverified purchasers (98.2 percent) give the product five stars. Claims of fakery might also be fake. On Amazon, you can hardly shop for a simple sunscreen without encountering reviews claiming the product is counterfeit. Relieved to have been warned, you might be tempted to click away. But maybe that review itself was fake, planted by a competitor.

Big platforms like Google and Facebook earn their living mainly by taking money from advertisers and then delivering them eyeballs, with online ad brokers serving as middlemen. The idea is that these ads are precisely targeted and precisely measured, so that a brand is paying only for the eyeballs it wants to target and can actually surveil its audience to see exactly how long they’re watching ads. This advertising model is essentially the economic premise of the modern internet. But it’s one that has proven deeply susceptible to fraud—rife with fake views, fake clicks, and fake eyeballs.

In 2016, Facebook fessed up that, for two years, it had vastly overstated how long, on average, people were watching videos on the platform. The company characterized this as an “error” that hadn’t affected billings. But in 2018 a class action lawsuit brought by several small advertisers alleged that the social network had been inflating its figures by even more than it acknowledged—and that the company had known about it for longer than it let on.

The internet is becoming a low-trust society, where an assumption of pervasive fraud is built into the way things function.

Meanwhile, a flourishing business has cropped up to generate fake views. Last year it came to light that some apps on the ­Google Play store, including a photo-editing tool and some games, were Trojan horses for malware—botnets that busily clicked on ads in the background of your phone to boost ad metrics and income for the app developer. So who knows how many real people have actually watched a given video? Advertisers can only guess.

Platforms, too, can be conned out of their winnings. In 2017 an operation in Bulgaria reportedly scammed Spotify out of as much as $1 million by generating a batch of songs just over 30 seconds long (the minimum play time that counts as a listen), then setting up fake, paid-for premium accounts to stream them around the clock—pocketing the difference between the royalties it collected and the subscription fees it paid Spotify to listen to its own tracks.
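The arithmetic that makes a scheme like this profitable can be sketched in a few lines. The numbers below—the per-stream payout, the subscription fee, the track length—are rough illustrative assumptions, not Spotify’s actual figures:

```python
# Back-of-envelope sketch of the stream-fraud economics described above.
# All rates are illustrative assumptions, not Spotify's real numbers.

PAYOUT_PER_STREAM = 0.004   # assumed average royalty per stream, in dollars
PREMIUM_FEE = 9.99          # assumed monthly cost of one premium account
TRACK_LENGTH_SEC = 31       # just past the ~30-second mark that counts as a play
DAYS_PER_MONTH = 30

def monthly_profit(num_accounts: int) -> float:
    """Royalties earned minus subscription fees for a bot farm that
    streams its own short tracks around the clock for one month."""
    streams_per_account = (24 * 3600 / TRACK_LENGTH_SEC) * DAYS_PER_MONTH
    royalties = num_accounts * streams_per_account * PAYOUT_PER_STREAM
    fees = num_accounts * PREMIUM_FEE
    return royalties - fees

# A single nonstop account racks up roughly 80,000 plays a month,
# earning far more in royalties than its subscription costs;
# the scheme scales linearly with the number of fake accounts.
print(monthly_profit(1))
print(monthly_profit(1200))
```

Under these assumptions each fake account nets hundreds of dollars a month, which is why the only real costs of the scheme are the subscriptions themselves.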

At some point, the typical response to this onslaught of falsehood is to say, lol, nothing matters. But when so many of us are reaching this point, it really does matter. Social scientists distinguish high-trust societies (ones where you can expect most interactions to work) from low-trust societies (ones where you have to be on your guard at all times). People break rules in high-trust societies, of course, but laws, regulations, and norms help to keep most abuses in check; if you have to go to court, you expect a reasonable process. In low-trust societies, you never know. You expect to be cheated. You expect things not to be what they seem and promises to be broken, and you don’t expect a reasonable, transparent process for recourse. It’s harder for markets to function and economies to develop in low-trust societies. It’s harder to find or extend credit, and it’s risky to pay in advance.