“Tedious and helpless prose” is how, in 1881, a writer for The Atlantic described “Leaves of Grass,” Walt Whitman’s first volume of poetry. It was a view shared by other contemporary critics. One called the book “intensely vulgar, nay, absolutely beastly,” before bluntly refusing to tell readers where it might be bought. Whitman did not idly nurse his wounds. Instead, he anonymously wrote numerous flattering appraisals of his own work, to even the balance. A three-thousand-word example, published in the September, 1855, edition of the United States Review, opens with the breathless declaration, “An American bard at last!” Whitman goes on to extravagantly applaud his dress sense (“manly and free”), his posture (“strong and erect”), his voice (“bringing hope and prophecy to the generous races of young and old”), and his compassion (“the largest lover and sympathizer that has appeared in literature”).

The temptation to pose as an impartial reviewer of one’s own work will be familiar to many authors across history. But the Internet has, as with all vices, smoothed the transition from temptation to action. When presented with the user-review box on Amazon, it’s a simple matter for an author to tap on the five-star icon and offer an “Exquisite” or a “Triumph!” For those caught in the act, the humiliation can be severe. In 2012, the crime writer R. J. Ellory was accused on Twitter by one of his rivals, Jeremy Duns, of writing “long, purple tributes to his own work on Amazon.” Under the pseudonym Nicodemus Jones, Duns said, Ellory had described his book “A Quiet Belief in Angels” as “magnificent” and “poetic,” even going so far as to warn potential buyers to “ignore all dissenters and naysayers.” Ellory, Duns claimed, had not stopped at self-praise; he had also posted negative reviews of other authors’ work. In a statement issued to the Guardian, Ellory admitted guilt. “The recent reviews—both positive and negative—that have been posted on my Amazon accounts are my responsibility and my responsibility alone,” he wrote. “I wholeheartedly regret the lapse of judgment.”

Sock-puppeting, the act of posing as someone else on the Internet to artificially amplify a viewpoint, is, many writers claim, rife in publishing. The author Stephen Leather once reportedly admitted, at the Harrogate Crime Writing Festival, to building an online “network of characters” who would talk among themselves about his books in glittering terms.

Outside the publishing industry, the practice known as “review brushing” exists on a vast, industrial scale. In 2014, Haitao Xu, a thirty-year-old researcher now at Northwestern University, monitored five black-market Internet boards where companies and individuals advertise jobs that involve posting positive reviews of their products and services, along with negative ones on those of their rivals. In just two months, Xu saw more than eleven thousand unique sellers post close to a quarter of a million jobs, paying anywhere from “tens of cents, up to five dollars,” he told me. Since consumers typically see positive customer reviews as a more reliable indicator of quality than advertising, the effects can be major. “Stores using brushing services can increase their reputation ten times faster than normal seller stores,” Xu, who, in 2016, spent six months working on Alibaba’s fraud-detection team, told me. “A store with a high reputation is displayed higher up a Web page, attracting more customers and increasing sales.” Online sellers who do not employ brushing services, meanwhile, often find their products overlooked.

Late last year, Oobah Butler, a reporter at Vice, decided to test the effectiveness of brushing in restaurant reviews. Years ago, Butler told me, he used to write bogus reviews on TripAdvisor for money. He found his clients via freelance job postings, like those that Xu studied, and charged around twenty dollars a pop. “I’d look at the menu, pick something, and start lying,” he said. For the recent test, he created his own fake business, which he called the Shed at Dulwich. (It was named for his garden shed, in Dulwich, London.) He photographed plates of carefully arranged food (created using household products such as shaving cream and dishwasher tablets), bought a burner phone, and added the Shed to the site. Within four weeks, he had posted enough fake reviews to move the spectral establishment into the top two thousand restaurants in London. Eventually, it became the highest-rated restaurant in the city, and Butler was fielding scores of calls from people hoping to book a table. Such was the nonexistent restaurant’s success that it even attracted a one-star review, from what Butler assumes was a rival. “TripAdvisor removed the review on the grounds that it was fake,” he said.

For online retailers, the war on the fake-review industry is now a major part of the business. Today, when a review is submitted to TripAdvisor, it goes through a tracking system that examines hundreds of different attributes, from basic data points, such as the I.P. address of the reviewer, to more detailed information, such as the screen resolution of the device that was used to submit the review. “We use this data to create a map of reviews coming in for every property listed on the site, and it means we can quickly identify unusual patterns of review behavior,” James Kay, a spokesman for the company, told me. The tools, he said, have successfully identified offshore review farms. In 2015, TripAdvisor took legal action against sixty of them.

Increasingly, fake reviews are being leveraged not only for financial gain but to make a political point. Within hours of the release of Hillary Clinton’s campaign memoir, “What Happened,” last September, so many people had left one-star ratings on Amazon that the company deleted almost a thousand reviews. The same month, the San Francisco-based video-game company Campo Santo saw one of its titles, Firewatch, review-bombed on Amazon, Steam, and other sites. The campaign started after the Swedish YouTube broadcaster Felix Kjellberg, a.k.a. PewDiePie, posted a video in which he used a racial epithet. Campo Santo filed a takedown request under the Digital Millennium Copyright Act, preventing PewDiePie from streaming any of its games; that was enough to mobilize a small army of fake reviewers looking to punish the developer.

An Amazon spokeswoman explained that when a product is hit with many reviews in a short period of time, the company’s systems will automatically suppress all but “verified” purchases. But this preventive measure, too, can easily be gamed. “People would buy our game, not play it, leave the terrible review, and instantly request a refund,” Sean Vanaman, Campo Santo’s co-founder, told me. “It’s a well-worn tactic.” In his estimation, user-review systems such as those used by Valve, Steam’s developer, are so vulnerable to exploitation that they require as much moderation as social-media platforms. “The ethics and utility of these systems boil down to this: if a platform is going to have it, they have to be able to manage it to protect people from abuse and harassment, or they become responsible for that abuse,” Vanaman told me last September. (His studio was bought by Valve earlier this year.)

Butler, the reporter who created the fake restaurant on TripAdvisor, is skeptical that the war on fake reviews is being won. “Put it this way: I’m still not banned from TripAdvisor,” he said. Kay, the company’s spokesman, claims that the Shed at Dulwich was identified as fraudulent and removed from the site completely before Butler published the story about his escapade. Butler remains unconvinced. “One way or another, the platform needs to be more sophisticated if we’re going to trust it more than what’s in front of our eyes,” he said.