An example of a privilege attack demonstrated by the researchers. Here, Jerry posts an adversarial image to Facebook that causes it to be blocked (in red) in every user’s browser.

Ad blockers were created for people who are sick of ads’ constant assault on their senses or who want to improve their internet security. Yet ever since the first ad blockers were released on the internet, there’s been an escalating conflict between ad blocker developers and advertisers, with each new ad blocker met by an ad blocker-blocker.

Last year, Princeton researchers created a perceptual ad blocker, which could visually locate advertisements on a webpage and filter them out. This plugin was supposed to be the “ad blocking superweapon” that would put an end to the ad-blocking arms race since it didn’t target ads based on their code, but on the way they looked on the page.

According to new research published this week on arXiv, however, adversarial machine learning attacks were able to defeat perceptual ad blockers. Moreover, the researchers warned, ad blockers will be on the losing side of this arms race and will expose their users to new attack vectors in the process.

“Our attacks are not a step in a ‘quid pro quo’ arms race,” the researchers wrote. “Instead, contrary to prior beliefs, they are indicative of a clearly pessimistic outcome for perceptual ad blockers. Our results show that if deployed, perceptual ad blockers would engender a new arms race that overwhelmingly favors publishers and ad-networks.”

Ad blockers such as AdBlock or uBlock work by maintaining “filter lists,” which are sets of rules that tell the ad blocker what type of content to block. These lists can be custom-made, or users can adopt one of the many publicly available lists such as EasyList. The problem with filter lists is that advertisers are constantly figuring out ways to get around them, and an overly extensive filter list can significantly slow down a web browser.
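To make the mechanism concrete, here is a minimal sketch of filter-list matching. The two rules loosely imitate EasyList syntax (`||domain^` anchors to a hostname), but the rule set, the `rule_to_regex` helper, and the `should_block` function are all illustrative stand-ins, not the matching engine any real ad blocker uses:

```python
import re

# A toy filter list in simplified EasyList style. Real lists contain
# tens of thousands of rules with a much richer syntax; these two rules
# and the matching logic below are illustrative only.
FILTER_LIST = [
    "||ads.example.com^",   # block any request to this ad domain
    "/banner_ads/",         # block any URL containing this path fragment
]

def rule_to_regex(rule: str) -> re.Pattern:
    """Convert a simplified filter rule into a regex (rough sketch)."""
    if rule.startswith("||"):
        # "||domain^" anchors the rule to the start of the hostname.
        domain = re.escape(rule[2:].rstrip("^"))
        return re.compile(r"^https?://([a-z0-9.-]*\.)?" + domain + r"(/|$)")
    # Plain rules match anywhere in the URL.
    return re.compile(re.escape(rule))

COMPILED = [rule_to_regex(r) for r in FILTER_LIST]

def should_block(url: str) -> bool:
    """Return True if any rule matches the request URL."""
    # A linear scan like this is why very large filter lists get slow:
    # every request is checked against every rule.
    return any(p.search(url) for p in COMPILED)

print(should_block("https://ads.example.com/tracker.js"))    # True
print(should_block("https://example.com/banner_ads/x.png"))  # True
print(should_block("https://example.com/article.html"))      # False
```

The linear scan over compiled rules also illustrates the performance problem the article mentions: matching cost grows with the size of the list, which is why production ad blockers invest heavily in optimized matching structures.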

Perceptual ad blockers aim to rectify these shortcomings by blocking ads based on how they look on a webpage. This includes looking for ad cues such as a “Sponsored” link or the close button on pop-up ads, as well as the legally required signifiers that an ad is in fact an ad. Perceptual ad blockers are much harder to subvert in principle because evading them requires advertisers to fundamentally change the content and appearance of the ad. This may result in a less effective advertisement or a violation of legal requirements to disclose advertising.

Only a year and a half after perceptual ad blockers were heralded as the end of the ad-blocker arms race, however, a team of researchers from Stanford and the CISPA Helmholtz Center for Information Security has discovered a “panoply” of vulnerabilities in perceptual ad blockers that undermine their efficacy and expose users to new attack vectors.

The researchers focused on two types of perceptual ad blockers, one called Perceptual Ad Highlighter and the other called Sentinel, which use neural networks, a type of machine learning architecture loosely based on the human brain, to recognize ads on webpages. These ad blockers rely on a visual classifier trained on thousands of screenshots of websites whose advertisements have been labeled, so that the algorithm can build a model of what an ad looks like.
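The training recipe described above, labeled examples plus gradient descent, can be sketched in miniature. This is not the architecture of Perceptual Ad Highlighter or Sentinel (those use full neural networks on real screenshots); it trains a single logistic unit on synthetic 8×8 “patches” where ads are simply brighter than page content, purely to show the supervised-learning loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for labeled screenshot crops: the real systems train on
# thousands of page screenshots with ad regions annotated. Here we fake
# 64-pixel grayscale patches where "ads" are systematically brighter.
n, d = 200, 64
ads     = rng.normal(0.7, 0.1, size=(n, d))
content = rng.normal(0.3, 0.1, size=(n, d))
X = np.vstack([ads, content])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = ad, 0 = content

# A single logistic unit, far simpler than a neural network, but trained
# the same basic way: minimize error on labeled examples by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability "is an ad"
    grad_w = X.T @ (p - y) / len(y)      # gradient of the logistic loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

acc = np.mean((p > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

On this cleanly separable toy data the classifier reaches essentially perfect training accuracy, which is exactly the property the adversarial attacks described next are designed to exploit: the model has learned a decision boundary, and small input changes can push examples across it.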

To highlight the vulnerabilities of these visual classifiers, the researchers created adversarial ad examples designed to fool Sentinel and Highlighter and undermine their efficacy. The researchers tested their attacks on six different visual classifiers in total, three of which were already deployed as part of perceptual ad blocking software.

In these attacks, the researchers made changes to advertisements that were nearly imperceptible to a human but capable of tripping up a machine learning algorithm. Among the most devastating attacks launched by the researchers was a “privilege-hijacking” attack that caused the perceptual ad blocker to block legitimate content on a webpage after mistaking it for an ad.
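The core trick behind such near-imperceptible changes can be illustrated with the fast gradient sign method (FGSM), a standard adversarial-example technique; the paper's actual attacks are more involved, and the linear "ad detector" below, with randomly chosen weights, is our stand-in for the trained neural networks:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64

# A toy linear "ad detector": score > 0 means "ad". The weights are
# arbitrary stand-ins for a trained model, but the attack idea
# (gradient-sign perturbation) carries over to neural networks.
w = rng.normal(size=d)

def is_ad(x: np.ndarray) -> bool:
    return float(x @ w) > 0

# An "advertisement" image the detector correctly flags.
x = np.abs(rng.normal(size=d))
if not is_ad(x):
    x = -x  # ensure the starting image is classified as an ad

# For a linear model, the gradient of the score w.r.t. the input is just
# w, so nudging each pixel by -eps * sign(w) lowers the score fastest
# (this is FGSM). Pick eps just big enough to cross the decision boundary,
# so the per-pixel change stays small.
eps = 1.05 * (x @ w) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(f"per-pixel change: {eps:.3f}")
print(is_ad(x), is_ad(x_adv))   # the ad "disappears" from the detector
```

The same mechanism runs in both directions: nudge an ad so the classifier misses it, or, as in the privilege-hijacking attack, nudge legitimate content so the classifier wrongly blocks it.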

In other words, the researchers managed to turn the very tool that makes perceptual ad blockers effective into their greatest weakness. In fact, according to the researchers, “ad blockers operate in what is essentially the worst possible threat model for visual classifiers.”

An example of some of the false ads used by the researchers for their adversarial attacks. Here the AdChoices logo, the small blue triangle, is doctored to fool the ad blocker neural net. Image: arXiv

The researchers tried several different adversarial attacks on the perceptual ad blockers’ visual classifiers. One attack, for example, slightly altered the AdChoices logo that is commonly used to disclose advertisements, fooling the perceptual ad blocker. In another attack, the researchers demonstrated how website publishers could overlay a nearly transparent mask on a webpage that allows its ads to evade perceptual ad blockers.
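The overlay idea amounts to alpha-compositing a low-opacity perturbation layer over the rendered page: visitors see essentially the same pixels, but every pixel the ad blocker's classifier screenshots has shifted slightly. A minimal sketch, where the mask is random noise standing in for an adversarially optimized one:

```python
import numpy as np

rng = np.random.default_rng(2)

# A rendered page region as RGB pixels in [0, 1]. The mask is a full-page
# layer the publisher composites on top at very low opacity. Here it is
# random noise; in the actual attack it would be crafted adversarially.
page = rng.uniform(size=(4, 4, 3))
mask = rng.uniform(size=(4, 4, 3))
alpha = 0.02                     # near-transparent overlay

# Standard alpha compositing of the mask over the page.
rendered = (1 - alpha) * page + alpha * mask

# Every pixel moves by at most alpha per channel, so the page looks
# unchanged to a human even though the classifier's input has shifted.
max_shift = np.abs(rendered - page).max()
print(f"max per-channel pixel shift: {max_shift:.3f}")
```

Because the mask covers the whole page rather than any single ad, this variant needs no cooperation from advertisers, which is part of why the researchers argue the arms race favors publishers and ad networks.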