Detecting Filter List Evasion With Event-Loop-Turn Granularity JavaScript Signatures

Authors: Quan Chen (North Carolina State University), Pete Snyder (Brave Software), Ben Livshits (Brave Software), Alexandros Kapravelos (North Carolina State University)

IEEE Security & Privacy 2021

Content blocking is an important part of a performant, user-serving, privacy-respecting web. Most content blockers build trust labels over URLs. While useful, this approach has well-understood shortcomings. Attackers may avoid detection by changing URLs or domains, bundling unwanted code with benign code, or inlining code in pages. The common flaw in existing approaches is that they evaluate code based on its delivery mechanism, not its behavior. In this work we address this problem with a system for generating signatures of the privacy-and-security-relevant behavior of executed JavaScript. Our system considers script behavior during each turn of the JavaScript event loop. Focusing on event-loop turns allows us to build signatures that are robust against code obfuscation, code bundling, URL modification, and other common evasions, and that handle unique aspects of web applications.

This work makes the following contributions to improving content blocking. First, we implement a novel system to build per-event-loop-turn signatures of JavaScript code by instrumenting the Blink and V8 runtimes. Second, we apply these signatures to measure filter list evasion, using EasyList and EasyPrivacy as ground truth and finding other code that behaves identically. We build ~2 million signatures of privacy-and-security behaviors from 11,212 unique scripts blocked by filter lists, and find 3,589 more unique scripts including the same harmful code, affecting 12.48% of websites measured. Third, we taxonomize common filter list evasion techniques. Finally, we present defenses: filter list additions where possible, and a proposed signature-based system in other cases.
We share the implementation of our signature-generation system, the dataset from applying our system to the Alexa 100K, and 586 AdBlock Plus compatible filter list rules to block instances of currently blocked code being moved to new URLs.
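The per-turn signature idea can be sketched in a few lines: an instrumented runtime records the ordered sequence of privacy-relevant API calls a script makes during one event-loop turn, and the signature is a stable hash of that sequence. The tuple encoding and hash choice below are illustrative assumptions, not the paper's exact format; the point is that a script moved to a new URL or re-obfuscated still produces the same per-turn signature if its behavior is unchanged.

```python
import hashlib

def turn_signature(api_calls):
    """Build a stable signature for one event-loop turn.

    api_calls: ordered list of (interface, method, argument) tuples, as
    might be recorded by an instrumented Blink/V8 runtime (hypothetical
    encoding for illustration).
    """
    canonical = "\n".join(f"{i}.{m}({a})" for i, m, a in api_calls)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Two copies of the same tracking code, hosted at different URLs and
# obfuscated differently, perform identical actions in a turn and
# therefore hash to the same signature.
blocked_script = [("Document", "cookie", "get"), ("Storage", "setItem", "uid")]
moved_copy = [("Document", "cookie", "get"), ("Storage", "setItem", "uid")]
assert turn_signature(blocked_script) == turn_signature(moved_copy)
```

Matching is then a set lookup: signatures generated from scripts that filter lists already block can be compared against signatures of unblocked scripts to find evasions.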

Authors: Zain ul Abi Din, Panagiotis Tigas, Samuel T. King, and Benjamin Livshits

July, 2020

Online advertising has been a long-standing concern for user privacy and overall web experience. Several techniques have been proposed to block ads, mostly based on filter lists and manually written rules. Because a typical ad blocker relies on manually curated block lists, these lists inevitably fall out of date, compromising the utility of the ad-blocking approach.

In this paper we present PERCIVAL, a browser-embedded, lightweight, deep-learning-powered ad blocker. PERCIVAL embeds itself within the browser’s image rendering pipeline, which makes it possible to intercept every image obtained during page execution and to perform blocking by applying machine-learning image classification to flag potential ads.

Our implementation inside both the Chromium and Brave browsers shows only a minor rendering performance overhead of 4.55%, demonstrating the feasibility of deploying traditionally heavy models (i.e., deep neural networks) inside the critical path of a browser’s rendering engine. We show that our image-based ad blocker can replicate EasyList rules with an accuracy of 96.76%. To show the versatility of PERCIVAL’s approach, we present case studies demonstrating that PERCIVAL (1) does surprisingly well on ads in languages other than English, and (2) performs well on blocking first-party Facebook ads, which have presented issues for other ad blockers. PERCIVAL proves that image-based perceptual ad blocking is an attractive complement to today’s dominant approach of block lists.
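The "blocking inside the rendering pipeline" idea can be illustrated with a small gate in the image-decoding path: every decoded image is passed to a classifier, and images scoring above a threshold never reach the compositor. The classifier here is a toy stand-in (PERCIVAL uses a deep neural network on pixel data); the dictionary image representation and the size-based heuristic are assumptions for illustration only.

```python
def render_images(images, is_ad, threshold=0.5):
    """Gate in the image-decoding path: drop images the classifier flags.

    `is_ad(img)` stands in for the deep model and returns an ad
    probability in [0, 1]; flagged images are never rendered.
    """
    rendered = []
    for img in images:
        if is_ad(img) >= threshold:
            continue  # blocked: image never reaches the compositor
        rendered.append(img)
    return rendered

# Toy classifier: flag images matching common banner dimensions
# (illustrative heuristic, not PERCIVAL's actual model).
AD_SIZES = {(728, 90), (300, 250), (160, 600)}
toy_classifier = lambda img: 1.0 if (img["w"], img["h"]) in AD_SIZES else 0.0

page_images = [{"w": 728, "h": 90}, {"w": 1024, "h": 768}]
assert render_images(page_images, toy_classifier) == [{"w": 1024, "h": 768}]
```

Because the gate sits on the critical rendering path, the model's inference cost translates directly into the 4.55% rendering overhead reported above.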

Who Filters the Filters: Understanding the Growth, Usefulness and Efficiency of Crowdsourced Ad Blocking

Authors: Peter Snyder, Antoine Vastel, Benjamin Livshits

SIGMETRICS 2020

Ad and tracking blocking extensions are among the most popular browser extensions. These extensions typically rely on filter lists to decide whether a URL is associated with tracking or advertising. Millions of web users rely on these lists to protect their privacy and improve their browsing experience. Despite their importance, the growth and health of these filter lists is poorly understood. These lists are maintained by a small number of contributors, who use a variety of undocumented heuristics to determine what rules should be included. The lists quickly accumulate rules over time, and rules are rarely removed. As a result, users’ browsing experiences are degraded as the number of stale, dead or otherwise not useful rules increasingly dwarfs the number of useful rules, with no attenuating benefit. This paper improves the understanding of crowdsourced filter lists by studying EasyList, the most popular filter list. We find that, over its 9-year history, EasyList has grown from several hundred rules to well over 60,000. We then apply EasyList to a sample of 10,000 websites, and find that 90.16% of the resource-blocking rules in EasyList provide no benefit to users in common browsing scenarios. Based on these results, we provide a taxonomy of the ways advertisers evade EasyList rules. Finally, we propose optimizations for popular ad-blocking tools that provide over 99% of the coverage of existing tools while being 62.5% faster.
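The core of the proposed optimization is measurement-driven pruning: apply every rule during a large crawl, count how often each one actually matches, and keep only the rules that fire. A minimal sketch, assuming AdBlock Plus-style rule strings and hypothetical hit counts:

```python
def prune_rules(rules, hit_counts, min_hits=1):
    """Keep only rules that matched at least `min_hits` times in a crawl.

    `hit_counts` maps rule -> number of matches observed; rules absent
    from the map never fired (the stale/dead rules described above).
    """
    return [r for r in rules if hit_counts.get(r, 0) >= min_hits]

# Illustrative AdBlock Plus-style rules and made-up crawl counts.
rules = ["||ads.example^", "||tracker.example^", "/legacy-banner-"]
hits = {"||ads.example^": 412, "||tracker.example^": 9}
assert prune_rules(rules, hits) == ["||ads.example^", "||tracker.example^"]
```

Shrinking the rule set this way is what yields faster matching with almost no loss of coverage, since the pruned rules were not blocking anything in practice.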
De-Kodi: Understanding the Kodi Ecosystem

Authors: Marc Anthony Warrior (Northwestern University), Yunming Xiao (Northwestern University), Matteo Varvello (Brave Software), Aleksandar Kuzmanovic (Northwestern University)

WWW Conference 2020

Free and open source media centers are currently experiencing a boom in popularity for the convenience and flexibility they offer users seeking to remotely consume digital content. This newfound fame is matched by increasing notoriety, given their potential to serve as hubs for illegal content, and a presumably ever-increasing network footprint. It is fair to say that a complex ecosystem has developed around Kodi, composed of millions of users, thousands of “add-ons” (Kodi extensions from third-party developers), and content providers. Motivated by these observations, this paper conducts the first analysis of the Kodi ecosystem. Our approach is to build “crawling” software around Kodi which can automatically install an addon, explore its menu, and locate (video) content. This is challenging for many reasons. First, Kodi largely relies on visual information and user input, which intrinsically complicates automation. Second, no central aggregators for Kodi addons exist. Third, the potential sheer size of this ecosystem requires a highly scalable crawling solution. We address these challenges with de-Kodi, a full-fledged crawling system capable of discovering and crawling large cross-sections of Kodi’s decentralized ecosystem. With de-Kodi, we discovered and tested over 9,000 distinct Kodi addons. Our results demonstrate de-Kodi, which we make available to the general public, to be an essential asset for studying one of the largest multimedia platforms in the world. Our work further serves as the first transparent and repeatable analysis of the Kodi ecosystem at large.
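The menu-exploration step can be pictured as a breadth-first traversal of an addon's menu tree, collecting playable items as leaves. In this sketch, `children_of` is a hypothetical stand-in for de-Kodi's automation that "clicks" a menu entry and lists what appears; the item format is invented for illustration.

```python
from collections import deque

def crawl_addon(root, children_of):
    """Breadth-first exploration of an addon's menu tree.

    `children_of(item)` stands in for the automation layer that opens a
    menu entry and returns the entries it reveals; video items are
    collected, menu items are expanded.
    """
    seen, videos, queue = set(), [], deque([root])
    while queue:
        item = queue.popleft()
        if item["id"] in seen:  # menus can link back to each other
            continue
        seen.add(item["id"])
        if item["kind"] == "video":
            videos.append(item["id"])
        else:
            queue.extend(children_of(item))
    return sorted(videos)

# Toy menu: root -> [Movies -> clip1, Shows -> clip2]
MENU = {
    "root": [{"id": "movies", "kind": "menu"}, {"id": "shows", "kind": "menu"}],
    "movies": [{"id": "clip1", "kind": "video"}],
    "shows": [{"id": "clip2", "kind": "video"}],
}
children = lambda item: MENU.get(item["id"], [])
assert crawl_addon({"id": "root", "kind": "menu"}, children) == ["clip1", "clip2"]
```

The `seen` set is what keeps the crawl tractable at ecosystem scale, since addon menus frequently cross-link.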

Filter List Generation for Underserved Regions

Authors: Alexander Sjosten, Peter Snyder, Antonio Pastor, Panagiotis Papadopoulos, Benjamin Livshits

WWW Conference 2020

Filter lists play a large and growing role in protecting and assisting web users. The vast majority of popular filter lists are crowd-sourced, where a large number of people manually label undesirable web resources (e.g. ads, trackers, paywall libraries) so that they can be blocked by browsers and extensions. Because only a small percentage of web users participate in the generation of filter lists, a crowd-sourcing strategy works well for blocking either uncommon resources that appear on “popular” websites, or resources that appear on a large number of “unpopular” websites. A crowd-sourcing strategy performs poorly for parts of the web with small “crowds”, such as regions of the web serving languages with (relatively) few speakers. This work addresses this problem through the combination of two novel techniques: (i) deep browser instrumentation that allows for the accurate generation of request chains, in a way that is robust in situations that confuse existing measurement techniques, and (ii) an ad classifier that uniquely combines perceptual and page-context features to remain accurate across multiple languages. We apply our two-step filter list generation pipeline to three regions of the web that currently have poorly maintained filter lists: Sri Lanka, Hungary, and Albania. We generate new filter lists that complement existing filter lists. Our complementary lists block an additional 2,270 ad and ad-related resources (1,901 unique) when applied to 6,475 pages targeting these three regions. We hope that this work can be part of an increased effort at ensuring that the security, privacy, and performance benefits of web resource blocking can be shared with all users, and not only those in dominant linguistic or economic regions.
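The request-chain step above can be sketched as a walk over initiator links: once the instrumented browser records which request or script caused each subsequent request, recovering the full chain for any ad resource is a backward traversal to the root document. The mapping format and names here are illustrative assumptions, not the paper's data model.

```python
def request_chain(request_id, initiators):
    """Recover the full request chain (who caused whom) for one request.

    `initiators` maps a request to the request/script that triggered it,
    as might be recorded by deep browser instrumentation (hypothetical
    encoding). The chain is returned root-first.
    """
    chain = [request_id]
    while request_id in initiators:
        request_id = initiators[request_id]
        chain.append(request_id)
    return list(reversed(chain))

# Toy recording: the page loads an ad library, which fetches an ad image.
initiators = {"ad.gif": "adlib.js", "adlib.js": "page.html"}
assert request_chain("ad.gif", initiators) == ["page.html", "adlib.js", "ad.gif"]
```

Accurate chains matter because a new filter rule should target the resource that initiates the ad (here `adlib.js`), not just the final image URL, which is trivially rotated.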

Privacy-Preserving Bandits

Authors: Mohammad Malekzadeh, Dimitrios Athanasakis, Hamed Haddadi, Ben Livshits

Conference on Machine Learning and Systems 2020

Contextual bandit algorithms (CBAs) often rely on personal data to provide recommendations. This means that potentially sensitive data from past interactions are utilized to provide personalization to end-users. Using a local agent on the user’s device protects the user’s privacy by keeping the data local; however, the agent takes longer to produce useful recommendations, as it does not leverage feedback from other users. This paper proposes a technique we call Privacy-Preserving Bandits (P2B), a system that updates local agents by collecting feedback from other agents in a differentially private manner. Comparisons of our proposed approach with a non-private, as well as a fully private (local), system show competitive performance on both synthetic benchmarks and real-world data. Specifically, we observed a decrease of 2.6% and 3.6% in multi-label classification accuracy, and a CTR increase of 0.0025 in online advertising, for a privacy budget ε ≈ 0.693. These results suggest P2B is an effective approach to problems arising in on-device privacy-preserving personalization.
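To give intuition for the privacy budget, note that ε ≈ 0.693 is ln 2. For a single reported bit under the classic randomized-response mechanism (used here purely as an illustration of what this ε means; P2B's actual aggregation mechanism differs), an ε of ln 2 means the true value is reported with probability e^ε/(e^ε + 1) = 2/3 and flipped otherwise.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report a binary value with local differential privacy.

    Classic randomized response: keep the true bit with probability
    e^eps / (e^eps + 1), flip it otherwise. Shown only to illustrate
    the epsilon budget; not P2B's actual mechanism.
    """
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p_keep else 1 - bit

eps = math.log(2)  # ~= 0.693, the budget reported in the paper
# With eps = ln 2, the true bit survives with probability exactly 2/3.
assert abs(math.exp(eps) / (math.exp(eps) + 1) - 2 / 3) < 1e-12
```

A server aggregating many such noisy reports can still estimate population-level statistics, which is how feedback from other agents can be shared without exposing any individual's interactions.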

Keeping Out the Masses: Understanding the Popularity and Implications of Internet Paywalls

Authors: Panagiotis Papadopoulos, Peter Snyder, Benjamin Livshits

WWW Conference 2020

Funding the production and distribution of quality online content is an open problem for content producers. Selling subscriptions to content, once considered passé, has been growing in popularity recently. Decreasing revenues from digital advertising, along with increasing ad fraud, have driven publishers to “lock” their content behind paywalls, thus denying access to non-subscribed users. How much do we know about the technology that may obliterate what we know as the free web? What is its prevalence? How does it work? Is it better than ads when it comes to user privacy? How well is the premium content of publishers protected? In this study, we aim to address all of the above by building a paywall detection mechanism and performing the first full-scale analysis of real-world paywall systems. Our results show that the prevalence of paywalls across the top sites reaches 4.2% in Great Britain, 4.1% in Australia, 3.6% in France, and 7.6% globally. We find that paywall use is especially pronounced among news sites: 33.4% of sites in the Alexa 1k ranking for global news sites have adopted paywalls. Further, we see a remarkable 25% of paywalled sites outsourcing their paywall functionality (including user tracking and access control enforcement) to third parties. Putting aside the significant privacy concerns, these paywall deployments can be easily circumvented, and are thus mostly unable to protect publisher content.
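One signal a paywall detector can use is the presence of scripts from known third-party paywall providers among a page's loaded resources. The host list below is purely illustrative (made-up stand-ins for paywall-as-a-service domains), and real detection would combine several signals, but it shows the shape of the check:

```python
from urllib.parse import urlparse

# Illustrative placeholder domains for outsourced paywall providers;
# a real detector would maintain a curated list.
KNOWN_PAYWALL_HOSTS = {"cdn.paywall-provider.example", "js.metering.example"}

def uses_outsourced_paywall(script_urls):
    """Flag a page whose scripts include a known paywall provider."""
    return any(
        urlparse(url).hostname in KNOWN_PAYWALL_HOSTS
        for url in script_urls
    )

page_scripts = [
    "https://cdn.paywall-provider.example/meter.min.js",
    "https://example-news.example/app.js",
]
assert uses_outsourced_paywall(page_scripts) is True
```

Because these providers also receive the user-tracking and access-control traffic, the same lookup doubles as a measurement of the outsourcing (and privacy) finding reported above.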

Evaluating the End-User Experience of Private Browsing Mode

Authors: Ruba Abu-Salma, Benjamin Livshits

CHI 2020

Nowadays, all major web browsers have a private browsing mode. However, the mode’s benefits and limitations are not well understood. Through the use of survey studies, prior work has found that most users are either unaware of private browsing or do not use it. Further, those who do use private browsing generally have misconceptions about what protection it provides. However, prior work has not investigated why users misunderstand the benefits and limitations of private browsing. In this work, we do so by designing and conducting a two-part user study with 20 demographically diverse participants: (1) a qualitative, interview-based study to explore users’ mental models of private browsing and its security goals; (2) a participatory design study to investigate whether existing browser disclosures (the in-browser explanations of private browsing mode) communicate the security goals of private browsing to users. We asked our participants to critique the browser disclosures of three web browsers (Brave, Firefox, and Google Chrome), and then design new ones. We find that most participants had incorrect mental models of private browsing, influencing their understanding and usage of private browsing mode. Further, we find that existing browser disclosures are not only vague, but also misleading. None of the three studied browser disclosures communicates or explains the primary security goal of private browsing. Drawing from the results of our user study, we distill a set of design recommendations that we encourage browser designers to implement and test, in order to design more effective browser disclosures.

Authors: Umar Iqbal (The University of Iowa), Peter Snyder (Brave Software), Shitong Zhu (University of California Riverside), Benjamin Livshits (Brave Software and Imperial College London), Zhiyun Qian (University of California Riverside), Zubair Shafiq (The University of Iowa)

IEEE Symposium on Security and Privacy 2020

Filter lists are widely deployed by adblockers to block ads and other forms of undesirable content in web browsers. However, these filter lists are manually curated based on informal crowdsourced feedback, which brings with it a significant number of maintenance challenges. To address these challenges, we propose a machine learning approach for automatic and effective adblocking called AdGraph. Our approach relies on information obtained from multiple layers of the web stack (HTML, HTTP, and JavaScript) to train a machine learning classifier to block ads and trackers. Our evaluation on the Alexa top-10K websites shows that AdGraph automatically and effectively blocks ads and trackers with 97.7% accuracy. Our manual analysis shows that AdGraph has better recall than filter lists, blocking 16% more ads and trackers with 65% accuracy. We also show that AdGraph is fairly robust against adversarial obfuscation by publishers and advertisers that bypasses filter lists.
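The multi-layer idea can be sketched as feature extraction over a page graph whose nodes are HTML elements, network requests, and scripts, linked by who created or triggered what. The graph encoding and the specific features below are simplified illustrations, not AdGraph's actual feature set; the extracted dictionary is what would feed the trained classifier.

```python
def graph_features(node, graph):
    """Extract structural features for one request node.

    `graph` maps node -> {"url", "third_party", "type", "parents"},
    a toy stand-in for a page graph combining HTML, HTTP, and
    JavaScript layers. Feature names are illustrative.
    """
    info = graph[node]
    parents = info.get("parents", [])
    return {
        "is_third_party": info["third_party"],
        "num_parents": len(parents),
        "created_by_script": any(graph[p]["type"] == "script" for p in parents),
        "url_has_ad_keyword": any(k in info["url"] for k in ("ads", "track")),
    }

# Toy page graph: a third-party script injects a tracking pixel.
graph = {
    "pixel": {"url": "https://t.example/track.gif", "third_party": True,
              "type": "request", "parents": ["lib.js"]},
    "lib.js": {"url": "https://t.example/lib.js", "third_party": True,
               "type": "script", "parents": []},
}
features = graph_features("pixel", graph)
assert features["created_by_script"] and features["url_has_ad_keyword"]
```

Because the features describe a resource's position and provenance in the graph rather than its URL string alone, renaming or re-hosting the resource leaves the features (and the classifier's decision) largely unchanged, which is the source of the robustness claim above.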

BatteryLab: A Distributed Platform for Battery Measurements

Authors: Matteo Varvello (Brave Software), Kleomenis Katevas (Imperial College London), Mihai Plesa (Brave Software), Hamed Haddadi (Brave Software and Imperial College London), Ben Livshits (Brave Software and Imperial College London)

HotNets 2019: Eighteenth ACM Workshop on Hot Topics in Networks

Recent advances in cloud computing have simplified the way that both software development and testing are performed. Unfortunately, this is not true for battery testing, for which state-of-the-art test-beds simply consist of one phone attached to a power meter. These test-beds have limited resources and access, and are overall hard to maintain; for these reasons, they often sit idle with no experiments to run. In this paper, we propose to share existing battery testing setups and build BatteryLab, a distributed platform for battery measurements. Our vision is to transform independent battery testing setups into vantage points of a planetary-scale measurement platform offering heterogeneous devices and testing conditions. In the paper, we design and deploy a combination of hardware and software solutions to enable BatteryLab’s vision. We then evaluate the accuracy of BatteryLab’s battery reporting, along with some system benchmarking. We also demonstrate how BatteryLab can be used by researchers to investigate a simple research question.