What drew you to the role at Jigsaw?



I left Iran, where I was born, at a very young age and grew up in the UK. I remember going back at 19 and being astounded by the level of censorship in the country, both offline and online. Terms like “world wide web” sound ironic to people living in countries where information isn’t free. So I was drawn to work at a company that tries to make information available to everyone.


Jigsaw deals with online threats, but team members also visit conflict zones as part of their work. What’s the goal of these trips?



We want to make sure that we’re designing technology that’s based on an understanding of human experiences. A big part of our methodology is sending team members out into the field to interview people who are on the front lines of the challenges we try to tackle, whether that’s repression or conflict. One of our field trips was to Iraq, where we sat face to face with people who had joined and then left the terrorist group known as ISIS. We wanted to understand the radicalization process, both the human elements but also the role that technology played. We heard about how people discovered ISIS, how people were recruited, and how technology was useful to the logistics of their travel [to join the group].

What’s the most important lesson you’ve learned about ISIS’s ability to leverage the internet for recruitment?



ISIS has pretty much mastered every medium, from radio to leafleting. When it comes to the internet, they’ve really understood the power of microtargeting. They create content in a long list of languages, starting with Arabic and English but going on and on, even to Chinese and Hebrew. The one that really blew my mind was a video in sign language. So they are creating very localized recruiting materials and using the algorithms available through social media to distribute that material and reach people in all corners of the world.

Some of the content terrorist networks post online clearly needs to be taken down. But how do we deal with more subtle forms of propaganda?



There are definitely categories of content that you want to make sure don’t see the light of day, like beheadings and bomb-making tutorials. Then there’s a whole host of other content that isn’t advocating violence outright but can help advance people down the path toward it. Our research aimed to understand which recruiting themes got people to sign up to ISIS. It turns out they generally weren’t drawn by beheadings; instead, they were convinced that the group was religiously legitimate and that the cause of jihad was their religious duty. ISIS was shaping the conversation, asking questions to which [it] had seemingly compelling answers.

“I don’t think anyone imagined that we’d have the level of intimidation and hate speech online that we currently do, and we have to try really hard to make sure that we’re getting ahead of it.”

How have you tried to counter this online radicalization?



One of the takeaways for us was that timing is critical. By the time potential recruits are sold on the ideology, it’s too late to influence them; you have to get to them when they are sympathetic but not yet sold. So we turned to targeted online advertising. We’ve designed something called the “redirect method,” which uses targeted ads to reach people who are sympathetic to ISIS but not yet committed to it, and redirects them to online videos from moderate clerics, defectors, and citizen journalists that could avert their radicalization.



What results has this method delivered?



The pilot, which was eight weeks long and ran in Arabic and English, reached 320,000 people. It had an exceptionally high click-through rate on the ads and drove half a million minutes of video watch time. Given that people don’t spend more than a few seconds on a video they’re not interested in, that’s encouraging. After our pilot, YouTube integrated the method into its search results. The open-source methodology has also been replicated by others, like the Gen Next Foundation, and we continue to support some new deployments.

Jigsaw also tries to tackle online censorship. How bad is this problem?



If you look at Freedom House’s index on this, they say the situation gets worse every year. That’s really discouraging, because it’s antithetical to what the people developing the internet want for it. The situation is so volatile. When you have civil unrest, as there was recently in Iran, you see the dilemma repressive governments face over whether or not to shut down the internet, because [censoring] it inflames the population and draws public attention and outrage. But if those in power feel threatened, they will censor.

What can companies like Alphabet do to counter this?



We feel a really big responsibility to help people get access to information, especially where there’s conflict and repression. One of our products, Project Shield, focuses on protecting independent media around the world from a type of censorship attack called distributed denial of service, or DDoS, which knocks websites offline by flooding them with traffic. The idea came from our fieldwork: our team spoke to a Kenyan election-monitoring group whose site had gone down on the day of a key election. Project Shield takes advantage of Google’s enormous infrastructure and world-class DDoS mitigation capabilities. Websites don’t have to host with Google or Jigsaw; the service vets traffic before it arrives at their servers, so we can spot the traffic that’s malicious and filter it out.
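The interview doesn’t describe Project Shield’s internals, but the general idea of a reverse proxy vetting traffic before it reaches an origin server can be sketched with a simple rate-limiting filter. Everything here is a hypothetical illustration, not Jigsaw’s actual logic: the window size, request limit, and per-IP sliding-window approach are all assumptions chosen for clarity.

```python
from collections import defaultdict, deque

# Hypothetical sketch: a proxy in front of the origin server decides,
# per request, whether to forward or drop it. One crude signal for
# flood traffic is request rate per client IP (assumed thresholds).
WINDOW_SECONDS = 10    # sliding-window length (assumed value)
MAX_REQUESTS = 100     # requests allowed per IP in the window (assumed)

_history = defaultdict(deque)  # ip -> timestamps of recent requests


def allow_request(ip, now):
    """Return True to forward the request to the origin server,
    False to drop it as likely flood traffic."""
    q = _history[ip]
    # Discard timestamps that have aged out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False  # over the limit: treat as attack traffic
    q.append(now)
    return True
```

A burst of 150 requests from one IP at the same instant would see the first 100 forwarded and the remaining 50 dropped, while requests from other IPs pass through unaffected. Real DDoS mitigation layers many more signals on top of this (geography, protocol anomalies, reputation), but the filter-before-origin shape is the same.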

Let’s switch to the problem of fake news. How can we better identify state-sponsored disinformation efforts?