
LOS ANGELES -- Researchers have undertaken a three-year experiment to learn whether police prevent hate crimes by monitoring racist banter on social media.

British researchers working with the Rand Corp. will monitor millions of tweets related to the Los Angeles area to identify patterns and markers indicating that prejudice-motivated violence is about to occur.

The researchers then will compare the data against records of reported violent acts. The U.S. Department of Justice is investing $600,000 in research by Cardiff University's Social Data Science Lab, which has been at the forefront of predictive social media models.

Cardiff University professor Matthew Williams said the research is designed to enable authorities to predict when and where hate crime is likely to occur and deploy law enforcement resources to prevent it.

"The insights provided by our work will help U.S. localities to design policies to address specific hate-crime issues unique to their jurisdiction and allow service providers to tailor their services to the needs of victims, especially if those victims are members of an emerging category of hate-crime targets."

His lab's previous research in the United Kingdom found that Twitter data can be used to identify areas where hate speech is occurring but where no hate crimes have been committed. This can be useful, researchers said, in neighborhoods with many new immigrants, who are unlikely to report such crimes because of fear of deportation.

In 2012, an estimated 293,800 nonfatal violent and property hate crimes occurred in the United States, according to the Bureau of Justice Statistics. About 60 percent of those were not reported, the Justice Department found.

Of course, there is a big difference between someone spouting off on Twitter or Snapchat and an actual hate crime.

"It is a great idea in the abstract. But it is not the panacea you might think," said Brian Levin, executive director of Cal State San Bernardino's Center on Hate and Extremism. "The problem is the correlation and reliability. ... There are many different forms of social media."

Levin, who has tracked Middle Eastern terrorist groups and local neo-Nazi organizations, also noted that some hate groups don't advertise their work on social media.

"Local tensions may arise on the fly and be absent from social media," he said. "Some segments of the community shun social media ... so examining social media as a predictor can be a bit like having one screwdriver and sometimes it doesn't work."

Predictive policing already is in use at the Los Angeles Police Department and other agencies. The LAPD uses a predictive policing algorithm to deploy officers to locations where prior crime patterns strongly suggest that similar crimes may occur. As crime during the past two decades has dropped significantly across the nation and in Los Angeles, police commanders are increasingly looking for any edge they can get in cutting crime.

L.A. County is particularly useful because its huge volume of social media activity produces data sets large enough to make predictive models more accurate than traditional crime analysis and trend-chasing, said Pete Burnap of Cardiff University's School of Computer Science and Informatics.

"Predictive policing is a proactive law enforcement model that has become more common partially due to the advent of advanced analytics such as data mining and machine-learning methods," he said.

Traditional predictive police modeling has paired historical crime records with geographical locations, and then made a probability calculation to predict future crimes. But Twitter and social media-based models work in real time. The algorithms look for particular language that is likely to indicate the imminent occurrence of a crime.

British researchers began looking at online hate after the killing of British Army soldier Lee Rigby by Islamic extremists on a London street in 2013. Analysts collected Twitter data and tested a text classifier that distinguished between hateful and antagonistic responses focusing on race, ethnicity and religion.

The British researchers are building a new hate-speech algorithm designed specifically for Los Angeles. They said that's necessary because of the linguistic and cultural differences between L.A. and London.

As part of the effort, they will feed 12 months of Los Angeles Police Department hate-crime data into the model, Williams said.

"We know that official reports of hate crime from police probably underestimate how common hate crime really is -- but we don't really know by how much, or which types of hate crimes are most seriously underreported," said Megan Cahill, a senior researcher at Rand Corp. "Using Twitter data from Los Angeles County as a test case, this research will help create better knowledge about hate crime. And, we hope, it will ultimately contribute to more hate crime prevention by police and other agencies alike."

-- Los Angeles Times