Police will use artificial intelligence to predict real-life hate crimes based on Twitter comments in the first trial of its kind in the UK.

The AI system, which was developed by Cardiff University researchers, will be used to match hateful comments on the social media site to locations in the UK in an effort to prevent violence offline.

Researchers found that as the number of "hate tweets" – those deemed to be antagonistic in terms of race, ethnicity or religion – made from one location increased, so did the number of racially and religiously aggravated crimes, including violence, harassment and criminal damage.

Police plan to use this technology from October 31 to track racist and hateful comments targeting religious and ethnic minorities across the country to measure sentiment after Brexit deadline day.

Professor Matthew Williams of Cardiff University said that the system will be trialled by the National Police Chiefs' Council (NPCC)'s national online hate crime hub for the first time this month in a "toxic" political environment.

He said: "Brexit is one of our test cases to see if hate speech will spike. There has been talk of riots on the streets, and there is an expectation that tensions will bubble up around that date."

His AI system collated data from Twitter and the Met Police between August 2013 and August 2014 and proved for the first time that hate-filled comments online can signal tension in local communities and precede a rise in hate crimes.

The national hate crime hub will use the information gathered online to improve the service police can offer to victims, reduce the burden on frontline officers and help bring more offenders to justice.

The Twitter AI tool will be used as part of the police's "hate speech dashboard", which gives forces the ability to assess potential threats to communities at greater scale.