Open-source Xenophobic Tweet Classifier

We will periodically release an open-source AI model on a relevant topic for your personal projects or hackathons. We start with a xenophobic tweet detector.

By Abraham Starosta and Tanner Gilligan

Introduction

When building new applications in today's quickly changing world, machine learning is often necessary to provide users with powerful insights. Unfortunately, for many of the people developing these applications, AI can be difficult to implement and expensive to outsource. Several of our tech friends have said that while working on personal projects or at hackathons, they would have liked to include machine learning. Typically, even if they knew how to implement the model, there wasn't an existing dataset they could use for their application, and creating a dataset from scratch would have been a very laborious process.

To help address this gap in the developer and AI community, we have decided to create and release a series of publicly available classifiers that anyone can use. We will periodically release an open-source AI model on an interesting and relevant topic, and users can either download the model/data for local use, or simply use our API.

For our first release, we created a model that detects xenophobic tweets.

Xenophobia on Twitter

Detecting hate speech in general is a very important problem and AI will be instrumental in fighting it on social media platforms. Despite the AI advancements made in recent years, we still have a long way to go on this problem. Louise Matsakis of Wired explains that only 38% of hate-speech posts that Facebook removes are detected by AI. This is mainly because there are so many types of hate speech, and the language used changes rapidly.

One type of hate speech that has been a hot topic in the news over the last week is xenophobia: prejudice against immigrants and people from other countries. Like many social media sites, Twitter has seen an influx of tweets on the topic, including many that could be considered hate speech. Luckily for us, Twitter also provides an API for developers to download tweets, so it was a great source of relevant data.

Here are a few example tweets that we’re looking to detect:

Implying all illegal immigrants are terrorists (ISIS loving)

Saying immigrants are invading the US

Implying that a congresswoman is undocumented just because of her looks or country of origin

Criteria

Once we had downloaded a substantial number of tweets, we needed to go through the process of creating a labeled dataset. To do this, we used a combination of manual annotation and weak supervision to label a total of 10,181 tweets, following a process similar to the one in our earlier blog post. At a high level, we used topic detection and keyword searches to label data, and used that data to inform a weakly-supervised model. Some examples of concepts we used for identifying xenophobic tweets include:

Using pejorative phrases like “illegal aliens” or “illegal criminals”

Telling immigrants to “go back to their country”

Saying immigrants are “invading” the US

Saying immigrants are anti-American

Saying there should be mass deportations of undocumented immigrants

Saying immigrants come to the US just to take advantage of the system

And these are the main criteria for non-xenophobic tweets:

Calling out racist comments or chants

Talking about the history of racism

Talking about legally acquiring citizenship

Saying American immigrants are also American

Various other topics that aren’t xenophobic
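The keyword portion of the labeling process above can be sketched in a few lines. This is a minimal illustration, not our actual rule set: the phrase lists and the abstain convention are made up for the example.

```python
# Keyword-based weak labeling: assign a noisy label when a tweet matches
# a concept phrase, and abstain (None) otherwise. Phrase lists here are
# illustrative only, not the full rules used for the real dataset.
XENOPHOBIC_PHRASES = [
    "illegal aliens", "illegal criminals",
    "go back to their country", "invading",
]
NON_XENOPHOBIC_PHRASES = [
    "history of racism", "racist chants", "acquiring citizenship",
]

def weak_label(tweet):
    """Return 1 (xenophobic), 0 (non-xenophobic), or None (abstain)."""
    text = tweet.lower()
    if any(p in text for p in XENOPHOBIC_PHRASES):
        return 1
    if any(p in text for p in NON_XENOPHOBIC_PHRASES):
        return 0
    return None  # left for manual annotation or the weakly-supervised model
```

Abstaining on non-matches is what keeps the weak labels high-precision; the weakly-supervised model then generalizes beyond the keyword rules.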

Model Architecture

After creating our dataset, the next step was to build a model from it. We used a TF-IDF vectorizer feeding a logistic regression classifier. Tweets are split using a tokenizer, and only the top 2,000 tokens are used as features. All steps were done using Scikit-Learn, so you can try training a model for yourself if interested.
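That architecture can be reproduced in a few lines of scikit-learn. The four-tweet training set below is a toy stand-in for the real labeled data, just to make the sketch runnable:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for the real labeled dataset (1 = xenophobic).
tweets = [
    "You illegal alien go back home!",
    "Proud to welcome new citizens today",
    "Illegal criminals are invading the US",
    "Studying the history of racism in class",
]
labels = [1, 0, 1, 0]

# Same shape as the article's model: TF-IDF features capped at
# 2000 tokens, feeding a logistic regression classifier.
model = make_pipeline(
    TfidfVectorizer(max_features=2000),
    LogisticRegression(),
)
model.fit(tweets, labels)

# predict_proba returns one row per tweet with a probability per class.
probs = model.predict_proba(["send them back to their country"])
```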

Quantitative evaluation

The following graph shows the precision-recall curve, with an area under the curve of 0.82 (validated with ~150 examples).

Precision-Recall Curve

The model returns the probability that a tweet is xenophobic. Therefore, if you want high precision (you don't want to make false-positive mistakes), make sure you use a high probability threshold of 0.9 or more. In other words, the probability the model computes should be higher than 0.9 before you classify the tweet as xenophobic.

You can also use the table below to choose your threshold based on your desired precision and recall:

Precision and Recall at different probability thresholds
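A table like this can be generated from any held-out labeled set with scikit-learn's `precision_recall_curve`. The labels and scores below are made up for illustration; substitute your own validation data:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Made-up validation labels and model probabilities, for illustration only.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.95, 0.2, 0.7, 0.6])

# precision/recall have one more entry than thresholds: the final
# (1.0 precision, 0.0 recall) point has no associated threshold.
precision, recall, thresholds = precision_recall_curve(y_true, y_prob)
for p, r, t in zip(precision, recall, thresholds):
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```

Pick the smallest threshold whose precision meets your tolerance for false positives.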

Instructions for using the model and downloading the dataset

You can copy and paste the code from our Google Colab notebook into your own local or Colab notebook. To make it easier, we've also included the code here.

Step 1: Initial tweet filtering

We only want to run the model on tweets that match the search queries we used to build our tweet dataset. That way we’ll be safer from data mismatch. Otherwise, the model can get confused when it sees tweets that look significantly different from the ones it was trained on. You can use the function below:

def does_tweet_match(tweet):
    """
    Check that tweet has one of the search queries used.
    """
    search_terms = ["illegal alien",
                    "illegal immigrant",
                    "illegal immigration",
                    "send her back",
                    "send them back",
                    "illegal criminal"]
    for s in search_terms:
        if s in tweet.lower():
            return True
    return False

Step 2 — option 1: Using our API

If a tweet matches one of our search queries, then we can call the API with the following code. If a tweet doesn’t match one of the search queries then the API will return an error message saying “Tweet doesn’t match search terms.”

import json
import requests


def xenophobe_tweet_api(tweet_content, threshold=0.9):
    """
    Call API.
    """
    if not does_tweet_match(tweet_content):
        raise Exception("Tweet doesn't match search terms.")

    url = "https://rk56kry0qj.execute-api.us-west-2.amazonaws.com/default/xenophobic-tweet"
    payload = {}
    payload['text'] = tweet_content
    response = requests.request(
        "POST", url,
        data=json.dumps(payload),
        headers={'Content-Type': "application/json"}
    )

    prob_is_xenophobe = float(response.text)
    return prob_is_xenophobe > threshold, prob_is_xenophobe


xenophobe_tweet_api("You illegal alien go back home!")
>>> (True, 0.9002771303057671)

Note for Windows users: because the models were serialized on a Mac, deserializing them on a Windows machine can cause problems. Therefore, if you have a Windows machine, we highly suggest you use the API.

Step 2 — option 2: Using models locally

Or, you can download the models and use them locally.

import urllib.request
from sklearn.externals import joblib  # on scikit-learn >= 0.23, use `import joblib` instead

# Download the models
url = 'https://sculpt-public-models.s3-us-west-2.amazonaws.com/xenophobia_tfidf.joblib'
urllib.request.urlretrieve(url, './xenophobia_tfidf.joblib')
url = 'https://sculpt-public-models.s3-us-west-2.amazonaws.com/xenophobia_logreg.joblib'
urllib.request.urlretrieve(url, './xenophobia_logreg.joblib')

tfidf = joblib.load("./xenophobia_tfidf.joblib")
logistic_reg = joblib.load("./xenophobia_logreg.joblib")


def classify_tweet_locally(tweet_content, threshold=0.95):
    if not does_tweet_match(tweet_content):
        raise Exception("Tweet doesn't match search terms")
    featurized_tweet = tfidf.transform([tweet_content])
    prob_is_xenophobic = logistic_reg.predict_proba(featurized_tweet)
    prob_is_xenophobe = prob_is_xenophobic[0][0]  # probability for the first (only) tweet
    return prob_is_xenophobe > threshold, prob_is_xenophobe


classify_tweet_locally("You illegal alien go back home!")
>>> (True, 0.9998621759905209)

If you get an SSL error when downloading the models, run this command (on macOS):

/Applications/Python\ 3.7/Install\ Certificates.command

Downloading the dataset

If you want, you can also download the zipped labeled training and test datasets here to train your own model and hopefully beat our logistic regression.

Project ideas

If you'd like some inspiration, here are a few project / hackathon ideas the model could help with:

Visualize xenophobic tweets on a map (could be done with folium in a Jupyter notebook)

Where in the USA is there more xenophobia?

Find different topics in xenophobic tweets (you can find our data below)

Train a BERT model with our data

Does it perform better than our logistic regression?

A small app that shows a few xenophobic tweets every day

A dashboard that tracks the number of xenophobic tweets per day and plots a graph

An app that calls out users who said potentially xenophobic tweets

How to build your own Twitter dataset

You need to make a free Twitter developer account and get your access token and consumer key. Then, if you run our script overnight, you can download tweets from the past 7 days that match a query. You can also use GNIP if you'd like access to a real-time stream of tweets, although they might charge for that.
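To give a feel for the download loop, here is a sketch of building request parameters for Twitter's standard v1.1 search endpoint (the one limited to roughly the past 7 days). The endpoint and parameter names follow Twitter's documented API, but the helper function itself is ours, and actually fetching pages requires your own authenticated HTTP calls:

```python
# Standard v1.1 search endpoint (returns tweets from ~the past 7 days).
SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"

def build_search_params(query, count=100, max_id=None):
    """Build query parameters for one page of search results.

    To page backwards through older tweets, pass the lowest tweet id
    seen so far, minus 1, as max_id on the next call.
    """
    params = {
        "q": query,
        "count": count,            # API maximum per request is 100
        "result_type": "recent",
        "tweet_mode": "extended",  # return full 280-character text
    }
    if max_id is not None:
        params["max_id"] = max_id
    return params

# Example: first page, then a hypothetical next page of older tweets.
first = build_search_params('"illegal alien" OR "send her back"')
older = build_search_params('"illegal alien" OR "send her back"',
                            max_id=1152345678901234566)
```

An overnight script would loop on these parameters, sleeping between requests to stay under the endpoint's rate limit.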

Conclusion

We hope you get to try our model and help fight hate speech. If you would like to partner on building a model please email abraham@sculptintel.com.

Disclaimer: xenophobia is a complex and often emotional topic. We’re computer science experts, not xenophobia experts. We built this model based on our personal research and we’re not making any political statement whatsoever.