
Named Entity Recognition (NER) is the information extraction task of identifying and classifying mentions of locations, quantities, monetary values, organizations, people, and other named entities within a text. It is a core component of many natural language processing (NLP) applications.

Objective:

In this blog, I will review the technologies and steps that are involved in creating a custom Named Entity Recognition model using spaCy’s Named Entity Recognition library. (https://spacy.io/usage/linguistic-features#named-entities) I will annotate a custom data set containing submissions from the Subreddit called HardwareSwap (reddit.com/r/hardwareswap). Then, I will use the data to train my model to label entities in the submissions, such as product, price, location, zip code, product condition, and URL. Below, you can see an example output of my fully trained model with predictions for the target labels.

Fully trained model predictions

Recurrent Neural Networks:

Recurrent Neural Networks (RNNs) are a family of neural networks that operate on sequential data. They take as input a sequence of vectors (x1, x2, …, xn) and return another sequence that represents some information about every step in the initial sequence. Although RNNs can, in theory, learn long dependencies, in practice they fail to do so and tend to be biased toward their most recent inputs. Long Short-Term Memory networks (LSTMs) were designed to combat this issue by incorporating a memory cell that captures long-range dependencies. Using several gates, LSTMs control the proportion of the input to give to the memory cell, as well as the proportion of the previous state that should be forgotten. (Source: https://arxiv.org/pdf/1603.01360.pdf)

For a given sequence containing n words (x1, x2, . . . , xn), each represented as a d-dimensional vector, an LSTM computes a representation vector (ht) of the left-hand context of the sentence at every word t. Naturally, generating a representation vector of the right-hand context should also yield useful information, and this can be achieved using a second LSTM that reads the same sequence in reverse. We will refer to the former as the “forward LSTM” and the latter as the “backward LSTM.” These are two distinct networks with different parameters. A pair consisting of a forward LSTM and a backward LSTM is referred to as a bidirectional LSTM (Graves and Schmidhuber, 2005).

Embed, Encode, Attend, and Predict

“When people think about machine learning improvements, they usually think first of efficiency and accuracy, but the most important dimension is generality. If you want to write a program to flag abusive posts on a social media platform, you should be able to generalize the problem to “I need to take text and predict a class ID.” It shouldn’t matter whether you’re flagging abusive posts or tagging emails that propose meetings; if two tasks take the same type of input and produce the same type of output, then we should be able to reuse the same model code and get different behavior by plugging in different data — like playing different games that use the same engine.” “Embed, encode, attend, predict” is Matthew Honnibal’s conceptual framework for deep learning in natural language processing. I highly recommend checking out the full post at https://explosion.ai/blog/deep-learning-formula-nlp .

Using the “embed, encode, attend, and predict” playbook, we can make accurate statistical predictions about named entities.

Embed:

Most neural network models begin by breaking a sequence of text into individual words, phrases, or whole sentences: a process known as tokenizing. Each token is then mapped to an integer ID. For example, if we are analyzing text character by character in an ASCII encoding, there are 256 possible values: “a” maps to ID 97, “b” to 98, “c” to 99, and so on. The “embed” step then looks each ID up in a table of learned dense vectors, so that similar tokens can end up with similar representations.
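A minimal sketch of the embed step, using NumPy with a random embedding table for illustration (in a real model the table entries are learned, and this is not spaCy's actual implementation):

```python
import numpy as np

np.random.seed(0)

# Embedding table: one 4-dimensional vector per possible ASCII value (256 total).
# These vectors are random here; in a trained model they are learned parameters.
EMBED_DIM = 4
embedding_table = np.random.rand(256, EMBED_DIM)

def embed(text):
    """Map each character to its integer ID, then look up its dense vector."""
    ids = [ord(ch) for ch in text]  # e.g. 'a' -> 97 in ASCII
    return np.array([embedding_table[i] for i in ids])

vectors = embed("abc")
print(vectors.shape)  # (3, 4): one 4-dimensional vector per character
```

The same idea scales up to word-level tokens: a vocabulary of IDs indexing into a much larger embedding table.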

Encode:

One of the many problems in NLP is how to understand a word’s context. In some cases, a word can have completely different meanings depending on the surrounding words. For example, the word “crane” can be used in the sentence “That bird is a crane,” referencing an animal, or “They had to use a crane to lift the object,” referencing a piece of machinery. In these two sentences, the spelling of “crane” is the same, but its meaning is different.

To solve this problem, we can encode the sequence into a matrix in which each row is a vector representing one token (in word, phrase, or sentence form) in the context of the rest of the sentence.

The technology used for this purpose is a bidirectional RNN, which requires two steps. The first step is a forward pass (taking in words from left to right), and the second is a backward pass (taking in words from right to left). Unlike humans, computers need to take in token input from both directions to get the full context. To get the full vector for each token, the system combines the forward-pass and backward-pass vectors, typically by concatenating them (adding them element-wise is another option).

The important point here is that a full vector represents a token in the context of the phrase, sentence, or paragraph. This is a big issue that bidirectional RNNs have been able to solve.
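The encode step can be sketched with a toy recurrence in NumPy. This is a deliberately simplified stand-in for an LSTM (no gates or memory cell) and not spaCy's actual encoder; it only illustrates how forward and backward passes combine into one context-aware vector per token:

```python
import numpy as np

def simple_rnn_pass(vectors, W):
    """One recurrent pass: each state mixes the current input with the previous state."""
    state = np.zeros(vectors.shape[1])
    states = []
    for v in vectors:
        state = np.tanh(W @ state + v)
        states.append(state)
    return np.array(states)

np.random.seed(0)
tokens = np.random.rand(5, 8)       # 5 tokens, each an 8-dimensional embedding
W = np.random.rand(8, 8) * 0.1      # shared recurrent weights (random here)

forward = simple_rnn_pass(tokens, W)               # left-to-right context
backward = simple_rnn_pass(tokens[::-1], W)[::-1]  # right-to-left context

# Concatenate both directions so each row encodes the token in full context.
encoded = np.concatenate([forward, backward], axis=1)
print(encoded.shape)  # (5, 16): one context-aware vector per token
```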

Attend:

The “attend” step takes the matrix that was produced in the “encode” step, which combines the vectors output by the forward and backward passes. This matrix is shrunk down into a single vector so that it can be passed to a standard feed-forward network. Unlike the bidirectional RNN in the encode step, a feed-forward network has no recurrence: it simply maps its input vector to an output.

When the matrix is reduced to a vector, some information is necessarily lost. This is why the context vector is so important: because it tells the algorithm which information can be discarded. In a book, for example, a word on page 3 is most likely not going to have a huge contextual impact on a word on page 7. The “attend” step tells the algorithm which information to let go of to avoid information overload.
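The attend step can be sketched as an attention-weighted sum: score each token against a context vector, softmax the scores, and take the weighted average. This is an illustrative sketch in NumPy (the context vector is random here; in a trained model it is learned), not spaCy's actual attention layer:

```python
import numpy as np

def attend(matrix, context):
    """Reduce a (tokens x dims) matrix to one vector, weighting each token
    by its relevance to the context vector."""
    scores = matrix @ context                        # one relevance score per token
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over tokens
    return weights @ matrix                          # weighted sum: a single vector

np.random.seed(0)
encoded = np.random.rand(5, 16)   # output of the encode step: 5 token vectors
context = np.random.rand(16)      # learned query vector (random for illustration)

summary = attend(encoded, context)
print(summary.shape)  # (16,): the whole sequence reduced to one vector
```

Tokens with low weights are effectively the information the model "lets go of."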

Predict:

The output layer will be of a fixed size, depending on how many labels you are predicting. For example, if you are predicting whether a sentence is about cats or dogs, the output layer will have two labels: “Cat” and “Dog.” Once the text has been reduced to a single vector, the model will predict the target label.
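The predict step for the cat/dog example can be sketched as a linear layer plus softmax over the two labels. The weights here are random placeholders, not a trained model:

```python
import numpy as np

def predict(vector, W, labels):
    """Project the summary vector onto one score per label, then softmax."""
    scores = W @ vector
    probs = np.exp(scores) / np.exp(scores).sum()
    return labels[int(np.argmax(probs))], probs

np.random.seed(0)
summary = np.random.rand(16)   # the single vector from the attend step
W = np.random.rand(2, 16)      # one row of weights per output label
labels = ["Cat", "Dog"]

label, probs = predict(summary, W, labels)
print(label, probs.round(3))   # predicted label and its probability distribution
```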

As important as the statistical model is, it is equally important to have training data that cover the entities you are trying to label, and this is one of the reasons why Named Entity Recognition has been slow to catch on. To make the best use of a Named Entity Recognition model, it is extremely important to ensure that the training data are relevant and up to date. In order to get the proper training data for specific problems, you will need training sets that are annotated with the entities you are trying to label.

Project:

In order to get the proper training data for this project, I have been using the Python Reddit API Wrapper (PRAW) and storing all submissions from the Subreddit r/hardwareswap in a SQL server. I have reviewed these steps in a previous article that you can find here. At the time of writing this, I have about 16,000 submissions, which is plenty of data to start training with. As stated above, a crucial part of solving a Named Entity Recognition problem is having good training data.

The code for the below steps can be found on spaCy’s website at https://spacy.io/usage/training#ner, and on my GitHub at https://github.com/Landstein/Product-Named-Entity-Recognition. On spacy.io they do a great job of providing examples and use cases.

For this project, I will start with a blank English NER model. The below code sets the model to None, meaning that there is no existing model yet; output_dir indicates the location where the model will be saved; and n_iter indicates how many times the model will train on the data.

# sets the model, output directory and training iterations
from pathlib import Path

model = None
output_dir = Path("/Users/eric/Projects/Product-Named-Entity-Recognition/model1")
n_iter = 100

In this example, because there is no model, a blank English model will be created.

# Checks to see if there is a current model or no model.
# In this case I will be starting with a blank model
import spacy

if model is not None:
    ner_model = spacy.load(model)  # load existing spaCy model
    print("Loaded model '%s'" % model)
else:
    ner_model = spacy.blank('en')  # create blank Language class
    print("Created blank 'en' model")

Since this is a supervised learning problem, the training data will need to be annotated manually. This requires you to go submission by submission and label each span of text that you want your model to identify as a specific entity.

First Annotated Text:

x = data[0]

TRAIN_DATA = [
    (x, {
        'entities': [(8, 28, 'PRODUCT'), (74, 89, 'PRODUCT'), (96, 128, 'CONDITION'), (29, 61, 'URL'), (0, 6, 'LOCATION')]
    })
]

The annotating process is pretty straightforward. I set the first piece of training data to be equal to x (which can be seen just after the left-hand parenthesis). Then, I just need to identify the indexes an entity falls between and input the label. For example, the PRODUCT entities fall between indexes 8–28 and 74–89, as marked in the picture above.
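Counting character offsets by hand is error-prone; str.find can compute them instead. A small sketch using a made-up submission text (the real data comes from r/hardwareswap):

```python
# Hypothetical submission text for illustration only.
text = "[USA-CA] [H] RTX 2070 Super [W] PayPal"

def span(text, phrase, label):
    """Return a (start, end, label) tuple for the first occurrence of phrase."""
    start = text.find(phrase)
    return (start, start + len(phrase), label)

entities = [span(text, "USA-CA", "LOCATION"),
            span(text, "RTX 2070 Super", "PRODUCT")]
print(entities)  # [(1, 7, 'LOCATION'), (13, 27, 'PRODUCT')]
```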

After that, I am ready to train the model.

The code below will train on the data using the blank English NER model I created above:

import random
from tqdm import tqdm

# create the NER pipeline component if the blank model does not have one yet
if 'ner' not in ner_model.pipe_names:
    ner = ner_model.create_pipe('ner')
    ner_model.add_pipe(ner, last=True)
else:
    ner = ner_model.get_pipe('ner')

# add labels from the annotations
for _, annotations in TRAIN_DATA:
    for ent in annotations.get('entities'):
        ner.add_label(ent[2])

# get names of other pipes to disable them during training
other_pipes = [pipe for pipe in ner_model.pipe_names if pipe != 'ner']
with ner_model.disable_pipes(*other_pipes):  # only train NER
    optimizer = ner_model.begin_training()
    for itn in range(n_iter):
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in tqdm(TRAIN_DATA):
            ner_model.update(
                [text],          # batch of texts
                [annotations],   # batch of annotations
                drop=0.5,        # dropout
                sgd=optimizer,   # callable to update weights
                losses=losses)
        print(losses)
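Once training finishes, the model can be saved to the output_dir defined earlier and reloaded for testing. A minimal sketch using spaCy's to_disk/load, with a blank model and a short illustrative path standing in for the trained ner_model so it runs on its own:

```python
import spacy
from pathlib import Path

ner_model = spacy.blank('en')    # stand-in for the trained model above
output_dir = Path("model1")      # illustrative; the project uses the full output_dir

output_dir.mkdir(parents=True, exist_ok=True)
ner_model.to_disk(output_dir)    # save the pipeline to disk

reloaded = spacy.load(output_dir)  # load it back for testing
doc = reloaded("[USA-CA] [H] RTX 2070 Super [W] PayPal")
for ent in doc.ents:               # prints nothing for a blank model;
    print(ent.label_, ent.text)    # a trained model prints its predictions
```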

Testing the model:

The below code loads in the now-trained model based on the one annotated submission, x. Since I annotated only one submission, I can’t expect any significant results, but the model was able to identify the Location:

To build on the model, I will annotate a second submission, y, and add it to the training data.

Submission y:

All training data, now including submissions x and y:

After I’ve trained the model a second time with both submissions, it identifies Location, Price, URL, and Username correctly. However, it still does not identify Product correctly:

In order to make the model’s predictions more accurate, I will need to give it more training data.

Annotating submission z:

Retraining the model with training data x, y, and z:

After I’ve trained the model a third time with submissions x, y, and z, it labels Location, Price, and URL correctly. For the most part, the Product labels are correct, but they also capture a few other words. Additionally, the Condition label does capture the condition, but it also captures the specs, which is incorrect.

The above tests were all run on the training data. To see how well the model is really performing at this point, it is necessary to run a test on some unseen data. To do this, I will run the model on the following text:

Predictions:

Considering the limited training data, it is impressive what this custom spaCy NER model is capable of labeling correctly. I recommend training with at least a few hundred annotated texts before running a model.

Annotating the data is a long and tedious process, and there are tools out there that will make the process more efficient. After spending a lot of time annotating the data from r/hardwareswap and improving my custom model, the model is able to make predictions that are very accurate. Below are a few examples:

For additional examples, and to follow my progress with this project, check out the GitHub repo.

Sources: