
Last Updated on August 7, 2019

You cannot feed raw text directly into deep learning models.

Text data must be encoded as numbers to be used as input or output for machine learning and deep learning models.

The Keras deep learning library provides some basic tools to help you prepare your text data.

In this tutorial, you will discover how you can use Keras to prepare your text data.

After completing this tutorial, you will know:

About the convenience methods that you can use to quickly prepare text data.

The Tokenizer API that can be fit on training data and used to encode training, validation, and test documents.

The range of 4 different document encoding schemes offered by the Tokenizer API.


Let’s get started.

Tutorial Overview

This tutorial is divided into 4 parts; they are:

1. Split Words with text_to_word_sequence
2. Encoding with one_hot
3. Hash Encoding with hashing_trick
4. Tokenizer API


Split Words with text_to_word_sequence

A good first step when working with text is to split it into words.

Words are called tokens and the process of splitting text into tokens is called tokenization.

Keras provides the text_to_word_sequence() function that you can use to split text into a list of words.

By default, this function automatically does 3 things:

- Splits words by space (split=' ').
- Filters out punctuation (filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n').
- Converts text to lowercase (lower=True).

You can change any of these defaults by passing arguments to the function.
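To make those defaults concrete, here is a pure-Python sketch of the behavior with the same three arguments exposed, so you can see the effect of overriding them. Note that to_word_sequence() is a hypothetical stand-in written for this tutorial, not the Keras implementation.

```python
# A pure-Python sketch of the default behavior; to_word_sequence is a
# hypothetical stand-in, not the Keras implementation.
def to_word_sequence(text,
                     filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
                     lower=True,
                     split=' '):
    if lower:
        text = text.lower()
    # replace every filtered character with the split character
    text = text.translate({ord(c): split for c in filters})
    return [w for w in text.split(split) if w]

print(to_word_sequence('The quick brown fox jumped over the lazy dog.'))
# keep punctuation and case by overriding the defaults
print(to_word_sequence('Well done!', filters='', lower=False))
# → ['Well', 'done!']
```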

Below is an example of using the text_to_word_sequence() function to split a document (in this case a simple string) into a list of words.

```python
from keras.preprocessing.text import text_to_word_sequence
# define the document
text = 'The quick brown fox jumped over the lazy dog.'
# tokenize the document
result = text_to_word_sequence(text)
print(result)
```

Running the example creates an array containing all of the words in the document. The list of words is printed for review.

```
['the', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog']
```

This is a good first step, but further pre-processing is required before you can work with the text.

Encoding with one_hot

It is popular to represent a document as a sequence of integer values, where each word in the document is represented as a unique integer.

Keras provides the one_hot() function that you can use to tokenize and integer encode a text document in one step. The name suggests that it will create a one-hot encoding of the document, which is not the case.

Instead, the function is a wrapper for the hashing_trick() function described in the next section. The function returns an integer encoded version of the document. The use of a hash function means that there may be collisions and not all words will be assigned unique integer values.
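The collision behavior is easy to see with a toy example. The hash function below is NOT the one Keras uses; it is a deliberately simple stand-in that shows why squeezing many words into a small number of integer buckets forces some words to share a value.

```python
# A toy illustration of hash collisions (this is NOT the Keras hash
# function): with only a few buckets, distinct words inevitably map to
# the same integer.
def toy_hash(word, vocab_size):
    # hypothetical hash: sum of character codes, modulo the bucket count
    return sum(ord(c) for c in word) % (vocab_size - 1) + 1

words = ['the', 'quick', 'brown', 'fox', 'jumped', 'over', 'lazy', 'dog']
buckets = {w: toy_hash(w, vocab_size=5) for w in words}
print(buckets)  # 8 words squeezed into 4 buckets: collisions are guaranteed
```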

As with the text_to_word_sequence() function in the previous section, the one_hot() function will make the text lower case, filter out punctuation, and split words based on white space.

In addition to the text, the vocabulary size (total words) must be specified. This could be the total number of words in the document, or more if you intend to encode additional documents that contain additional words. The size of the vocabulary defines the hashing space from which words are hashed. Ideally, this should be larger than the vocabulary by some percentage (perhaps 25%) to minimize the number of collisions. By default, Python's hash() function is used, although, as we will see in the next section, alternate hash functions can be specified when calling the hashing_trick() function directly.

We can use the text_to_word_sequence() function from the previous section to split the document into words and then use a set to represent only the unique words in the document. The size of this set can be used to estimate the size of the vocabulary for one document.

For example:

```python
from keras.preprocessing.text import text_to_word_sequence
# define the document
text = 'The quick brown fox jumped over the lazy dog.'
# estimate the size of the vocabulary
words = set(text_to_word_sequence(text))
vocab_size = len(words)
print(vocab_size)
```

We can put this together with the one_hot() function and one hot encode the words in the document. The complete example is listed below.

The vocabulary size is increased by about one-third (multiplied by 1.3) to minimize collisions when hashing words.

```python
from keras.preprocessing.text import one_hot
from keras.preprocessing.text import text_to_word_sequence
# define the document
text = 'The quick brown fox jumped over the lazy dog.'
# estimate the size of the vocabulary
words = set(text_to_word_sequence(text))
vocab_size = len(words)
print(vocab_size)
# integer encode the document
result = one_hot(text, round(vocab_size*1.3))
print(result)
```

Running the example first prints the size of the vocabulary as 8. The encoded document is then printed as an array of integer encoded words. Note that your specific integers may differ, as Python's hash() function is randomized between runs.

```
8
[5, 9, 8, 7, 9, 1, 5, 3, 8]
```

Hash Encoding with hashing_trick

A limitation of integer and count-based encodings is that they must maintain a vocabulary of words and their mapping to integers.

An alternative to this approach is to use a one-way hash function to convert words to integers. This avoids the need to keep track of a vocabulary, which is faster and requires less memory.
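The idea can be sketched in a few lines of pure Python. The snippet below assumes an md5-based bucket assignment similar in spirit to what hashing_trick() does; hash_word() is a hypothetical helper written for this tutorial, not a Keras function. Notice that no word-to-integer dictionary is ever stored.

```python
from hashlib import md5

# A minimal sketch of the hashing trick, assuming an md5-based bucket
# assignment; hash_word is a hypothetical helper, not a Keras function.
def hash_word(word, vocab_size):
    digest = int(md5(word.encode('utf-8')).hexdigest(), 16)
    # bucket 0 is skipped, mirroring how Keras reserves index 0
    return digest % (vocab_size - 1) + 1

words = ['the', 'quick', 'brown', 'fox']
print([hash_word(w, 10) for w in words])
```

Because md5 is deterministic, the same word always maps to the same bucket, on any machine and across runs.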

Keras provides the hashing_trick() function that tokenizes and then integer encodes the document, just like the one_hot() function. It provides more flexibility, allowing you to specify the hash function as either 'hash' (the default) or other hash functions such as the built-in md5 function or your own function.

Below is an example of integer encoding a document using the md5 hash function.

```python
from keras.preprocessing.text import hashing_trick
from keras.preprocessing.text import text_to_word_sequence
# define the document
text = 'The quick brown fox jumped over the lazy dog.'
# estimate the size of the vocabulary
words = set(text_to_word_sequence(text))
vocab_size = len(words)
print(vocab_size)
# integer encode the document
result = hashing_trick(text, round(vocab_size*1.3), hash_function='md5')
print(result)
```

Running the example prints the size of the vocabulary and the integer encoded document.

We can see that the use of a different hash function results in consistent, but different, integers for words than the one_hot() function in the previous section.

```
8
[6, 4, 1, 2, 7, 5, 6, 2, 6]
```

Tokenizer API

So far we have looked at one-off convenience methods for preparing text with Keras.

Keras provides a more sophisticated API for preparing text that can be fit and reused to prepare multiple text documents. This may be the preferred approach for large projects.

Keras provides the Tokenizer class for preparing text documents for deep learning. The Tokenizer must be constructed and then fit on either raw text documents or integer encoded text documents.

For example:

```python
from keras.preprocessing.text import Tokenizer
# define 5 documents
docs = ['Well done!',
        'Good work',
        'Great effort',
        'nice work',
        'Excellent!']
# create the tokenizer
t = Tokenizer()
# fit the tokenizer on the documents
t.fit_on_texts(docs)
```

Once fit, the Tokenizer provides 4 attributes that you can use to query what has been learned about your documents:

- word_counts: A dictionary of words and their counts.
- word_docs: A dictionary of words and how many documents each appeared in.
- word_index: A dictionary of words and their uniquely assigned integers.
- document_count: An integer count of the total number of documents that were used to fit the Tokenizer.

For example:

```python
# summarize what was learned
print(t.word_counts)
print(t.document_count)
print(t.word_index)
print(t.word_docs)
```
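To make these attributes concrete, here is a rough pure-Python sketch of how they could be computed in one pass over the five documents. The fit_on_texts() function below is a hypothetical stand-in written for this tutorial, not the Keras source.

```python
from collections import Counter

# A rough pure-Python sketch (not the Keras source) of how the four
# Tokenizer attributes could be computed in one pass over the documents.
def fit_on_texts(docs):
    word_counts = Counter()
    word_docs = Counter()
    for doc in docs:
        # strip the punctuation used in these documents and lowercase
        words = doc.lower().replace('!', '').split()
        word_counts.update(words)
        word_docs.update(set(words))
    # the most frequent word gets index 1; index 0 is reserved by Keras
    ranked = sorted(word_counts, key=word_counts.get, reverse=True)
    word_index = {w: i + 1 for i, w in enumerate(ranked)}
    return word_counts, word_docs, word_index, len(docs)

docs = ['Well done!', 'Good work', 'Great effort', 'nice work', 'Excellent!']
word_counts, word_docs, word_index, document_count = fit_on_texts(docs)
print(word_counts['work'], word_docs['work'], word_index['work'], document_count)
# → 2 2 1 5
```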

Once the Tokenizer has been fit on training data, it can be used to encode documents in the train or test datasets.

The texts_to_matrix() function on the Tokenizer can be used to create one vector per document provided as input. The length of each vector is the total size of the vocabulary plus one, as index 0 is reserved.

This function provides a suite of standard bag-of-words model text encoding schemes that can be provided via a mode argument to the function.

The modes available include:

- 'binary': Whether or not each word is present in the document. This is the default.
- 'count': The count of each word in the document.
- 'tfidf': The Term Frequency-Inverse Document Frequency (TF-IDF) scoring for each word in the document.
- 'freq': The frequency of each word as a ratio of words within each document.
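The four modes can be sketched in pure Python. This is a hypothetical re-implementation for illustration, not the Keras source: it assumes already-cleaned, whitespace-delimited text, and its tf-idf line is a simplified weighting (Keras uses a slightly different formula).

```python
import math
from collections import Counter

# A pure-Python sketch of the four texts_to_matrix modes, assuming a
# word_index like the one the Tokenizer learns (index 0 is reserved).
def texts_to_matrix(docs, word_index, mode='binary'):
    tokenized = [doc.lower().split() for doc in docs]
    # document frequency, needed for tf-idf
    df = Counter()
    for words in tokenized:
        df.update(set(words))
    matrix = []
    for words in tokenized:
        counts = Counter(words)
        row = [0.0] * (len(word_index) + 1)
        for word, count in counts.items():
            i = word_index.get(word)
            if i is None:
                continue  # words not seen at fit time are ignored
            if mode == 'binary':
                row[i] = 1.0
            elif mode == 'count':
                row[i] = float(count)
            elif mode == 'freq':
                row[i] = count / len(words)
            elif mode == 'tfidf':
                # simplified tf-idf; Keras uses a slightly different weighting
                row[i] = count * math.log(len(docs) / df[word])
        matrix.append(row)
    return matrix

word_index = {'work': 1, 'well': 2, 'done': 3, 'good': 4}
docs = ['well done', 'good work', 'work work']
print(texts_to_matrix(docs, word_index, mode='count'))
# → [[0.0, 0.0, 1.0, 1.0, 0.0], [0.0, 1.0, 0.0, 0.0, 1.0], [0.0, 2.0, 0.0, 0.0, 0.0]]
```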

We can put all of this together with a worked example.

```python
from keras.preprocessing.text import Tokenizer
# define 5 documents
docs = ['Well done!',
        'Good work',
        'Great effort',
        'nice work',
        'Excellent!']
# create the tokenizer
t = Tokenizer()
# fit the tokenizer on the documents
t.fit_on_texts(docs)
# summarize what was learned
print(t.word_counts)
print(t.document_count)
print(t.word_index)
print(t.word_docs)
# integer encode documents
encoded_docs = t.texts_to_matrix(docs, mode='count')
print(encoded_docs)
```

Running the example fits the Tokenizer with 5 small documents. The details of the fit Tokenizer are printed. Then the 5 documents are encoded using a word count.

Each document is encoded as a 9-element vector, with one position for each of the 8 words in the vocabulary (plus the reserved position at index 0) and the chosen encoding scheme value in each position. In this case, a simple word count mode is used.

```
OrderedDict([('well', 1), ('done', 1), ('good', 1), ('work', 2), ('great', 1), ('effort', 1), ('nice', 1), ('excellent', 1)])
5
{'work': 1, 'effort': 6, 'done': 3, 'great': 5, 'good': 4, 'excellent': 8, 'well': 2, 'nice': 7}
{'work': 2, 'effort': 1, 'done': 1, 'well': 1, 'good': 1, 'great': 1, 'excellent': 1, 'nice': 1}
[[ 0.  0.  1.  1.  0.  0.  0.  0.  0.]
 [ 0.  1.  0.  0.  1.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  1.  1.  0.  0.]
 [ 0.  1.  0.  0.  0.  0.  0.  1.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  1.]]
```

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Summary

In this tutorial, you discovered how you can use the Keras API to prepare your text data for deep learning.

Specifically, you learned:

About the convenience methods that you can use to quickly prepare text data.

The Tokenizer API that can be fit on training data and used to encode training, validation, and test documents.

The range of 4 different document encoding schemes offered by the Tokenizer API.

Do you have any questions?

Ask your questions in the comments below and I will do my best to answer.
