Recurrent Neural Networks (RNNs) are a kind of neural network that specialize in processing sequences. They’re often used in Natural Language Processing (NLP) tasks because of their effectiveness in handling text. In this post, we’ll explore what RNNs are, understand how they work, and build a real one from scratch (using only numpy) in Python.

This post assumes a basic knowledge of neural networks. My introduction to Neural Networks covers everything you’ll need to know, so I’d recommend reading that first.

Let’s get into it!

1. The Why

One issue with vanilla neural nets (and also CNNs) is that they only work with pre-determined sizes: they take fixed-size inputs and produce fixed-size outputs. RNNs are useful because they let us have variable-length sequences as both inputs and outputs. Here are a few examples of what RNNs can look like:

Inputs are red, the RNN itself is green, and outputs are blue. Source: Andrej Karpathy

This ability to process sequences makes RNNs very useful. For example:

Machine Translation (e.g. Google Translate) is done with “many to many” RNNs. The original text sequence is fed into an RNN, which then produces translated text as output.

Sentiment Analysis (e.g. Is this a positive or negative review?) is often done with “many to one” RNNs. The text to be analyzed is fed into an RNN, which then produces a single output classification (e.g. This is a positive review).

Later in this post, we’ll build a “many to one” RNN from scratch to perform basic Sentiment Analysis.

2. The How

Let’s consider a “many to many” RNN with inputs $x_0, x_1, \ldots, x_n$ that wants to produce outputs $y_0, y_1, \ldots, y_n$. These $x_i$ and $y_i$ are vectors and can have arbitrary dimensions.

RNNs work by iteratively updating a hidden state $h$, which is a vector that can also have arbitrary dimension. At any given step $t$,

The next hidden state $h_t$ is calculated using the previous hidden state $h_{t-1}$ and the next input $x_t$. The next output $y_t$ is calculated using $h_t$.

A many to many RNN

Here’s what makes an RNN recurrent: it uses the same weights for each step. More specifically, a typical vanilla RNN uses only 3 sets of weights to perform its calculations:

$W_{xh}$, used for all $x_t \to h_t$ links.

$W_{hh}$, used for all $h_{t-1} \to h_t$ links.

$W_{hy}$, used for all $h_t \to y_t$ links.

We’ll also use two biases for our RNN:

$b_h$, added when calculating $h_t$.

$b_y$, added when calculating $y_t$.

We’ll represent the weights as matrices and the biases as vectors. These 3 weights and 2 biases make up the entire RNN!

Here are the equations that put everything together:

$$h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h)$$

$$y_t = W_{hy} h_t + b_y$$

Don't skim over these equations. Stop and stare at this for a minute. Also, remember that the weights are matrices and the other variables are vectors.

All the weights are applied using matrix multiplication, and the biases are added to the resulting products. We then use tanh as an activation function for the first equation (but other activations like sigmoid can also be used).
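To make the shapes concrete: if $x_t$ is $n$-dimensional, $h_t$ is $m$-dimensional, and $y_t$ is $k$-dimensional, then $W_{xh}$ is $m \times n$, $W_{hh}$ is $m \times m$, and $W_{hy}$ is $k \times m$. Here’s a minimal numpy sketch of a single step; the specific sizes are just illustrative placeholders (they happen to match the network we’ll build later):

```python
import numpy as np

# Illustrative sizes only.
input_size, hidden_size, output_size = 18, 64, 2

# Weights are matrices, biases are (column) vectors.
Wxh = np.random.randn(hidden_size, input_size)   # used for x_t -> h_t
Whh = np.random.randn(hidden_size, hidden_size)  # used for h_{t-1} -> h_t
Why = np.random.randn(output_size, hidden_size)  # used for h_t -> y_t
bh = np.zeros((hidden_size, 1))
by = np.zeros((output_size, 1))

x_t = np.zeros((input_size, 1))      # one input vector
h_prev = np.zeros((hidden_size, 1))  # previous hidden state

# The two RNN equations for one step:
h_t = np.tanh(Wxh @ x_t + Whh @ h_prev + bh)  # shape (hidden_size, 1)
y_t = Why @ h_t + by                          # shape (output_size, 1)
```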

No idea what an activation function is? Read my introduction to Neural Networks like I mentioned. Seriously.

3. The Problem

Let’s get our hands dirty! We’ll implement an RNN from scratch to perform a simple Sentiment Analysis task: determining whether a given text string is positive or negative.

Here are a few samples from the small dataset I put together for this post:

| Text | Positive? |
| --- | --- |
| i am good | ✓ |
| i am bad | ❌ |
| this is very good | ✓ |
| this is not bad | ✓ |
| i am bad not good | ❌ |
| i am not at all happy | ❌ |
| this was good earlier | ✓ |
| i am not at all bad or sad right now | ✓ |

4. The Plan

Since this is a classification problem, we’ll use a “many to one” RNN. This is similar to the “many to many” RNN we discussed earlier, but it only uses the final hidden state to produce the one output $y$:

A many to one RNN

Each $x_i$ will be a vector representing a word from the text. The output $y$ will be a vector containing two numbers, one representing positive and the other negative. We’ll apply Softmax to turn those values into probabilities and ultimately decide between positive / negative.

Let’s start building our RNN!

5. The Pre-Processing

The dataset I mentioned earlier consists of two Python dictionaries:

data.py

```python
train_data = {
  'good': True,
  'bad': False,
  # ... more samples omitted
}

test_data = {
  'this is happy': True,
  'i am good': True,
  # ... more samples omitted
}
```

True = Positive, False = Negative

We’ll have to do some pre-processing to get the data into a usable format. To start, we’ll construct a vocabulary of all words that exist in our data:

main.py

```python
from data import train_data, test_data

# Create the vocabulary from all words in the training data.
vocab = list(set([w for text in train_data.keys() for w in text.split(' ')]))
vocab_size = len(vocab)
print('%d unique words found' % vocab_size)  # 18 unique words found
```

vocab now holds a list of all words that appear in at least one training text. Next, we’ll assign an integer index to represent each word in our vocab.

main.py

```python
word_to_idx = { w: i for i, w in enumerate(vocab) }
idx_to_word = { i: w for i, w in enumerate(vocab) }
print(word_to_idx['good'])
print(idx_to_word[0])
```

We can now represent any given word with its corresponding integer index! This is necessary because RNNs can’t understand words - we have to give them numbers.

Finally, recall that each input $x_i$ to our RNN is a vector. We’ll use one-hot vectors, which contain all zeros except for a single one. The “one” in each one-hot vector will be at the word’s corresponding integer index.

Since we have 18 unique words in our vocabulary, each $x_i$ will be an 18-dimensional one-hot vector.

main.py

```python
import numpy as np

def createInputs(text):
  '''
  Returns an array of one-hot vectors representing the words
  in the input text string.
  - text is a string
  - Each one-hot vector has shape (vocab_size, 1)
  '''
  inputs = []
  for w in text.split(' '):
    v = np.zeros((vocab_size, 1))
    v[word_to_idx[w]] = 1
    inputs.append(v)
  return inputs
```

We’ll use createInputs() later to create vector inputs to pass in to our RNN.
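As a quick sanity check (the exact position of the “one” in each vector depends on how the vocab happened to be ordered):

```python
inputs = createInputs('i am good')
print(len(inputs))           # 3 - one vector per word
print(inputs[0].shape)       # (18, 1)
print(int(inputs[0].sum()))  # 1 - exactly one nonzero entry
```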

6. The Forward Phase

It’s time to start implementing our RNN! We’ll start by initializing the 3 weights and 2 biases our RNN needs:

rnn.py

```python
import numpy as np
from numpy.random import randn

class RNN:
  # A Vanilla Recurrent Neural Network.

  def __init__(self, input_size, output_size, hidden_size=64):
    # Weights
    self.Whh = randn(hidden_size, hidden_size) / 1000
    self.Wxh = randn(hidden_size, input_size) / 1000
    self.Why = randn(output_size, hidden_size) / 1000

    # Biases
    self.bh = np.zeros((hidden_size, 1))
    self.by = np.zeros((output_size, 1))
```

Note: We're dividing by 1000 to reduce the initial variance of our weights. This is not the best way to initialize weights, but it's simple and works for this post.

We use np.random.randn() to initialize our weights from the standard normal distribution.
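If you want something a little more principled than dividing by 1000, a common alternative is Xavier/Glorot-style initialization, which scales each weight matrix by the square root of its fan-in. This is just a sketch of what that swap might look like, not what the rest of this post uses:

```python
import numpy as np

def xavier(out_size, in_size):
  # Scale by 1/sqrt(fan_in) so activations keep a reasonable variance.
  return np.random.randn(out_size, in_size) / np.sqrt(in_size)

# Hypothetical drop-in replacements for the initializations above:
# self.Whh = xavier(hidden_size, hidden_size)
# self.Wxh = xavier(hidden_size, input_size)
# self.Why = xavier(output_size, hidden_size)
```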

Next, let’s implement our RNN’s forward pass. Remember these two equations we saw earlier?

$$h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h)$$

$$y_t = W_{hy} h_t + b_y$$

Here are those same equations put into code:

rnn.py

```python
class RNN:
  # ...

  def forward(self, inputs):
    '''
    Perform a forward pass of the RNN using the given inputs.
    Returns the final output and hidden state.
    - inputs is an array of one-hot vectors with shape (input_size, 1).
    '''
    h = np.zeros((self.Whh.shape[0], 1))

    # Perform each step of the RNN
    for i, x in enumerate(inputs):
      h = np.tanh(self.Wxh @ x + self.Whh @ h + self.bh)

    # Compute the output
    y = self.Why @ h + self.by

    return y, h
```

Pretty simple, right? Note that we initialized $h$ to the zero vector for the first step, since there’s no previous $h$ we can use at that point.

Let’s try it out:

main.py

```python
def softmax(xs):
  # Applies the Softmax Function to the input array.
  return np.exp(xs) / sum(np.exp(xs))

rnn = RNN(vocab_size, 2)

inputs = createInputs('i am very good')
out, h = rnn.forward(inputs)
probs = softmax(out)
print(probs)  # roughly [[0.5], [0.5]] - the weights start tiny, so the outputs are near zero
```

If you need a refresher on Softmax, read my quick explanation of Softmax.
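One practical aside: the softmax() above exponentiates its inputs directly, which can overflow for large values. That’s fine here since our outputs stay tiny, but a numerically safer variant subtracts the max first; here’s a small sketch:

```python
import numpy as np

def stable_softmax(xs):
  # Subtracting the max doesn't change the result, but keeps np.exp() from overflowing.
  shifted = xs - np.max(xs)
  return np.exp(shifted) / np.sum(np.exp(shifted))
```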

Our RNN works, but it’s not very useful yet. Let’s change that…

Liking this introduction so far? Subscribe to my newsletter to get notified about new Machine Learning posts like this one.

7. The Backward Phase

In order to train our RNN, we first need a loss function. We’ll use cross-entropy loss, which is often paired with Softmax. Here’s how we calculate it:

$$L = -\ln(p_c)$$

where $p_c$ is our RNN’s predicted probability for the correct class (positive or negative). For example, if a positive text is predicted to be 90% positive by our RNN, the loss is:

$$L = -\ln(0.90) = 0.105$$
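In code, computing this loss for one sample is a one-liner. A small sketch, where probs and target stand for the softmax output and the correct class index we’ll use later:

```python
import numpy as np

probs = np.array([[0.90], [0.10]])  # softmax output, shape (2, 1)
target = 0                          # index of the correct class

loss = -np.log(probs[target, 0])    # cross-entropy loss
print(loss)                         # ~0.105
```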

Want a longer explanation? Read the Cross-Entropy Loss section of my introduction to Convolutional Neural Networks (CNNs).

Now that we have a loss, we’ll train our RNN using gradient descent to minimize loss. That means it’s time to derive some gradients!

⚠️ The following section assumes a basic knowledge of multivariable calculus. You can skip it if you want, but I recommend giving it a skim even if you don’t understand much. We’ll incrementally write code as we derive results, and even a surface-level understanding can be helpful.

If you want some extra background for this section, I recommend first reading the Training a Neural Network section of my introduction to Neural Networks. Also, all of the code for this post is on Github, so you can follow along there if you’d like.

Ready? Here we go.

7.1 Definitions

First, some definitions:

Let $y$ represent the raw outputs from our RNN.

Let $p$ represent the final probabilities: $p = \text{softmax}(y)$.

Let $c$ refer to the true label of a certain text sample, a.k.a. the “correct” class.

Let $L$ be the cross-entropy loss: $L = -\ln(p_c)$.

Let $W_{xh}$, $W_{hh}$, and $W_{hy}$ be the 3 weight matrices in our RNN.

Let $b_h$ and $b_y$ be the 2 bias vectors in our RNN.

7.2 Setup

Next, we need to edit our forward phase to cache some data for use in the backward phase. While we’re at it, we’ll also set up the skeleton for our backward phase. Here’s what that looks like:

rnn.py

```python
class RNN:
  # ...

  def forward(self, inputs):
    '''
    Perform a forward pass of the RNN using the given inputs.
    Returns the final output and hidden state.
    - inputs is an array of one-hot vectors with shape (input_size, 1).
    '''
    h = np.zeros((self.Whh.shape[0], 1))

    self.last_inputs = inputs
    self.last_hs = { 0: h }

    # Perform each step of the RNN
    for i, x in enumerate(inputs):
      h = np.tanh(self.Wxh @ x + self.Whh @ h + self.bh)
      self.last_hs[i + 1] = h

    # Compute the output
    y = self.Why @ h + self.by

    return y, h

  def backprop(self, d_y, learn_rate=2e-2):
    '''
    Perform a backward pass of the RNN.
    - d_y (dL/dy) has shape (output_size, 1).
    - learn_rate is a float.
    '''
    pass
```

Curious about why we’re doing this caching? Read my explanation in the Training Overview of my introduction to CNNs, in which we do the same thing.

7.3 Gradients

It’s math time! We’ll start by calculating $\frac{\partial L}{\partial y}$. We know:

$$L = -\ln(p_c) = -\ln(\text{softmax}(y_c))$$

I’ll leave the actual derivation of $\frac{\partial L}{\partial y}$ using the Chain Rule as an exercise for you 😉, but the result comes out really nice:

$$\frac{\partial L}{\partial y_i} = \begin{cases} p_i & \text{if } i \neq c \\ p_i - 1 & \text{if } i = c \end{cases}$$
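If you want to check your work on that exercise, here’s one way the derivation goes. Writing the loss in terms of the raw outputs,

$$L = -\ln(p_c) = -y_c + \ln\Big(\sum_k e^{y_k}\Big)$$

and differentiating with respect to $y_i$ gives

$$\frac{\partial L}{\partial y_i} = -\mathbb{1}[i = c] + \frac{e^{y_i}}{\sum_k e^{y_k}} = p_i - \mathbb{1}[i = c]$$

where $\mathbb{1}[\cdot]$ is 1 when its condition holds and 0 otherwise, which matches the two cases above.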

For example, if we have $p = [0.2, 0.2, 0.6]$ and the correct class is $c = 0$, then we’d get $\frac{\partial L}{\partial y} = [-0.8, 0.2, 0.6]$. This is also quite easy to turn into code:

main.py

```python
# Loop over each training example
for x, y in train_data.items():
  inputs = createInputs(x)
  target = int(y)

  # Forward
  out, _ = rnn.forward(inputs)
  probs = softmax(out)

  # Build dL/dy
  d_L_d_y = probs
  d_L_d_y[target] -= 1

  # Backward
  rnn.backprop(d_L_d_y)
```

Nice. Next up, let’s take a crack at gradients for $W_{hy}$ and $b_y$, which are only used to turn the final hidden state into the RNN’s output. We have:

$$\frac{\partial L}{\partial W_{hy}} = \frac{\partial L}{\partial y} * \frac{\partial y}{\partial W_{hy}}$$

$$y = W_{hy} h_n + b_y$$

where $h_n$ is the final hidden state. Thus,

$$\frac{\partial y}{\partial W_{hy}} = h_n$$

$$\frac{\partial L}{\partial W_{hy}} = \boxed{\frac{\partial L}{\partial y} h_n}$$

Similarly,

$$\frac{\partial y}{\partial b_y} = 1$$

$$\frac{\partial L}{\partial b_y} = \boxed{\frac{\partial L}{\partial y}}$$

We can now start implementing backprop()!

rnn.py

```python
class RNN:
  # ...

  def backprop(self, d_y, learn_rate=2e-2):
    '''
    Perform a backward pass of the RNN.
    - d_y (dL/dy) has shape (output_size, 1).
    - learn_rate is a float.
    '''
    n = len(self.last_inputs)

    # Calculate dL/dWhy and dL/dby.
    d_Why = d_y @ self.last_hs[n].T
    d_by = d_y
```

Reminder: We created self.last_hs in forward() earlier.

Finally, we need the gradients for $W_{hh}$, $W_{xh}$, and $b_h$, which are used in every step of the RNN. We have:

$$\frac{\partial L}{\partial W_{xh}} = \frac{\partial L}{\partial y} \sum_t \frac{\partial y}{\partial h_t} * \frac{\partial h_t}{\partial W_{xh}}$$

because changing $W_{xh}$ affects every $h_t$, which all affect $y$ and ultimately $L$. In order to fully calculate the gradient of $W_{xh}$, we’ll need to backpropagate through all timesteps, which is known as Backpropagation Through Time (BPTT):

Backpropagation Through Time

$W_{xh}$ is used for all $x_t \to h_t$ forward links, so we have to backpropagate back to each of those links.

Once we arrive at a given step $t$, we need to calculate $\frac{\partial h_t}{\partial W_{xh}}$:

$$h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h)$$

The derivative of $\tanh$ is well-known:

$$\frac{d \tanh(x)}{dx} = 1 - \tanh^2(x)$$

We use the Chain Rule like usual:

$$\frac{\partial h_t}{\partial W_{xh}} = \boxed{(1 - h_t^2) x_t}$$

Similarly,

$$\frac{\partial h_t}{\partial W_{hh}} = \boxed{(1 - h_t^2) h_{t-1}}$$

$$\frac{\partial h_t}{\partial b_h} = \boxed{(1 - h_t^2)}$$

The last thing we need is $\frac{\partial y}{\partial h_t}$. We can calculate this recursively:

$$\frac{\partial y}{\partial h_t} = \frac{\partial y}{\partial h_{t+1}} * \frac{\partial h_{t+1}}{\partial h_t} = \frac{\partial y}{\partial h_{t+1}} (1 - h_{t+1}^2) W_{hh}$$

We’ll implement BPTT starting from the last hidden state and working backwards, so we’ll already have $\frac{\partial y}{\partial h_{t+1}}$ by the time we want to calculate $\frac{\partial y}{\partial h_t}$! The exception is the last hidden state, $h_n$:

$$\frac{\partial y}{\partial h_n} = W_{hy}$$

We now have everything we need to finally implement BPTT and finish backprop():

rnn.py

```python
class RNN:
  # ...

  def backprop(self, d_y, learn_rate=2e-2):
    '''
    Perform a backward pass of the RNN.
    - d_y (dL/dy) has shape (output_size, 1).
    - learn_rate is a float.
    '''
    n = len(self.last_inputs)

    # Calculate dL/dWhy and dL/dby.
    d_Why = d_y @ self.last_hs[n].T
    d_by = d_y

    # Initialize dL/dWhh, dL/dWxh, and dL/dbh to zero.
    d_Whh = np.zeros(self.Whh.shape)
    d_Wxh = np.zeros(self.Wxh.shape)
    d_bh = np.zeros(self.bh.shape)

    # Calculate dL/dh for the last h.
    d_h = self.Why.T @ d_y

    # Backpropagate through time.
    for t in reversed(range(n)):
      # An intermediate value: dL/dh_t * (1 - h_t^2)
      temp = ((1 - self.last_hs[t + 1] ** 2) * d_h)

      # dL/dbh += dL/dh_t * (1 - h_t^2)
      d_bh += temp

      # dL/dWhh += dL/dh_t * (1 - h_t^2) * h_{t-1}
      d_Whh += temp @ self.last_hs[t].T

      # dL/dWxh += dL/dh_t * (1 - h_t^2) * x_t
      d_Wxh += temp @ self.last_inputs[t].T

      # Next dL/dh = dL/dh_t * (1 - h_t^2) * Whh
      d_h = self.Whh @ temp

    # Clip to prevent exploding gradients.
    for d in [d_Wxh, d_Whh, d_Why, d_bh, d_by]:
      np.clip(d, -1, 1, out=d)

    # Update weights and biases using gradient descent.
    self.Whh -= learn_rate * d_Whh
    self.Wxh -= learn_rate * d_Wxh
    self.Why -= learn_rate * d_Why
    self.bh -= learn_rate * d_bh
    self.by -= learn_rate * d_by
```

A few things to note:

We’ve merged $\frac{\partial L}{\partial y} * \frac{\partial y}{\partial h}$ into $\frac{\partial L}{\partial h}$ for convenience.

We’re constantly updating a d_h variable that holds the most recent $\frac{\partial L}{\partial h_{t+1}}$, which we need to calculate $\frac{\partial L}{\partial h_t}$.

After finishing BPTT, we np.clip() gradient values that are below -1 or above 1. This helps mitigate the exploding gradient problem, which is when gradients become very large due to having lots of multiplied terms. Exploding or vanishing gradients are quite problematic for vanilla RNNs - more complex RNNs like LSTMs are generally better-equipped to handle them.

Once all gradients are calculated, we update weights and biases using gradient descent.

We’ve done it! Our RNN is complete.
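Before moving on, it can be reassuring to sanity-check the gradients numerically. Here’s a minimal sketch of such a check, assuming it runs in main.py after rnn, softmax(), and createInputs() exist; sample_loss and the choice of text, class, and weight entry are just illustrative. It compares the analytic gradient of one entry of $W_{hy}$ against a central-difference estimate:

```python
def sample_loss(rnn, inputs, target):
  # Cross-entropy loss for a single sample, with no weight updates.
  out, _ = rnn.forward(inputs)
  probs = softmax(out)
  return -np.log(probs[target, 0])

inputs = createInputs('i am very good')
target = 1  # pretend the correct class is "positive"

# Analytic gradient: dL/dWhy = dL/dy @ h_n.T, so entry [0, 0] is (dL/dy)[0] * h_n[0].
out, h_n = rnn.forward(inputs)
d_y = softmax(out)
d_y[target] -= 1
analytic = (d_y @ h_n.T)[0, 0]

# Numerical (central-difference) estimate of the same gradient.
eps = 1e-5
rnn.Why[0, 0] += eps
loss_plus = sample_loss(rnn, inputs, target)
rnn.Why[0, 0] -= 2 * eps
loss_minus = sample_loss(rnn, inputs, target)
rnn.Why[0, 0] += eps  # restore the original weight

numeric = (loss_plus - loss_minus) / (2 * eps)
print(analytic, numeric)  # the two values should agree closely
```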

8. The Culmination

It’s finally the moment we’ve been waiting for - let’s test our RNN!

First, we’ll write a helper function to process data with our RNN:

main.py

```python
import random

def processData(data, backprop=True):
  '''
  Returns the RNN's loss and accuracy for the given data.
  - data is a dictionary mapping text to True or False.
  - backprop determines if the backward phase should be run.
  '''
  items = list(data.items())
  random.shuffle(items)

  loss = 0
  num_correct = 0

  for x, y in items:
    inputs = createInputs(x)
    target = int(y)

    # Forward
    out, _ = rnn.forward(inputs)
    probs = softmax(out)

    # Calculate loss / accuracy
    loss -= np.log(probs[target])
    num_correct += int(np.argmax(probs) == target)

    if backprop:
      # Build dL/dy
      d_L_d_y = probs
      d_L_d_y[target] -= 1

      # Backward
      rnn.backprop(d_L_d_y)

  return loss / len(data), num_correct / len(data)
```

Now, we can write the training loop:

main.py

```python
for epoch in range(1000):
  train_loss, train_acc = processData(train_data)

  if epoch % 100 == 99:
    print('--- Epoch %d' % (epoch + 1))
    print('Train:\tLoss %.3f | Accuracy: %.3f' % (train_loss, train_acc))

    test_loss, test_acc = processData(test_data, backprop=False)
    print('Test:\tLoss %.3f | Accuracy: %.3f' % (test_loss, test_acc))
```

Running main.py should output something like this:

```
--- Epoch 100
Train:  Loss 0.688 | Accuracy: 0.517
Test:   Loss 0.700 | Accuracy: 0.500
--- Epoch 200
Train:  Loss 0.680 | Accuracy: 0.552
Test:   Loss 0.717 | Accuracy: 0.450
--- Epoch 300
Train:  Loss 0.593 | Accuracy: 0.655
Test:   Loss 0.657 | Accuracy: 0.650
--- Epoch 400
Train:  Loss 0.401 | Accuracy: 0.810
Test:   Loss 0.689 | Accuracy: 0.650
--- Epoch 500
Train:  Loss 0.312 | Accuracy: 0.862
Test:   Loss 0.693 | Accuracy: 0.550
--- Epoch 600
Train:  Loss 0.148 | Accuracy: 0.914
Test:   Loss 0.404 | Accuracy: 0.800
--- Epoch 700
Train:  Loss 0.008 | Accuracy: 1.000
Test:   Loss 0.016 | Accuracy: 1.000
--- Epoch 800
Train:  Loss 0.004 | Accuracy: 1.000
Test:   Loss 0.007 | Accuracy: 1.000
--- Epoch 900
Train:  Loss 0.002 | Accuracy: 1.000
Test:   Loss 0.004 | Accuracy: 1.000
--- Epoch 1000
Train:  Loss 0.002 | Accuracy: 1.000
Test:   Loss 0.003 | Accuracy: 1.000
```

Not bad from an RNN we built ourselves. 💯
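Once training finishes, you can use the model to classify new text directly. Here’s a minimal sketch (predict is just a hypothetical helper; every word in the input must already be in the vocabulary for createInputs() to work):

```python
def predict(text):
  # Classify a text string as positive (True) or negative (False).
  inputs = createInputs(text)
  out, _ = rnn.forward(inputs)
  probs = softmax(out)
  return bool(np.argmax(probs))  # index 1 -> positive, 0 -> negative

print(predict('i am very good'))  # hopefully True
print(predict('i am very bad'))   # hopefully False
```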

Want to try or tinker with this code yourself? Run this RNN in your browser. It’s also available on Github.

9. The End

That’s it! In this post, we completed a walkthrough of Recurrent Neural Networks, including what they are, how they work, why they’re useful, how to train them, and how to implement one. There’s still much more you can do, though:

I write a lot about Machine Learning, so subscribe to my newsletter if you’re interested in getting future ML content from me.

Thanks for reading!