The problem of very deep neural networks

In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.

The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the lower layers) to very complex features (at the deeper layers). However, using a deeper network doesn’t always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent unbearably slow.

During training, we might therefore see the magnitude (or norm) of the gradient for the earlier layers decrease to zero very rapidly as training proceeds.
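As a rough illustration (not part of the assignment code), here is a minimal sketch, assuming TensorFlow 2.x, that builds a deep "plain" network and prints per-layer gradient norms; the depth, layer sizes, and data are arbitrary choices for demonstration:

import tensorflow as tf

# A deliberately deep plain network with sigmoid activations, a setup
# that is prone to vanishing gradients.
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(64, activation='sigmoid') for _ in range(30)]
    + [tf.keras.layers.Dense(1)]
)

x = tf.random.normal((32, 16))
y = tf.random.normal((32, 1))

with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y))
grads = tape.gradient(loss, model.trainable_variables)

# Variables alternate (kernel, bias); look at the kernel gradients.
# The earliest layers' norms are typically orders of magnitude smaller.
for i, g in enumerate(grads[::2]):
    print(f"layer {i}: grad norm = {float(tf.norm(g)):.2e}")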

We are now going to solve this problem by building a Residual Network!

Building a Residual Network

In ResNets, a “shortcut” or a “skip connection” allows the gradient to be directly back-propagated to earlier layers:

The image on the left shows the “main path” through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, we can form a very deep network.
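To make the idea concrete, here is a minimal sketch of a two-layer block with a skip connection, assuming a Keras-style API; the filter counts are hypothetical, and the input is assumed to already have 64 channels so the addition is valid:

from tensorflow.keras.layers import Conv2D, Activation, Add

def tiny_residual_block(X):
    X_shortcut = X                               # save the block's input
    X = Conv2D(64, (3, 3), padding='same')(X)    # main path, first layer
    X = Activation('relu')(X)
    X = Conv2D(64, (3, 3), padding='same')(X)    # main path, second layer
    X = Add()([X, X_shortcut])                   # skip connection: add the input back
    return Activation('relu')(X)                 # ReLU is applied after the addition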

Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are the same or different. We are going to implement both of them.

1 – The identity block

The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say a[l]) has the same dimension as the output activation (say a[l+2]). To flesh out the different steps of what happens in a ResNet’s identity block, here is an alternative diagram showing the individual steps:

The upper path is the “shortcut path.” The lower path is the “main path.” In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step.

In this exercise, we’ll actually implement a slightly more powerful version of this identity block, in which the skip connection “skips over” 3 hidden layers rather than 2 layers. It looks like this:

Here are the individual steps.

First component of main path:

The first CONV2D has F1 filters of shape (1,1) and a stride of (1,1). Its padding is “valid” and its name should be conv_name_base + '2a'. Use 0 as the seed for the random initialization.

The first BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2a'.

Then apply the ReLU activation function. This has no name and no hyperparameters.

Second component of main path:

The second CONV2D has F2 filters of shape (f,f) and a stride of (1,1). Its padding is “same” and its name should be conv_name_base + '2b'. Use 0 as the seed for the random initialization.

The second BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2b'.

Then apply the ReLU activation function. This has no name and no hyperparameters.

Third component of main path:

The third CONV2D has F3 filters of shape (1,1) and a stride of (1,1). Its padding is “valid” and its name should be conv_name_base + '2c'. Use 0 as the seed for the random initialization.

The third BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2c'. Note that there is no ReLU activation function in this component.

Final step:

The shortcut value (the block’s input) and the output of the main path are added together.

Then apply the ReLU activation function. This has no name and no hyperparameters.

Now let’s implement the ResNet identity block.

To implement the Conv2D step: See reference

To implement BatchNorm: See reference (axis: Integer, the axis that should be normalized (typically the channels axis))

For the activation, use: Activation('relu')(X)

To add the value passed forward by the shortcut: See reference

from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, Add
from tensorflow.keras.initializers import glorot_uniform

def identity_block(X, f, filters, stage, block):
    """
    Implementation of the identity block as defined in Figure 3

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network

    Returns:
    X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value. You'll need this later to add back to the main path.
    X_shortcut = X

    # First component of main path
    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid',
               name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    # Second component of main path
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same',
               name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (note: no ReLU in this component)
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid',
               name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # Final step: add the shortcut back to the main path, then apply ReLU
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    return X
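As a quick sanity check (assuming TensorFlow 2.x eager execution, where Keras layers can be called directly on tensors), the identity block should preserve the input’s shape when F3 matches the input’s channel count; the shapes below are arbitrary:

X_in = tf.random.normal((3, 4, 4, 6))
out = identity_block(X_in, f=2, filters=[2, 4, 6], stage=1, block='a')
print(out.shape)   # (3, 4, 4, 6): same shape as the input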

2 – The convolutional block

The ResNet “convolutional block” is the second type of block. We use this type of block when the input and output dimensions don’t match up. The difference from the identity block is that there is a CONV2D layer in the shortcut path:

The CONV2D layer in the shortcut path is used to resize the input X to a different dimension, so that the dimensions match up in the final addition that adds the shortcut value back to the main path. For example, to reduce the activation’s height and width by a factor of 2, we can use a 1×1 convolution with a stride of 2. The CONV2D layer on the shortcut path does not use any non-linear activation function; its main role is to apply a (learned) linear function that resizes the input, so that the dimensions match up for the later addition step.
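Here is a short sketch of that shape arithmetic with hypothetical dimensions, assuming a Keras-style API:

from tensorflow.keras.layers import Conv2D, Input

# A 1x1 CONV2D with stride 2 halves the height and width and can change
# the channel count, so the shortcut matches the main path's output.
X = Input(shape=(32, 32, 256))
X_short = Conv2D(512, (1, 1), strides=(2, 2), padding='valid')(X)
print(X_short.shape)   # (None, 16, 16, 512)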

The details of the convolutional block are as follows.

First component of main path:

The first CONV2D has F1 filters of shape (1,1) and a stride of (s,s). Its padding is “valid” and its name should be conv_name_base + '2a'.

The first BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2a'.

Then apply the ReLU activation function. This has no name and no hyperparameters.

Second component of main path:

The second CONV2D has F2 filters of shape (f,f) and a stride of (1,1). Its padding is “same” and its name should be conv_name_base + '2b'.

The second BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2b'.

Then apply the ReLU activation function. This has no name and no hyperparameters.

Third component of main path:

The third CONV2D has F3 filters of shape (1,1) and a stride of (1,1). Its padding is “valid” and its name should be conv_name_base + '2c'.

The third BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2c'. Note that there is no ReLU activation function in this component.

Shortcut path:

The CONV2D has F3 filters of shape (1,1) and a stride of (s,s). Its padding is “valid” and its name should be conv_name_base + '1'.

The BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '1'.

Final step:

The shortcut and the main path values are added together.

Then apply the ReLU activation function. This has no name and no hyperparameters.

Let’s now implement the convolutional block.

Conv Hint

BatchNorm Hint (axis: Integer, the axis that should be normalized (typically the features axis))

For the activation, use: Activation('relu')(X)

Addition Hint

# (uses the same Keras imports as identity_block above)
def convolutional_block(X, f, filters, stage, block, s=2):
    """
    Implementation of the convolutional block as defined in Figure 4

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network
    s -- Integer, specifying the stride to be used

    Returns:
    X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value
    X_shortcut = X

    ##### MAIN PATH #####
    # First component of main path
    X = Conv2D(F1, (1, 1), strides=(s, s), padding='valid',
               name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    # Second component of main path
    X = Conv2D(F2, (f, f), strides=(1, 1), padding='same',
               name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (note: no ReLU in this component)
    X = Conv2D(F3, (1, 1), strides=(1, 1), padding='valid',
               name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    ##### SHORTCUT PATH ####
    X_shortcut = Conv2D(F3, (1, 1), strides=(s, s), padding='valid',
                        name=conv_name_base + '1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    return X
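As a quick sanity check (again assuming TensorFlow 2.x eager execution, with arbitrary shapes), the convolutional block with s=2 should halve the spatial dimensions and map the channel count to F3:

X_in = tf.random.normal((3, 4, 4, 6))
out = convolutional_block(X_in, f=2, filters=[2, 4, 8], stage=1, block='a', s=2)
print(out.shape)   # (3, 2, 2, 8): height/width halved, channels now F3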

3 – Building our first ResNet model (50 layers)

We now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. “ID BLOCK” in the diagram stands for “Identity block,” and “ID BLOCK x3” means we should stack 3 identity blocks together.

The details of this ResNet-50 model are: