Thanks to Google, we can simply download the Inception v3 pre-trained model and set up our image classifier. We don't have to spend a huge amount of time training this model from scratch. The pre-trained model can classify 1,000 different objects, and we can also add new classes or categories to it. Retraining the TensorFlow Inception model takes far less time than training it from scratch.

The reason we don't train new models from scratch is that doing so can take many days or even a few weeks on modest hardware. Retraining the TensorFlow Inception model, by contrast, takes a few hours to a day. The more classes you add, the longer the retraining takes. To follow this tutorial, it will help if you are familiar with some of TensorFlow's concepts and workings. If you aren't, you can read more in this Simple guide to Tensorflow's concepts and jargons.

Also, if you haven't already, check out the tutorial on setting up a TensorFlow image classifier with a pre-trained model. It lets beginners test what a classifier built with deep learning can do.

How does the Retraining work?

By now, you know that retraining is a quick process on a machine with a decent GPU. To understand why it is quick, you need to know about TensorFlow bottlenecks. The second-to-last layer of the neural network is trained to output different values depending on the image it receives. This layer summarizes enough information for the next layer, which performs the actual classification. This second-to-last layer is called the bottleneck.

TensorFlow computes all the bottleneck values as the first step of training and stores them, since they are needed in every training iteration. Computing these values is fast because TensorFlow uses the existing pre-trained model for the forward pass. By default, 4,000 training iterations are performed; this can be varied depending on the accuracy required. Computing the bottleneck values takes the largest share of time in a retraining run.
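The caching idea can be sketched as follows. This is a minimal illustration, not the actual retrain.py code: the "model" here is a stand-in matrix instead of Inception, and the file names are hypothetical.

```python
import os
import numpy as np

def extract_bottleneck(image, model):
    # Stand-in for running the image through Inception up to the
    # penultimate (bottleneck) layer; here it is just a matrix product.
    return model @ image

def get_cached_bottleneck(image_path, image, model, cache_dir="bottlenecks"):
    # Compute the bottleneck vector once, save it to disk, and reuse
    # it in every subsequent training iteration.
    os.makedirs(cache_dir, exist_ok=True)
    cache_file = os.path.join(cache_dir, os.path.basename(image_path) + ".npy")
    if os.path.exists(cache_file):
        return np.load(cache_file)          # cache hit: no forward pass needed
    bottleneck = extract_bottleneck(image, model)
    np.save(cache_file, bottleneck)         # computed once, read many times
    return bottleneck
```

Because the expensive forward pass through the pre-trained network happens only once per image, the remaining 4,000 iterations only train the small final layer on the cached vectors.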

Step 1: Download the pre-trained model and the required scripts

I have combined all the required files into a git repository. Download it with the following commands and navigate into the folder.

git clone https://github.com/akshaypai/tfClassifier
cd tfClassifier

This folder contains the scripts required to retrain the classifier and also the pre-trained model which we will be using.

Step 2: Set up the image folder

This step involves setting up the folder structure so that TensorFlow can pick up the classes easily. Let's assume you want to train five new flower types: "roses", "tulips", "dandelions", "mayflower", and "marigold". To create the folder structure:

1. Create one folder for each flower type. The name of the folder is the name of the class (in this case, that particular flower).
2. Add all the images of each flower into its respective folder, e.g. all images of roses go into the "roses" folder.
3. Put all these folders into a parent folder, say "flowers".

At the end of this exercise, you will have the following structure:

~/flowers
~/flowers/roses/img1.jpg
~/flowers/roses/img2.jpg
...
~/flowers/tulips/tulips_img1.jpg
~/flowers/tulips/tulips_img2.jpg
~/flowers/tulips/tulips_img3.jpg
...

This will repeat for all the folders. The folder structure is now ready.
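The structure above can also be created with a short script. This is just a convenience sketch; the class names come from this example, and the base path is up to you.

```python
import os

FLOWER_CLASSES = ["roses", "tulips", "dandelions", "mayflower", "marigold"]

def make_class_folders(base_dir="flowers", class_names=FLOWER_CLASSES):
    # One sub-folder per class; the folder name becomes the class label
    # that retrain.py picks up.
    for name in class_names:
        os.makedirs(os.path.join(base_dir, name), exist_ok=True)
    return sorted(os.listdir(base_dir))
```

After running this, copy each flower's images into its folder before starting the retraining.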


Step 3: Running the re-training script

Use the following command to run the script.

python retrain.py --model_dir ./inception --image_dir ~/flowers --output_graph ./output --how_many_training_steps 500

Command-line arguments:

--model_dir – The location of the pre-trained model. It is stored under the inception folder of the git repository.

--image_dir – The path of the image folder created in Step 2.

--output_graph – The location where the newly trained graph is stored.

--how_many_training_steps – The number of training iterations to perform. By default, this is 4000. Finding the right number is a trial-and-error process; once you find the best model, you can start using it.

Your new model can perform even better if you use these additional parameters to improve accuracy:

--random_crop – Random cropping lets training focus on the main part of the image.

--random_scale – Similar to cropping, but randomly scales up the image size.

--flip_left_right – Flipping mirrors the image horizontally; with this option, a flip is applied randomly to training images.

The sizes of the validation and test sets can also be controlled, either as percentages or as absolute numbers, and the distortions include random brightness as well. Distortions are applied to account for the various anomalies that occur during real-world detection. Which distortions suit you depends on the type and classes of images you are using, which makes parameter selection and tuning a trial-and-error exercise.
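Putting these together, a retraining run with distortions enabled looks like this. The percentage values are illustrative, not recommendations; tune them for your own data.

```shell
python retrain.py --model_dir ./inception --image_dir ~/flowers \
    --output_graph ./output \
    --how_many_training_steps 500 \
    --flip_left_right \
    --random_crop 10 --random_scale 10 --random_brightness 10 \
    --testing_percentage 10 --validation_percentage 10
```

Note that enabling distortions disables the bottleneck cache for the distorted images, so training becomes noticeably slower; use them only when they measurably improve accuracy.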

Update 1: Added code to test the retrained model

Once the model is trained, you will have two output files: the "output.pb" graph file and the "labels.txt" labels file. To test the model, use the new script in the git repository named "retrain_model_classifier.py" and follow the procedure below.

Make sure retrain_model_classifier.py is in the same folder as the retrained model and labels file, then run the following command.

python retrain_model_classifier.py <image_path>

# For example, to classify an image in the Pictures directory:
python retrain_model_classifier.py /home/akshay/Pictures/test_image_flower.jpg

# Example output:
# rose (score=0.78)
# tulips (score=0.14)
# others (score=0.02)

Note:

The output lists the probability of each class, with the highest one first. In the example above, the model is 78% confident that the image belongs to the "rose" class, so the classification result is "rose".
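The "pick the class with the highest score" step can be sketched in a few lines (the scores here are the example values shown above):

```python
def top_prediction(scores):
    # scores: mapping of class label -> probability from the classifier
    label = max(scores, key=scores.get)
    return label, scores[label]

# Example scores matching the output shown above
example = {"rose": 0.78, "tulips": 0.14, "others": 0.02}
```

Since the classifier's probabilities sum to one, the top score is also a rough confidence measure; a low top score suggests the image matches none of the trained classes well.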

Conclusion

So, this is how you retrain the TensorFlow Inception model on Ubuntu. The accuracy of the model is printed at the very end of the training process. Once you have the model, you can use it for classification.

Remember that the original classes will no longer be available; you will only be able to classify the classes on which you have retrained.

If you are interested in learning machine learning and Python in depth, then investing in the Python for Data Science and Machine Learning Bootcamp would give you an amazing platform to learn and grow.

Subscribe to my blog below for free tutorials on Tensorflow and machine learning delivered straight to your inbox.