This is the second blog in the series Deploying a Multi-Label Image Classifier using PyTorch, Flask, ReactJS and Firebase data storage. If you missed the first blog, it's here. Before starting with this blog, be sure to go through the previous one.

In this blog we will develop a Flask app/service to create an API for our ReactJS front-end, which we will build in the next part.

The whole code base is shared here.

1. Installing the required modules

Go to your Unix/Linux terminal and use the following command to install the required modules. If you trained and saved the model from the previous blog on Colab, you will need to install PyTorch on your system as well.

pip install flask flask-cors gunicorn pillow requests

To install PyTorch on your system, go to Quick Start Locally on the PyTorch website, choose your system configuration, and install using pip.

2. Flask APP folder structure

Go to a directory of your choice and open it in your terminal. Inside it, create two folders: a models folder and a templates folder. Then create three files in the main directory, next to the folders you just made: app.py, model.py, and predict.py. My folder structure is as follows:

API
|-- templates
|   |-- home.html
|-- models
|   |-- pytorch_saved_model_file
|-- app.py
|-- model.py
|-- predict.py
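If you prefer to set this up from the terminal, a few commands can create the same skeleton (a convenience sketch; the file names match the structure above, and you would copy your saved PyTorch model into models yourself):

```shell
# create the API project skeleton described above
mkdir -p API/templates API/models
touch API/app.py API/model.py API/predict.py API/templates/home.html
```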

3. Creating our Flask App

After completing the folder structure step above, you can move on to the part where the coding begins. Open the API folder in your favorite text editor, go to the app.py file, and start typing.

app.py

import subprocess
import io
from PIL import Image
from flask import Flask, request, render_template, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

@app.route('/predict_api', methods=['GET', 'POST'])
def predict_classes():
    if request.method == 'GET':
        return render_template('home.html', value="Image")
    if request.method == 'POST':
        if 'file' not in request.files:
            return "Image not uploaded"
        file = request.files['file'].read()
        try:
            img = Image.open(io.BytesIO(file))
        except IOError:
            return jsonify(predictions="Not an Image, Upload a proper image file", preds_prob="")
        img = img.convert("RGB")
        img.save("input.jpg", "JPEG")
        p = subprocess.check_output(
            ["python /Users/vatsalsaglani/Desktop/njunk/personal/CelebA_API/predict.py input.jpg"],
            shell=True).decode("utf-8")
        pp = p.split("@")
        preds = pp[0]
        pred_proba = pp[1].split("-")
        return jsonify(predictions=preds, preds_prob=pred_proba)

if __name__ == '__main__':
    app.run(debug=True)

Now go to the templates folder, create a home.html file, and add the following code snippet.

<!DOCTYPE html>
<html>
<head>
    <title>Classifier</title>
</head>
<body>
    <h2>Upload File</h2>
    <p>{{ value }}</p>
    <form method="post" enctype="multipart/form-data">
        <input type="file" name="file">
        <input type="submit" value="upload">
    </form>
</body>
</html>

Let’s start our Flask server and check how far we have come. Go to your terminal inside your API folder and type the following commands.

export FLASK_APP=app.py
export FLASK_ENV=development
export FLASK_DEBUG=1
flask run

After running the flask app you can go to localhost:5000 and it should look something like this.

4. Why Subprocess?

As you might have noticed, we are using a subprocess output to get our predictions. But why exactly are we doing so? When working with PyTorch saved models, there is a chance of serialization issues, so the model needs to be loaded inside a main function. We cannot do that directly through app.py’s own main function, so we create a separate predict.py file and load the model inside its main.
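The pattern itself is easy to see in isolation. The sketch below is a toy stand-in, not the actual app code: it inlines a child script with -c that prints a string shaped like predict.py’s output, then captures and splits that output the same way app.py does.

```python
import subprocess
import sys

# Toy stand-in for "python predict.py input.jpg": a child Python
# process that prints a '@'-delimited string to stdout.
child_code = "print('Smiling@Smiling: 99.0-Young: 97.0')"
output = subprocess.check_output([sys.executable, "-c", child_code]).decode("utf-8")

# Parse the child's stdout exactly as app.py does.
preds, probs = output.strip().split("@")
print(preds)             # Smiling
print(probs.split("-"))  # ['Smiling: 99.0', 'Young: 97.0']
```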

PyTorch also needs the model architecture to be in place at prediction time, so we have one more file, model.py, in which we define the same model architecture that we used during training.

Let’s get into the predict.py and model.py files and finish our Flask app, so we can move on to the ReactJS and Firebase part in the next blog.

model.py

import torch
import torch.nn as nn
from torchvision import transforms
import torch.nn.functional as F

class MultiClassifier(nn.Module):
    def __init__(self):
        super(MultiClassifier, self).__init__()
        self.ConvLayer1 = nn.Sequential(
            nn.Conv2d(3, 64, 3),    # ip: 3, 256, 256
            nn.MaxPool2d(2),        # op: 64, 127, 127
            nn.ReLU(),
        )
        self.ConvLayer2 = nn.Sequential(
            nn.Conv2d(64, 128, 3),  # ip: 64, 127, 127
            nn.MaxPool2d(2),        # op: 128, 62, 62
            nn.ReLU()
        )
        self.ConvLayer3 = nn.Sequential(
            nn.Conv2d(128, 256, 3), # ip: 128, 62, 62
            nn.MaxPool2d(2),        # op: 256, 30, 30
            nn.ReLU()
        )
        self.ConvLayer4 = nn.Sequential(
            nn.Conv2d(256, 512, 3), # ip: 256, 30, 30
            nn.MaxPool2d(2),        # op: 512, 14, 14
            nn.ReLU(),
            nn.Dropout(0.2)
        )
        self.Linear1 = nn.Linear(512 * 14 * 14, 1024)
        self.Linear2 = nn.Linear(1024, 256)
        self.Linear3 = nn.Linear(256, 40)

    def forward(self, x):
        x = self.ConvLayer1(x)
        x = self.ConvLayer2(x)
        x = self.ConvLayer3(x)
        x = self.ConvLayer4(x)
        x = x.view(x.size(0), -1)
        x = self.Linear1(x)
        x = self.Linear2(x)
        x = self.Linear3(x)
        return torch.sigmoid(x)
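As a quick sanity check of the layer shapes, you can trace the spatial size through the four Conv2d(kernel=3) + MaxPool2d(2) stages with plain arithmetic, no torch needed (the conv_pool helper name is mine, for illustration):

```python
# An unpadded Conv2d shrinks each spatial side by (kernel - 1);
# MaxPool2d(2) then floor-divides it by 2.
def conv_pool(size, kernel=3, pool=2):
    return (size - (kernel - 1)) // pool

size = 256
for layer in range(4):
    size = conv_pool(size)
    print(f"after ConvLayer{layer + 1}: {size}x{size}")

# The final 14x14 map over 512 channels matches the
# nn.Linear(512 * 14 * 14, 1024) input size in model.py.
print(512 * size * size)  # 100352
```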

predict.py

import torch
import torch.nn as nn
from torchvision import transforms
import torch.nn.functional as F
from PIL import Image
import numpy as np
from model import MultiClassifier
import io
import argparse

def get_model(path):
    model = MultiClassifier()
    model = torch.load(path, map_location='cpu')
    model = model.eval()
    return model

def get_tensor(img):
    tfms = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.ToTensor()
    ])
    return tfms(Image.open(img)).unsqueeze(0)

def predict(img):
    model = get_model(<pathtoyoursavedpytorchmodel>)
    label_lst = ['5_o_Clock_Shadow', 'Arched_Eyebrows', 'Attractive', 'Bags_Under_Eyes', 'Bald', 'Bangs',
                 'Big_Lips', 'Big_Nose', 'Black_Hair', 'Blond_Hair', 'Blurry', 'Brown_Hair', 'Bushy_Eyebrows',
                 'Chubby', 'Double_Chin', 'Eyeglasses', 'Goatee', 'Gray_Hair', 'Heavy_Makeup', 'High_Cheekbones',
                 'Male', 'Mouth_Slightly_Open', 'Mustache', 'Narrow_Eyes', 'No_Beard', 'Oval_Face', 'Pale_Skin',
                 'Pointy_Nose', 'Receding_Hairline', 'Rosy_Cheeks', 'Sideburns', 'Smiling', 'Straight_Hair',
                 'Wavy_Hair', 'Wearing_Earrings', 'Wearing_Hat', 'Wearing_Lipstick', 'Wearing_Necklace',
                 'Wearing_Necktie', 'Young']
    tnsr = get_tensor(img)
    op = model(tnsr)
    op_b = torch.round(op)
    op_b_np = torch.Tensor.cpu(op_b).detach().numpy()
    preds = np.where(op_b_np == 1)[1]
    sigs_op = torch.Tensor.cpu(torch.round(op * 100)).detach().numpy()[0]
    o_p = np.argsort(torch.Tensor.cpu(op).detach().numpy())[0][::-1]
    label = []
    for i in preds:
        label.append(label_lst[i])
    arg_s = {}
    for i in o_p:
        arg_s[label_lst[int(i)]] = sigs_op[int(i)]
    _l = list(arg_s.items())[:10]
    cd = [': '.join(map(str, tup)) for tup in _l]
    cd = '-'.join(cd)
    return str(label) + "@", cd

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='predict arguments')
    parser.add_argument('img_path', type=str, help='Image Required')
    args = parser.parse_args()
    img_path = args.img_path
    l = predict(img_path)
    for i in l:
        print(str(i))

The predict function is the same as in the first part; the only addition is the get_model() function, in which we load our model and use it for prediction. At the end, everything is called under main and executed there. So when app.py calls the subprocess, which takes the image path as its only argument, this predict.py file is executed and its string output is returned to app.py. To get proper results, we join the different string outputs with the @ character, so we can then split them and process the output for the user.
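To make that delimited format concrete, here is a small sketch of parsing such a string back into structured data on the app.py side. The sample string is hypothetical, but shaped like the real output: labels, then @, then '-'-joined "Label: score" pairs.

```python
# Hypothetical output in the shape predict.py prints.
sample = "['Smiling', 'Young']@Smiling: 99.0-Young: 97.0-Male: 12.0"

# First split on '@' to separate predicted labels from the scores.
labels_part, probs_part = sample.split("@")

# Then split the scores on '-' and each "Label: score" pair on ': '.
probabilities = {}
for pair in probs_part.split("-"):
    label, score = pair.split(": ")
    probabilities[label] = float(score)

print(labels_part)    # ['Smiling', 'Young']
print(probabilities)  # {'Smiling': 99.0, 'Young': 97.0, 'Male': 12.0}
```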

Now let’s execute the whole charade.

Go to localhost:5000 in your web browser and select an image and press upload. You will get an output something like this.

In the next part we will be building a ReactJS front-end and also working with Firebase Storage to store the files which are being uploaded. You can play around with the final output here. If you followed everything in the first and second blogs and have got everything working, then well done!

If this article helped you in any way and you liked it, please appreciate it by sharing it with your community. If there are any mistakes, feel free to point them out by commenting down below. To know more about me, please click here, and if you find something interesting, just shoot me a mail and we could have a chat over a cup of ☕️. For updated contents of this blog, you can visit https://blogs.vatsal.ml

Support this content 😃 😇