Hello Readers,

With this being my 50th blog post, I’m here to bring some PowerShell fun into your lives 🙂

Today we’ll talk about Microsoft’s Project Oxford and its key capabilities, like the Emotion API.



What is Project Oxford?

Project Oxford is Microsoft’s cloud-based artificial intelligence service that allows developers to automate sophisticated tasks that would be too costly and time-consuming to do by hand. The features this project offers are only possible with advanced machine learning that most of us don’t have the time or resources to build on our own.

Some key offerings of this project are –

Face and Emotion Detection
Speech Processing
Language Understanding Intelligent Service (LUIS)

What is Machine Learning?

Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In short, these are algorithms that make data-driven predictions or decisions, which is what has made self-driving cars such as the Tesla a reality.

HOW IT WORKS WITH POWERSHELL:

The Emotion API is a RESTful API that returns structured data when called with the proper parameters and information.

So here we’ll pass a locally stored image and some other parameters to the Microsoft Emotion API in the cloud using the Invoke-RestMethod cmdlet, which will return structured information like in the snapshot below.
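A minimal sketch of that call might look like the following. The image path is a placeholder, and you would substitute the subscription key from your own Project Oxford account (the endpoint URL here is an assumption based on the Project Oxford Emotion API; check your subscription for the exact address):

```powershell
# Read the local image as raw bytes
$ImagePath  = 'C:\Temp\MyPhoto.jpg'
$ImageBytes = [System.IO.File]::ReadAllBytes($ImagePath)

# Emotion API endpoint and subscription key (replace with your own key)
$Uri     = 'https://api.projectoxford.ai/emotion/v1.0/recognize'
$Headers = @{ 'Ocp-Apim-Subscription-Key' = '<YourSubscriptionKey>' }

# POST the image as a binary payload; the API returns JSON,
# which Invoke-RestMethod deserializes into PowerShell objects
$Result = Invoke-RestMethod -Uri $Uri -Method Post -Headers $Headers `
          -ContentType 'application/octet-stream' -Body $ImageBytes

$Result   # one {FaceRectangle, Scores} object per detected face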

The structured data in the above snapshot has some important attributes.

FaceRectangle : The rectangle’s location on the image where a face has been detected.
Scores : The emotional variants, based on 8 emotions commonly expressed facially by humans all across the globe.

Since my image had just a single face in it, only one face was detected, returning one {FaceRectangle, Scores} pair. If we pass an image comprising more faces, multiple {FaceRectangle, Scores} pairs will be returned as a result.
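Assuming $Result holds the deserialized API response from Invoke-RestMethod, each returned pair can be inspected like this (a sketch, not part of the final script):

```powershell
foreach ($Face in $Result) {
    # Location of the detected face on the image
    $Rect = $Face.FaceRectangle
    "Face at ($($Rect.Left),$($Rect.Top)) size $($Rect.Width)x$($Rect.Height)"

    # The 8 emotion scores, one line per emotion
    foreach ($Emotion in $Face.Scores.PSObject.Properties) {
        '{0,-10} : {1:N4}' -f $Emotion.Name, $Emotion.Value
    }
}
```

With a single-face image this loop runs once; with a group photo it runs once per detected face.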

So we know the exact location on the image where a face has been detected, and the various emotions on the detected face.

With this data in hand, it’s just a 3-step process to detect and show emotions on all faces in the image –

1. Calculate the strongest emotion : This is the highest value in the emotion scores returned.
2. Draw two rectangles : Using the System.Drawing class, draw two rectangles to serve the following purposes.
FACE RECTANGLE – This will surround the face, using the location in the FaceRectangle attribute returned from the API.
EMOTION RECTANGLE – This rectangle is smaller in height and sits upon the top edge of the face rectangle, holding the name of the strongest emotion calculated in step 1.
3. Invoke a GUI and stick the image on it : With the rectangles drawn and the emotion marked on the image, we’ll create a Windows form to place this image on, using the System.Windows.Forms class.

The result will look something like the following images.
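The 3 steps above can be sketched roughly like this (assuming $Result holds the API response and the image path matches the one sent to the API; the font, pen, and label offset are arbitrary choices for illustration):

```powershell
Add-Type -AssemblyName System.Drawing
Add-Type -AssemblyName System.Windows.Forms

$Bitmap   = [System.Drawing.Bitmap]::FromFile('C:\Temp\MyPhoto.jpg')
$Graphics = [System.Drawing.Graphics]::FromImage($Bitmap)
$Pen      = New-Object System.Drawing.Pen -ArgumentList ([System.Drawing.Color]::Red, 3)
$Font     = New-Object System.Drawing.Font -ArgumentList ('Arial', 14)
$Brush    = [System.Drawing.Brushes]::Red

foreach ($Face in $Result) {
    $R = $Face.FaceRectangle

    # Step 1: the strongest emotion is the highest score
    $Strongest = ($Face.Scores.PSObject.Properties |
                  Sort-Object Value -Descending | Select-Object -First 1).Name

    # Step 2: face rectangle, plus the emotion label above its top edge
    $Graphics.DrawRectangle($Pen, $R.Left, $R.Top, $R.Width, $R.Height)
    $Graphics.DrawString($Strongest, $Font, $Brush, $R.Left, ($R.Top - 25))
}
$Graphics.Dispose()

# Step 3: stick the marked-up image on a Windows form
$Form            = New-Object System.Windows.Forms.Form
$Form.ClientSize = $Bitmap.Size
$PictureBox       = New-Object System.Windows.Forms.PictureBox
$PictureBox.Image = $Bitmap
$PictureBox.Dock  = 'Fill'
$Form.Controls.Add($PictureBox)
[void]$Form.ShowDialog()
```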

SINGLE FACE IMAGE:

GROUP IMAGE:



NOTE: I broke the script down into just the 3 major steps mentioned above, but the script has more sub-steps and a lot of data wrangling involved.

SCRIPT:

HOW TO USE IT:

Run the script as shown in the animation below.

Hope you’ll find this fun. 🙂

