Python is the de-facto language for machine learning and data science, with numerous libraries and frameworks available for the purpose. Elixir, on the other hand, is relatively new, and doing machine learning in Elixir is still not easy. In this article, I will show how we can use a machine learning model trained in Python from a Phoenix (Elixir) web application. It does not necessarily have to be a web application; since the process remains the same, any Elixir application can take a similar approach to use pre-trained Python ML models.

ErlPort: The Secret Sauce

I will use ErlPort to establish the communication channel between Python and Elixir.

ErlPort is — “a library for Erlang which helps connect Erlang to a number of other programming languages. Currently supported external languages are Python and Ruby. The library uses Erlang port protocol to simplify connection between languages and Erlang external term format to set the common data types mapping.”

To learn about ErlPort in detail and how it works with Python, I suggest reading the official documentation here and here.

In summary,

— ErlPort creates an instance of the Python interpreter using the library function start/0 or start/1

— Once the language instance is created, the functions call/4 (synchronous) or cast/2 (asynchronous) can be used to call Python functions and pass results back to the caller.

— At the end of the session, the stop/1 function is called to terminate the language instance

— Data type mappings between Erlang and Python are presented in tabular form here — http://erlport.org/docs/python.html. It is also possible to use custom data types by providing a custom encoder/decoder.
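Putting these pieces together, here is a minimal sketch of that lifecycle (my own example, not from the ErlPort docs). It assumes ErlPort is already a dependency and a Python 3 interpreter is available on the PATH; :builtins is the Python 3 built-ins module:

{:ok, pid} = :python.start()

# call/4 invokes a Python function synchronously;
# here, Python's built-in round(3.14159, 2)
result = :python.call(pid, :builtins, :round, [3.14159, 2])
# result == 3.14

# Terminate the interpreter instance when done
:ok = :python.stop(pid)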

In this article, I will use the ErlPort Hex package from here — https://hex.pm/packages/erlport

Training ML Model

Earlier I wrote an article — “Train your own ML model using Scikit and use in iOS app with CoreML (and probably with Augmented Reality)”. Since the focus of this article is using pre-trained models from an Elixir app, I will reuse the model trained there. Readers interested in the machine learning part are encouraged to read that article. Since we will not use a CoreML model here, the last part of the training has been slightly modified to save the model in pickle format using ‘joblib’ instead of CoreML.

The full Jupyter notebook (iris-analysis.ipynb) is available here — https://github.com/imeraj/Phoenix_Playground/blob/master/1.4/phoenix_ml/lib/phoenix_ml/model/iris-analysis.ipynb

The CSV file (iris-data.csv) containing training data is available here — https://github.com/imeraj/Phoenix_Playground/blob/master/1.4/phoenix_ml/lib/phoenix_ml/model/iris-data.csv

Generated ML model saved in file (classifier.pkl) is available here — https://github.com/imeraj/Phoenix_Playground/blob/master/1.4/phoenix_ml/lib/phoenix_ml/model/classifier.pkl

The complete source code of Phoenix app (phoenix_ml) is available here — https://github.com/imeraj/Phoenix_Playground/tree/master/1.4/phoenix_ml

NOTE: I have placed the Jupyter notebook, the CSV file, and the generated model in the same source repository for the purposes of this article only. In practice, your application code and ML code can be stored in separate locations. All we need is access to the trained model file “classifier.pkl” from our web app.

Access the ML Model from a Phoenix (Elixir) Web App

In this section, I will focus on the application side and walk through the necessary code.

I generated a simple Phoenix 1.4 application without Ecto and added ErlPort as a dependency in mix.exs —

defp deps do
  [
    {:phoenix, "~> 1.4.0"},
    {:phoenix_pubsub, "~> 1.1"},
    {:phoenix_html, "~> 2.11"},
    {:phoenix_live_reload, "~> 1.2", only: :dev},
    {:gettext, "~> 0.11"},
    {:jason, "~> 1.0"},
    {:plug_cowboy, "~> 2.0"},
    {:erlport, "~> 0.10.0"}
  ]
end

Added a new route in router.ex —

scope "/", PhoenixMlWeb do

pipe_through :browser



get "/", PageController, :index

post "/predict", PageController, :show

end

Modified the UI code (templates/page/{form.html.eex, index.html.eex}) so that the final UI presents a simple form for the four input measurements.

The controller code for “show” is pretty straightforward and looks as below —

def show(conn, %{
      "sepal_length" => sepal_length,
      "sepal_width" => sepal_width,
      "petal_length" => petal_length,
      "petal_width" => petal_width
    }) do
  with {sepal_length, _} <- Float.parse(sepal_length),
       {sepal_width, _} <- Float.parse(sepal_width),
       {petal_length, _} <- Float.parse(petal_length),
       {petal_width, _} <- Float.parse(petal_width) do
    class = ML.predict([[sepal_length, sepal_width, petal_length, petal_width]])

    conn
    |> put_flash(:info, "Predicted class: " <> class)
    |> render("index.html")
  else
    _error ->
      conn
      |> put_flash(:error, "Invalid parameters!")
      |> render("index.html")
  end
end

Here, the controller takes the input parameters and parses them to floats. Upon a successful parse, ML.predict is called to obtain the predicted class (ML aliases the PhoenixMl.ModelPredictor module described below). Most of our logic resides in ML.predict.
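For a quick sanity check outside the browser, the same call can be made from an iex -S mix session. This is a hypothetical session; the exact class string depends on the trained model:

alias PhoenixMl.ModelPredictor, as: ML

# One observation: sepal length, sepal width, petal length, petal width
ML.predict([[5.5, 2.4, 3.7, 1.0]])
# => the predicted class label, e.g. "Iris-versicolor"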

Making Predictions

The main work of loading the model and making a prediction is done in Python code (lib/phoenix_ml/model/classifier.py) —

import os

from sklearn.externals import joblib


def load_model():
    path = os.path.abspath('lib/phoenix_ml/model/classifier.pkl')
    return joblib.load(path)


def predict_model(args):
    iris_classifier = load_model()
    return iris_classifier.predict([args])[0]

The predict_model function loads the model from ‘classifier.pkl’, calls the classifier’s predict function with the necessary parameters, and returns the result. (I will talk more about the [args] part later.) Note that sklearn.externals.joblib has since been removed from scikit-learn; on recent versions, install and import joblib directly.
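Before wiring everything together, it can be handy to exercise this Python module directly from iex. A minimal sketch, assuming iex is started from the app root so the relative pickle path inside classifier.py resolves:

# Put the model directory on Python's module search path and start Python
path = Path.absname("lib/phoenix_ml/model/")
{:ok, pid} = :python.start([{:python_path, to_charlist(path)}])

# Call classifier.predict_model with one set of measurements
:python.call(pid, :classifier, :predict_model, [[5.5, 2.4, 3.7, 1.0]])

:python.stop(pid)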

Inside lib/phoenix_ml/helpers there are two files — model_predictor.ex and python_helper.ex.

Let me explain this part of the code now.

defmodule PhoenixMl.ModelPredictor do
  @moduledoc false

  alias PhoenixMl.PythonHelper, as: Helper

  @path 'lib/phoenix_ml/model/'

  def predict(args) do
    call_python(:classifier, :predict_model, args)
  end

  defp call_python(module, func, args) do
    pid = Helper.py_instance(Path.absname(@path))
    result = Helper.py_call(pid, module, func, args)

    Helper.py_stop(pid)

    result
  end
end

Here —

— the predict function calls call_python with the module (:classifier), function (:predict_model), and arguments (args)

— the call_python function communicates with the Python module using the helper functions residing in python_helper.ex —

defmodule PhoenixMl.PythonHelper do
  @moduledoc false

  def py_instance(path) when is_binary(path) do
    {:ok, pid} = :python.start([{:python_path, to_charlist(path)}])
    pid
  end

  def py_call(pid, module, func, args \\ []) do
    :python.call(pid, module, func, args)
  end

  def py_stop(pid) do
    :python.stop(pid)
  end
end

Here —

— py_instance: starts a Python interpreter instance, with the given path added to Python’s module search path (python_path) so that our :classifier module can be found

— py_call: makes a synchronous call to the Python module’s predict_model function with args as the argument list

— py_stop: terminates the Python instance
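One design note: call_python starts a fresh Python interpreter for every prediction and stops it right after, which keeps the code simple but pays the interpreter start-up cost on each request. A possible refinement, purely a sketch of mine and not part of the article’s repository, is to hold one long-lived interpreter behind a GenServer:

defmodule PhoenixMl.PythonServer do
  @moduledoc false

  use GenServer

  @path 'lib/phoenix_ml/model/'

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, :ok, Keyword.put_new(opts, :name, __MODULE__))
  end

  # Synchronous prediction through the long-lived interpreter
  def predict(args) do
    GenServer.call(__MODULE__, {:predict, args})
  end

  @impl true
  def init(:ok) do
    # Start one Python instance when the server boots and reuse it
    {:ok, pid} = :python.start([{:python_path, to_charlist(Path.absname(@path))}])
    {:ok, pid}
  end

  @impl true
  def handle_call({:predict, args}, _from, pid) do
    result = :python.call(pid, :classifier, :predict_model, args)
    {:reply, result, pid}
  end

  @impl true
  def terminate(_reason, pid) do
    :python.stop(pid)
  end
end

Started under the application’s supervision tree, this would let the controller call PhoenixMl.PythonServer.predict/1 without the per-request start/stop overhead.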

At this stage, we are ready to make predictions. The screenshot below shows the input parameters and the corresponding predicted class —

[Screenshot: Making a prediction]

One Last Thing

I want to say a bit more about the arguments passed.

From the controller, I pass the argument [[5.5, 2.4, 3.7, 1.0]].

This goes to the ML.predict function as [[5.5, 2.4, 3.7, 1.0]].

py_call receives this argument and calls the predict_model function with [5.5, 2.4, 3.7, 1.0], since ErlPort treats the outer list as the list of positional arguments and unpacks it.

But scikit-learn expects a 2-D array for making predictions. So, when I call iris_classifier.predict([args]), I have to wrap args in a list once more so that the final argument becomes [[5.5, 2.4, 3.7, 1.0]].
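To make the shape changes easy to follow, here is the same journey traced step by step (Elixir comments, using the sample values from above):

# Controller passes:                 [[5.5, 2.4, 3.7, 1.0]]
# ML.predict and py_call forward it: args = [[5.5, 2.4, 3.7, 1.0]]
# ErlPort unpacks the outer list into positional arguments, so
# Python's predict_model receives:   args = [5.5, 2.4, 3.7, 1.0]
# scikit-learn needs 2-D input, so classifier.py wraps it again:
# iris_classifier.predict([args]) -> [[5.5, 2.4, 3.7, 1.0]]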

Final Words

I hope this article helped some readers understand how it is possible to use machine learning models trained in Python from Phoenix (Elixir) applications. However, ErlPort is not the only way to communicate with Python code from Elixir. There are other projects, such as Apache Thrift, for cross-language service development. Interested readers are urged to look into alternative solutions.

For more elaborate and in-depth technical posts in the future, please follow me here or on Twitter.