TensorFlow, the popular machine learning framework, just released version 1.10 on 09/08/2018. As any data scientist worthy of the name knows, ignoring the most recent releases is not an option: you want to use your libraries and frameworks to their fullest potential so that your clients know they got their money's worth! (Needless to say, but I am taking no chances here: version 1.10 is NOT the same as version 1.1. Remember that; it will save your reputation some day!)

And so, in an attempt to help my fellow data scientists and the data science community as a whole (after all, if you listened to the latest episode of the Superdatascience podcast, "We are a collaboration, not a competition!"), I will go through three major features and improvements that version 1.10 has over its predecessor, version 1.9. Note that these are not the only changes that were made, but the other improvements are advanced enough that each would probably need an article of its own. If you are still curious, feel free to have a look at this GitHub link.

1. The tf.lite runtime now supports complex64

We all know how computationally intensive Machine Learning and Deep Learning algorithms can be. Now imagine deploying those on a mobile phone! It might be a good idea if you’re planning to heat up your lunch using your processor, but otherwise, RIP phone ☹

Luckily, Tensorflow Lite comes to the rescue! Tensorflow Lite is Tensorflow’s lightweight solution for mobile and embedded devices, and goes by tf.lite when used in code.

Alright, that’s cool, but what’s complex64? ‘complex64’ is a data type representing a complex number, with a float32 as the real component and a float32 as the imaginary component (64 bits in total, hence the name). Integrating this into tf.lite means the range of calculations you can perform has just been extended to cover operations involving complex numbers!
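To make the layout concrete without needing TensorFlow installed, here is a minimal NumPy sketch (NumPy uses the same complex64 convention: two float32 halves):

```python
import numpy as np

# complex64 packs two float32 values: a real part and an imaginary part.
z = np.complex64(3.0 + 4.0j)

# Each half really is a float32, not a float64.
real_part = z.real            # float32(3.0)
imag_part = z.imag            # float32(4.0)

# Typical complex arithmetic: magnitude of 3 + 4i is sqrt(9 + 16) = 5.
magnitude = np.abs(z)
```

The trade-off versus complex128 (two float64 halves) is the usual one: half the memory and bandwidth, at the cost of precision, which is exactly the kind of trade-off that matters on mobile and embedded devices.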

2. Initial Bigtable integration for tf.data

On May 6, 2015, Google released Google Cloud Bigtable, “a fully managed, high-performance, extremely scalable NoSQL database service accessible through the industry-standard, open-source Apache HBase API”. Intimidating, huh? Don’t worry, just bear in mind that this service is powered by Bigtable, the same database that drives nearly all of Google’s largest applications.

So, what does integrating Bigtable into tf.data mean for us? Well, the tf.data API makes it easy to deal with large amounts of data, different data formats, and complicated transformations. Integrating a database feature into it is expected to help with the data manipulation and analytics aspect.
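To give a feel for what such a pipeline does, here is a plain-Python analogy (no TensorFlow required) of the load → transform → batch flow that tf.data expresses; the `rows` list is a hypothetical stand-in for records streamed out of a source like Bigtable:

```python
def map_records(records, fn):
    """Apply a transformation to every record (like a dataset .map())."""
    return [fn(r) for r in records]

def batch_records(records, size):
    """Group records into fixed-size batches (like a dataset .batch())."""
    return [records[i:i + size] for i in range(0, len(records), size)]

rows = [1, 2, 3, 4, 5]                      # stand-in for rows from a database
scaled = map_records(rows, lambda x: x * 2) # transform step
batches = batch_records(scaled, 2)          # [[2, 4], [6, 8], [10]]
```

The real tf.data API does this lazily and in parallel, which is what makes plugging a massive source like Bigtable directly into it so appealing: the data never has to fit in memory all at once.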

3. Improved local run behaviour in tf.estimator.train_and_evaluate, which does not reload checkpoints for evaluation

Ah, tf.estimator! With its premade models and functions, this API is a sweetheart…until you want to build a custom model. The tf.estimator.train_and_evaluate function trains, evaluates, and optionally even exports the model using the estimator passed to it as an argument. Checkpoints are saved states of the model, and reloading one from disk before every evaluation is expensive. By skipping that reload during local runs, one can expect better performance from this function.
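To illustrate why this matters, here is a toy sketch (not TensorFlow, and the `ToyEstimator` class is entirely made up for illustration) of a train-then-evaluate loop where the evaluator reuses the in-memory weights instead of reading a checkpoint from disk each round:

```python
class ToyEstimator:
    """Hypothetical stand-in for an estimator, tracking simulated disk reads."""

    def __init__(self):
        self.weights = 0.0
        self.disk_reads = 0

    def train(self, steps):
        # Pretend each step nudges the weights a little.
        self.weights += steps * 0.1

    def evaluate(self, reload_checkpoint):
        if reload_checkpoint:
            self.disk_reads += 1  # simulate an expensive checkpoint load
        return self.weights

est = ToyEstimator()
for _ in range(3):
    est.train(10)
    # Old behaviour: reload_checkpoint=True, one disk read per evaluation.
    # New local behaviour: reuse the weights already in memory.
    est.evaluate(reload_checkpoint=False)
```

With three train/evaluate rounds, the old behaviour would have paid for three checkpoint loads; the new local behaviour pays for none, which is exactly the saving the release note is describing.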

So there you have it: three of the major changes made to TensorFlow in version 1.10. While these do seem like good initiatives, time will eventually tell whether they are actual improvements.

If this article was helpful to you, please give it some applause so that others can find it as well. It is always good to help each other in the data science community :)