5 Important Changes Coming with TensorFlow 2.0

What’s new and what’s good!

TensorFlow 2.0 is the next major release of the TensorFlow open source library. Since its initial release in 2015, TensorFlow has undergone many significant changes, mainly focused on expanding the library's capabilities to cover everything Machine Learning practitioners could possibly want to do!

TensorFlow 2.0 will represent a major milestone in the library's development. Over the past few years, the main complaint from ML practitioners about TensorFlow has been its complicated, black-box style.

Version 2.0's design is directly aimed at solving that problem by making TensorFlow more user-friendly and intuitive. Here we'll take a look at 5 important changes coming to this new TensorFlow release that are part of that effort.

(1) Eager Execution by Default

To build a Neural Network in TF 1.x, we needed to define a TensorFlow Graph: an abstract, black-box data structure whose contents we couldn't inspect at runtime.

If we attempted to print a TF graph variable at runtime using a simple Python print() function, we wouldn't see the values of the variable. Instead, we would see a reference to the graph node, which is not exactly the information we're looking for!

All of that changes with Eager Execution. TensorFlow code can now be run just like normal Python code — eagerly. It starts to look and work a lot like basic Numpy!

There's no more need to create a tf.Session(), and no more invisible graph-node values: every variable can be inspected right away with a simple print().

Training in TensorFlow 1.x:
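As a sketch of the 1.x workflow, here is a tiny, hypothetical linear-regression example showing the graph-and-session style (written with tf.compat.v1 so it also runs under a TensorFlow 2.x install; the data and model are purely illustrative):

```python
# 1.x-style graph/session workflow for a tiny linear regression.
# Uses tf.compat.v1 so this sketch also runs on a TF 2.x install.
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # restore 1.x graph-mode semantics

# Build the graph: nothing is computed yet, we only describe operations.
x = tf.placeholder(tf.float32, shape=[None, 1])
y = tf.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
pred = tf.matmul(x, w) + b
loss = tf.reduce_mean(tf.square(pred - y))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

print(pred)  # prints a graph-node reference, not values

# Toy data: y = 2x, so training should drive w toward 2.0.
xs = np.array([[1.0], [2.0], [3.0]], dtype=np.float32)
ys = 2.0 * xs

# Values only exist inside a Session.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(300):
        sess.run(train_op, feed_dict={x: xs, y: ys})
    w_val = sess.run(w)  # must explicitly run the graph to read w
print(w_val)
```

Notice how every value has to be pulled out of the graph through sess.run(), which is exactly the friction eager execution removes.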

Training in TensorFlow 2.0:
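The same hypothetical linear regression, sketched in eager 2.0 style (assuming TensorFlow 2.x): no graph definition, no Session, and variables are readable at any point.

```python
# 2.0-style eager training: same tiny linear regression, no graph or session.
# Assumes a TensorFlow 2.x install; the data and model are illustrative.
import tensorflow as tf

# Toy data: y = 2x, so training should drive w toward 2.0.
xs = tf.constant([[1.0], [2.0], [3.0]])
ys = 2.0 * xs

w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(300):
    # GradientTape records the forward pass so gradients can be computed.
    with tf.GradientTape() as tape:
        pred = tf.matmul(xs, w) + b
        loss = tf.reduce_mean(tf.square(pred - ys))
    grads = tape.gradient(loss, [w, b])
    opt.apply_gradients(zip(grads, [w, b]))

print(w.numpy())  # values are visible immediately, no Session needed
```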

(2) Keras as the high-level API

For many ML practitioners, Keras is the go-to API for building Deep Learning models. It's just so much easier to use than lower-level frameworks like TensorFlow or MXNet. Networks can be built in a very intuitive way, passing inputs and outputs from one function to the next, where each function represents a network layer such as a convolution or pooling.

Finally, Keras has become the official high-level API of TensorFlow in release 2.0. When you install TensorFlow 2.0, Keras comes with it, integrated seamlessly without the need for any kind of bridge code.

Part of the beauty of this is the ability to use Keras with TensorFlow: it no longer has to be a fully separate replacement. When Keras is enough to do the job, you can use 100% Keras code. If you need to do something fancy like custom layers or a brand new training scheme, you can simply add those extra code snippets and functions in TensorFlow code. It's a perfect balance!
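To make that mix concrete, here is a minimal sketch (assuming TensorFlow 2.x; the layer and its scaling behaviour are hypothetical, chosen only to illustrate dropping down to TensorFlow ops inside a Keras model):

```python
# Mixing plain Keras with TensorFlow code in one model (TF 2.x assumed).
import tensorflow as tf

# When Keras is enough: a model built from stock layers.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# When you need something custom: a hypothetical layer written with
# raw TensorFlow ops, dropped straight into a Keras model.
class ScaledDense(tf.keras.layers.Layer):
    """Illustrative custom layer: a dense layer whose output is rescaled."""

    def __init__(self, units, scale=0.5):
        super().__init__()
        self.units = units
        self.scale = scale

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(int(input_shape[-1]), self.units),
            initializer="glorot_uniform",
        )

    def call(self, inputs):
        # Plain TensorFlow ops, mixed freely with Keras machinery.
        return self.scale * tf.matmul(inputs, self.w)

mixed = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    ScaledDense(4),
])
print(mixed(tf.ones([2, 8])).shape)  # → (2, 4)
```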

(3) API Cleanup

As TensorFlow 1.x went through development, many, many custom and contrib APIs popped up to try to expand the library's functionality.

When building a neural network in TensorFlow 1.x you have many options to choose from: tf.slim, tf.layers, tf.contrib.layers, and tf.keras. Beyond that, there is yet more custom code for debugging, math, and specific ML functions. Needless to say, it became quite the mess!

Version 2.0 includes a lot of API cleanup to simplify and unify the TensorFlow API. Many APIs such as tf.app, tf.flags, and tf.logging have either been removed or relocated in 2.0. APIs that had Keras equivalents have been replaced outright in favour of their much simpler Keras versions.

All in all, this is a welcome change. General usage of the library is now much simpler and easier. Documentation is clearer. And, one big reason: code can now more easily be shared between users, since everyone will finally be using the same API!

One extra bonus: if you're looking to upgrade your TF 1.x code to TF 2.0 and deal with the relocated APIs, you can use the official v2 upgrade script (tf_upgrade_v2)!

(4) TF datasets

No more of those ugly queue runners that were required for optimized training on large datasets. In TensorFlow 2.0, queue runners have been completely replaced with tf.data.

With tf.data, training data is read through input pipelines in a much cleaner way. The API itself is simpler and far easier to use, feeling similar to fit_generator and the related flow functions in Keras. Convenient input from in-memory data such as Numpy arrays is also supported.
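A minimal sketch of such a pipeline, assuming TensorFlow 2.x, with a small made-up NumPy dataset:

```python
# A minimal tf.data input pipeline from in-memory NumPy arrays (TF 2.x assumed).
import numpy as np
import tensorflow as tf

# Made-up in-memory dataset: 10 examples of y = 2x.
features = np.arange(10, dtype=np.float32).reshape(10, 1)
labels = features * 2.0

# Build the pipeline: slice into examples, shuffle, batch, and prefetch.
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=10)
    .batch(4)
    .prefetch(tf.data.experimental.AUTOTUNE)
)

# The dataset iterates eagerly, yielding ready-to-use batches.
for batch_x, batch_y in dataset:
    print(batch_x.shape)  # at most 4 examples per batch
```

A dataset like this can be passed straight to model.fit() in Keras, which is what makes the pipeline feel so much cleaner than queue runners.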

For a tutorial on how to use tf.data for TensorFlow 2.0, see this link!

(5) You can still run TensorFlow 1.x code with 2.0 release

Finally, one of the biggest sighs of relief comes from hearing that you can still use TensorFlow 1.x code when running TensorFlow 2.0. This is quite important as it makes the transition from version 1.x to 2.0 smoother, without any major breakage.

Of course, this does not let you take advantage of many of the improvements made in TensorFlow 2.0. But it does allow you to make a steady transition, replacing parts of your code with 2.0 idioms as needed.
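Backward compatibility works through the tf.compat.v1 module. A minimal sketch (assuming TensorFlow 2.x; the computation is a trivial placeholder-and-session example):

```python
# Running 1.x-style code on a TF 2.x install via the compat module (a sketch).
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # restores graph-mode, Session-based semantics

# Unmodified 1.x-style code runs as before.
a = tf.placeholder(tf.float32)
b = a * 2.0

with tf.Session() as sess:
    result = sess.run(b, feed_dict={a: 3.0})
print(result)  # → 6.0
```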