By Alex Wiltschko, Dan Moldovan, Wolff Dobson

We’d like to tell you about a new TensorFlow feature called “AutoGraph”. AutoGraph converts Python code, including control flow, print() and other Python-native features, into pure TensorFlow graph code.

Writing TensorFlow code without using eager execution requires you to do a little metaprogramming: you write a program that creates a graph, and that graph is executed later. This can be confusing, especially for new developers. Some especially tricky situations involve more complex models, such as ones that use if and while, ones that have side effects like print(), or ones that accept structured input.

Why do we need graphs at all? Graphs allow all kinds of optimizations, like removing common sub-expressions and fusing kernels. Plus, graphs simplify distributed training and deployment to all sorts of environments, as they form a platform-independent model of computation. This is especially important for distributed training on multiple GPUs or TPUs, or distributing your model on other platforms like mobile or IoT via TensorFlow Lite.

Here’s a really simple example of the kind of operation you might want to add to your graph.
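As a concrete sketch, consider a Huber-style loss with a data-dependent branch, written in plain eager-style Python (the function and its delta threshold here are illustrative):

```python
def huber_loss(a, delta=1.0):
    # Data-dependent branch: the condition depends on the value of `a`,
    # which is exactly the kind of control flow that is awkward to
    # express directly in graph code.
    if abs(a) <= delta:
        loss = a * a / 2
    else:
        loss = delta * (abs(a) - delta / 2)
    return loss
```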

With eager execution, this would “just work”; however, such operations may be slow due to Python interpreter overhead or missed program optimization opportunities.

To make this ready for graph execution, you’d need to rewrite it to use constructs like tf.cond(), but that can be tedious and difficult to implement. AutoGraph can do this conversion automatically for you, keeping the ease of eager programming while reaping the performance benefits of graph-based execution.
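To see why the manual rewrite is tedious: tf.cond() takes a predicate plus two zero-argument branch functions, so every if statement must be refactored into that functional shape. A pure-Python analogue of that shape (cond here is a stand-in with the same calling convention, not the TensorFlow API itself):

```python
def cond(pred, true_fn, false_fn):
    # Stand-in mimicking tf.cond()'s calling convention:
    # a predicate plus two zero-argument branch callables.
    return true_fn() if pred else false_fn()

def huber_loss_graph_style(a, delta=1.0):
    # The straightforward `if abs(a) <= delta:` becomes a pair of
    # branch lambdas -- boilerplate that AutoGraph writes for you.
    return cond(
        abs(a) <= delta,
        lambda: a * a / 2,
        lambda: delta * (abs(a) - delta / 2),
    )
```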

In our example, we can decorate our function with autograph.convert(), and AutoGraph will automatically generate graph-ready code: the eager-style code you write becomes graph-building code at execution time thanks to the decorator.

You can then call your code as if it were a TensorFlow op.
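A sketch of the overall workflow. The runnable part is the plain Python function; the TF 1.x session code is shown only in comments as an illustration of the contrib-era API, not as tested code:

```python
# Plain eager-style Python, written as usual:
def square_if_positive(x):
    if x > 0:
        return x * x
    return 0.0

# With AutoGraph (TF 1.x contrib era), the converted function could be
# called like any other graph-building op, roughly:
#
#   from tensorflow.contrib import autograph
#   tf_square_if_positive = autograph.to_graph(square_if_positive)
#   with tf.Graph().as_default(), tf.Session() as sess:
#       result = sess.run(tf_square_if_positive(tf.constant(-2.0)))

# In plain Python, the function just runs directly:
print(square_if_positive(3.0))   # 9.0
print(square_if_positive(-2.0))  # 0.0
```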

As you can see, AutoGraph bridges the gap between eager execution and graphs: it takes in your eager-style Python code and converts it to graph-generating code.

AutoGraph isn’t just a collection of useful macros; it uses source code transformation, which lets it override any part of the Python language, including control flow, function application, and assignment, generate boilerplate code, and refactor idiomatic Python so that it is easy to turn into graphs.

With any compiler, one worry is the readability of error messages. To this end, AutoGraph creates error messages and stack traces that reveal the source of the error in your original code, rather than only showing references to generated code.

Runnable Examples

So, what can AutoGraph do for you? Here are some examples of code that can now turn directly into graph code without any changes. If you want to see all of this in action, we have a notebook that you can open in Colab or view on GitHub.

Here we check the Collatz conjecture using loops and branches. Note that, for variety, instead of the decorator we use AutoGraph’s .to_graph() function to turn this into a graph.
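The eager-style logic is plain Python with a while loop and a parity branch; a sketch of such a Collatz step counter (the contrib-era conversion call is noted in a comment for illustration):

```python
def collatz(n):
    # Count the steps until n reaches 1, branching on parity each step.
    count = 0
    while n > 1:
        count += 1
        if n % 2 == 0:
            n //= 2
        else:
            n = 3 * n + 1
    return count

# Contrib-era conversion, for illustration:
#   tf_collatz = autograph.to_graph(collatz)
```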

AutoGraph can support arbitrarily nested control flow.
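For example, a loop nested inside a loop, with a branch and an early break (the prime-counting function here is an illustrative stand-in, not an example from the AutoGraph release):

```python
def count_primes(n):
    # Nested loops with a branch and an early break: the kind of
    # control flow AutoGraph can convert wholesale.
    count = 0
    for i in range(2, n):
        is_prime = True
        for j in range(2, i):
            if i % j == 0:
                is_prime = False
                break
        if is_prime:
            count += 1
    return count
```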

AutoGraph allows you to append elements to arrays inside loops. To make this work, we use the AutoGraph helpers set_element_type and stack.
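In plain Python, the pattern is just list.append in a loop. The commented calls sketch how the helpers named above would fit in; their exact signatures are an assumption on our part:

```python
def squares_up_to(n):
    # Append to a Python list inside a loop. Under AutoGraph this
    # pattern is supported via its helpers, roughly:
    #   autograph.set_element_type(l, tf.int32)  # declare element dtype
    #   ... append inside the loop ...
    #   return autograph.stack(l)                # pack into a tensor
    l = []
    for i in range(n):
        l.append(i * i)
    return l
```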

We also support constructs like break, continue, and even print and assert. When converted, a Python assert becomes a graph that uses the appropriate tf.Assert.
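For instance, a loop mixing continue with an input-checking assert, in plain Python (sum_even is an illustrative helper, not part of AutoGraph):

```python
def sum_even(items):
    # `continue` skips odd values; `assert` guards the input.
    # Under conversion, the assert would map to tf.Assert.
    s = 0
    for x in items:
        assert x >= 0, "expected non-negative values"
        if x % 2 != 0:
            continue
        s += x
    return s
```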

Having the ability to easily add loops, control flow, and more to graphs means that it’s easy to move the training loop into the graph. An example of this can be found in this notebook where we take an RNN training loop and execute it with a single sess.run() call. This could be useful in situations where you need to pass an entire training loop to an accelerator, rather than manage training via a CPU controller.
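As a pure-Python sketch of what moving the whole training loop into one unit means (the tiny linear model, learning rate, and step count here are all illustrative):

```python
def train_linear(xs, ys, lr=0.1, steps=200):
    # The entire fit -- loop, gradient, and update -- lives in one
    # function, which is the unit you could hand to the graph runtime
    # wholesale rather than driving each step from a CPU controller.
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of mean squared error for the model y = w * x.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w
```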

AutoGraph opens new ways of thinking about building and training models. We’re looking forward to adding more features to AutoGraph based on suggestions from the developer community, so please file issues with suggestions!

Graph Performance vs. Eager Execution

Eager execution is quite usable, but graphs are often much faster. Although benchmarking is complex (and depends on both the application and the hardware configuration), in this simple example we see a significant speedup when switching from eager execution to AutoGraph code that makes heavy use of if and while.

Ultimately, AutoGraph lets you use your dynamic and flow-control-heavy models on accelerator hardware like GPUs and Cloud TPUs, which is necessary when training large models on lots of data.

We are just starting the process of exploring performance. File an issue if you find a graph construct that runs slower than expected!

AutoGraph and Eager Execution

While using eager execution, you can still use graph execution for parts of your code via tf.contrib.eager.defun. This requires you to use graph-mode TensorFlow ops like tf.cond(). In the future, AutoGraph will be seamlessly integrated with defun to allow authoring graph code in plain eager-style Python. When that implementation is available, you can expect to use AutoGraph to speed up hotspots by selectively turning eager code into graph fragments.

Conclusion

AutoGraph is a tool that lets you easily build intuitive, complicated models that run effortlessly in the TensorFlow graph. It is experimental and currently lives in contrib, but we expect to move it into core TensorFlow soon.

Tell us your experience with AutoGraph! Please file issues and send messages to the TensorFlow Developer group if you have feedback, suggestions, or ideas.

Acknowledgements

We would like to acknowledge core contributions from Andrew Johnson, Bart van Merriënboer, Zachary Nado and Alex Passos. We would also like to thank the following colleagues: Akshay Agrawal, Mark Daoust, Josh Levenberg, Dougal Maclaurin, Rajat Monga, Mahima Pushkarna, Alexey Radul, D. Sculley and Asim Shankar.