The benefits of eager execution include:

Fast debugging with immediate run-time errors and integration with Python tools

Support for dynamic models using easy-to-use Python control flow

Strong support for custom and higher-order gradients

Almost all of the available TensorFlow operations

Using Eager Execution

When you enable eager execution, operations execute immediately and return their values to Python without requiring a Session.run(). For example, to multiply two matrices together, we write this:

import tensorflow as tf

import tensorflow.contrib.eager as tfe



tfe.enable_eager_execution()



x = [[2.]]

m = tf.matmul(x, x)



print(m)

# The 1x1 matrix [[4.]]
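Because the result is a concrete value rather than a symbolic handle, it plugs directly into Python tools. A small illustrative sketch, assuming the .numpy() method on eager tensors described in the user guide:

import numpy as np

n = m.numpy()           # extract the tensor's value as a NumPy array
print(np.square(n))     # [[ 16.]]
print(tf.matmul(n, n))  # NumPy arrays convert back to tensors automatically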



Dynamic models can be built with Python flow control. Here's an example of the Collatz conjecture using TensorFlow's arithmetic operations:

a = tf.constant(12)
while not tf.equal(a, 1):
  if tf.equal(a % 2, 0):
    a = a / 2
  else:
    a = 3 * a + 1
  print(a)



Here, the use of the tf.constant(12) tensor object promotes all the math operations to tensor operations, so all the return values are tensors.
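To make the promotion concrete, here is a small illustrative sketch (not from the original post) comparing tensor and plain-Python arithmetic:

a = tf.constant(12)
print(type(a % 2))   # a TensorFlow tensor: `a` promotes % to a tensor op
print(type(12 % 2))  # a plain Python int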

Gradients

Most TensorFlow users are interested in automatic differentiation. Because different operations can occur during each call, we record all forward operations to a tape, which is then played back when computing gradients. After the gradients are computed, the tape is discarded. If you're familiar with the autograd package, the API is very similar. For example:

def square(x):
  return tf.multiply(x, x)



grad = tfe.gradients_function(square)



print(square(3.)) # [9.]

print(grad(3.)) # [6.]



Here, the gradients_function call takes a Python function square() as an argument and returns a Python callable that computes the partial derivatives of square() with respect to its inputs. So, to get the derivative of square() at 3.0, invoke grad(3.0), which returns 6. The same gradients_function call can be used to get the second derivative of square():

gradgrad = tfe.gradients_function(lambda x: grad(x)[0])



print(gradgrad(3.)) # [2.]



Gradients also work through Python control flow:

def abs(x):
  return x if x > 0. else -x



grad = tfe.gradients_function(abs)



print(grad(2.0)) # [1.]

print(grad(-2.0)) # [-1.]
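The same tape-based machinery differentiates through Python loops as well, which is what makes gradients of dynamic models work. An illustrative sketch (the pow3 helper is made up here):

def pow3(x):
  result = x
  for _ in range(2):   # two multiplications: x * x * x
    result = result * x
  return result

grad_pow3 = tfe.gradients_function(pow3)
print(grad_pow3(2.))   # [12.] since d/dx x**3 = 3 * x**2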



Custom Gradients

One common use of custom gradients is to provide a numerically stable gradient for a sequence of operations:

def log1pexp(x):
  return tf.log(1 + tf.exp(x))

grad_log1pexp = tfe.gradients_function(log1pexp)



# The gradient computation works fine at x = 0.

print(grad_log1pexp(0.))

# [0.5]

# However it returns a `nan` at x = 100 due to numerical instability.

print(grad_log1pexp(100.))

# [nan]



Here, a custom gradient makes the computation numerically stable. Notice how the gradient function below reuses an expression (tf.exp(x)) that was computed during the forward pass, making the gradient computation more efficient by avoiding redundant computation:

@tfe.custom_gradient
def log1pexp(x):
  e = tf.exp(x)
  def grad(dy):
    return dy * (1 - 1 / (1 + e))
  return tf.log(1 + e), grad

grad_log1pexp = tfe.gradients_function(log1pexp)



# Gradient at x = 0 works as before.

print(grad_log1pexp(0.))

# [0.5]

# And now gradient computation at x=100 works as well.

print(grad_log1pexp(100.))

# [1.0]
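As a sanity check, note that d/dx log(1 + e^x) = 1 - 1/(1 + e^x), which is the logistic sigmoid, so the custom gradient can be compared against tf.sigmoid directly (illustrative sketch):

print(grad_log1pexp(2.))   # ~[0.880797]
print(tf.sigmoid(2.))      # ~0.880797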



Building models

Models can be organized in classes. Here's a model class that creates a (simple) two layer network that can classify the standard MNIST handwritten digits:

class MNISTModel(tfe.Network):
  def __init__(self):
    super(MNISTModel, self).__init__()
    self.layer1 = self.track_layer(tf.layers.Dense(units=10))
    self.layer2 = self.track_layer(tf.layers.Dense(units=10))

  def call(self, input):
    """Actually runs the model."""
    result = self.layer1(input)
    result = self.layer2(result)
    return result



We recommend inheriting from tfe.Network: a Network is a container of layers and is a tf.layer.Layer itself, so Network objects can be embedded in other Network objects, as in the sketch below. It also contains utilities to assist with inspection, saving, and restoring.
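Because a Network is a layer, Networks nest naturally. A minimal sketch (the TwoHeadedModel name and structure are made up for illustration):

class TwoHeadedModel(tfe.Network):
  """Embeds the MNISTModel above inside another Network."""

  def __init__(self):
    super(TwoHeadedModel, self).__init__()
    self.trunk = self.track_layer(MNISTModel())  # a Network tracked like any layer
    self.head = self.track_layer(tf.layers.Dense(units=2))

  def call(self, input):
    return self.head(self.trunk(input))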

Even without training, we can imperatively call the model and inspect the output:

# Let's make up a blank input image

model = MNISTModel()

batch = tf.zeros([1, 1, 784])

print(batch.shape)

# (1, 1, 784)

result = model(batch)

print(result)

# tf.Tensor([[[ 0. 0., ...., 0.]]], shape=(1, 1, 10), dtype=float32)



To train any model, we define a loss function to optimize, compute gradients, and use an optimizer to update the variables. First, the loss function:

def loss_function(model, x, y):
  y_ = model(x)
  return tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_)



And then the training loop:

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
for (x, y) in tfe.Iterator(dataset):  # `dataset` is a tf.data.Dataset of (image, label) batches
  grads = tfe.implicit_gradients(loss_function)(model, x, y)
  optimizer.apply_gradients(grads)



Here, implicit_gradients() computes the derivatives of loss_function with respect to all the TensorFlow variables used during its computation.
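When you also want the loss value for logging, the contrib module has a combined helper; a sketch assuming tfe.implicit_value_and_gradients is available in your build:

value_and_grads = tfe.implicit_value_and_gradients(loss_function)

for (x, y) in tfe.Iterator(dataset):
  loss, grads = value_and_grads(model, x, y)  # loss plus (gradient, variable) pairs
  print("Loss: %s" % loss)
  optimizer.apply_gradients(grads)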

We can move the computation to a GPU the same way we've always done with TensorFlow:

with tf.device("/gpu:0"):
  for (x, y) in tfe.Iterator(dataset):
    optimizer.minimize(lambda: loss_function(model, x, y))



(Note: we are shortcutting storing the loss and directly calling optimizer.minimize here, but you could also use the apply_gradients() method above; they are equivalent.)

Using Eager with Graphs

Eager execution makes development and debugging far more interactive, but TensorFlow graphs keep advantages for distributed training, performance optimization, and production deployment. When eager execution is not enabled, the same code that executes operations immediately will instead construct a graph describing the computation, to be run later via a Session.

How does my code change?

Using eager execution should be straightforward for current TensorFlow users. A few suggestions:

As with TensorFlow generally, we recommend that if you have not yet switched from queues to using tf.data for input processing, you should. It's easier to use and usually faster. For help, see this blog post and the documentation page.

Use object-oriented layers, like tf.layer.Conv2D() or Keras layers; these have explicit storage for variables.

For most models, you can write code so that it will work the same for both eager execution and graph construction. There are some exceptions, such as dynamic models that use Python control flow to alter the computation based on inputs. A minimal sketch of such mode-agnostic code follows this list.

Once you invoke tfe.enable_eager_execution(), it cannot be turned off. To get graph behavior, start a new Python session.
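Here is a minimal sketch of what mode-agnostic code can look like; the names are made up for illustration, and the graph half assumes a fresh Python session where eager execution was never enabled:

import numpy as np
import tensorflow as tf

dense = tf.layers.Dense(units=10)  # object-oriented layer: owns its variables

def model(x):
  return dense(x)

# With eager execution enabled, call it on concrete values:
#   out = model(tf.zeros([1, 784]))   # `out` holds real numbers immediately
#
# Without eager execution, the very same function builds graph nodes:
#   x = tf.placeholder(tf.float32, [None, 784])
#   out = model(x)                    # `out` is a symbolic tensor
#   with tf.Session() as sess:
#     sess.run(tf.global_variables_initializer())
#     print(sess.run(out, feed_dict={x: np.zeros((1, 784))}))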

Getting started and the future

To get started:

Install the nightly build of TensorFlow.

Check out the README (including known issues).

Get detailed instructions in the eager execution User Guide.

Browse the eager examples on GitHub.

Follow the changelog for updates.



