Implementing (parts of) TensorFlow (almost) from Scratch

A Walkthrough of Symbolic Differentiation

Jim Fleming (@jimmfleming)

main.py | graph.py | tensor.py | ops.py | session.py


This literate programming exercise will construct a simple 2-layer feed-forward neural network to compute the exclusive-or (XOR) function, using symbolic differentiation to compute the gradients automatically. The whole thing comes to about 500 lines of code, including comments, and the only functional dependency is numpy. I highly recommend reading Chris Olah's Calculus on Computational Graphs: Backpropagation for more background on what this code is doing.

The XOR task is convenient for a number of reasons: it's very fast to compute; it is not linearly separable, so it requires at least two layers, which makes the gradient calculation more interesting; and it doesn't require more complicated matrix features such as broadcasting.
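For concreteness, the whole training set is just the XOR truth table, which might look something like this as numpy arrays (the variable names here are only illustrative, not the ones used later in the code):

```python
import numpy as np

# The four input pairs and their XOR targets. No single line separates the
# 0s from the 1s, which is why a hidden layer is required.
X = np.array([[0., 0.],
              [0., 1.],
              [1., 0.],
              [1., 1.]])
y = np.array([[0.],
              [1.],
              [1.],
              [0.]])
```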

(I'm also working on a more involved example for MNIST but as soon as I added support for matrix shapes and broadcasting the code ballooned by 5x and it was no longer a simple example.)

Let's start by going over the architecture. We're going to use four main components:

Graph, composed of Tensor nodes and Op nodes that together represent the computation we want to differentiate.

Tensor represents a value in the graph. Tensors keep a reference to the operation that produced them, if any.

BaseOp represents a computation to perform and its differentiable components. Operations hold references to their input tensors and an output tensor.

Session is used to evaluate tensors in the graph.

Note that the return value from a graph operation is actually a tensor, representing the output of the operation.
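To make those relationships concrete, here is a rough sketch of how the four pieces could fit together. The class names match the components above, but the method names and signatures are my own simplification, not the actual code in graph.py, tensor.py, ops.py, or session.py, which layer gradients, shapes, and caching on top of this skeleton:

```python
class Graph:
    """Holds the tensors and operations that make up a computation."""

    def tensor(self, value=None, op=None):
        # Every value in the graph, including op outputs, is wrapped in a Tensor.
        return Tensor(value, graph=self, op=op)


class Tensor:
    """A value in the graph; remembers the op that produced it, if any."""

    def __init__(self, value=None, graph=None, op=None):
        self.value = value
        self.graph = graph
        self.op = op


class BaseOp:
    """A computation over input tensors, producing a single output tensor."""

    def __init__(self, inputs, graph):
        self.inputs = inputs
        self.graph = graph
        self.output = graph.tensor(op=self)  # the op's result is itself a tensor

    def compute(self, *values):
        raise NotImplementedError()

    def gradient(self, grad):
        raise NotImplementedError()


class Session:
    """Evaluates a tensor by recursively evaluating the op that produced it."""

    def __init__(self, graph):
        self.graph = graph

    def run(self, tensor, feed_dict=None):
        feed_dict = feed_dict or {}
        if tensor in feed_dict:
            return feed_dict[tensor]
        if tensor.op is not None:
            values = [self.run(inp, feed_dict) for inp in tensor.op.inputs]
            return tensor.op.compute(*values)
        return tensor.value


class AddOp(BaseOp):
    """Example op: element-wise addition of two inputs."""

    def compute(self, a, b):
        return a + b


# The return from a graph operation is a tensor, not a value:
graph = Graph()
a = graph.tensor(3.0)
b = graph.tensor(4.0)
c = AddOp([a, b], graph).output   # c is a Tensor produced by the AddOp
print(Session(graph).run(c))      # evaluates the graph and prints 7.0
```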