Yesterday, Google released version 2.0.0 of TensorFlow. According to the release notes, the most significant improvements are:

- Easy model building with Keras and eager execution.
- Robust model deployment in production on any platform.
- Powerful experimentation for research.
- API simplification by reducing duplication and removing deprecated endpoints.

Here are the full release notes: https://github.com/tensorflow/tensorflow/releases/tag/v2.0.0

So let’s get started with MNIST classification in 14 lines of code:

import tensorflow as tf

mnist = tf.keras.datasets.mnist

First, we import TensorFlow and alias it as tf. TensorFlow ships with a few built-in datasets in the module tf.keras.datasets. In this tutorial, we will use the MNIST dataset, a large collection of 28 × 28 grayscale images of handwritten digits from 0 to 9.
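As an aside, a few other datasets ship in the same module and are loaded the same way, for example:

fashion_mnist = tf.keras.datasets.fashion_mnist  # clothing images, same 28x28 grayscale format
cifar10 = tf.keras.datasets.cifar10              # 32x32 color images in 10 classes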

(x_train, y_train),(x_test, y_test) = mnist.load_data()

x_train, x_test = x_train / 255.0, x_test / 255.0

With mnist.load_data(), we load our training and test sets. The images have only one channel, and each pixel holds a value between 0 and 255. We normalize the values to the range 0–1.
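As a quick sanity check, we can inspect the arrays (a minimal sketch; the shapes in the comments are what mnist.load_data() returns):

print(x_train.shape)                 # (60000, 28, 28) -- 60,000 training images of 28x28 pixels
print(x_test.shape)                  # (10000, 28, 28) -- 10,000 test images
print(x_train.min(), x_train.max())  # 0.0 1.0 after normalization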

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(10, activation='softmax')
])

A sequential model is a linear stack of layers. We specify the input shape in the first layer, in this case (28, 28), because our images are 28 × 28 pixels. The Flatten layer turns each image into a one-dimensional array of 784 values, which feeds into a dense layer with 128 neurons and a ReLU activation function.
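To verify the architecture, we can print a summary; the parameter counts in the comments follow from the layer sizes (784 × 128 + 128 = 100,480 for the hidden layer, 128 × 10 + 10 = 1,290 for the output layer):

model.summary()
# flatten (Flatten)   -> output shape (None, 784),       0 parameters
# dense (Dense)       -> output shape (None, 128), 100,480 parameters
# dropout (Dropout)   -> output shape (None, 128),       0 parameters
# dense_1 (Dense)     -> output shape (None, 10),    1,290 parameters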

To prevent overfitting, we use dropout. Dropout is a regularization technique that randomly deactivates a fraction of the units during training, which forces the network not to rely on any single neuron.
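Here is a small standalone sketch of what dropout does to a batch of ones in training mode versus inference mode:

import tensorflow as tf

layer = tf.keras.layers.Dropout(0.5)
data = tf.ones((1, 8))

# In training mode, roughly half of the entries are set to zero and the
# survivors are scaled by 1 / (1 - rate), so the expected sum stays the same.
print(layer(data, training=True))

# In inference mode, dropout is a no-op and the input passes through unchanged.
print(layer(data, training=False))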

The last layer is a dense layer with a softmax activation function. It has ten neurons, one per digit class, so each label can be represented as a one-hot vector:

[1, 0, 0, 0, 0, 0, 0, 0, 0, 0] → 0
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0] → 1
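TensorFlow can produce this encoding with tf.one_hot; here is a quick sketch. Note that the sparse categorical cross-entropy loss we use below takes integer labels directly, so we never have to do this conversion ourselves:

print(tf.one_hot([0, 1, 9], depth=10))
# [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
#  [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]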

Softmax is an activation function that turns numbers into probabilities that sum to one. So the output from our neural network could look like this:

[0.05, 0.02, 0.58, 0.1, 0.03, 0.05, 0.01, 0.01, 0.1, 0.05]

The numbers represent the model’s “confidence” that the image corresponds to each of the ten classes. In this example, the network assigns the highest probability, 58%, to the digit 2.
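As a small sketch with made-up logit values, we can apply tf.nn.softmax to raw scores and verify that the result sums to one:

logits = tf.constant([2.0, 1.0, 0.1])
probs = tf.nn.softmax(logits)

print(probs)                         # ~[0.659, 0.242, 0.099]
print(float(tf.reduce_sum(probs)))   # 1.0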

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)

model.evaluate(x_test, y_test)

We compile our model with the Adam optimizer and sparse categorical cross-entropy as the loss function. With model.fit, the training starts. After five epochs, we reach an accuracy of about 98% on the test set. Because of random weight initialization and randomness during training, your score can differ slightly.
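Once trained, we can ask the model for predictions. Here is a minimal sketch: model.predict returns one probability vector per image, and argmax picks the most likely class:

import numpy as np

probabilities = model.predict(x_test[:1])       # shape (1, 10), one probability per digit
predicted_digit = np.argmax(probabilities, axis=1)

print(predicted_digit[0], y_test[0])            # predicted class vs. true label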

If you want to know more about TensorFlow, check out the official TensorFlow tutorials: https://www.tensorflow.org/tutorials