

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.

What is a Tensor?

The central unit of data in TensorFlow is the tensor. A tensor consists of a set of primitive values shaped into an array of any number of dimensions. A tensor’s rank is its number of dimensions. Here are some examples of tensors:

```python
3                                   # a rank 0 tensor; this is a scalar with shape []
[1., 2., 3.]                        # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]]        # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]]    # a rank 3 tensor with shape [2, 1, 3]
```
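The rank and shape of each example above can be checked quickly with NumPy's `ndim` and `shape` (TensorFlow's `tf.rank` and `tf.shape` behave analogously); a minimal sketch:

```python
import numpy as np

# Rank is simply the number of dimensions of the array.
scalar = np.array(3.)                                # rank 0, shape ()
vector = np.array([1., 2., 3.])                      # rank 1, shape (3,)
matrix = np.array([[1., 2., 3.], [4., 5., 6.]])      # rank 2, shape (2, 3)
cube   = np.array([[[1., 2., 3.]], [[7., 8., 9.]]])  # rank 3, shape (2, 1, 3)

for t in (scalar, vector, matrix, cube):
    print(t.ndim, t.shape)
```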

Install TensorFlow …

The pip or pip3 package manager is usually installed on Ubuntu. Take a moment to confirm (by issuing a pip -V or pip3 -V command) that pip or pip3 is installed.

```shell
$ sudo apt-get install python3-pip python3-dev   # for Python 3.n

$ pip3 install tensorflow       # Python 3.n; CPU support (no GPU support)

$ pip3 install tensorflow-gpu   # Python 3.n; GPU support
```

Google Inception:

Transfer learning is a machine learning method that reuses a pre-trained neural network. The pre-trained Inception-v3 model achieves state-of-the-art accuracy at recognizing general objects across 1,000 classes, such as “zebra”, “Dalmatian”, and “dishwasher”.

```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import os.path  # useful functions on pathnames; to read or write files see open()
import re       # regular expression matching operations
import sys      # variables and functions that interact with the interpreter
import tarfile  # makes it possible to read and write tar archives

import numpy as np
from six.moves import urllib  # six.moves exposes modules under a common name for Python 2 and 3
import tensorflow as tf
```

The argparse module makes it easy to write user-friendly command-line interfaces. The program defines what arguments it requires, and argparse figures out how to parse them out of sys.argv.
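A miniature of the flag handling used at the bottom of this script (the extra positional argument here is invented to show how `parse_known_args` separates unrecognized arguments):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--num_top_predictions', type=int, default=5)

# parse_known_args returns (namespace, leftover_args)
flags, unparsed = parser.parse_known_args(['--num_top_predictions', '3', 'extra'])
print(flags.num_top_predictions)  # 3
print(unparsed)                   # ['extra']
```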

The os.path module implements some useful functions on pathnames. To read or write files, see open(); for accessing the filesystem, see the os module.

The re module provides regular expression matching operations.
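The regex used later in `NodeLookup.load` can be tried in isolation. A sketch, assuming the lines of `imagenet_synset_to_human_label_map.txt` are tab-separated (the tab produces an empty match between the UID and the human-readable string, which is why the code reads indices 0 and 2):

```python
import re

line = 'n00004475\torganism, being'  # example line from the lookup file

p = re.compile(r'[n\d]*[ \S,]*')
parsed_items = p.findall(line)
print(parsed_items[0])  # n00004475        (the synset UID)
print(parsed_items[2])  # organism, being  (the human-readable string)
```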

The sys module provides access to some variables used or maintained by the interpreter and to functions that interact strongly with the interpreter.

The tarfile module makes it possible to read and write tar archives.

six is a package that helps in writing code compatible with both Python 2 and Python 3; the six.moves module provides renamed modules under a common name for both versions.

```python
FLAGS = None

DATA_URL = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'
```

```python
class NodeLookup(object):
  """Converts integer node IDs to human-readable labels."""

  def __init__(self, label_lookup_path=None, uid_lookup_path=None):
    if not label_lookup_path:
      label_lookup_path = os.path.join(
          FLAGS.model_dir, 'imagenet_2012_challenge_label_map_proto.pbtxt')
    if not uid_lookup_path:
      uid_lookup_path = os.path.join(
          FLAGS.model_dir, 'imagenet_synset_to_human_label_map.txt')
    self.node_lookup = self.load(label_lookup_path, uid_lookup_path)

  def load(self, label_lookup_path, uid_lookup_path):
    if not tf.gfile.Exists(uid_lookup_path):
      tf.logging.fatal('File does not exist %s', uid_lookup_path)
    if not tf.gfile.Exists(label_lookup_path):
      tf.logging.fatal('File does not exist %s', label_lookup_path)

    # Loads mapping from string UID to human-readable string.
    proto_as_ascii_lines = tf.gfile.GFile(uid_lookup_path).readlines()
    uid_to_human = {}
    p = re.compile(r'[n\d]*[ \S,]*')
    for line in proto_as_ascii_lines:
      parsed_items = p.findall(line)  # e.g. line 'n00004475\torganism, being'
      uid = parsed_items[0]
      human_string = parsed_items[2]
      uid_to_human[uid] = human_string  # e.g. 'organism, being'

    # Loads mapping from string UID to integer node ID.
    node_id_to_uid = {}
    proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines()
    for line in proto_as_ascii:
      if line.startswith('  target_class:'):
        target_class = int(line.split(': ')[1])
      if line.startswith('  target_class_string:'):
        target_class_string = line.split(': ')[1]
        node_id_to_uid[target_class] = target_class_string[1:-2]

    # Loads the final mapping of integer node ID to human-readable string.
    node_id_to_name = {}
    for key, val in node_id_to_uid.items():
      if val not in uid_to_human:
        tf.logging.fatal('Failed to locate: %s', val)
      name = uid_to_human[val]
      node_id_to_name[key] = name

    return node_id_to_name

  def id_to_string(self, node_id):
    if node_id not in self.node_lookup:
      return ''
    return self.node_lookup[node_id]
```
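The two-stage mapping that `load` builds can be sketched with toy data (the UIDs and node IDs below are invented for illustration; the real values come from the two lookup files):

```python
# Stage 1: synset UID -> human-readable string
# (parsed from imagenet_synset_to_human_label_map.txt)
uid_to_human = {'n00000001': 'dog, domestic dog', 'n00000002': 'swab, mop'}

# Stage 2: integer node ID -> synset UID
# (parsed from imagenet_2012_challenge_label_map_proto.pbtxt)
node_id_to_uid = {169: 'n00000001', 891: 'n00000002'}

# Final mapping: integer node ID -> human-readable string
node_id_to_name = {key: uid_to_human[val] for key, val in node_id_to_uid.items()}
print(node_id_to_name[169])  # dog, domestic dog
```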

```python
def create_graph():
  """Creates a graph from saved GraphDef file and returns a saver."""
  # Creates graph from saved graph_def.pb.
  with tf.gfile.FastGFile(os.path.join(
      FLAGS.model_dir, 'classify_image_graph_def.pb'), 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')
```

tf.import_graph_def provides a way to import a serialized TensorFlow GraphDef protocol buffer and extract the individual objects in the GraphDef as tf.Tensor and tf.Operation objects. Once extracted, these objects are placed into the current default Graph.

```python
def run_inference_on_image(image):
  """Runs inference on an image.

  Args:
    image: Image file name.

  Returns:
    Nothing
  """
  if not tf.gfile.Exists(image):
    tf.logging.fatal('File does not exist %s', image)
  image_data = tf.gfile.FastGFile(image, 'rb').read()  # image data in bytes

  # Creates graph from saved GraphDef.
  create_graph()

  with tf.Session() as sess:
    # Write the graph so it can be visualized in TensorBoard.
    writer = tf.summary.FileWriter("/home/akash/Desktop/cue", sess.graph)
    writer.close()
    # Some useful tensors:
    # 'softmax:0': A tensor containing the normalized prediction across
    #   1000 labels.
    # 'pool_3:0': A tensor containing the next-to-last layer containing 2048
    #   float description of the image.
    # 'DecodeJpeg/contents:0': A tensor containing a string providing JPEG
    #   encoding of the image.
    # Runs the softmax tensor by feeding the image_data as input to the graph.
    softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
    predictions = sess.run(softmax_tensor,
                           {'DecodeJpeg/contents:0': image_data})
    predictions = np.squeeze(predictions)

    # Creates node ID --> English string lookup.
    node_lookup = NodeLookup()

    top_k = predictions.argsort()[-FLAGS.num_top_predictions:][::-1]
    for node_id in top_k:
      human_string = node_lookup.id_to_string(node_id)
      score = predictions[node_id]
      print('%s (score = %.5f)' % (human_string, score))
```
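The squeeze-and-top-k selection at the end of the function can be illustrated with a dummy prediction vector (the scores below are invented; the real vector has 1,000 entries):

```python
import numpy as np

predictions = np.array([[0.05, 0.70, 0.05, 0.15, 0.05]])  # shape (1, 5), as sess.run returns
predictions = np.squeeze(predictions)                      # drop the batch dimension -> shape (5,)

num_top_predictions = 2
# argsort is ascending, so take the last k indices and reverse them
top_k = predictions.argsort()[-num_top_predictions:][::-1]
print(top_k)  # [1 3]
```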

```python
def maybe_download_and_extract():
  """Download and extract model tar file."""
  dest_directory = FLAGS.model_dir
  if not os.path.exists(dest_directory):
    os.makedirs(dest_directory)
  filename = DATA_URL.split('/')[-1]
  filepath = os.path.join(dest_directory, filename)
  if not os.path.exists(filepath):
    def _progress(count, block_size, total_size):
      sys.stdout.write('\r>> Downloading %s %.1f%%' % (
          filename, float(count * block_size) / float(total_size) * 100.0))
      sys.stdout.flush()
    filepath, _ = urllib.request.urlretrieve(DATA_URL, filepath, _progress)
    print()
    statinfo = os.stat(filepath)
    print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
  tarfile.open(filepath, 'r:gz').extractall(dest_directory)
```
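urlretrieve calls the `_progress` reporthook with `(count, block_size, total_size)` after each block; the percentage arithmetic can be checked in isolation (the byte counts below are made up):

```python
def percent(count, block_size, total_size):
    # Fraction of bytes downloaded so far, as computed inside _progress.
    return float(count * block_size) / float(total_size) * 100.0

# 5 blocks of 8192 bytes out of 81920 total -> halfway done
print('%.1f%%' % percent(5, 8192, 81920))  # 50.0%
```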

```python
def main(_):
  maybe_download_and_extract()
  image = (FLAGS.image_file if FLAGS.image_file else
           os.path.join(FLAGS.model_dir, 'cropped_panda.jpg'))
  run_inference_on_image(image)
```

```python
if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  # classify_image_graph_def.pb:
  #   Binary representation of the GraphDef protocol buffer.
  # imagenet_synset_to_human_label_map.txt:
  #   Map from synset ID to a human-readable string.
  # imagenet_2012_challenge_label_map_proto.pbtxt:
  #   Text representation of a protocol buffer mapping a label to synset ID.
  parser.add_argument(
      '--model_dir',
      type=str,
      default='/tmp/imagenet',
      help='Path to classify_image_graph_def.pb, '
           'imagenet_synset_to_human_label_map.txt, and '
           'imagenet_2012_challenge_label_map_proto.pbtxt.'
  )
  parser.add_argument(
      '--image_file',
      type=str,
      default='/home/akash/Pictures/komondor.jpg',
      help='Absolute path to image file.'
  )
  parser.add_argument(
      '--num_top_predictions',
      type=int,
      default=5,
      help='Display this many predictions.'
  )
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
```

Let’s test it by distinguishing between a mop and a Komondor dog (a Hungarian sheepdog); Google even mentioned in their presentation that human eyes sometimes cannot tell them apart.

For the mop we get a score of 0.99.

For the Komondor we get a score of 0.93.