Welcome to the Yann Toolbox. It is a toolbox for building and learning convolutional neural networks, built on top of theano. This toolbox is an homage to Prof. Yann LeCun, one of the earliest pioneers of CNNs. To set up the toolbox, refer to the Installation Guide. Once set up, you may start with the Quick Start guide or try your hand at the Tutorials and the guide to Getting Started. A user discussion group is set up on gitter and also on google groups.

If you are here for the theano-tensorflow migration tool, click [here](http://www.tf-lenet.readthedocs.io).

Yann is currently in its early phases and is undergoing massive development. Expect a lot of changes. Unit tests are only starting to be written, so the coverage and travis build passes are not to be completely trusted. The toolbox will be formalized in the future, but at the moment the authorship, coverage and maintenance of the toolbox are under extremely limited manpower.

For this reason, Yann is specifically designed to be intuitive and easy to use for beginners. That does not compromise any of Yann's core purpose - to be able to build CNNs in a plug-and-play fashion. It is still a good choice of toolbox for running pre-trained models and for building complicated, non-vanilla CNN architectures that are not easy to build with other toolboxes. It is also a good choice for researchers and industrial scientists who want to quickly prototype networks and test them before developing production-scale models.

While there are more formal and comprehensive toolboxes that are similar and have much larger user bases, such as Lasagne, Keras, Blocks and Caffe, this toolbox is designed differently: it is much simpler and more versatile. Yann is designed as a supplement to an upcoming beginner's book on Convolutional Neural Networks and also as the toolbox of choice for an introductory course on deep learning for computer vision.

Quick Start

The easiest way to get going with Yann is to follow this quick start guide. If you are not satisfied and want a more detailed introduction to the toolbox, refer to the Tutorials and the Structure of the Yann network. This tutorial was also presented in CSE591 at ASU, and the video of the presentation is available. A more detailed Jupyter Notebook version of this tutorial is available here.

To install quickly, without many dependencies, run the following command:

pip install git+git://github.com/ragavvenkatesan/yann.git

If there was an error installing skdata, you might want to install numpy and scipy independently first and then re-run the above command. Note that this installer does not enable many options of the toolbox; for those you need to go through the complete install described on the Installation Guide page.

Verify that the installed version of theano is indeed 0.9 or greater by doing the following in a python shell:

import theano
theano.__version__

If the version is not 0.9, you can install 0.9 by doing the following:

pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git

The start and the end of the Yann toolbox is the network module. The yann.network.network object is where all the magic happens. Start by importing network and creating a network object in a python shell.

from yann.network import network
net = network()

Voila! We have thus created a new network. The network doesn't have any layers or modules in it yet. This can be verified by probing the net.layers property of the net object.

net.layers

This will produce an output which is essentially an empty dictionary {}. Let's add some layers! The toolbox comes with a port to skdata, through which the MNIST dataset of handwritten digits can be built.

To cook an MNIST dataset for yann, run the following code:

from yann.special.datasets import cook_mnist
cook_mnist()

Running this code will print a statement to the following effect: >>Dataset xxxxx is created. The five digits marked xxxxx in the statement are the codeword for the dataset. The dataset is now located at _datasets/_dataset_xxxxx/, relative to the directory from which this code was called. The MNIST dataset is created and stored there in a format configured for yann to work with. Refer to the Tutorials on how to convert your own dataset for yann.
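Since the codeword differs on every run, a script cannot hard-code the path; it has to discover it. Here is a minimal, stdlib-only sketch of that discovery step, simulating the directory layout in a temp dir (the codeword 41526 below is made up for illustration, not one yann produced):

```python
import glob
import os
import tempfile

# Simulate the layout cook_mnist() produces; the five-digit codeword
# (shown as xxxxx above) differs on every run. 41526 is a made-up codeword.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "_datasets", "_dataset_41526"))

# Discover whatever codeword directory exists under _datasets/.
candidates = glob.glob(os.path.join(root, "_datasets", "_dataset_*"))
dataset_dir = candidates[0]
print(os.path.basename(dataset_dir))
```

In a real session you would glob relative to the directory you called cook_mnist() from, and pass the discovered path as the "dataset" entry of dataset_params below.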

The first layer that we need to add to our network now is an input layer. Every input layer requires a dataset to be associated with it. Let us create this layer.

dataset_params = { "dataset": "_datasets/_dataset_xxxxx", "n_classes": 10 }
net.add_layer(type="input", dataset_init_args=dataset_params)

This piece of code creates and adds a new datastream module to the net and wires up the newly added input layer with this datastream. Confirm this by checking net.datastream. Let us now build a classifier layer. The default classifier that yann is set up with is the logistic regression classifier. Refer to the Toolbox Documentation or Tutorials for other types of layers. Let us create this classifier layer for now.

net.add_layer(type="classifier", num_classes=10)
net.add_layer(type="objective")

The objective layer creates, from the classifier, the loss function that can be used as a learning metric. It also provides a scope for other modules such as the optimizer module. Refer to the Structure of the Yann network and the Toolbox Documentation for more details on modules. Now that our network is created and constructed, we can see that the net object has its layers populated.

net.layers
>> {'1': <yann.network.layers.classifier_layer object at 0x7eff9a7d0050>, '0': <yann.network.layers.input_layer object at 0x7effa410d6d0>, '2': <yann.network.layers.objective_layer object at 0x7eff9a71b210>}

The keys of the dictionary, such as '1', '0' and '2', are the ids of the layers. We could have created a layer with a custom id by supplying an id argument to the add_layer method. To get a better idea of what the network looks like, you can use the pretty_print method in yann.

net.pretty_print()
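The id bookkeeping above can be pictured with a plain dict standing in for net.layers. The add_layer stand-in below is a hypothetical sketch of the auto-numbering behaviour described here (string ids counting up unless a custom id is supplied), not yann's actual implementation:

```python
# Hypothetical sketch of layer-id assignment, with a plain dict standing in
# for net.layers: ids auto-increment as strings unless a custom id is given.
layers = {}

def add_layer(layers, layer, id=None):
    key = str(len(layers)) if id is None else id
    layers[key] = layer
    return key

add_layer(layers, "input")                   # assigned id '0'
add_layer(layers, "classifier")              # assigned id '1'
add_layer(layers, "objective", id="loss")    # custom id supplied by caller
print(sorted(layers))
```

The same custom-id idea applies in yann itself: pass id="..." to add_layer and that string becomes the key in net.layers.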

Now our network is finally ready to be trained. Before training, we need to build an optimizer and other tools, but for now let us use the default ones. Once all of this is done, yann requires that the network be 'cooked'. For more details on cooking, refer to the Structure of the Yann network. For now, imagine that cooking a network will finalize the wiring and architecture, cache and prepare the first batch of data, prepare the modules and, in general, prepare the network for training using back propagation.

net.cook()

Cooking will take a few seconds and might print what it is doing along the way. Once cooked, we may notice, for instance, that the network has an optimizer module.

net.optimizer
>> {'main': <yann.network.modules.optimizer object at 0x7eff9a7c1b10>}

To train the model that we have just cooked, we can use the train function that becomes available to us once the network is cooked.

net.train()

This will print progress for each epoch and show validation accuracy after each epoch on a validation set that is independent of the training set. By default, the training runs for 40 epochs: 20 at a higher learning rate and 20 more at a fine-tuning learning rate.
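The two-phase default can be pictured as a list of per-epoch learning rates; the rate values below are placeholders for illustration, not yann's actual defaults:

```python
# Sketch of the default 40-epoch schedule: 20 epochs at a higher learning
# rate, then 20 fine-tuning epochs. 0.01 and 0.001 are illustrative values.
higher_rate, fine_tune_rate = 0.01, 0.001
schedule = [higher_rate] * 20 + [fine_tune_rate] * 20
print(len(schedule), schedule[0], schedule[-1])
```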

Every layer also has a layer.output object. The output can be probed by using the layer_activity method, as long as the layer is directly or indirectly associated with a datastream module through an input layer and the network has been cooked. Let us observe the activity of the input layer for trial. Once trained, we can observe this output. The layer activity will just be a numpy array of numbers, so let us print its shape instead.

net.layer_activity(id='0').shape
net.layers['0'].output_shape

The second line of code will verify the output we produced in the first line. An interesting layer output is the output of the objective layer, which will give us the current negative log likelihood of the network, the one that we are trying to minimize.

net.layer_activity(id='2')
>> array(0.3926551938056946, dtype=float32)
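As a reminder of what that number means: the negative log likelihood over a batch is the mean of -log(p) taken over the probabilities the classifier assigns to the correct classes. A minimal sketch with made-up probabilities:

```python
import math

# Negative log likelihood over a batch: mean of -log(p_correct).
# The probabilities below are made up for illustration.
probs_of_true_class = [0.9, 0.7, 0.5]
nll = sum(-math.log(p) for p in probs_of_true_class) / len(probs_of_true_class)
print(round(nll, 3))
```

Training drives this quantity down by pushing the probability of each correct class towards 1.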

Once we are done training, we can run the network feedforward on the testing set to produce a generalization performance result.

net.test()

Congratulations, you now know how to use the yann toolbox successfully. A full-fledged version of the logistic regression code that we implemented here can be found here. That piece of code also has in-line commentary that briefly discusses other options that could be supplied to some of the function calls we made here, explaining the processes better.

Hope you liked this quick start guide to the Yann toolbox and have fun!