Poutyne is compatible with the latest version of PyTorch and Python >= 3.6.

Use callbacks to save your best model, perform early stopping, and much more.

Poutyne is a Keras-like framework for PyTorch that handles much of the boilerplate code needed to train neural networks.

Getting started: a few seconds to Poutyne

The core data structure of Poutyne is a Model, a way to train your own PyTorch neural networks.

With Poutyne, you create your PyTorch module (neural network) as usual, but when the time comes to train it, you feed it into a Poutyne Model, which handles all the training steps, statistics and callbacks, similar to what Keras does.

Here is a simple example:

```python
# Import the Poutyne Model and define a toy dataset
from poutyne import Model
import torch
import torch.nn as nn
import numpy as np

num_features = 20
num_classes = 5
hidden_state_size = 100

num_train_samples = 800
train_x = np.random.randn(num_train_samples, num_features).astype('float32')
train_y = np.random.randint(num_classes, size=num_train_samples).astype('int64')

num_valid_samples = 200
valid_x = np.random.randn(num_valid_samples, num_features).astype('float32')
valid_y = np.random.randint(num_classes, size=num_valid_samples).astype('int64')

num_test_samples = 200
test_x = np.random.randn(num_test_samples, num_features).astype('float32')
test_y = np.random.randint(num_classes, size=num_test_samples).astype('int64')
```

Select a PyTorch device so that it runs on GPU if you have one:

```python
cuda_device = 0
device = torch.device("cuda:%d" % cuda_device if torch.cuda.is_available() else "cpu")
```

Create yourself a PyTorch network:

```python
network = nn.Sequential(
    nn.Linear(num_features, hidden_state_size),
    nn.ReLU(),
    nn.Linear(hidden_state_size, num_classes),
)
```

You can now use Poutyne’s model to train your network easily:

```python
model = Model(network, 'sgd', 'cross_entropy',
              batch_metrics=['accuracy'], epoch_metrics=['f1'])
model.to(device)
model.fit(train_x, train_y,
          validation_data=(valid_x, valid_y),
          epochs=5, batch_size=32)
```

This is really similar to the model.compile and model.fit functions in Keras.

You can evaluate the performance of your network using the evaluate method of Poutyne's model:

```python
loss, (accuracy, f1score) = model.evaluate(test_x, test_y)
```

Or only predict on new data:

```python
predictions = model.predict(test_x)
```
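The predictions returned above are the network's raw outputs: one row of class scores per sample. A minimal sketch (using toy scores in place of the real predictions) of turning such scores into class labels with NumPy's argmax:

```python
import numpy as np

# Toy class scores standing in for the output of model.predict():
# one row per sample, one column per class.
predictions = np.array([[0.1, 2.0, 0.3],
                        [1.5, 0.2, 0.1],
                        [0.0, 0.1, 3.2]])

# The predicted class for each sample is the index of its highest score.
predicted_classes = np.argmax(predictions, axis=1)
print(predicted_classes)  # → [1 0 2]
```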

See the complete code here. Also, see this for a regression example that also uses epoch metrics.

One of Poutyne's strengths is its callbacks. They allow you to save checkpoints, log training statistics and much more. See this notebook for an introduction to callbacks. In the same vein, Poutyne also offers an Experiment class that provides automatic checkpointing, logging and more, using callbacks under the hood. Here is an example of its usage.

```python
from poutyne import Experiment, TensorDataset
from torch.utils.data import DataLoader

# We need to use dataloaders (i.e. an iterable of batches) with Experiment
train_loader = DataLoader(TensorDataset(train_x, train_y), batch_size=32)
valid_loader = DataLoader(TensorDataset(valid_x, valid_y), batch_size=32)
test_loader = DataLoader(TensorDataset(test_x, test_y), batch_size=32)

# Everything is saved in ./expt/my_classification_network
expt = Experiment('./expt/my_classification_network', network,
                  device=device, optimizer='sgd', task='classif')

expt.train(train_loader, valid_loader, epochs=5)
expt.test(test_loader)
```

See the complete code here. Also, see this for a regression example that likewise uses epoch metrics.

As you can see, Poutyne is heavily inspired by the friendliness of Keras.