You can define a computation graph (neural network) intuitively with a small amount of code. Defining a two-layer neural network with a softmax loss requires only five simple lines of code.
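As a minimal sketch of what such a definition could look like (built only from the nn, F, and PF calls that also appear in the dynamic example below; the import aliases, layer names, and sizes here are illustrative assumptions, not the Library's canonical example):

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

batch_size = 32                                # Illustrative batch size.
x = nn.Variable((batch_size, 784))             # Input variable (e.g., flattened images).
t = nn.Variable((batch_size, 1))               # Target label variable.
h = F.relu(PF.affine(x, 256, name='affine1'))  # First (hidden) layer with ReLU.
y = PF.affine(h, 10, name='affine2')           # Second (output) layer.
loss = F.mean(F.softmax_cross_entropy(y, t))   # Softmax cross-entropy loss.

The five lines from x through loss define the whole graph; in static mode it is executed afterwards with forward and backward calls on the loss variable.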

Dynamic Computation Graph Support

A static computation graph, the approach traditionally used by most frameworks, is built in full before it is executed. A dynamic computation graph, by contrast, is constructed at runtime, which enables flexible network structures that can change from one run to the next. The Library supports both paradigms. Here is a dynamic computation graph example in the Library.

import numpy as np
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

x = nn.Variable(input_shape)
x.d = some_data    # Placeholder: a NumPy array with shape input_shape.
t = nn.Variable(target_shape)
t.d = some_target  # Placeholder: a NumPy array of target labels.

with nn.auto_forward():
    # Under auto_forward, each function is executed as soon as it is
    # added to the graph.
    h = F.relu(PF.convolution(x, hidden_size, (3, 3), pad=(1, 1),
                              name='conv0'))
    for i in range(num_stochastic_layers):
        if np.random.rand() < layer_drop_ratio:
            continue  # Stochastically drop a layer.
        h2 = F.relu(PF.convolution(x, hidden_size, (3, 3), pad=(1, 1),
                                   name='conv%d' % (i + 1)))
        h = F.add2(h, h2)
    y = PF.affine(h, target_size, name='classification')
    loss = F.mean(F.softmax_cross_entropy(y, t))

# Backward computation can also be done in a dynamically executed graph.
loss.backward()

The memory caching system implemented in the Library keeps previously allocated arrays in a cache for reuse, enabling fast execution of such dynamically constructed graphs without per-iteration memory allocation overhead.
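To illustrate the general idea, here is a conceptual sketch of a size-bucketed buffer cache. This is not the Library's actual allocator; the class and method names are hypothetical, and a real implementation would manage device memory rather than NumPy arrays.

from collections import defaultdict

import numpy as np


class CachingAllocator:
    """Conceptual sketch: reuse freed buffers instead of reallocating."""

    def __init__(self):
        # Free lists of reusable buffers, keyed by element count.
        self._free = defaultdict(list)

    def alloc(self, size):
        # Reuse a cached buffer of the same size if one is available;
        # otherwise fall back to a fresh allocation.
        if self._free[size]:
            return self._free[size].pop()
        return np.empty(size, dtype=np.float32)

    def free(self, buf):
        # Instead of releasing the memory, keep it for later reuse.
        self._free[buf.size].append(buf)


# In a dynamic graph, intermediate buffers are created and released on
# every forward pass; with a cache like this, steady-state passes can
# reuse buffers instead of allocating new ones.
allocator = CachingAllocator()
a = allocator.alloc(1024)
allocator.free(a)
b = allocator.alloc(1024)  # Reuses the buffer freed above.
assert b is a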