Network Pruning via Transformable Architecture Search

This paper addresses network pruning. It proposes applying neural architecture search directly to a network with flexible channel counts and layer depths, so that the number of channels per layer is learned by minimizing the loss of the pruned network.

The feature map of the pruned network is an aggregation of K feature map fragments, sampled according to a learned probability distribution over candidate channel counts. The loss is backpropagated both to the network weights and to the parameters of this distribution.
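The sampling-and-aggregation step above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the candidate widths, the Gumbel-softmax relaxation, and the zero-padding of smaller fragments to a common size (the paper uses a more elaborate channel-wise interpolation) are all assumptions made for clarity, and the backpropagation to the distribution parameters is only possible in an autograd framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    # Relaxed sampling from a categorical distribution; in an autograd
    # framework this lets gradients flow back to the logits.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    e = np.exp(y - y.max())
    return e / e.sum()

# Candidate channel counts for one layer (assumed, for illustration)
widths = [8, 16, 24, 32]
C_max = max(widths)

# Architecture parameters: one logit per candidate width
alpha = np.zeros(len(widths))

# A full feature map with C_max channels (4x4 spatial, batch omitted)
feat = rng.standard_normal((C_max, 4, 4))

# Sample K candidate widths and aggregate their fragments,
# weighted by the relaxed sampling probabilities
K = 2
probs = gumbel_softmax(alpha)
top_k = np.argsort(probs)[-K:]
w = probs[top_k] / probs[top_k].sum()  # renormalize over the K picks

out = np.zeros_like(feat)
for weight, idx in zip(w, top_k):
    c = widths[idx]
    frag = np.zeros_like(feat)
    frag[:c] = feat[:c]   # fragment: keep only the first c channels
    out += weight * frag  # weighted sum of the K fragments

print(out.shape)  # (32, 4, 4)
```

Because the aggregation weights come from a differentiable relaxation rather than a hard sample, minimizing the task loss can simultaneously update the layer weights and shift probability mass toward the channel count that hurts the loss least.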

The pruning approach proposed in the paper proceeds in three stages: