Pruning Algorithms Supported in NNI

Note that not all pruners from the previous version have been migrated to the new framework yet. NNI plans to migrate all previously implemented pruners in NNI 3.2.

If an old pruner you need has not yet been migrated, or if you believe another pruning algorithm would be valuable, please feel free to contact us. We will prioritize and expedite support accordingly.


Brief Introduction to the Algorithms

Level Pruner

Pruning the specified ratio of weight elements based on the absolute value of each element
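A minimal NumPy sketch of this element-wise magnitude criterion (the function name and signature are illustrative, not NNI's API):

```python
import numpy as np

def level_prune_mask(weight: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a 0/1 mask that zeroes the `sparsity` fraction of weight
    elements with the smallest absolute value."""
    k = int(sparsity * weight.size)
    if k == 0:
        return np.ones_like(weight)
    # Threshold = k-th smallest absolute value; everything at or below it is pruned.
    threshold = np.sort(np.abs(weight), axis=None)[k - 1]
    return (np.abs(weight) > threshold).astype(weight.dtype)

w = np.array([[0.1, -0.9], [0.5, -0.05]])
mask = level_prune_mask(w, 0.5)  # prunes the two smallest-magnitude elements
```

Because the criterion is per-element, the resulting sparsity is unstructured: no whole channel is removed, only individual weights are zeroed.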

L1 Norm Pruner

Pruning output channels with the smallest L1 norm of weights (Pruning Filters for Efficient ConvNets) Reference Paper

L2 Norm Pruner

Pruning output channels with the smallest L2 norm of weights
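The L1 and L2 pruners share the same structure: score each output channel by a norm of its weights, then mask the lowest-scoring channels. A NumPy sketch covering both criteria (illustrative helper, not NNI's API):

```python
import numpy as np

def norm_channel_mask(weight: np.ndarray, sparsity: float, ord: int = 1) -> np.ndarray:
    """Zero out the output channels (axis 0) with the smallest L1 (ord=1)
    or L2 (ord=2) weight norm."""
    flat = np.abs(weight).reshape(weight.shape[0], -1)
    norms = flat.sum(axis=1) if ord == 1 else np.sqrt((flat ** 2).sum(axis=1))
    n_prune = int(sparsity * weight.shape[0])
    pruned = np.argsort(norms)[:n_prune]
    mask = np.ones_like(weight)
    mask[pruned] = 0.0
    return mask

# Three flattened "filters"; the first has the smallest L1 norm and is pruned.
w = np.array([[0.1, 0.1], [1.0, 1.0], [0.4, 0.5]])
mask = norm_channel_mask(w, sparsity=1 / 3, ord=1)
```

Unlike the Level Pruner, this produces structured sparsity: whole output channels are zeroed, so the pruned model can later be physically shrunk by model speedup.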

FPGM Pruner

Pruning the filters closest to the geometric median of all filters in a layer (Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration) Reference Paper
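FPGM considers a filter redundant if it lies near the geometric median of the layer's filters, i.e. its information is largely replaceable by the others. A common proxy, sketched below in NumPy (illustrative, not NNI's API), is to prune the filters with the smallest summed distance to all other filters:

```python
import numpy as np

def fpgm_mask(weight: np.ndarray, sparsity: float) -> np.ndarray:
    """Prune the filters whose total Euclidean distance to all other
    filters is smallest — the filters nearest the geometric median."""
    flat = weight.reshape(weight.shape[0], -1)
    # Pairwise Euclidean distances between filters.
    dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    scores = dist.sum(axis=1)
    n_prune = int(sparsity * weight.shape[0])
    pruned = np.argsort(scores)[:n_prune]
    mask = np.ones_like(weight)
    mask[pruned] = 0.0
    return mask

# Filter 2 (value 1.0) sits between the others and is pruned first.
w = np.array([[0.0], [2.0], [1.0]])
mask = fpgm_mask(w, sparsity=1 / 3)
```

Note the contrast with norm-based pruning: a filter near the geometric median can have a large norm and still be pruned, because the criterion measures redundancy rather than magnitude.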

Slim Pruner

Pruning output channels by pruning the scaling factors in BN layers (Learning Efficient Convolutional Networks through Network Slimming) Reference Paper
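The slimming criterion reads channel importance directly from the BatchNorm scale parameters: a channel whose gamma is near zero contributes little to the output. A NumPy sketch (illustrative names, not NNI's API):

```python
import numpy as np

def slim_channel_mask(bn_gamma: np.ndarray, sparsity: float) -> np.ndarray:
    """Rank channels by |gamma| of the BN layer; channels with the
    smallest scaling factors are pruned."""
    n_prune = int(sparsity * bn_gamma.size)
    pruned = np.argsort(np.abs(bn_gamma))[:n_prune]
    mask = np.ones_like(bn_gamma)
    mask[pruned] = 0.0
    return mask

gamma = np.array([0.01, 1.2, 0.3, 0.9])
mask = slim_channel_mask(gamma, sparsity=0.5)  # the two smallest gammas are pruned
```

In the original paper, an L1 penalty on the gammas is added during training so that unimportant channels are actively driven toward zero before pruning.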

Taylor FO Weight Pruner

Pruning filters based on the first-order Taylor expansion on weights (Importance Estimation for Neural Network Pruning) Reference Paper
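The first-order Taylor criterion approximates the loss change from removing a weight by |w · dL/dw|, aggregated per filter. A NumPy sketch of the score computation (illustrative, not NNI's API; in practice the gradients are accumulated over several training batches):

```python
import numpy as np

def taylor_fo_channel_scores(weight: np.ndarray, grad: np.ndarray) -> np.ndarray:
    """First-order Taylor importance per output channel:
    sum over the channel of |w * dL/dw|."""
    return np.abs(weight * grad).reshape(weight.shape[0], -1).sum(axis=1)

w = np.array([[1.0, -2.0], [0.5, 0.5]])
g = np.array([[0.1, 0.1], [2.0, -2.0]])
scores = taylor_fo_channel_scores(w, g)
```

Filters with the lowest scores are pruned; unlike pure magnitude criteria, a large weight with a near-zero gradient contributes little importance.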

Linear Pruner

The sparsity ratio increases linearly over pruning rounds; in each round, a basic pruner is used to prune the model.
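The schedule itself is simple; a pure-Python sketch of the per-round targets (illustrative helper, not NNI's API — each round's target would be handed to a basic pruner such as the L1 Norm Pruner):

```python
def linear_schedule(target_sparsity: float, rounds: int) -> list:
    """Per-round sparsity targets growing linearly to target_sparsity."""
    return [target_sparsity * (i + 1) / rounds for i in range(rounds)]

schedule = linear_schedule(0.8, 4)  # sparsity targets for rounds 1..4
```

Spreading the sparsity over several rounds, with fine-tuning in between, usually recovers more accuracy than pruning to the final ratio in one shot.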

AGP Pruner

Automated gradual pruning (To prune, or not to prune: exploring the efficacy of pruning for model compression) Reference Paper
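AGP replaces the linear ramp with a cubic one: sparsity grows quickly at first, while the network still has plenty of redundancy, and flattens out near the end. A sketch of the schedule from the paper (function name illustrative):

```python
def agp_sparsity(step: int, total_steps: int,
                 s_init: float = 0.0, s_final: float = 0.8) -> float:
    """Automated gradual pruning schedule (Zhu & Gupta):
    s_t = s_f + (s_i - s_f) * (1 - t/n)^3."""
    return s_final + (s_init - s_final) * (1.0 - step / total_steps) ** 3

vals = [agp_sparsity(t, 100) for t in (0, 50, 100)]
```

At the halfway point the schedule has already reached 0.7 of a 0.8 target, leaving the last steps for small adjustments while the model fine-tunes.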

Movement Pruner

Pruning weights based on how they move toward or away from zero during fine-tuning (Movement Pruning: Adaptive Sparsity by Fine-Tuning) Reference Paper
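Movement pruning learns an importance score per weight during fine-tuning: with the straight-through estimator, the score gradient is dL/dW · W, so weights moving away from zero accumulate positive scores and are kept. A NumPy sketch of the score accumulation (illustrative, not NNI's API):

```python
import numpy as np

def movement_scores(weights, grads, lr=1.0):
    """Accumulate S = -lr * sum_t grad_t * w_t across training steps.
    A weight whose update pushes it away from zero (w and -grad aligned)
    gains score; weights drifting toward zero lose score and are pruned."""
    s = np.zeros_like(weights[0])
    for w, g in zip(weights, grads):
        s -= lr * g * w
    return s

# First weight is growing away from zero, second is shrinking toward it.
ws = [np.array([1.0, 1.0]), np.array([1.1, 0.9])]
gs = [np.array([-0.1, 0.1]), np.array([-0.1, 0.1])]
s = movement_scores(ws, gs)
```

This makes the criterion first-order rather than zeroth-order: a currently large weight that is steadily shrinking can be pruned before a small weight that is growing, which the paper shows works particularly well when fine-tuning pretrained transformers.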