Python API Reference of Compression Utilities
Sensitivity Utilities
class nni.compression.pytorch.utils.sensitivity_analysis.SensitivityAnalysis(model, val_func, sparsities=None, prune_type='l1', early_stop_mode=None, early_stop_value=None)
analysis(val_args=None, val_kwargs=None, specified_layers=None)
This function analyzes the sensitivity to pruning for each conv layer in the target model. If specified_layers is not set, all conv layers are analyzed by default. Users can analyze only a subset of the layers, or parallelize the analysis process easily, through the specified_layers parameter.
- Parameters
val_args (list) – args for the val_func
val_kwargs (dict) – kwargs for the val_func
specified_layers (list) – list of layer names whose sensitivity should be analyzed. If this variable is set, only the conv layers specified in the list are analyzed. Users can also use this option to parallelize the sensitivity analysis easily.
- Returns
sensitivities – dict object that stores the trajectory of the accuracy/loss as the prune ratio changes
- Return type
dict
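Conceptually, the analysis sweeps each conv layer over the configured sparsities and records the validation metric, cutting a layer's sweep short when early stopping triggers. Below is a minimal, stdlib-only sketch of that loop; the evaluate() function, layer names, and all accuracy numbers are invented for illustration (the real class prunes the model and calls val_func instead):

```python
# Sketch of the sensitivity-analysis loop: prune each conv layer at
# increasing sparsities, record the validation metric, and stop a layer's
# sweep early once the metric degrades past a threshold.
# evaluate() and its numbers are toy stand-ins, not the real NNI code.

SPARSITIES = [0.25, 0.5, 0.75]
EARLY_STOP_VALUE = 0.56  # mimics early stopping with a minimum-accuracy bound


def evaluate(layer, sparsity):
    """Toy stand-in for val_func: accuracy degrades as sparsity grows."""
    base = {"conv1": 0.60, "conv2": 0.63}[layer]
    return round(base - 0.1 * sparsity, 4)


def analyze(layers):
    sensitivities = {}
    for layer in layers:
        sensitivities[layer] = {}
        for s in SPARSITIES:
            metric = evaluate(layer, s)
            sensitivities[layer][s] = metric
            if metric < EARLY_STOP_VALUE:
                break  # early stop: remaining sparsities are skipped
    return sensitivities


print(analyze(["conv1", "conv2"]))
```

Note how the more sensitive toy layer (conv1) ends up with metrics for only the first two sparsities, which is exactly the "some layers may not have metrics under all sparsities" case mentioned for export below.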
export(filepath)
Export the results of the sensitivity analysis to a csv file. The first line of the csv file describes the content structure: it consists of 'layername' followed by the sparsity list. Each line below records the validation metric returned by val_func when the corresponding layer is pruned at the different sparsities. Note that, due to the early_stop option, some layers may not have metrics under all sparsities.
layername, 0.25, 0.5, 0.75
conv1, 0.6, 0.55
conv2, 0.61, 0.57, 0.56
- Parameters
filepath (str) – Path of the output file
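The csv layout above (header row, then one possibly-shorter row per early-stopped layer) can be reproduced with a few lines of stdlib code. The sensitivities dict below mirrors the example rows and is purely illustrative:

```python
import csv
import io

# Illustrative dict in the shape returned by analysis(); conv1 is missing
# the 0.75 entry to mimic early stopping.
sensitivities = {
    "conv1": {0.25: 0.6, 0.5: 0.55},
    "conv2": {0.25: 0.61, 0.5: 0.57, 0.75: 0.56},
}
sparsities = [0.25, 0.5, 0.75]

buf = io.StringIO()  # stands in for the file at filepath
writer = csv.writer(buf)
writer.writerow(["layername"] + sparsities)  # header: name + sparsity list
for layer, metrics in sensitivities.items():
    # layers that stopped early simply produce shorter rows
    writer.writerow([layer] + [metrics[s] for s in sparsities if s in metrics])

print(buf.getvalue())
```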
Topology Utilities
class nni.compression.pytorch.utils.shape_dependency.ChannelDependency(model=None, dummy_input=None, traced_model=None)
property dependency_sets
Get the list of dependency sets.
- Returns
dependency_sets – list of the dependency sets. For example, [set(['conv1', 'conv2']), set(['conv3', 'conv4'])]
- Return type
list
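One common source of such dependencies is element-wise addition (e.g. a residual connection): every pair of layers whose outputs are added must end up in the same set. The real class derives these pairs by tracing the model graph; the sketch below only shows the grouping step, merging hypothetical pairs with a tiny union-find:

```python
# Sketch of grouping channel-dependent layers into dependency sets.
# The add_pairs list is a hypothetical stand-in for what graph tracing
# would discover (pairs of layers whose outputs are added element-wise).

parent = {}


def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving for shorter chains
        x = parent[x]
    return x


def union(a, b):
    parent[find(a)] = find(b)


add_pairs = [("conv1", "layer1.0.conv2"), ("layer1.0.conv2", "layer1.1.conv2")]
for a, b in add_pairs:
    union(a, b)

# Collect each root's members; sets of size 1 carry no dependency.
sets = {}
for layer in parent:
    sets.setdefault(find(layer), set()).add(layer)
dependency_sets = [s for s in sets.values() if len(s) > 1]
print(dependency_sets)
```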
export(filepath)
Export the channel dependencies to a csv file. The layers on the same line have output channel dependencies on each other. For example, layer1.1.conv2, conv1, and layer1.0.conv2 have output channel dependencies on each other, which means the numbers of output channels (filters) of these three layers must be equal; otherwise the model may have a shape conflict.
Output example:
Dependency Set,Convolutional Layers
Set 1,layer1.1.conv2,layer1.0.conv2,conv1
Set 2,layer1.0.conv1
Set 3,layer1.1.conv1
class nni.compression.pytorch.utils.shape_dependency.GroupDependency(model=None, dummy_input=None, traced_model=None)
build_dependency()
Build the channel dependency for the conv layers in the model. This function returns the group number of each conv layer. Note that the group count of a conv layer here may be larger than its original group number, because the input channels of a downstream group conv layer are also grouped. To make this clear, assume we have two group conv layers: conv1 (group=2) and conv2 (group=4), where conv2 takes the output features of conv1 as input. Then the filters of conv1 must still be divisible into 4 groups after filter pruning, because the input channels of conv2 should be divided into 4 groups.
- Returns
self.dependency – key: the name of a conv layer; value: the minimum number that the layer's filter count should be divisible by.
- Return type
dict
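The conv1/conv2 example above boils down to a least-common-multiple constraint: a layer's filter count must stay divisible by its own group count and by the group count of every group conv consuming its output. A stdlib sketch of that rule on a hypothetical two-layer graph (the real class derives the graph by tracing the model; math.lcm needs Python 3.9+):

```python
import math

# Hypothetical graph: conv1 (groups=2) feeds conv2 (groups=4).
groups = {"conv1": 2, "conv2": 4}
consumers = {"conv1": ["conv2"], "conv2": []}

dependency = {}
for layer, g in groups.items():
    # The filter count must stay divisible by the layer's own group count
    # and by each consumer's group count, because a downstream group conv
    # also splits its *input* channels into groups.
    constraint = g
    for nxt in consumers[layer]:
        constraint = math.lcm(constraint, groups[nxt])
    dependency[layer] = constraint

print(dependency)
```

So conv1, although it has only 2 groups itself, must keep a filter count divisible by 4 after pruning.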
export(filepath)
Export the group dependency to a csv file. Each line describes a convolution layer: the first part is the PyTorch module name of the conv layer, and the second part is the group count of the filters in this layer. Note that the group count may be larger than the layer's original group number.
Output example:
Conv layer, Groups
Conv1, 1
Conv2, 2
Conv3, 4
class nni.compression.pytorch.utils.mask_conflict.CatMaskPadding(masks, model, dummy_input=None, traced=None)
class nni.compression.pytorch.utils.mask_conflict.GroupMaskConflict(masks, model=None, dummy_input=None, traced=None)
Model FLOPs/Parameters Counter
nni.compression.pytorch.utils.counter.count_flops_params(model, x, custom_ops=None, verbose=True, mode='default')
Count the FLOPs and parameters of the given model. This function identifies the masks on the modules and takes the pruned shapes into consideration. Note that, for structured pruning, we only identify the remaining filters according to each mask and do not take the pruned input channels into consideration, so the calculated FLOPs will be larger than the real number.
- Parameters
model (nn.Module) – Target model.
x (tuple or tensor) – The input shape of data (a tuple), a tensor, or a tuple of tensors as input data.
custom_ops (dict) – A mapping from a torch.nn.Module type to a custom operation. The custom operation is a callback function that calculates the module's FLOPs and parameters; it overrides the default operation. For reference, please see ops in ModelProfiler.
verbose (bool) – If False, mute detailed information about modules. Default is True.
mode (str) – The mode of how to collect information. If the mode is set to 'default', only the information of convolution and linear layers is collected. If the mode is set to 'full', other operations are collected as well.
- Returns
Representing the total FLOPs, the total parameters, and a detailed list of results, respectively. The list of results is a list of dicts, each of which contains (name, module_type, weight_shape, flops, params, input_size, output_size) as its keys.
- Return type
tuple of int, int and list of dict
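In 'default' mode only convolution and linear layers contribute, and their counts follow the standard formulas. The sketch below shows those formulas on invented shapes; as noted above, a counter that ignores pruned input channels in this way can overestimate the post-pruning FLOPs:

```python
# Standard multiply-accumulate counts for Conv2d and Linear layers,
# shown on hypothetical shapes (3x3 conv: 3 -> 64 channels on a 32x32
# output map, then a 1024 -> 10 classifier). This is an illustrative
# sketch of the counting rule, not NNI's actual implementation.

def conv2d_flops(c_in, c_out, k, h_out, w_out, groups=1):
    """Each output element sums over (c_in / groups) * k * k inputs."""
    return (c_in // groups) * k * k * c_out * h_out * w_out


def linear_flops(f_in, f_out):
    """One multiply-accumulate per weight."""
    return f_in * f_out


total = conv2d_flops(3, 64, 3, 32, 32) + linear_flops(1024, 10)
print(total)
```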