Framework Related
Pruner
- class nni.algorithms.compression.v2.pytorch.base.Pruner(model, config_list)[source]
The abstract class for pruning algorithms. Inherit this class and implement _reset_tools to customize a pruner.
- export_model(model_path, mask_path=None)[source]
Export the pruned model weights, the masks, and optionally an ONNX model.
- Parameters:
model_path (str) -- Path to save the pruned model state_dict. The weights and biases have already been multiplied by the masks.
mask_path (Optional[str]) -- Path to save the mask dict.
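A minimal usage sketch; the file names and the fitted pruner instance are assumptions:

```python
# Assuming `pruner` is an instance of a concrete Pruner subclass
# that has already generated its masks.
pruner.export_model(model_path='pruned_model.pth', mask_path='mask.pth')
```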
- get_modules_wrapper()[source]
- Returns:
An ordered dict whose keys are module names and whose values are the corresponding module wrappers.
- Return type:
OrderedDict[str, PrunerModuleWrapper]
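A small sketch of iterating over the wrappers; it assumes the pruner has already wrapped the model, and that each wrapper exposes the module and config attributes described under PrunerModuleWrapper below:

```python
# Inspect which modules were wrapped and with which compression config.
for name, wrapper in pruner.get_modules_wrapper().items():
    print(name, type(wrapper.module).__name__, wrapper.config)
```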
- get_origin2wrapped_parameter_name_map()[source]
Get the mapping of parameter names from the original model to the wrapped model.
- Returns:
A dict of the form {original_model_parameter_name: wrapped_model_parameter_name}.
- Return type:
Dict[str, str]
PrunerModuleWrapper
- class nni.algorithms.compression.v2.pytorch.base.PrunerModuleWrapper(module, module_name, config)[source]
Wrap a module to enable data parallelism, forward method customization, and buffer registration.
- Parameters:
module (Module) -- The module the user wants to compress.
config (Dict) -- The configuration that the user specifies for compression.
module_name (str) -- The name of the module to compress; the wrapper shares the same name.
BasicPruner
- class nni.algorithms.compression.v2.pytorch.pruning.basic_pruner.BasicPruner(model, config_list)[source]
- compress()[source]
Used to generate the masks. The pruning process is divided into three stages: self.data_collector collects the data needed to calculate the specified metric, self.metrics_calculator calculates the metric, and self.sparsity_allocator generates the masks based on the metric.
- Returns:
Return the wrapped model and the masks.
- Return type:
Tuple[Module, Dict]
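A sketch of a typical compress() call; the use of L1NormPruner as the concrete BasicPruner subclass and the toy model/config are assumptions:

```python
import torch.nn as nn
from nni.algorithms.compression.v2.pytorch.pruning import L1NormPruner

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]

pruner = L1NormPruner(model, config_list)
# Runs the three stages above and returns the wrapped model plus the masks,
# formatted as {module_name: {target_name: mask}}.
wrapped_model, masks = pruner.compress()
```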
DataCollector
- class nni.algorithms.compression.v2.pytorch.pruning.tools.DataCollector(compressor)[source]
An abstract class for collecting the data needed by the compressor.
- Parameters:
compressor (Pruner) -- The compressor bound to this DataCollector.
MetricsCalculator
- class nni.algorithms.compression.v2.pytorch.pruning.tools.MetricsCalculator(scalers=None)[source]
An abstract class for calculating a kind of metric from the given data.
- Parameters:
scalers (Dict[str, Dict[str, Scaling]] | Scaling | None) -- Scalers are used to scale the metrics' size: a scaler scales a metric to the same size as the shrunk mask in the sparsity allocator. If you want to use different scalers for different pruning targets in different modules, pass a dict {module_name: {target_name: scaler}}. If the allocator meets an unspecified module name, it will try to use scalers['_default'][target_name] to scale its mask; if it meets an unspecified target name, it will try to use scalers[module_name]['_default']. Passing in a single scaler instead of a dict of scalers is treated as passing in {'_default': {'_default': scalers}}. Passing in None means no scaling is needed.
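An illustrative scalers dict; the module/target names, the import path, and the Scaling constructor arguments are assumptions for this sketch:

```python
from nni.algorithms.compression.v2.pytorch.utils import Scaling

scalers = {
    # Module- and target-specific scaler.
    'conv1': {'weight': Scaling(kernel_size=[1, -1])},
    # Fallback scaler for unspecified modules and targets.
    '_default': {'_default': Scaling(kernel_size=[1])},
}
```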
SparsityAllocator
- class nni.algorithms.compression.v2.pytorch.pruning.tools.SparsityAllocator(pruner, scalers=None, continuous_mask=True)[source]
A base class for allocating masks based on metrics.
- Parameters:
pruner (Pruner) -- The pruner bound to this SparsityAllocator.
scalers (Dict[str, Dict[str, Scaling]] | Scaling | None) -- Scalers are used to scale the masks' size: a scaler shrinks a mask of the same size as the pruning target down to the size of the metric, or expands a mask of the same size as the metric up to the size of the pruning target. If you want to use different scalers for different pruning targets in different modules, pass a dict {module_name: {target_name: scaler}}. If the allocator meets an unspecified module name, it will try to use scalers['_default'][target_name] to scale its mask; if it meets an unspecified target name, it will try to use scalers[module_name]['_default']. Passing in a single scaler instead of a dict of scalers is treated as passing in {'_default': {'_default': scalers}}. Passing in None means no scaling is needed.
continuous_mask (bool) -- If True, parts that have already been masked stay masked first (masks accumulate across iterations). If False, previously masked parts may become unmasked if their corresponding metric increases.
- common_target_masks_generation(metrics)[source]
Generate masks for metrics-dependent targets.
- Parameters:
metrics (Dict[str, Dict[str, Tensor]]) -- The format is {module_name: {target_name: target_metric}}. Each metric usually has the same size as the shrunk mask.
- Returns:
The format is {module_name: {target_name: mask}}. The returned masks have the same size as their targets.
- Return type:
Dict[str, Dict[str, Tensor]]
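Illustrative shapes for the metric and mask dicts; the module name and tensor sizes are made up for this sketch:

```python
import torch

# One metric score per output channel of a hypothetical conv1 layer...
metrics = {'conv1': {'weight': torch.rand(64)}}
# ...while the generated mask has the same size as conv1.weight itself.
masks = {'conv1': {'weight': torch.ones(64, 3, 3, 3)}}
```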
- generate_sparsity(metrics)[source]
The main entry point of SparsityAllocator: generate a set of masks based on the given metrics.
- Parameters:
metrics (Dict) -- A metric dict with format {module_name: weight_metric}.
- Returns:
The masks, with format {module_name: {target_name: mask}}.
- Return type:
Dict[str, Dict[str, Tensor]]
- special_target_masks_generation(masks)[source]
Some pruning targets' mask generation depends on other targets, e.g., the bias mask depends on the weight mask. This function generates these masks, and it is called at the end of generate_sparsity.
- Parameters:
masks (Dict[str, Dict[str, Tensor]]) -- The format is {module_name: {target_name: mask}}. It is usually the return value of common_target_masks_generation.
BasePruningScheduler
- class nni.algorithms.compression.v2.pytorch.base.BasePruningScheduler[source]
- get_best_result()[source]
- Returns:
Return the task result with the best performance, including the task id, the compact model, the masks on the compact model, the score, and the config list used in this task.
- Return type:
Tuple[int, Module, Dict[str, Dict[str, Tensor]], float, List[Dict]]
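A sketch of unpacking the result, assuming scheduler is a concrete BasePruningScheduler subclass that has finished running:

```python
# The tuple layout follows the documented return type above.
task_id, compact_model, masks, score, config_list = scheduler.get_best_result()
```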
TaskGenerator
- class nni.algorithms.compression.v2.pytorch.pruning.tools.TaskGenerator(origin_model, origin_masks={}, origin_config_list=[], log_dir='.', keep_intermediate_result=False, best_result_mode='maximize')[source]
This class is used to generate the config list for the pruner in each iteration.
- Parameters:
origin_model (Optional[Module]) -- The original unwrapped pytorch model to be pruned.
origin_masks (Optional[Dict[str, Dict[str, Tensor]]]) -- The pre-existing masks on the original model. These masks may be user-defined or generated by a previous pruning.
origin_config_list (Optional[List[Dict]]) -- The original config list provided by the user. Note that this config_list directly configures the original model, which means the sparsity provided by origin_masks should also be recorded in origin_config_list.
log_dir (Union[str, Path]) -- The log directory used to save the task generator log.
keep_intermediate_result (bool) -- Whether to keep intermediate results, including the intermediate model and masks from each iteration.
best_result_mode (Literal['latest', 'maximize', 'minimize']) -- The way to decide which result is the best. Three modes are supported; if the task results don't contain scores (task_result.score is None), this falls back to latest.
latest: the most recently received result is the best result.
maximize: the result with the largest score is the best result.
minimize: the result with the smallest score is the best result.
Quantizer
- class nni.compression.pytorch.compressor.Quantizer(model, config_list, optimizer=None, dummy_input=None)[source]
Base class for pytorch quantizers.
- export_model(model_path, calibration_path=None, onnx_path=None, input_shape=None, device=None)[source]
Export the quantized model weights and calibration parameters.
- Parameters:
model_path (str) -- Path to save the quantized model weights.
calibration_path (str) -- (optional) Path to save the quantization parameters after calibration.
onnx_path (str) -- (optional) Path to save the ONNX model.
input_shape (list or tuple) -- Input shape for the ONNX model.
device (torch.device) -- Device of the model, used to place the dummy input tensor when exporting the ONNX file. The tensor is placed on CPU if device is None.
- Return type:
Dict
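A minimal usage sketch; the paths are assumptions, and the returned dict is the calibration config:

```python
# Save the quantized weights and the calibration parameters in one call.
calibration_config = quantizer.export_model(
    model_path='quantized_model.pth',
    calibration_path='calibration.pth',
)
```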
- export_model_save(model, model_path, calibration_config=None, calibration_path=None, onnx_path=None, input_shape=None, device=None)[source]
This method helps save the pytorch model, the calibration config, and the ONNX model in the quantizer.
- Parameters:
model (pytorch model) -- The pytorch model to be saved.
model_path (str) -- Path to save the pytorch model.
calibration_config (dict) -- (optional) Config of calibration parameters.
calibration_path (str) -- (optional) Path to save the quantization parameters after calibration.
onnx_path (str) -- (optional) Path to save the ONNX model.
input_shape (list or tuple) -- Input shape for the ONNX model.
device (torch.device) -- Device of the model, used to place the dummy input tensor when exporting the ONNX file. The tensor is placed on CPU if device is None.
- find_conv_bn_patterns(model, dummy_input)[source]
Find all Conv-BN patterns, used for batch normalization folding.
- Parameters:
model (torch.nn.Module) -- The model to be analyzed.
dummy_input (tuple of torch.Tensor) -- Inputs to the model, used to generate the torchscript.
- fold_bn(*inputs, wrapper)[source]
Simulate batch normalization folding in the training graph. The folded weight and bias are returned for the following operations.
- Parameters:
inputs (tuple of torch.Tensor) -- Inputs for the module.
wrapper (QuantizerModuleWrapper) -- The wrapper for the original module.
- Return type:
Tuple of torch.Tensor
- load_calibration_config(calibration_config)[source]
This function helps the quantizer set quantization parameters by loading a calibration_config exported by another quantizer or by itself. Its main usage is helping a quantization-aware training quantizer set appropriate initial parameters, so that the training process is more flexible and converges quickly. It also enables the quantizer to resume a quantized model by loading parameters from the config.
- Parameters:
calibration_config (dict) -- A dict that stores the quantization parameters; the quantizer can export its own calibration config, e.g., calibration_config = quantizer.export_model(model_path, calibration_path).
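A sketch of the resume workflow described above; the two quantizer instances and paths are assumptions:

```python
# Export the calibration config from one quantizer...
calibration_config = quantizer.export_model('model.pth', 'calibration.pth')
# ...then load it into another quantizer to resume with those parameters.
new_quantizer.load_calibration_config(calibration_config)
```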
- quantize_input(inputs, wrapper, **kwargs)[source]
Quantizer subclasses should override this method to quantize inputs. This method is effectively hooked to the forward() of the model.
- Parameters:
inputs (Tensor) -- The inputs that need to be quantized.
wrapper (QuantizerModuleWrapper) -- The wrapper for the original module.
- quantize_output(output, wrapper, **kwargs)[source]
Quantizer subclasses should override this method to quantize outputs. This method is effectively hooked to the forward() of the model.
- Parameters:
output (Tensor) -- The output that needs to be quantized.
wrapper (QuantizerModuleWrapper) -- The wrapper for the original module.
- quantize_weight(wrapper, **kwargs)[source]
Quantizer subclasses should override this method to quantize weights. This method is effectively hooked to the forward() of the model.
- Parameters:
wrapper (QuantizerModuleWrapper) -- The wrapper for the original module.
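A minimal custom quantizer sketch that overrides one of these hooks; the subclass name and the naive symmetric rounding logic are illustrative assumptions, not NNI's built-in implementation:

```python
import torch
from nni.compression.pytorch.compressor import Quantizer

class NaiveSymmetricQuantizer(Quantizer):
    def quantize_weight(self, wrapper, **kwargs):
        # Fake-quantize the weight to signed 8-bit with a per-tensor scale.
        weight = wrapper.module.weight
        scale = weight.abs().max() / 127
        return torch.round(weight / scale).clamp(-128, 127) * scale
```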
QuantizerModuleWrapper
QuantGrad
- class nni.compression.pytorch.compressor.QuantGrad(*args, **kwargs)[source]
Base class for overriding the backward function of the quantization operation.
- classmethod get_bits_length(config, quant_type)[source]
Get the bit width from the quantization config.
- Parameters:
config (Dict) -- The configuration for quantization.
quant_type (str) -- The quantization type.
- Returns:
The number of bits for the quantization configuration.
- Return type:
int
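A hedged example, assuming a config entry in NNI's usual quantization config_list format:

```python
# quant_bits maps each quant type to its bit width.
config = {'quant_types': ['weight'], 'quant_bits': {'weight': 8}}
bits = QuantGrad.get_bits_length(config, 'weight')  # expected: 8
```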
- static quant_backward(tensor, grad_output, quant_type, scale, zero_point, qmin, qmax)[source]
This method should be overridden by subclasses to provide a customized backward function; the default implementation is the Straight-Through Estimator.
- Parameters:
tensor (Tensor) -- The input of the quantization operation.
grad_output (Tensor) -- The gradient of the output of the quantization operation.
quant_type (str) -- The type of quantization; it can be QuantType.INPUT, QuantType.WEIGHT, or QuantType.OUTPUT, and you can define different behavior for each type.
scale (Tensor) -- The scale for quantizing the tensor.
zero_point (Tensor) -- The zero point for quantizing the tensor.
qmin (Tensor) -- quant_min for quantizing the tensor.
qmax (Tensor) -- quant_max for quantizing the tensor.
- Returns:
The gradient of the input of the quantization operation.
- Return type:
Tensor
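A sketch of a clipped straight-through estimator built on this hook; the subclass name and the clipping rule are assumptions, not NNI built-ins:

```python
import torch
from nni.compression.pytorch.compressor import QuantGrad

class ClippedSTE(QuantGrad):
    @staticmethod
    def quant_backward(tensor, grad_output, quant_type, scale, zero_point, qmin, qmax):
        # Pass gradients through only where the quantized value stayed in range.
        q = tensor / scale + zero_point
        in_range = (q >= qmin) & (q <= qmax)
        return grad_output * in_range.type_as(grad_output)
```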