- class nni.contrib.compression.TorchEvaluator(training_func, optimizers, training_step, lr_schedulers=None, dummy_input=None, evaluating_func=None)[source]
TorchEvaluator is the Evaluator for native PyTorch users. Please refer to the Compression Evaluator for the evaluator initialization example.
training_func (_TRAINING_FUNC) --
The training function is used to train the model; note that this is an entire optimization training loop. The training function has three required parameters, model, optimizers and training_step, and three optional parameters, lr_schedulers, max_steps and max_epochs.
Let's explain these six parameters NNI passes in, although in most cases users don't need to care about them. Users only need to treat these six parameters as the original parameters of the training process.
- model is a wrapped model from the original model; it has a similar structure to the model to be pruned, so it can share the training function with the original model.
- optimizers are re-initialized from the optimizers passed to the evaluator and the wrapped model's parameters.
- training_step is also based on the training_step passed to the evaluator; it might be modified by the compressor during model compression.
- If users use lr_schedulers in the training_func, NNI will re-initialize the lr_schedulers with the re-initialized optimizers.
- max_steps is the NNI training duration limitation. It is for the pruner (or quantizer) to control the number of training steps. The user-implemented training_func should respect max_steps by stopping the training loop after max_steps is reached. The pruner may pass max_steps to the training_func when it only controls a part of the training process.
- max_epochs is similar to max_steps; the only difference is that it controls the number of training epochs. The user-implemented training_func should respect max_epochs by stopping the training loop after max_epochs is reached. The pruner may pass max_epochs to the training_func when it only controls a part of the training process.
Note that when the pruner passes max_steps or max_epochs to the training_func, it treats the training_func as a function for model fine-tuning. Users should assign proper fallback values to max_steps and max_epochs for the case where they are not passed in (as in the example below).
def training_func(model: torch.nn.Module, optimizers: torch.optim.Optimizer,
                  training_step: Callable[[Any, Any], torch.Tensor],
                  lr_schedulers: _LRScheduler | None = None,
                  max_steps: int | None = None, max_epochs: int | None = None,
                  *args, **kwargs):
    ...
    total_epochs = max_epochs if max_epochs else 20
    total_steps = max_steps if max_steps else 1000000
    current_steps = 0
    ...
    for epoch in range(total_epochs):
        ...
        if current_steps >= total_steps:
            return
Note that the optimizers and lr_schedulers passed to the training_func have the same type as the optimizers and lr_schedulers passed to the evaluator: a single torch.optim.Optimizer / torch.optim.lr_scheduler._LRScheduler instance or a list of them.
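For reference, here is a minimal runnable sketch of such a loop. It assumes a single optimizer and scheduler (not lists), a user-provided train_dataloader, and a training_step that returns a single loss tensor; these are illustrative assumptions, not requirements of the API:

def training_func(model, optimizers, training_step, lr_schedulers=None,
                  max_steps=None, max_epochs=None, *args, **kwargs):
    total_epochs = max_epochs if max_epochs else 20
    total_steps = max_steps if max_steps else 1000000
    current_step = 0
    for epoch in range(total_epochs):
        for batch in train_dataloader:  # assumed user-provided dataloader
            optimizers.zero_grad()
            loss = training_step(batch, model)
            loss.backward()
            optimizers.step()
            current_step += 1
            if current_step >= total_steps:  # respect the NNI step limit
                return
        if lr_schedulers is not None:
            lr_schedulers.step()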
optimizers (Optimizer | List[Optimizer]) --
A single traced optimizer instance or a list of traced optimizers created by nni.trace.
NNI may modify the torch.optim.Optimizer member function step and/or optimize compressed models, so NNI needs to have the ability to re-initialize the optimizer. nni.trace can record the initialization parameters of a function/class, which can then be used by NNI to re-initialize the optimizer for a new but structurally similar model.
E.g., traced_optimizer = nni.trace(torch.optim.Adam)(model.parameters()).
training_step (_TRAINING_STEP) --
A callable function, the first argument of which should be batch, and the outputs should contain the loss. Three kinds of outputs are supported: a single loss, a tuple whose first element is the loss, or a dict containing the key loss.
def training_step(batch, model, ...):
    inputs, labels = batch
    output = model(inputs)
    ...
    loss = loss_func(output, labels)
    return loss
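The tuple and dict output forms mentioned above would look like the following sketch; loss_func is the user's own loss function, and the extra output entry is purely illustrative:

def training_step(batch, model):
    inputs, labels = batch
    output = model(inputs)
    loss = loss_func(output, labels)  # user's own loss function
    # single loss:        return loss
    # tuple, loss first:  return loss, output
    # dict with key 'loss':
    return {'loss': loss, 'output': output}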
lr_schedulers (SCHEDULER | List[SCHEDULER] | None) --
Optional. A single traced lr_scheduler instance or a list of traced lr_schedulers created by nni.trace. For the same reason as with optimizers, NNI needs the traced lr_scheduler to re-initialize it.
E.g., traced_lr_scheduler = nni.trace(ExponentialLR)(optimizer, 0.1).
dummy_input (Any | None) -- Optional. The dummy_input is used to trace the graph; it's the same as example_inputs in torch.jit.trace.
evaluating_func (_EVALUATING_FUNC | None) -- Optional. A function whose input is the model and which returns the evaluation metric. This is the function used to evaluate the compressed model performance. The input is a model and the output is a float metric or a dict (the dict should contain the key default with a float value). NNI will take the float number as the model score and assumes a higher score means better performance. If you want to provide additional information, please put it into a dict; NNI will take the value of the key default as the evaluation metric.
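As a sketch, an accuracy-based evaluating_func might look like this, where test_dataloader is a hypothetical user-provided loader:

def evaluating_func(model: torch.nn.Module):
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for inputs, labels in test_dataloader:  # hypothetical loader
            preds = model(inputs).argmax(dim=-1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    # a plain float also works; the dict form allows extra information
    return {'default': correct / total}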
It is also worth noting that not all the arguments of TorchEvaluator must be provided. Some pruners (or quantizers) only require evaluating_func as they do not train the model, while some pruners (or quantizers) only require training_func. Please refer to each pruner's (or quantizer's) doc to check the required arguments. It is fine to provide more arguments than the pruner (or quantizer) needs.
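Putting the pieces together, a typical initialization might look like the following sketch; model, training_func, training_step and evaluating_func are the user's own, as defined above:

import nni
import torch
from nni.contrib.compression import TorchEvaluator

traced_optimizer = nni.trace(torch.optim.Adam)(model.parameters(), lr=1e-3)
traced_scheduler = nni.trace(torch.optim.lr_scheduler.ExponentialLR)(traced_optimizer, 0.1)

evaluator = TorchEvaluator(training_func, traced_optimizer, training_step,
                           lr_schedulers=traced_scheduler,
                           evaluating_func=evaluating_func)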
- class nni.contrib.compression.LightningEvaluator(trainer, data_module, dummy_input=None)[source]
LightningEvaluator is the Evaluator based on PyTorchLightning. It is very friendly to users who are familiar with PyTorchLightning or who already have training/validation/testing code written in PyTorchLightning. The only requirement is to use nni.trace to trace the Trainer & LightningDataModule.
Additionally, please make sure the Optimizer class and LR_Scheduler class used in LightningModule.configure_optimizers() are also traced by nni.trace.
Please refer to the Compression Evaluator for the evaluator initialization example.
trainer (pl.Trainer) -- Pytorch-Lightning Trainer. It should be traced by nni, e.g.,
trainer = nni.trace(pl.Trainer)(...).
data_module (pl.LightningDataModule) -- Pytorch-Lightning LightningDataModule. It should be traced by nni, e.g.,
data_module = nni.trace(pl.LightningDataModule)(...).
dummy_input (Any | None) -- The dummy_input is used to trace the graph. If dummy_input is not given, the data in data_module.train_dataloader() will be used.
If the test metric is needed by nni, please make sure to log the metric with the key default in LightningModule.test_step().
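A brief sketch, where MyModule and MyDataModule are hypothetical LightningModule / LightningDataModule subclasses; the test_step shows logging under the key default, and configure_optimizers shows the required tracing:

import nni
import pytorch_lightning as pl
import torch
from nni.contrib.compression import LightningEvaluator

class MyModule(pl.LightningModule):
    def configure_optimizers(self):
        # the Optimizer & LR_Scheduler classes must also be traced
        optimizer = nni.trace(torch.optim.Adam)(self.parameters(), lr=1e-3)
        scheduler = nni.trace(torch.optim.lr_scheduler.ExponentialLR)(optimizer, 0.1)
        return [optimizer], [scheduler]

    def test_step(self, batch, batch_idx):
        metric = ...  # the user's own test metric
        self.log('default', metric)  # key 'default' so nni can read the test metric

trainer = nni.trace(pl.Trainer)(max_epochs=10)
data_module = nni.trace(MyDataModule)()  # hypothetical LightningDataModule subclass
evaluator = LightningEvaluator(trainer, data_module)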
- class nni.contrib.compression.TransformersEvaluator(trainer, dummy_input=None)[source]
TransformersEvaluator is for users who use the Huggingface transformers.trainer.Trainer to train their model.
Here is an example of using transformers.trainer.Trainer to initialize an evaluator:
from transformers.trainer import Trainer

# wrap the Trainer class with nni.trace
trainer = nni.trace(Trainer)(model=model)
evaluator = TransformersEvaluator(trainer)

# if you want to use a customized optimizer & lr_scheduler, please also wrap the Optimizer & _LRScheduler classes
optimizer = nni.trace(Adam)(...)
lr_scheduler = nni.trace(LambdaLR)(...)
trainer = nni.trace(Trainer)(model=model, ..., optimizers=(optimizer, lr_scheduler))
evaluator = TransformersEvaluator(trainer)
trainer (HFTrainer) --
An nni.trace(transformers.trainer.Trainer) instance. The trainer will be re-initialized inside the evaluator, so wrapping with nni.trace is required to get the initialization arguments.
dummy_input (Any | None) --
Optional. The dummy_input is used to trace the graph; it's the same as example_inputs in torch.jit.trace.
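For example, a hedged sketch for a BERT-like model, where the vocabulary size and sequence length are purely illustrative:

import torch

dummy_input = torch.randint(0, 30522, (1, 128))  # illustrative token ids
evaluator = TransformersEvaluator(trainer, dummy_input=dummy_input)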