Python API Reference for Auto Tune

Trial

nni.get_next_parameter()[source]

Get the hyper-parameters generated by the tuner. For a multiphase experiment, it returns a new group of hyper-parameters at each call of get_next_parameter. For a non-multiphase experiment (multiPhase is not configured or set to False), it returns hyper-parameters only on the first call within each trial job and returns None from the second call onward. This API should be called only once in each trial job of an experiment that is not specified as multiphase.

Returns

A dict object containing the hyper-parameters generated by the tuner; the keys of the dict are defined in the search space. Returns None if no more hyper-parameters can be generated by the tuner.

Return type

dict

nni.get_current_parameter(tag=None)[source]

Get the current hyper-parameters generated by the tuner. It returns the same group of hyper-parameters as returned by the last call of get_next_parameter.

Parameters

tag (str) – hyper-parameter key

nni.report_intermediate_result(metric)[source]

Reports intermediate result to NNI.

Parameters

metric – serializable object.

nni.report_final_result(metric)[source]

Reports final result to NNI.

Parameters

metric (serializable object) – Usually (for built-in tuners to work), it should be a number, or a dict with key “default” (a number), and any other extra keys.
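
Taken together, these calls form the trial-side API. Below is a minimal sketch of a typical trial script; build_model, train_one_epoch, and evaluate are hypothetical placeholders for user code:

import nni

def main():
    # Receive one set of hyper-parameters from the tuner.
    params = nni.get_next_parameter()
    model = build_model(params)        # hypothetical user code
    accuracy = 0.0
    for epoch in range(10):
        train_one_epoch(model)         # hypothetical user code
        accuracy = evaluate(model)     # hypothetical user code
        # Report a per-epoch metric; an assessor may use it to early-stop.
        nni.report_intermediate_result(accuracy)
    # Report the final metric exactly once at the end of the trial.
    nni.report_final_result(accuracy)

if __name__ == '__main__':
    main()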

nni.get_experiment_id()[source]

Get experiment ID.

Returns

Identifier of current experiment

Return type

str

nni.get_trial_id()[source]

Get the trial job ID, a string identifier of a trial job, for example 'MoXrp'. In one experiment, each trial job has a unique string ID.

Returns

Identifier of current trial job which is calling this API.

Return type

str

nni.get_sequence_id()[source]

Get the trial job sequence number. A sequence number is an integer assigned to each trial job based on the order in which they are submitted, incrementing from 0. In one experiment, both the trial job ID and the sequence number are unique for each trial job, but they are of different data types.

Returns

Sequence number of current trial job which is calling this API.

Return type

int
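
A hedged sketch of how these identifiers might be used, for example to give each trial a unique output directory:

import os

import nni

# Hypothetical: store each trial's artifacts under a unique directory.
output_dir = os.path.join('outputs', nni.get_experiment_id(), nni.get_trial_id())
os.makedirs(output_dir, exist_ok=True)
print('trial #%d writes to %s' % (nni.get_sequence_id(), output_dir))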

Tuner

class nni.tuner.Tuner[source]

Tuner is an AutoML algorithm, which generates a new configuration for the next try. A new trial will run with this configuration.

This is the abstract base class for all tuners. Tuning algorithms should inherit this class and override update_search_space(), receive_trial_result(), as well as generate_parameters() or generate_multiple_parameters().

After initialization, NNI will first call update_search_space() to tell the tuner the feasible region, and then call generate_parameters() one or more times to request hyper-parameter configurations.

The framework will train several models with the given configurations. When one of them finishes, its final accuracy will be reported to receive_trial_result(). Then another configuration will be requested and trained, until the whole experiment finishes.

If a tuner wants to know when a trial ends, it can also override trial_end().

Tuners use parameter IDs to track trials. In the tuner context, there is a one-to-one mapping between parameter ID and trial. When the framework asks the tuner to generate hyper-parameters for a new trial, an ID has already been assigned and can be recorded in generate_parameters(). Later, when the trial ends, the ID will be reported to trial_end(), and to receive_trial_result() if it has a final result. Parameter IDs are unique integers.

The type/format of search space and hyper-parameters are not limited, as long as they are JSON-serializable and in sync with trial code. For HPO tuners, however, there is a widely shared common interface, which supports choice, randint, uniform, and so on. See docs/en_US/Tutorial/SearchSpaceSpec.md for details of this interface.
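
For illustration, a hypothetical search space using that common interface could look like the following (all parameter names are made up):

{'lr': {'_type': 'uniform', '_value': [0.0001, 0.1]},
 'optimizer': {'_type': 'choice', '_value': ['sgd', 'adam']},
 'hidden_size': {'_type': 'randint', '_value': [32, 256]}}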

[WIP] For advanced tuners which take advantage of trials’ intermediate results, an Advisor interface is under development.

See also

Builtin, HyperoptTuner, EvolutionTuner, SMACTuner, GridSearchTuner, NetworkMorphismTuner, MetisTuner, PPOTuner, GPTuner
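
To make the contract concrete, here is a minimal sketch of a custom tuner, assuming the common search-space interface above; it handles only 'uniform' entries and samples them at random instead of learning from results:

import random

from nni.tuner import Tuner

class RandomUniformTuner(Tuner):
    # Hypothetical example tuner: samples every 'uniform' parameter at random.

    def __init__(self):
        self.space = {}

    def update_search_space(self, search_space):
        # Called before the first generate_parameters(), and again on updates.
        self.space = search_space

    def generate_parameters(self, parameter_id, **kwargs):
        # Draw one random value for each 'uniform' entry in the search space.
        return {
            name: random.uniform(*spec['_value'])
            for name, spec in self.space.items()
            if spec['_type'] == 'uniform'
        }

    def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
        # A real tuner would use the reported result to guide future samples.
        pass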

generate_multiple_parameters(parameter_id_list, **kwargs)[source]

Callback method which provides multiple sets of hyper-parameters.

This method will get called when the framework is about to launch one or more new trials.

If user does not override this method, it will invoke generate_parameters() on each parameter ID.

See generate_parameters() for details.

User code must override either this method or generate_parameters().

Parameters
  • parameter_id_list (list of int) – Unique identifiers for each set of requested hyper-parameters. These will later be used in receive_trial_result().

  • **kwargs – Unstable parameters which should be ignored by normal users.

Returns

List of hyper-parameters. An empty list indicates there are no more trials.

Return type

list

generate_parameters(parameter_id, **kwargs)[source]

Abstract method which provides a set of hyper-parameters.

This method will get called when the framework is about to launch a new trial, if user does not override generate_multiple_parameters().

The return value of this method will be received by trials via nni.get_next_parameter(). It should fit in the search space, though the framework will not verify this.

User code must override either this method or generate_multiple_parameters().

Parameters
  • parameter_id (int) – Unique identifier for requested hyper-parameters. This will later be used in receive_trial_result().

  • **kwargs – Unstable parameters which should be ignored by normal users.

Returns

The hyper-parameters, a dict in most cases, but could be any JSON-serializable type when needed.

Return type

any

Raises

nni.NoMoreTrialError – If the search space is fully explored, the tuner can raise this exception.

import_data(data)[source]

Internal API under revising, not recommended for end users.

load_checkpoint()[source]

Internal API under revising, not recommended for end users.

receive_trial_result(parameter_id, parameters, value, **kwargs)[source]

Abstract method invoked when a trial reports its final result. Must override.

This method only listens to results of algorithm-generated hyper-parameters. Currently customized trials added from web UI will not report result to this method.

Parameters
  • parameter_id (int) – Unique identifier of used hyper-parameters, same as in generate_parameters().

  • parameters – Hyper-parameters generated by generate_parameters().

  • value – Result from trial (the return value of nni.report_final_result()).

  • **kwargs – Unstable parameters which should be ignored by normal users.

save_checkpoint()[source]

Internal API under revising, not recommended for end users.

trial_end(parameter_id, success, **kwargs)[source]

Abstract method invoked when a trial is completed or terminated. Does nothing by default.

Parameters
  • parameter_id (int) – Unique identifier for hyper-parameters used by this trial.

  • success (bool) – True if the trial successfully completed; False if failed or terminated.

  • **kwargs – Unstable parameters which should be ignored by normal users.

update_search_space(search_space)[source]

Abstract method for updating the search space. Must override.

Tuners are advised to support updating the search space at run-time. If a tuner can only set the search space once, before generating the first hyper-parameters, it should explicitly document this behaviour.

Parameters

search_space – JSON object defined by experiment owner.

class nni.algorithms.hpo.tpe_tuner.TpeTuner(optimize_mode='minimize', seed=None, tpe_args=None)[source]
Parameters
  • optimize_mode ('minimize' | 'maximize' (default: 'minimize')) – Whether to minimize or maximize the trial result.

  • seed (int | None) – The random seed.

  • tpe_args (dict[string, Any] | None) – Advanced users can use this to customize TPE tuner. See TpeArguments for details.
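
As a hedged sketch, TPE is normally selected through the experiment configuration rather than instantiated directly; the snippet below assumes the nni.experiment API and a local training service:

from nni.experiment import Experiment

experiment = Experiment('local')
experiment.config.tuner.name = 'TPE'
experiment.config.tuner.class_args = {
    'optimize_mode': 'maximize',
    'seed': 42,  # hypothetical seed, for reproducibility
}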

generate_parameters(parameter_id, **kwargs)[source]

Abstract method which provides a set of hyper-parameters.

This method will get called when the framework is about to launch a new trial, if user does not override generate_multiple_parameters().

The return value of this method will be received by trials via nni.get_next_parameter(). It should fit in the search space, though the framework will not verify this.

User code must override either this method or generate_multiple_parameters().

Parameters
  • parameter_id (int) – Unique identifier for requested hyper-parameters. This will later be used in receive_trial_result().

  • **kwargs – Unstable parameters which should be ignored by normal users.

Returns

The hyper-parameters, a dict in most cases, but could be any JSON-serializable type when needed.

Return type

any

Raises

nni.NoMoreTrialError – If the search space is fully explored, the tuner can raise this exception.

import_data(data)[source]

Internal API under revising, not recommended for end users.

receive_trial_result(parameter_id, _parameters, value, **kwargs)[source]

Abstract method invoked when a trial reports its final result. Must override.

This method only listens to results of algorithm-generated hyper-parameters. Currently customized trials added from web UI will not report result to this method.

Parameters
  • parameter_id (int) – Unique identifier of used hyper-parameters, same as in generate_parameters().

  • _parameters – Hyper-parameters generated by generate_parameters().

  • value – Result from trial (the return value of nni.report_final_result()).

  • **kwargs – Unstable parameters which should be ignored by normal users.

trial_end(parameter_id, _success, **kwargs)[source]

Abstract method invoked when a trial is completed or terminated. Does nothing by default.

Parameters
  • parameter_id (int) – Unique identifier for hyper-parameters used by this trial.

  • success (bool) – True if the trial successfully completed; False if failed or terminated.

  • **kwargs – Unstable parameters which should be ignored by normal users.

update_search_space(space)[source]

Abstract method for updating the search space. Must override.

Tuners are advised to support updating the search space at run-time. If a tuner can only set the search space once, before generating the first hyper-parameters, it should explicitly document this behaviour.

Parameters

space – JSON object defined by experiment owner.

class nni.algorithms.hpo.random_tuner.RandomTuner(seed=None)[source]
generate_parameters(*args, **kwargs)[source]

Abstract method which provides a set of hyper-parameters.

This method will get called when the framework is about to launch a new trial, if user does not override generate_multiple_parameters().

The return value of this method will be received by trials via nni.get_next_parameter(). It should fit in the search space, though the framework will not verify this.

User code must override either this method or generate_multiple_parameters().

Parameters
  • parameter_id (int) – Unique identifier for requested hyper-parameters. This will later be used in receive_trial_result().

  • **kwargs – Unstable parameters which should be ignored by normal users.

Returns

The hyper-parameters, a dict in most cases, but could be any JSON-serializable type when needed.

Return type

any

Raises

nni.NoMoreTrialError – If the search space is fully explored, the tuner can raise this exception.

receive_trial_result(*args, **kwargs)[source]

Abstract method invoked when a trial reports its final result. Must override.

This method only listens to results of algorithm-generated hyper-parameters. Currently customized trials added from web UI will not report result to this method.

update_search_space(space)[source]

Abstract method for updating the search space. Must override.

Tuners are advised to support updating search space at run-time. If a tuner can only set search space once before generating first hyper-parameters, it should explicitly document this behaviour.

Parameters

space – JSON object defined by experiment owner.

class nni.algorithms.hpo.hyperopt_tuner.HyperoptTuner(algorithm_name, optimize_mode='minimize', parallel_optimize=False, constant_liar_type='min')[source]

HyperoptTuner is a tuner which uses the hyperopt algorithm.

generate_parameters(parameter_id, **kwargs)[source]

Returns a set of trial (hyper-)parameters, as a serializable object.

Parameters

parameter_id (int) –

Returns

params

Return type

dict

get_suggestion(random_search=False)[source]

Get a suggestion from hyperopt.

Parameters

random_search (bool) – flag indicating whether to use random search (default: False)

Returns

total_params – parameter suggestion

Return type

dict

import_data(data)[source]

Import additional data for tuning

Parameters

data – a list of dictionaries, each of which has at least two keys, 'parameter' and 'value'

miscs_update_idxs_vals(miscs, idxs, vals, assert_all_vals_used=True, idxs_map=None)[source]

Unpack the idxs-vals format into the list of dictionaries that is misc.

Parameters
  • idxs_map (dict) – a dictionary of id->id mappings so that the misc['idxs'] can contain different numbers than the idxs argument.

receive_trial_result(parameter_id, parameters, value, **kwargs)[source]

Record an observation of the objective function

Parameters
  • parameter_id (int) –

  • parameters (dict) –

value (dict/float) – the final metric of the trial; if value is a dict, it should have a "default" key.

update_search_space(search_space)[source]

Update the search space definition in the tuner with search_space.

Called when the experiment is first set up, or when the search space is updated in the WebUI.

Parameters

search_space (dict) –

class nni.algorithms.hpo.evolution_tuner.EvolutionTuner(optimize_mode='maximize', population_size=32)[source]

EvolutionTuner is a tuner using a naive evolution algorithm.

generate_multiple_parameters(parameter_id_list, **kwargs)[source]

Returns multiple sets of trial (hyper-)parameters, as an iterable of serializable objects.

Parameters
  • parameter_id_list (list of int) – Unique identifiers for each set of requested hyper-parameters.

  • **kwargs – Not used

Returns

A list of newly generated configurations

Return type

list

generate_parameters(parameter_id, **kwargs)[source]

Returns a dict of trial (hyper-)parameters. If no trial configuration is available for now, self.credit is increased by 1 so that the configuration can be sent later.

Parameters

parameter_id (int) –

Returns

One newly generated configuration.

Return type

dict

import_data(data)[source]

Internal API under revising, not recommended for end users.

receive_trial_result(parameter_id, parameters, value, **kwargs)[source]

Record the result from a trial

Parameters
  • parameter_id (int) –

  • parameters (dict) –

  • value (dict/float) – the final metric of the trial; if value is a dict, it should have a "default" key.

trial_end(parameter_id, success, **kwargs)[source]

Handles trial failure. If a trial fails, randomly generate new parameters and add them to the population.

Parameters
  • parameter_id (int) – Unique identifier for hyper-parameters used by this trial.

  • success (bool) – True if the trial successfully completed; False if failed or terminated.

  • **kwargs – Not used

update_search_space(search_space)[source]

Update search space.

search_space contains the information that the user pre-defined.

Parameters

search_space (dict) –

class nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner[source]
generate_parameters(*args, **kwargs)[source]

Abstract method which provides a set of hyper-parameters.

This method will get called when the framework is about to launch a new trial, if user does not override generate_multiple_parameters().

The return value of this method will be received by trials via nni.get_next_parameter(). It should fit in the search space, though the framework will not verify this.

User code must override either this method or generate_multiple_parameters().

Parameters
  • parameter_id (int) – Unique identifier for requested hyper-parameters. This will later be used in receive_trial_result().

  • **kwargs – Unstable parameters which should be ignored by normal users.

Returns

The hyper-parameters, a dict in most cases, but could be any JSON-serializable type when needed.

Return type

any

Raises

nni.NoMoreTrialError – If the search space is fully explored, the tuner can raise this exception.

import_data(data)[source]

Internal API under revising, not recommended for end users.

receive_trial_result(*args, **kwargs)[source]

Abstract method invoked when a trial reports its final result. Must override.

This method only listens to results of algorithm-generated hyper-parameters. Currently customized trials added from web UI will not report result to this method.

update_search_space(space)[source]

Abstract method for updating the search space. Must override.

Tuners are advised to support updating the search space at run-time. If a tuner can only set the search space once, before generating the first hyper-parameters, it should explicitly document this behaviour.

Parameters

space – JSON object defined by experiment owner.

class nni.algorithms.hpo.networkmorphism_tuner.NetworkMorphismTuner(task='cv', input_width=32, input_channel=3, n_output_node=10, algorithm_name='Bayesian', optimize_mode='maximize', path='model_path', verbose=True, beta=2.576, t_min=0.0001, max_model_size=16777216, default_model_len=3, default_model_width=64)[source]

NetworkMorphismTuner is a tuner which uses network morphism techniques.

n_classes

The class number or output node number (default: 10)

Type

int

input_shape

A tuple including: (input_width, input_width, input_channel)

Type

tuple

t_min

The minimum temperature for simulated annealing. (default: Constant.T_MIN)

Type

float

beta

The beta in acquisition function. (default: Constant.BETA)

Type

float

algorithm_name

algorithm name used in the network morphism (default: "Bayesian")

Type

str

optimize_mode

optimize mode, "minimize" or "maximize" (default: "maximize")

Type

str

verbose

whether to print verbose logs (default: True)

Type

bool

bo

The optimizer used in the network morphism tuner.

Type

BayesianOptimizer

max_model_size

maximum model size of the graph (default: Constant.MAX_MODEL_SIZE)

Type

int

default_model_len

default model length (default: Constant.MODEL_LEN)

Type

int

default_model_width

default model width (default: Constant.MODEL_WIDTH)

Type

int

search_space
Type

dict

add_model(metric_value, model_id)[source]

Add model to the history, x_queue and y_queue

Parameters
  • metric_value (float) –

  • graph (dict) –

  • model_id (int) –

Returns

model

Return type

dict

generate()[source]

Generate the next neural architecture.

Returns

  • other_info (any object) – Anything to be saved in the training queue together with the architecture.

  • generated_graph (Graph) – An instance of Graph.

generate_parameters(parameter_id, **kwargs)[source]

Returns a trial's neural architecture, as a serializable object.

Parameters

parameter_id (int) –

get_best_model_id()[source]

Get the best model_id from history using the metric value

get_metric_value_by_id(model_id)[source]

Get the model metric value by its model_id

Parameters

model_id (int) – model index

Returns

the model metric

Return type

float

import_data(data)[source]

Internal API under revising, not recommended for end users.

init_search()[source]

Call the generators to generate the initial architectures for the search.

load_best_model()[source]

Get the best model by model id

Returns

load_model – the model graph representation

Return type

graph.Graph

load_model_by_id(model_id)[source]

Get the model by model_id

Parameters

model_id (int) – model index

Returns

load_model – the model graph representation

Return type

graph.Graph

receive_trial_result(parameter_id, parameters, value, **kwargs)[source]

Record an observation of the objective function.

Parameters
  • parameter_id (int) – the id of a group of parameters generated by the NNI manager.

  • parameters (dict) – A group of parameters.

  • value (dict/float) – if value is a dict, it should have a "default" key.

update(other_info, graph, metric_value, model_id)[source]

Update the controller with evaluation result of a neural architecture.

Parameters
  • other_info (any object) – In our case it is the father ID in the search tree.

  • graph (graph.Graph) – An instance of Graph. The trained neural architecture.

  • metric_value (float) – The final evaluated metric value.

  • model_id (int) –

update_search_space(search_space)[source]

Update the search space definition in the tuner with search_space.

class nni.algorithms.hpo.metis_tuner.MetisTuner(optimize_mode='maximize', no_resampling=True, no_candidates=False, selection_num_starting_points=600, cold_start_num=10, exploration_probability=0.9)[source]

Metis Tuner

More information about the algorithm can be found here: https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/

optimize_mode

optimize_mode is a string with two possible modes: "maximize" and "minimize"

Type

str

no_resampling

True or False. Should Metis consider re-sampling as part of the search strategy? If you are confident that the training dataset is noise-free, then you do not need re-sampling.

Type

bool

no_candidates

True or False. Should Metis suggest parameters for the next benchmark? If you do not plan to do more benchmarks, Metis can skip this step.

Type

bool

selection_num_starting_points

How many times Metis should try to find the global optimum in the search space. The higher the number, the longer it takes to output the solution.

Type

int

cold_start_num

Metis needs some trial results for a cold start. When the number of trial results is less than cold_start_num, Metis will randomly sample hyper-parameters for the trial.

Type

int

exploration_probability

The probability that Metis selects a parameter from exploration instead of exploitation.

Type

float

generate_parameters(parameter_id, **kwargs)[source]

Generate next parameter for trial

If the number of trial results is lower than the cold start number, Metis will first randomly generate some parameters. Otherwise, Metis will choose the parameters using the Gaussian Process model and the Gaussian Mixture Model.

Parameters

parameter_id (int) –

Returns

result

Return type

dict

import_data(data)[source]

Import additional data for tuning

Parameters

data (a list of dict) – each of which has at least two keys: ‘parameter’ and ‘value’.

receive_trial_result(parameter_id, parameters, value, **kwargs)[source]

The tuner receives a result from a trial.

Parameters
  • parameter_id (int) – The id of parameters, generated by nni manager.

  • parameters (dict) – A group of parameters that trial has tried.

  • value (dict/float) – if value is a dict, it should have a "default" key.

update_search_space(search_space)[source]

Update self.x_bounds and self.x_types from search_space.json.

Parameters

search_space (dict) –

class nni.algorithms.hpo.batch_tuner.BatchTuner[source]

BatchTuner is a tuner that runs all the configurations that the user wants to try, in batch.

Examples

Only search spaces of the following form are accepted:

{'combine_params':
    {'_type': 'choice',
     '_value': [{...}, {...}, {...}],
    }
}
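
For instance, a hypothetical concrete search space for BatchTuner, where the keys lr and batch_size are made up, might be:

{'combine_params':
    {'_type': 'choice',
     '_value': [{'lr': 0.01, 'batch_size': 32},
                {'lr': 0.001, 'batch_size': 64},
                {'lr': 0.001, 'batch_size': 128}],
    }
}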
generate_parameters(parameter_id, **kwargs)[source]

Returns a dict of trial (hyper-)parameters, as a serializable object.

Parameters

parameter_id (int) –

Returns

A candidate parameter group.

Return type

dict

import_data(data)[source]

Import additional data for tuning

Parameters

data – a list of dictionaries, each of which has at least two keys, 'parameter' and 'value'

is_valid(search_space)[source]

Check that the search space is valid: it may only contain the 'choice' type

Parameters

search_space (dict) –

Returns

If valid, return candidate values; else return None.

Return type

None or list

receive_trial_result(parameter_id, parameters, value, **kwargs)[source]

Abstract method invoked when a trial reports its final result. Must override.

This method only listens to results of algorithm-generated hyper-parameters. Currently customized trials added from web UI will not report result to this method.

Parameters
  • parameter_id (int) – Unique identifier of used hyper-parameters, same as in generate_parameters().

  • parameters – Hyper-parameters generated by generate_parameters().

  • value – Result from trial (the return value of nni.report_final_result()).

  • **kwargs – Unstable parameters which should be ignored by normal users.

update_search_space(search_space)[source]

Update the search space

Parameters

search_space (dict) –

class nni.algorithms.hpo.gp_tuner.GPTuner(optimize_mode='maximize', utility='ei', kappa=5, xi=0, nu=2.5, alpha=1e-06, cold_start_num=10, selection_num_warm_up=100000, selection_num_starting_points=250)[source]

GPTuner is a Bayesian Optimization method where Gaussian Process is used for modeling loss functions.

Parameters
  • optimize_mode (str) – optimize mode, ‘maximize’ or ‘minimize’, by default ‘maximize’

  • utility (str) – utility function (also called 'acquisition function') to use, which can be 'ei', 'ucb' or 'poi'. By default 'ei'.

  • kappa (float) – value used by utility function 'ucb'. The bigger kappa is, the more exploratory the tuner will be. By default 5.

  • xi (float) – used by utility functions 'ei' and 'poi'. The bigger xi is, the more exploratory the tuner will be. By default 0.

  • nu (float) – used to specify Matern kernel. The smaller nu, the less smooth the approximated function is. By default 2.5.

  • alpha (float) – Used to specify Gaussian Process Regressor. Larger values correspond to increased noise level in the observations. By default 1e-6.

  • cold_start_num (int) – Number of random explorations to perform before the Gaussian Process starts. By default 10.

  • selection_num_warm_up (int) – Number of random points to evaluate for getting the point which maximizes the acquisition function. By default 100000

  • selection_num_starting_points (int) – Number of times to run L-BFGS-B from a random starting point after the warmup. By default 250.

generate_parameters(parameter_id, **kwargs)[source]

Method which provides one set of hyper-parameters. If the number of trial results is lower than cold_start_num, GPTuner will first randomly generate some parameters; otherwise, it chooses the parameters using the Gaussian Process model.

Override of the abstract method in Tuner.

import_data(data)[source]

Import additional data for tuning.

Override of the abstract method in Tuner.

receive_trial_result(parameter_id, parameters, value, **kwargs)[source]

Method invoked when a trial reports its final result.

Override of the abstract method in Tuner.

update_search_space(search_space)[source]

Update self.bounds and self.types from the search_space.json file.

Override of the abstract method in Tuner.

Assessor

class nni.assessor.Assessor[source]

Assessor analyzes trial’s intermediate results (e.g., periodically evaluated accuracy on test dataset) to tell whether this trial can be early stopped or not.

This is the abstract base class for all assessors. Early stopping algorithms should inherit this class and override the assess_trial() method, which receives intermediate results from trials and gives an assessment result.

If assess_trial() returns AssessResult.Bad for a trial, it hints the NNI framework that the trial is likely to result in a poor final accuracy and therefore should be killed to save resources.

If an assessor wants to be notified when a trial ends, it can also override trial_end().

To write a new assessor, you can refer to MedianstopAssessor's code as an example.
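
As a minimal sketch of this pattern (not one of NNI's built-in assessors), the assessor below stops a trial whose latest intermediate result falls well below the best result seen so far, assuming non-negative metrics where larger is better:

from nni.assessor import Assessor, AssessResult

class BestFractionAssessor(Assessor):
    # Hypothetical example: early-stop trials far below the best seen so far.

    def __init__(self, tolerance=0.8):
        self.tolerance = tolerance
        self.best = None

    def assess_trial(self, trial_job_id, trial_history):
        latest = trial_history[-1]
        if self.best is None or latest > self.best:
            self.best = latest
        # Kill the trial if it falls below tolerance * best-so-far
        # (assumes non-negative metrics).
        if latest < self.tolerance * self.best:
            return AssessResult.Bad
        return AssessResult.Good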

assess_trial(trial_job_id, trial_history)[source]

Abstract method for determining whether a trial should be killed. Must override.

The NNI framework provides few guarantees about trial_history. This method is not guaranteed to be invoked each time trial_history gets updated. It is also possible that a trial's history keeps updating after it receives a bad result. And if the trial failed and was retried, trial_history may be inconsistent with its previous value.

The only guarantee is that trial_history is always growing. It will not be empty and will always be longer than its previous value.

This is an example of how assess_trial() gets invoked sequentially:

trial_job_id | trial_history   | return value
------------ | --------------- | ------------
Trial_A      | [1.0, 2.0]      | Good
Trial_B      | [1.5, 1.3]      | Bad
Trial_B      | [1.5, 1.3, 1.9] | Good
Trial_A      | [0.9, 1.8, 2.3] | Good
Parameters
  • trial_job_id (str) – Unique identifier of the trial.

  • trial_history (list) – Intermediate results of this trial. The element type is decided by trial code.

Returns

AssessResult.Good or AssessResult.Bad.

Return type

AssessResult

load_checkpoint()[source]

Internal API under revising, not recommended for end users.

save_checkpoint()[source]

Internal API under revising, not recommended for end users.

trial_end(trial_job_id, success)[source]

Abstract method invoked when a trial is completed or terminated. Does nothing by default.

Parameters
  • trial_job_id (str) – Unique identifier of the trial.

  • success (bool) – True if the trial successfully completed; False if failed or terminated.

class nni.assessor.AssessResult(value)[source]

Enum class for Assessor.assess_trial() return value.

Bad = False

The trial works poorly and should be early stopped.

Good = True

The trial works well.

class nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor(epoch_num=20, start_step=6, threshold=0.95, gap=1)[source]

CurvefittingAssessor uses a learning curve fitting algorithm to predict future learning curve performance. It stops a pending trial X at step S if the trial's forecast result at the target step has converged and is lower than the best performance in the history.

Parameters
  • epoch_num (int) – The total number of epochs.

  • start_step (int) – A trial is assessed only after it has reported at least start_step intermediate results.

  • threshold (float) – The threshold used when deciding to early stop a worse-performing curve.

  • gap (int) – The interval between assessments of a trial.

assess_trial(trial_job_id, trial_history)[source]

Assess whether a trial should be early-stopped by the curve fitting algorithm.

Parameters
  • trial_job_id (str) – trial job id

  • trial_history (list) – Intermediate results of this trial.

Returns

AssessResult.Good or AssessResult.Bad

Return type

AssessResult

Raises

Exception – unrecognized exception in curvefitting_assessor

trial_end(trial_job_id, success)[source]

Update the best performance using the completed trial job.

Parameters
  • trial_job_id (str) – trial job id

  • success (bool) – True if the trial successfully completed, False otherwise

class nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor(optimize_mode='maximize', start_step=0)[source]

MedianstopAssessor implements the median stopping rule: it stops a pending trial X at step S if the trial's best objective value by step S is strictly worse than the median value of the running averages of all completed trials' objectives reported up to step S.

Parameters
  • optimize_mode (str) – optimize mode, ‘maximize’ or ‘minimize’

  • start_step (int) – A trial is assessed only after it has reported at least start_step intermediate results.

assess_trial(trial_job_id, trial_history)[source]
Parameters
  • trial_job_id (str) – trial job id

  • trial_history (list) – Intermediate results of this trial.

Returns

AssessResult.Good or AssessResult.Bad

Return type

AssessResult

Raises

Exception – unrecognized exception in medianstop_assessor

trial_end(trial_job_id, success)[source]
Parameters
  • trial_job_id (str) – trial job id

  • success (bool) – True if the trial successfully completed, False otherwise

Advisor

class nni.runtime.msg_dispatcher_base.MsgDispatcherBase[source]

This is the base class in which tuners and assessors are not yet distinguished. Inherit this class to implement your own advisor.

command_queue_worker(command_queue)[source]

Process commands in command queues.

enqueue_command(command, data)[source]

Enqueue command into command queues

handle_add_customized_trial(data)[source]

Experimental API. Not recommended for usage.

handle_import_data(data)[source]

Import previous data when the experiment is resumed.

Parameters

data (list) – a list of dictionaries, each of which has at least two keys, 'parameter' and 'value'

handle_initialize(data)[source]

Initialize the search space and tuner, if any. This method is meant to be called only once for each experiment. After calling it, the dispatcher should send(CommandType.Initialized, ''), to set the status of the experiment to "INITIALIZED".

Parameters

data (dict) – search space

handle_report_metric_data(data)[source]

Called when metric data is reported or new parameters are requested (for multiphase). When new parameters are requested, this method should send a new parameter.

Parameters

data (dict) – a dict which contains ‘parameter_id’, ‘value’, ‘trial_job_id’, ‘type’, ‘sequence’. type: can be MetricType.REQUEST_PARAMETER, MetricType.FINAL or MetricType.PERIODICAL. REQUEST_PARAMETER is used to request new parameters for multiphase trial job. In this case, the dict will contain additional keys: trial_job_id, parameter_index. Refer to msg_dispatcher.py as an example.

Raises

ValueError – Data type is not supported

handle_request_trial_jobs(data)[source]

The message dispatcher is asked to generate data trial jobs. These trial jobs should be sent via send(CommandType.NewTrialJob, nni.dump(parameter)), where parameter will be received by the NNI Manager and eventually accessible to trial jobs as the "next parameter". Semantically, the message dispatcher should perform this send exactly data times.

The JSON sent by this method should follow the format of

{
    "parameter_id": 42
    "parameters": {
        // this will be received by trial
    },
    "parameter_source": "algorithm" // optional
}
Parameters

data (int) – number of trial jobs

handle_trial_end(data)[source]

Called when the state of one of the trials changes.

Parameters

data (dict) – a dict with keys: trial_job_id, event, hyper_params. trial_job_id: the id generated by training service. event: the job’s state. hyper_params: the string that is sent by message dispatcher during the creation of trials.

handle_update_search_space(data)[source]

This method will be called when the search space is updated. It's recommended to call this method in handle_initialize to initialize the search space. No need to notify the NNI Manager when this update is done.

Parameters

data (dict) – search space

process_command_thread(request)[source]

Worker thread to process a command.

run()[source]

Run the tuner. This function will never return unless an exception is raised.

class nni.algorithms.hpo.hyperband_advisor.Hyperband(R=60, eta=3, optimize_mode='maximize', exec_mode='parallelism')[source]

Hyperband inherits from MsgDispatcherBase rather than Tuner, because it integrates both a tuner's and an assessor's functions. This implementation can either fully leverage available resources (high parallelism) or follow the algorithm's process (serial execution). A single execution of Hyperband takes a finite budget of (s_max + 1)B.

Parameters
  • R (int) – the maximum amount of resource that can be allocated to a single configuration

  • eta (int) – the variable that controls the proportion of configurations discarded in each round of SuccessiveHalving

  • optimize_mode (str) – optimize mode, ‘maximize’ or ‘minimize’

  • exec_mode (str) – execution mode, ‘serial’ or ‘parallelism’
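
As with tuners, Hyperband is usually enabled through the experiment configuration; a hedged sketch, assuming the nni.experiment API and that AlgorithmConfig is importable from nni.experiment.config:

from nni.experiment import Experiment
from nni.experiment.config import AlgorithmConfig  # assumed import path

experiment = Experiment('local')
# Hyperband replaces both the tuner and the assessor, so it is configured
# as the advisor.
experiment.config.advisor = AlgorithmConfig(
    name='Hyperband',
    class_args={'R': 60, 'eta': 3,
                'optimize_mode': 'maximize',
                'exec_mode': 'parallelism'})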

handle_add_customized_trial(data)[source]

Experimental API. Not recommended for usage.

handle_import_data(data)[source]

Import previous data when the experiment is resumed.

Parameters

data (list) – a list of dictionaries, each of which has at least two keys, 'parameter' and 'value'

handle_initialize(data)[source]

Callback for initializing the advisor.

Parameters

data (dict) – search space

handle_report_metric_data(data)[source]
Parameters

data – it is an object which has keys ‘parameter_id’, ‘value’, ‘trial_job_id’, ‘type’, ‘sequence’.

Raises

ValueError – Data type not supported

handle_request_trial_jobs(data)[source]
Parameters

data (int) – number of trial jobs

handle_trial_end(data)[source]
Parameters

data (dict) – it has three keys: trial_job_id, event, hyper_params. trial_job_id: the id generated by the training service. event: the job's state. hyper_params: the hyper-parameters (a string) generated and returned by the tuner.

handle_update_search_space(data)[source]

data: JSON object representing the search space

Utilities

nni.utils.merge_parameter(base_params, override_params)[source]

Update the parameters in base_params with override_params. Can be useful to override parsed command line arguments.

Parameters
  • base_params (namespace or dict) – Base parameters. A key-value mapping.

  • override_params (dict or None) – Parameters to override. Usually the parameters obtained from get_next_parameter(). When it is None, nothing will happen.

Returns

The updated base_params. Note that base_params will be updated in place. The return value is only for convenience.

Return type

namespace or dict
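
A minimal sketch of the typical pattern, assuming argparse-style defaults that the tuner's suggestions should override:

import argparse

import nni
from nni.utils import merge_parameter

parser = argparse.ArgumentParser()
parser.add_argument('--lr', type=float, default=0.01)  # hypothetical parameter
args = parser.parse_args()

# Override the command-line defaults with the tuner's suggestions (in place).
tuned = nni.get_next_parameter()
args = merge_parameter(args, tuned)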

nni.trace(cls_or_func: Optional[nni.common.serializer.T] = None, *, kw_only: bool = True) Union[nni.common.serializer.T, nni.common.serializer.Traceable][source]

Annotate a function or a class if you want to preserve where it comes from. This is usually used in the following scenarios:

  1. You care more about the execution configuration than about the results, which is usually the case in AutoML. For example, you want to mutate the parameters of a function.

  2. Repeat execution is not an issue (e.g., reproducible, execution is fast without side effects).

When a class/function is annotated, all the instances/calls will return an object as they normally would. Although the object might act like a normal object, it's actually a different object with NNI-specific properties. One exception is that if your function returns None, it will return an empty traceable object instead, which should raise your attention when you want to check whether the None is None.

When a function's parameters are received, they are first stored, and then a shallow copy is passed to the wrapped function/class. This prevents mutable objects from being modified inside the wrapped function/class. When the function finishes execution, we also record extra information about where this object comes from. That's why it's called "trace". When nni.dump is called, that information will be used by default.

If kw_only is true, try to convert all parameters into kwargs type. This is done by inspecting the argument list and types. This can be useful to extract semantics, but can be tricky in some corner cases.

Warning

Generators will first be expanded into a list, and the resulting list will be passed to the wrapped function/class. This might hang when a generator produces an infinite sequence. We might introduce an API to control this behavior in the future.

Example:

@nni.trace
def foo(bar):
    pass
nni.dump(obj: Any, fp: Optional[Any] = None, *, use_trace: bool = True, pickle_size_limit: int = 4096, allow_nan: bool = True, **json_tricks_kwargs) Union[str, bytes][source]

Convert a nested data structure to a JSON string, and save it to a file if fp is specified. json-tricks is used as the main backend; for cases json-tricks cannot handle, cloudpickle is used. The serializer is not designed for long-term storage, but rather for copying data between processes. The format is also subject to change between NNI releases.

Parameters
  • obj (any) – The object to dump.

  • fp (file handler or path) – File to write to. Keep it None if you want the result returned as a string.

  • pickle_size_limit (int) – This is set to avoid overly long serialization results. Set to -1 to disable the size check.

  • allow_nan (bool) – Whether to allow nan to be serialized. Different from default value in json-tricks, our default value is true.

  • json_tricks_kwargs (dict) – Other keyword arguments passed to json tricks (backend), e.g., indent=2.

Returns

Normally str. Sometimes bytes (if compressed).

Return type

str or bytes

nni.load(string: Optional[str] = None, *, fp: Optional[Any] = None, ignore_comments: bool = True, **json_tricks_kwargs) Any[source]

Load from the string or from a file, and convert it to a complex data structure. At least one of string and fp must not be None.

Parameters
  • string (str) – JSON string to parse. Can be set to none if fp is used.

  • fp (str) – File path to load JSON from. Can be set to none if string is used.

  • ignore_comments (bool) – Remove comments (starting with # or //). Default is true.

Returns

The loaded object.

Return type

any
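
A minimal round-trip sketch of dump and load:

import nni

payload = {'lr': 0.1, 'layers': [64, 32]}   # hypothetical data
text = nni.dump(payload)     # serialize to a JSON string
restored = nni.load(text)    # parse it back
assert restored == payload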