Python API Reference for Auto Tuning¶
Trial¶
-
nni.get_next_parameter()[source]¶ Get the hyper-parameters generated by the tuner. For a multiphase experiment, it returns a new group of hyper-parameters at each call of get_next_parameter. For a non-multiphase experiment (multiPhase is not configured or set to False), it returns hyper-parameters only on the first call in each trial job; subsequent calls return None. In an experiment that is not multiphase, this API should therefore be called only once per trial job.
- Returns
A dict containing the hyper-parameters generated by the tuner; the keys of the dict are defined in the search space. Returns None if no more hyper-parameters can be generated by the tuner.
- Return type
dict
-
nni.get_current_parameter(tag=None)[source]¶ Get the current hyper-parameters generated by the tuner. It returns the same group of hyper-parameters as the last call of get_next_parameter returned.
- Parameters
tag (str) – hyper parameter key
-
nni.report_intermediate_result(metric)[source]¶ Reports an intermediate result to NNI.
- Parameters
metric – serializable object.
-
nni.report_final_result(metric)[source]¶ Reports the final result to NNI.
- Parameters
metric (serializable object) – Usually (for built-in tuners to work), it should be a number, or a dict with key “default” (a number), and any other extra keys.
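Taken together, these APIs form the skeleton of a trial script. A minimal sketch, in which train_one_epoch and evaluate are hypothetical stand-ins for the user's own training and evaluation code:
import random
import nni

def train_one_epoch(lr):
    pass  # stand-in for the user's training code

def evaluate():
    return random.random()  # stand-in for the user's evaluation code

def main():
    # Receive hyper-parameters generated by the tuner, e.g. {'lr': 0.01}.
    params = nni.get_next_parameter()
    accuracy = 0.0
    for epoch in range(10):
        train_one_epoch(lr=params['lr'])
        accuracy = evaluate()
        nni.report_intermediate_result(accuracy)  # one report per epoch
    # A number, or a dict with key 'default', so built-in tuners can use it.
    nni.report_final_result(accuracy)

if __name__ == '__main__':
    main()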
-
nni.get_experiment_id()[source]¶ Get the experiment ID.
- Returns
Identifier of current experiment
- Return type
str
-
nni.get_trial_id()[source]¶ Get the trial job ID, a string identifier of a trial job, for example 'MoXrp'. Within one experiment, each trial job has a unique string ID.
- Returns
Identifier of current trial job which is calling this API.
- Return type
str
-
nni.get_sequence_id()[source]¶ Get the trial job sequence number. A sequence number is an integer assigned to each trial job based on the order in which they are submitted, starting from 0. Within one experiment, both the trial job ID and the sequence number are unique for each trial job, but they are of different data types.
- Returns
Sequence number of current trial job which is calling this API.
- Return type
int
Tuner¶
-
class nni.tuner.Tuner[source]¶ Tuner is an AutoML algorithm, which generates a new configuration for the next try. A new trial will run with this configuration.
This is the abstract base class for all tuners. Tuning algorithms should inherit this class and override update_search_space(), receive_trial_result(), as well as generate_parameters() or generate_multiple_parameters().
After initializing, NNI will first call update_search_space() to tell the tuner the feasible region, and then call generate_parameters() one or more times to request hyper-parameter configurations. The framework will train several models with the given configurations. When one of them finishes, the final accuracy will be reported to receive_trial_result(). Then another configuration will be requested and trained, until the whole experiment finishes.
If a tuner wants to know when a trial ends, it can also override trial_end().
Tuners use parameter IDs to track trials. In the tuner context, there is a one-to-one mapping between parameter ID and trial. When the framework asks the tuner to generate hyper-parameters for a new trial, an ID has already been assigned and can be recorded in generate_parameters(). Later, when the trial ends, the ID will be reported to trial_end(), and to receive_trial_result() if it has a final result. Parameter IDs are unique integers.
The type/format of search space and hyper-parameters are not limited, as long as they are JSON-serializable and in sync with the trial code. For HPO tuners, however, there is a widely shared common interface, which supports choice, randint, uniform, and so on. See docs/en_US/Tutorial/SearchSpaceSpec.md for details of this interface.
[WIP] For advanced tuners which take advantage of trials' intermediate results, an Advisor interface is under development.
See also
Builtin, HyperoptTuner, EvolutionTuner, SMACTuner, GridSearchTuner, NetworkMorphismTuner, MetisTuner, PPOTuner, GPTuner
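To make this contract concrete, here is a hedged sketch of a custom tuner that samples uniformly at random from a choice-only search space. The class name RandomChoiceTuner and its sampling strategy are illustrative, not part of NNI:
import random
import nni.tuner

class RandomChoiceTuner(nni.tuner.Tuner):
    """Illustrative tuner: samples each 'choice' parameter uniformly at random."""

    def __init__(self):
        self.search_space = None

    def update_search_space(self, search_space):
        # Called at startup to receive the feasible region (and possibly
        # again at run-time).
        self.search_space = search_space

    def generate_parameters(self, parameter_id, **kwargs):
        # parameter_id is assigned by the framework and will come back in
        # receive_trial_result() / trial_end().
        return {name: random.choice(spec['_value'])
                for name, spec in self.search_space.items()}

    def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
        # A random tuner ignores results; a real tuner would update its
        # internal model here.
        pass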
-
generate_multiple_parameters(parameter_id_list, **kwargs)[source]¶ Callback method which provides multiple sets of hyper-parameters.
This method will get called when the framework is about to launch one or more new trials.
If the user does not override this method, it will invoke generate_parameters() on each parameter ID.
See generate_parameters() for details.
User code must override either this method or generate_parameters().
- Parameters
parameter_id_list (list of int) – Unique identifiers for each set of requested hyper-parameters. These will later be used in receive_trial_result().
**kwargs – Unstable parameters which should be ignored by normal users.
- Returns
List of hyper-parameters. An empty list indicates there are no more trials.
- Return type
list
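When not overridden, the default behavior is roughly equivalent to the following sketch of a method body inside a Tuner subclass (it assumes nni is imported, and uses nni.NoMoreTrialError, documented under generate_parameters(), to signal exhaustion):
def generate_multiple_parameters(self, parameter_id_list, **kwargs):
    # Rough sketch of the default behavior: delegate to generate_parameters()
    # once per parameter ID, stopping early when the space is exhausted.
    result = []
    for parameter_id in parameter_id_list:
        try:
            result.append(self.generate_parameters(parameter_id, **kwargs))
        except nni.NoMoreTrialError:
            break  # a shortened / empty list indicates there are no more trials
    return result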
-
generate_parameters(parameter_id, **kwargs)[source]¶ Abstract method which provides a set of hyper-parameters.
This method will get called when the framework is about to launch a new trial, if the user does not override generate_multiple_parameters().
The return value of this method will be received by trials via nni.get_next_parameter(). It should fit in the search space, though the framework will not verify this.
User code must override either this method or generate_multiple_parameters().
- Parameters
parameter_id (int) – Unique identifier for the requested hyper-parameters. This will later be used in receive_trial_result().
**kwargs – Unstable parameters which should be ignored by normal users.
- Returns
The hyper-parameters, a dict in most cases, but could be any JSON-serializable type when needed.
- Return type
any
- Raises
nni.NoMoreTrialError – If the search space is fully explored, the tuner can raise this exception.
-
receive_trial_result(parameter_id, parameters, value, **kwargs)[source]¶ Abstract method invoked when a trial reports its final result. Must override.
This method only listens to results of algorithm-generated hyper-parameters. Currently, customized trials added from the web UI will not report results to this method.
- Parameters
parameter_id (int) – Unique identifier of the used hyper-parameters, same as in generate_parameters().
parameters – Hyper-parameters generated by generate_parameters().
value – Result from the trial (the return value of nni.report_final_result()).
**kwargs – Unstable parameters which should be ignored by normal users.
-
trial_end(parameter_id, success, **kwargs)[source]¶ Abstract method invoked when a trial is completed or terminated. Does nothing by default.
- Parameters
parameter_id (int) – Unique identifier for hyper-parameters used by this trial.
success (bool) – True if the trial successfully completed; False if failed or terminated.
**kwargs – Unstable parameters which should be ignored by normal users.
-
update_search_space(search_space)[source]¶ Abstract method for updating the search space. Must override.
Tuners are advised to support updating the search space at run-time. If a tuner can only set the search space once, before generating the first hyper-parameters, it should explicitly document this behaviour.
- Parameters
search_space – JSON object defined by experiment owner.
-
-
class nni.algorithms.hpo.hyperopt_tuner.HyperoptTuner(algorithm_name, optimize_mode='minimize', parallel_optimize=False, constant_liar_type='min')[source]¶ HyperoptTuner is a tuner which uses the hyperopt algorithm.
-
generate_parameters(parameter_id, **kwargs)[source]¶ Returns a set of trial (hyper-)parameters, as a serializable object.
- Parameters
parameter_id (int) –
- Returns
params
- Return type
dict
-
get_suggestion(random_search=False)[source]¶ Get a suggestion from hyperopt.
- Parameters
random_search (bool) – flag indicating whether to use random search (default: False)
- Returns
total_params – parameter suggestion
- Return type
dict
-
import_data(data)[source]¶ Import additional data for tuning.
- Parameters
data – a list of dictionaries, each of which has at least two keys: 'parameter' and 'value'
-
miscs_update_idxs_vals(miscs, idxs, vals, assert_all_vals_used=True, idxs_map=None)[source]¶ Unpack the idxs-vals format into the list of dictionaries that is misc.
- Parameters
idxs_map (dict) – a dictionary of id->id mappings, so that misc['idxs'] can contain different numbers than the idxs argument.
-
-
class nni.algorithms.hpo.evolution_tuner.EvolutionTuner(optimize_mode='maximize', population_size=32)[source]¶ EvolutionTuner is a tuner using a naive evolution algorithm.
-
generate_multiple_parameters(parameter_id_list, **kwargs)[source]¶ Returns multiple sets of trial (hyper-)parameters, as an iterable of serializable objects.
- Parameters
parameter_id_list (list of int) – Unique identifiers for each set of requested hyper-parameters.
**kwargs – Not used.
- Returns
A list of newly generated configurations
- Return type
list
-
generate_parameters(parameter_id, **kwargs)[source]¶ Returns a dict of trial (hyper-)parameters. If no trial configuration is available for now, self.credit is increased by 1 so that the configuration can be sent later.
- Parameters
parameter_id (int) –
- Returns
One newly generated configuration.
- Return type
dict
-
receive_trial_result(parameter_id, parameters, value, **kwargs)[source]¶ Record the result from a trial.
- Parameters
parameter_id (int) –
parameters (dict) –
value (dict/float) – the final metric of the trial; if value is a dict, it should have a "default" key.
-
trial_end(parameter_id, success, **kwargs)[source]¶ Deal with trial failure. If a trial fails, randomly generate new parameters and add them into the population.
- Parameters
parameter_id (int) – Unique identifier for hyper-parameters used by this trial.
success (bool) – True if the trial successfully completed; False if failed or terminated.
**kwargs – Not used.
-
-
class nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner[source]¶ GridSearchTuner will search all the possible configurations that the user defines in the search space. The only acceptable types of search space are choice, quniform, and randint.
Type choice will select one of the options. Note that it can also be nested.
Type quniform will receive three values [low, high, q], where [low, high] specifies a range and q specifies the interval. It will be sampled in a way that the first sampled value is low, and each of the following values is q larger than the value before it.
Type randint gives all possible integers in the range [low, high). Note that high is not included.
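For example, a hedged sketch of a search space (the parameter names are illustrative) that GridSearchTuner would expand into 2 × 4 × 3 = 24 configurations:
{
    "optimizer": {"_type": "choice", "_value": ["sgd", "adam"]},
    "hidden_size": {"_type": "quniform", "_value": [64, 256, 64]},
    "num_layers": {"_type": "randint", "_value": [1, 4]}
}
Here quniform expands to 64, 128, 192, 256, and randint expands to 1, 2, 3.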
-
generate_parameters(parameter_id, **kwargs)[source]¶ Generate parameters for one trial.
- Parameters
parameter_id (int) – The ID for the generated hyper-parameters
**kwargs – Not used.
- Returns
One configuration from the expanded search space.
- Return type
dict
- Raises
NoMoreTrialError – If all the configurations have been sent, raise NoMoreTrialError.
-
import_data(data)[source]¶ Import additional data for tuning.
- Parameters
data (list) – A list of dictionaries, each of which has at least two keys: parameter and value.
-
receive_trial_result(parameter_id, parameters, value, **kwargs)[source]¶ Receive a trial's final performance result reported through report_final_result() by the trial. GridSearchTuner does not need trials' results.
-
update_search_space(search_space)[source]¶ Check if the search space is valid and expand it: supports only choice, quniform, and randint.
- Parameters
search_space (dict) – For the format, refer to the search space spec (https://nni.readthedocs.io/en/latest/Tutorial/SearchSpaceSpec.html).
-
-
class nni.algorithms.hpo.networkmorphism_tuner.NetworkMorphismTuner(task='cv', input_width=32, input_channel=3, n_output_node=10, algorithm_name='Bayesian', optimize_mode='maximize', path='model_path', verbose=True, beta=2.576, t_min=0.0001, max_model_size=16777216, default_model_len=3, default_model_width=64)[source]¶ NetworkMorphismTuner is a tuner which uses network morphism techniques.
-
n_classes¶ The class number or output node number (default: 10)
- Type
int
-
input_shape¶ A tuple including: (input_width, input_width, input_channel)
- Type
tuple
-
t_min¶ The minimum temperature for simulated annealing. (default: Constant.T_MIN)
- Type
float
-
beta¶ The beta in the acquisition function. (default: Constant.BETA)
- Type
float
-
algorithm_name¶ Algorithm name used in the network morphism (default: "Bayesian")
- Type
str
-
optimize_mode¶ Optimize mode, "minimize" or "maximize" (default: "minimize")
- Type
str
-
verbose¶ Whether to print the log (default: True)
- Type
bool
-
bo¶ The optimizer used in the network morphism tuner.
- Type
BayesianOptimizer
-
max_model_size¶ Max model size of the graph (default: Constant.MAX_MODEL_SIZE)
- Type
int
-
default_model_len¶ Default model length (default: Constant.MODEL_LEN)
- Type
int
-
default_model_width¶ Default model width (default: Constant.MODEL_WIDTH)
- Type
int
-
search_space¶
- Type
dict
-
add_model(metric_value, model_id)[source]¶ Add a model to the history, x_queue and y_queue.
- Parameters
metric_value (float) –
graph (dict) –
model_id (int) –
- Returns
model
- Return type
dict
-
generate()[source]¶ Generate the next neural architecture.
- Returns
other_info (any object) – Anything to be saved in the training queue together with the architecture.
generated_graph (Graph) – An instance of Graph.
-
generate_parameters(parameter_id, **kwargs)[source]¶ Returns a trial's neural architecture, as a serializable object.
- Parameters
parameter_id (int) –
-
get_metric_value_by_id(model_id)[source]¶ Get the model's metric value by its model_id.
- Parameters
model_id (int) – model index
- Returns
the model metric
- Return type
float
-
load_best_model()[source]¶ Get the best model by model id.
- Returns
load_model – the model graph representation
- Return type
-
load_model_by_id(model_id)[source]¶ Get the model by model_id.
- Parameters
model_id (int) – model index
- Returns
load_model – the model graph representation
- Return type
-
receive_trial_result(parameter_id, parameters, value, **kwargs)[source]¶ Record an observation of the objective function.
- Parameters
parameter_id (int) – the ID of a group of parameters generated by the NNI manager.
parameters (dict) – A group of parameters.
value (dict/float) – if value is a dict, it should have a "default" key.
-
update(other_info, graph, metric_value, model_id)[source]¶ Update the controller with the evaluation result of a neural architecture.
- Parameters
other_info (any object) – In our case it is the father ID in the search tree.
graph (Graph) – An instance of Graph. The trained neural architecture.
metric_value (float) – The final evaluated metric value.
model_id (int) –
-
-
class nni.algorithms.hpo.metis_tuner.MetisTuner(optimize_mode='maximize', no_resampling=True, no_candidates=False, selection_num_starting_points=600, cold_start_num=10, exploration_probability=0.9)[source]¶ Metis Tuner
More information about the algorithm can be found here: https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/
-
optimize_mode¶ optimize_mode is a string, either "maximize" or "minimize"
- Type
str
-
no_resampling¶ True or False. Should Metis consider re-sampling as part of the search strategy? If you are confident that the training dataset is noise-free, then you do not need re-sampling.
- Type
bool
-
no_candidates¶ True or False. Should Metis suggest parameters for the next benchmark? If you do not plan to do more benchmarks, Metis can skip this step.
- Type
bool
-
selection_num_starting_points¶ How many times Metis should try to find the global optimum in the search space. The higher the number, the longer it takes to output the solution.
- Type
int
-
cold_start_num¶ Metis needs some trial results for a cold start. When the number of trial results is less than cold_start_num, Metis will randomly sample hyper-parameters for trials.
- Type
int
-
exploration_probability¶ The probability that Metis selects parameters via exploration instead of exploitation.
- Type
float
-
generate_parameters(parameter_id, **kwargs)[source]¶ Generate the next parameters for a trial.
If the number of trial results is lower than the cold start number, Metis will first randomly generate some parameters. Otherwise, Metis will choose the parameters using the Gaussian Process Model and the Gaussian Mixture Model.
- Parameters
parameter_id (int) –
- Returns
result
- Return type
dict
-
import_data(data)[source]¶ Import additional data for tuning.
- Parameters
data (a list of dict) – each of which has at least two keys: ‘parameter’ and ‘value’.
-
receive_trial_result(parameter_id, parameters, value, **kwargs)[source]¶ The tuner receives a result from a trial.
- Parameters
parameter_id (int) – The ID of the parameters, generated by the NNI manager.
parameters (dict) – A group of parameters that the trial has tried.
value (dict/float) – if value is a dict, it should have a "default" key.
-
-
class nni.algorithms.hpo.batch_tuner.BatchTuner[source]¶ BatchTuner is a tuner that runs all the configurations the user wants to try, as a batch.
Examples
The search space must take the following form:
{'combine_params': { '_type': 'choice', '_value': '[{...}, {...}, {...}]', } }
-
generate_parameters(parameter_id, **kwargs)[source]¶ Returns a dict of trial (hyper-)parameters, as a serializable object.
- Parameters
parameter_id (int) –
- Returns
A candidate parameter group.
- Return type
dict
-
import_data(data)[source]¶ Import additional data for tuning.
- Parameters
data – a list of dictionaries, each of which has at least two keys: 'parameter' and 'value'
-
is_valid(search_space)[source]¶ Check whether the search space is valid: it may only contain the 'choice' type.
- Parameters
search_space (dict) –
- Returns
If valid, return candidate values; else return None.
- Return type
None or list
-
receive_trial_result(parameter_id, parameters, value, **kwargs)[source]¶ Abstract method invoked when a trial reports its final result. Must override.
This method only listens to results of algorithm-generated hyper-parameters. Currently, customized trials added from the web UI will not report results to this method.
- Parameters
parameter_id (int) – Unique identifier of the used hyper-parameters, same as in generate_parameters().
parameters – Hyper-parameters generated by generate_parameters().
value – Result from the trial (the return value of nni.report_final_result()).
**kwargs – Unstable parameters which should be ignored by normal users.
-
-
class nni.algorithms.hpo.gp_tuner.GPTuner(optimize_mode='maximize', utility='ei', kappa=5, xi=0, nu=2.5, alpha=1e-06, cold_start_num=10, selection_num_warm_up=100000, selection_num_starting_points=250)[source]¶ GPTuner is a Bayesian Optimization method in which a Gaussian Process is used for modeling loss functions.
- Parameters
optimize_mode (str) – optimize mode, ‘maximize’ or ‘minimize’, by default ‘maximize’
utility (str) – utility function (also called 'acquisition function') to use, which can be 'ei', 'ucb' or 'poi'. By default 'ei'.
kappa (float) – value used by utility function ‘ucb’. The bigger kappa is, the more the tuner will be exploratory. By default 5.
xi (float) – used by utility function ‘ei’ and ‘poi’. The bigger xi is, the more the tuner will be exploratory. By default 0.
nu (float) – used to specify Matern kernel. The smaller nu, the less smooth the approximated function is. By default 2.5.
alpha (float) – Used to specify Gaussian Process Regressor. Larger values correspond to increased noise level in the observations. By default 1e-6.
cold_start_num (int) – Number of random explorations to perform before the Gaussian Process. By default 10.
selection_num_warm_up (int) – Number of random points to evaluate when finding the point which maximizes the acquisition function. By default 100000.
selection_num_starting_points (int) – Number of times to run L-BFGS-B from a random starting point after the warm-up. By default 250.
-
generate_parameters(parameter_id, **kwargs)[source]¶ Method which provides one set of hyper-parameters. If the number of trial results is lower than cold_start_num, GPTuner will first randomly generate some parameters. Otherwise, it chooses the parameters using the Gaussian Process Model.
Override of the abstract method in Tuner.
-
import_data(data)[source]¶ Import additional data for tuning.
Override of the abstract method in Tuner.
Assessor¶
-
class nni.assessor.Assessor[source]¶ Assessor analyzes a trial's intermediate results (e.g., periodically evaluated accuracy on a test dataset) to tell whether this trial can be early stopped or not.
This is the abstract base class for all assessors. Early stopping algorithms should inherit this class and override the assess_trial() method, which receives intermediate results from trials and gives an assessing result.
If assess_trial() returns AssessResult.Bad for a trial, it hints the NNI framework that the trial is likely to result in a poor final accuracy, and therefore should be killed to save resources.
If an assessor wants to be notified when a trial ends, it can also override trial_end().
To write a new assessor, you can reference MedianstopAssessor's code as an example.
See also
Builtin, MedianstopAssessor, CurvefittingAssessor
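As an illustration, a hedged sketch of a minimal assessor that stops a trial once its latest intermediate result falls below a fixed threshold. The class name ThresholdAssessor and the threshold value are illustrative, and the sketch assumes the trial reports numeric results:
from nni.assessor import Assessor, AssessResult

class ThresholdAssessor(Assessor):
    """Illustrative assessor: kill trials whose latest result drops below a threshold."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def assess_trial(self, trial_job_id, trial_history):
        # trial_history only grows; the last element is the newest result.
        if trial_history[-1] < self.threshold:
            return AssessResult.Bad  # hint NNI to early-stop this trial
        return AssessResult.Good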
-
assess_trial(trial_job_id, trial_history)[source]¶ Abstract method for determining whether a trial should be killed. Must override.
The NNI framework has little guarantee on trial_history. This method is not guaranteed to be invoked each time trial_history gets updated. It is also possible that a trial's history keeps updating after receiving a bad result. And if the trial failed and retried, trial_history may be inconsistent with its previous value.
The only guarantee is that trial_history is always growing. It will not be empty and will always be longer than its previous value.
This is an example of how assess_trial() gets invoked sequentially:
trial_job_id | trial_history   | return value
------------ | --------------- | ------------
Trial_A      | [1.0, 2.0]      | Good
Trial_B      | [1.5, 1.3]      | Bad
Trial_B      | [1.5, 1.3, 1.9] | Good
Trial_A      | [0.9, 1.8, 2.3] | Good
- Parameters
trial_job_id (str) – Unique identifier of the trial.
trial_history (list) – Intermediate results of this trial. The element type is decided by trial code.
- Returns
AssessResult.Good or AssessResult.Bad.
- Return type
AssessResult
-
-
class nni.assessor.AssessResult(value)[source]¶ Enum class for the Assessor.assess_trial() return value.
-
Bad = False¶ The trial works poorly and should be early stopped.
-
Good = True¶ The trial works well.
-
-
class nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor(epoch_num=20, start_step=6, threshold=0.95, gap=1)[source]¶ CurvefittingAssessor uses a learning curve fitting algorithm to predict future learning curve performance. It stops a pending trial X at step S if the trial's forecast result at the target step has converged and is lower than the best performance in the history.
- Parameters
epoch_num (int) – The total number of epochs
start_step (int) – A trial is assessed only after it has reported at least start_step intermediate results
threshold (float) – The threshold used to decide whether to early-stop a worse-performing curve.
-
assess_trial(trial_job_id, trial_history)[source]¶ Assess whether a trial should be early stopped by the curve fitting algorithm.
- Parameters
trial_job_id (str) – trial job id
trial_history (list) – The history of the trial's reported intermediate results
- Returns
AssessResult.Good or AssessResult.Bad
- Return type
AssessResult
- Raises
Exception – unrecognized exception in curvefitting_assessor
-
class nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor(optimize_mode='maximize', start_step=0)[source]¶ MedianstopAssessor implements the median stopping rule: it stops a pending trial X at step S if the trial's best objective value by step S is strictly worse than the median value of the running averages of all completed trials' objectives reported up to step S.
- Parameters
optimize_mode (str) – optimize mode, ‘maximize’ or ‘minimize’
start_step (int) – A trial is assessed only after it has reported at least start_step intermediate results
Advisor¶
-
class nni.runtime.msg_dispatcher_base.MsgDispatcherBase[source]¶ This is the base class in which tuner and assessor behavior is not yet defined. Inherit this class to implement your own advisor.
-
handle_import_data(data)[source]¶ Import previous data when the experiment is resumed.
- Parameters
data (list) – a list of dictionaries, each of which has at least two keys: 'parameter' and 'value'
-
handle_initialize(data)[source]¶ Initialize the search space and tuner, if any. This method is meant to be called only once for each experiment. After calling this method, the dispatcher should send(CommandType.Initialized, '') to set the status of the experiment to "INITIALIZED".
- Parameters
data (dict) – search space
-
handle_report_metric_data(data)[source]¶ Called when metric data is reported or new parameters are requested (for multiphase). When new parameters are requested, this method should send a new parameter.
- Parameters
data (dict) – a dict which contains 'parameter_id', 'value', 'trial_job_id', 'type' and 'sequence'. The 'type' can be MetricType.REQUEST_PARAMETER, MetricType.FINAL or MetricType.PERIODICAL. REQUEST_PARAMETER is used to request new parameters for a multiphase trial job; in this case, the dict will contain additional keys: trial_job_id, parameter_index. Refer to msg_dispatcher.py as an example.
- Raises
ValueError – Data type is not supported
-
handle_request_trial_jobs(data)[source]¶ The message dispatcher is asked to generate data trial jobs. These trial jobs should be sent via send(CommandType.NewTrialJob, json_tricks.dumps(parameter)), where parameter will be received by NNI Manager and eventually accessible to trial jobs as the "next parameter". Semantically, the message dispatcher should do this send exactly data times.
The JSON sent by this method should follow this format:
{
    "parameter_id": 42,
    "parameters": {
        // this will be received by trial
    },
    "parameter_source": "algorithm" // optional
}
- Parameters
data (int) – number of trial jobs
-
handle_trial_end(data)[source]¶ Called when the state of one of the trials changes.
- Parameters
data (dict) – a dict with keys: trial_job_id, event, hyper_params. trial_job_id: the ID generated by the training service. event: the job's state. hyper_params: the string sent by the message dispatcher during the creation of trials.
-
-
class nni.algorithms.hpo.hyperband_advisor.Hyperband(R=60, eta=3, optimize_mode='maximize', exec_mode='parallelism')[source]¶ Hyperband inherits from MsgDispatcherBase rather than Tuner, because it integrates both a tuner's functions and an assessor's functions. This implementation can either fully leverage available resources (high parallelism) or strictly follow the algorithm process (serial). A single execution of Hyperband takes a finite budget of (s_max + 1)B.
- Parameters
R (int) – the maximum amount of resource that can be allocated to a single configuration
eta (int) – the variable that controls the proportion of configurations discarded in each round of SuccessiveHalving
optimize_mode (str) – optimize mode, ‘maximize’ or ‘minimize’
exec_mode (str) – execution mode, ‘serial’ or ‘parallelism’
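For orientation, a hedged sketch of the bracket schedule implied by R and eta, following the standard Hyperband formulation (Li et al., 2017) rather than NNI's internal code:
import math

def hyperband_brackets(R=60, eta=3):
    # Standard Hyperband schedule: s_max + 1 brackets, total budget per
    # bracket B = (s_max + 1) * R.
    s_max = int(math.floor(math.log(R, eta)))
    B = (s_max + 1) * R
    for s in range(s_max, -1, -1):
        n = int(math.ceil(B / R * eta ** s / (s + 1)))  # initial configurations
        r = R * eta ** (-s)                             # initial resource each
        print(f'bracket s={s}: start with {n} configs at resource {r:.1f}')

hyperband_brackets()  # with the defaults R=60, eta=3: brackets s=3..0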
-
handle_import_data(data)[source]¶ Import previous data when the experiment is resumed.
- Parameters
data (list) – a list of dictionaries, each of which has at least two keys: 'parameter' and 'value'
-
handle_initialize(data)[source]¶ Callback for initializing the advisor.
- Parameters
data (dict) – search space
-
handle_report_metric_data(data)[source]¶
- Parameters
data – an object which has keys 'parameter_id', 'value', 'trial_job_id', 'type' and 'sequence'.
- Raises
ValueError – Data type not supported
Utilities¶
-
nni.utils.merge_parameter(base_params, override_params)[source]¶ Update the parameters in base_params with override_params. Can be useful for overriding parsed command line arguments.
- Parameters
base_params (namespace or dict) – Base parameters. A key-value mapping.
override_params (dict or None) – Parameters to override. Usually the parameters obtained from get_next_parameter(). When it is None, nothing will happen.
- Returns
The updated base_params. Note that base_params will be updated in place. The return value is only for convenience.
- Return type
namespace or dict
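A typical use in a trial script, sketched under the assumption of an argparse-based entry point (the argument names are illustrative):
import argparse
import nni
from nni.utils import merge_parameter

parser = argparse.ArgumentParser()
parser.add_argument('--lr', type=float, default=0.01)
parser.add_argument('--batch_size', type=int, default=32)
args = parser.parse_args()

# Tuner-generated hyper-parameters override the command line defaults in place.
tuner_params = nni.get_next_parameter()
args = merge_parameter(args, tuner_params)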