{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Searching in DARTS search space\n\nIn this tutorial, we demonstrate how to search in the famous model space proposed in `DARTS`_.\n\nThrough this process, you will learn:\n\n* How to use the built-in model spaces from NNI's model space hub.\n* How to use one-shot exploration strategies to explore a model space.\n* How to customize evaluators to achieve the best performance.\n\nIn the end, we get a strong-performing model on CIFAR-10 dataset, which achieves up to 97.28% accuracy.\n\n.. attention::\n\n Running this tutorial requires a GPU.\n If you don't have one, you can set ``gpus`` in :class:`~nni.nas.evaluator.pytorch.Classification` to be 0,\n but do note that it will be much slower.\n\n\n## Use a pre-searched DARTS model\n\nSimilar to [the beginner tutorial of PyTorch](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html)_,\nwe begin with CIFAR-10 dataset, which is a image classification dataset of 10 categories.\nThe images in CIFAR-10 are of size 3x32x32, i.e., RGB-colored images of 32x32 pixels in size.\n\nWe first load the CIFAR-10 dataset with torchvision.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import nni\nimport torch\nfrom torchvision import transforms\nfrom torchvision.datasets import CIFAR10\nfrom nni.nas.evaluator.pytorch import DataLoader\n\nCIFAR_MEAN = [0.49139968, 0.48215827, 0.44653124]\nCIFAR_STD = [0.24703233, 0.24348505, 0.26158768]\n\ntransform_valid = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize(CIFAR_MEAN, CIFAR_STD),\n])\nvalid_data = nni.trace(CIFAR10)(root='./data', train=False, download=True, transform=transform_valid)\nvalid_loader = DataLoader(valid_data, batch_size=256, num_workers=6)" ] }, { "cell_type": 
"markdown", "metadata": {}, "source": [ "
If you are to use multi-trial strategies, wrapping CIFAR10 with :func:`nni.trace` and\n using the DataLoader from ``nni.nas.evaluator.pytorch`` (instead of ``torch.utils.data``) are mandatory.\n Otherwise, they are optional.
`DARTS`_ is one of those papers that innovate in both the search space and the search strategy.\n In this tutorial, we will search on the **model space** provided by DARTS with the **search strategy** proposed by DARTS.\n We refer to them as *DARTS model space* (``DartsSpace``) and *DARTS strategy* (``DartsStrategy``), respectively.\n This does NOT imply that the :class:`~nni.nas.hub.pytorch.DARTS` space and\n the :class:`~nni.nas.strategy.DARTS` strategy have to be used together.\n You can always explore the DARTS space with another search strategy, or use your own strategy to search a different model space.
Please set ``fast_dev_run`` to False to reproduce our claimed results.\n Otherwise, only a few mini-batches will be run.
It's worth mentioning that one-shot NAS also suffers from multiple drawbacks despite its computational efficiency.\n We recommend\n [Weight-Sharing Neural Architecture Search: A Battle to Shrink the Optimization Gap](https://arxiv.org/abs/2008.01475)_\n and\n [How Does Supernet Help in Neural Architecture Search?](https://arxiv.org/abs/2010.08219)_ for interested readers.
The cell above is obtained via ``fast_dev_run`` (i.e., running only 1 mini-batch).
When ``fast_dev_run`` is turned off, we achieve a validation accuracy of 89.69% after training for 100 epochs.
``model_space`` has to be re-instantiated because of a known limitation,\n i.e., one model space instance can't be reused across multiple experiments.