FBNet

Note

This one-shot NAS is still implemented under NNI NAS 1.0 and will be migrated to the Retiarii framework in v2.4.

For the mobile application of facial landmark detection, based on the basic architecture of the PFLD model, we have applied FBNet (block-wise DNAS) to design a concise model with a good trade-off between latency and accuracy.

FBNet is a block-wise differentiable NAS method (block-wise DNAS), where the best candidate building blocks are chosen by Gumbel-Softmax random sampling and differentiable training. At each layer (or stage) to be searched, the candidate blocks are placed side by side (similar in spirit to structural re-parameterization), which allows sufficient pre-training of the supernet. The pre-trained supernet is then sampled to obtain a subnet, which is finetuned to achieve better performance.
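
Conceptually, each searchable layer mixes the outputs of its candidate blocks with Gumbel-Softmax weights over per-layer architecture parameters. The following is a minimal PyTorch sketch of that idea; the block choices, shapes and parameter names are illustrative assumptions, not the actual NNI implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SearchableLayer(nn.Module):
    """One searchable layer: candidate blocks run side by side and their
    outputs are mixed with Gumbel-Softmax sampled weights."""

    def __init__(self, candidate_blocks):
        super().__init__()
        self.candidates = nn.ModuleList(candidate_blocks)
        # One architecture parameter (logit) per candidate block.
        self.alpha = nn.Parameter(torch.zeros(len(candidate_blocks)))

    def forward(self, x, temperature=1.0):
        # Differentiable (soft) sampling over the candidates.
        weights = F.gumbel_softmax(self.alpha, tau=temperature, hard=False)
        return sum(w * block(x) for w, block in zip(weights, self.candidates))

# Illustrative usage with two toy candidate blocks (shapes are assumptions).
layer = SearchableLayer([
    nn.Conv2d(16, 16, 3, padding=1),
    nn.Conv2d(16, 16, 5, padding=2),
])
out = layer(torch.randn(1, 16, 32, 32), temperature=5.0)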

PFLD is a lightweight facial landmark model for real-time applications. To accelerate it, the architecture of PFLD is first simplified by using the stem block of PeleeNet, average pooling with depthwise convolution, and the eSE module.
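
For reference, eSE (effective Squeeze-Excitation) is a channel-attention block built from a single 1x1 convolution followed by a hard-sigmoid gate. Below is a minimal PyTorch sketch of such a block; the layer names and the exact hard-sigmoid formulation are assumptions, not the code used in this example:

import torch
import torch.nn as nn
import torch.nn.functional as F

class eSEModule(nn.Module):
    """Effective Squeeze-Excitation: channel attention with a single
    1x1 convolution and a hard-sigmoid gate."""

    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        # Global average pooling over the spatial dimensions.
        s = F.adaptive_avg_pool2d(x, 1)
        s = self.fc(s)
        # Hard-sigmoid gate: relu6(s + 3) / 6.
        s = F.relu6(s + 3.0) / 6.0
        return x * s

# Example: apply channel attention to a 64-channel feature map.
feat = torch.randn(1, 64, 28, 28)
print(eSEModule(64)(feat).shape)  # torch.Size([1, 64, 28, 28])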

To achieve a better trade-off between latency and accuracy, FBNet is then applied to the simplified PFLD to search for the best block at each searchable layer. The search space is based on the FBNet space and is optimized for mobile deployment by using average pooling with depthwise convolution, the eSE module, etc.
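
As a rough illustration, the per-layer search space can be viewed as a list of candidate block constructors that a searchable layer chooses among. The candidates below (kernel sizes, expansion ratios, the skip option) are hypothetical examples in the spirit of the FBNet space, not the configuration used in this repository:

import torch.nn as nn

def inverted_residual(cin, cout, kernel, expansion, stride=1):
    """A simplified MobileNet-style inverted-residual candidate block."""
    hidden = cin * expansion
    return nn.Sequential(
        nn.Conv2d(cin, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
        nn.Conv2d(hidden, hidden, kernel, stride, kernel // 2, groups=hidden, bias=False),
        nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
        nn.Conv2d(hidden, cout, 1, bias=False), nn.BatchNorm2d(cout),
    )

# Hypothetical candidate set for one searchable layer (kernel, expansion).
candidates = [
    inverted_residual(32, 32, kernel=3, expansion=1),
    inverted_residual(32, 32, kernel=3, expansion=3),
    inverted_residual(32, 32, kernel=5, expansion=1),
    inverted_residual(32, 32, kernel=5, expansion=3),
    nn.Identity(),  # "skip" candidate, lets the layer be dropped entirely
]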

Experiments

To verify the effectiveness of FBNet applied to PFLD, we choose an open-source dataset with 106 landmark points as the benchmark.

The baseline model is denoted as MobileNet-V3 PFLD (reference baseline), and the searched model is denoted as Subnet. The experimental results are listed below, where the latency is tested on a Qualcomm 625 CPU (ARMv8):

Model               Size     Latency   Validation NME
MobileNet-V3 PFLD   1.01MB   10ms      6.22%
Subnet              693KB    1.60ms    5.58%
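
NME (Normalized Mean Error) is the mean point-to-point distance between predicted and ground-truth landmarks divided by a normalization factor. The sketch below uses the inter-ocular distance as the normalizer with hypothetical eye-corner indices; the exact normalizer and indices used in this benchmark may differ:

import numpy as np

def nme(pred, gt, left_eye_idx=66, right_eye_idx=79):
    """Normalized Mean Error for one face.

    pred, gt: arrays of shape (N, 2) holding N landmark points.
    The normalizer here is the inter-ocular distance; the eye-corner
    indices are assumptions for the 106-point annotation.
    """
    point_errors = np.linalg.norm(pred - gt, axis=1)  # per-point L2 error
    inter_ocular = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
    return point_errors.mean() / inter_ocular

# Toy example with random landmarks.
gt = np.random.rand(106, 2)
pred = gt + np.random.normal(scale=0.01, size=gt.shape)
print("NME: %.2f%%" % (nme(pred, gt) * 100))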

Example

Example code

Please run the following scripts in the example directory.

The Python dependencies used here are listed below:

numpy==1.18.5
opencv-python==4.5.1.48
torch==1.6.0
torchvision==0.7.0
onnx==1.8.1
onnx-simplifier==0.3.5
onnxruntime==1.7.0

Data Preparation

First, download the 106points dataset to the path ./data/106points. The dataset includes the train set and test set:

./data/106points/train_data/imgs
./data/106points/train_data/list.txt
./data/106points/test_data/imgs
./data/106points/test_data/list.txt
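
A quick way to confirm the layout is to check that every entry in list.txt points to an existing image. The sketch below assumes the first whitespace-separated field of each line in list.txt is an image filename under the imgs directory (an assumption about the list format):

import os

def check_split(root):
    """Verify that images referenced in list.txt exist under imgs/."""
    list_file = os.path.join(root, "list.txt")
    missing = 0
    with open(list_file) as f:
        for line in f:
            if not line.strip():
                continue
            name = line.split()[0]  # assumed: first field is the image name
            path = os.path.join(root, "imgs", os.path.basename(name))
            if not os.path.exists(path):
                missing += 1
    print("%s: %d missing images" % (root, missing))

for split in ("train_data", "test_data"):
    check_split(os.path.join("./data/106points", split))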

Quick Start

2. Finetune

After pre-training of the supernet, run the command below to sample the subnet and conduct the finetuning:

python retrain.py --dev_id "0,1" --snapshot "./ckpt_save" --data_root "./data/106points" \
                  --supernet "./ckpt_save/supernet/checkpoint_best.pth"

The validation accuracy will be shown during training, and the model with the best accuracy will be saved as ./ckpt_save/subnet/checkpoint_best.pth.

3. Export

After finetuning the subnet, run the command below to export the ONNX model:

python export.py --supernet "./ckpt_save/supernet/checkpoint_best.pth" \
                 --resume "./ckpt_save/subnet/checkpoint_best.pth"

The ONNX model is saved as ./output/subnet.onnx, which can be further converted for the mobile inference engine by using MNN.
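
Before converting with MNN, the exported model can be simplified and smoke-tested with the onnx-simplifier and onnxruntime packages listed above. The input shape 1x3x112x112 below is an assumption about the network input; adjust it to match the actual export:

import numpy as np
import onnx
import onnxruntime as ort
from onnxsim import simplify

# Simplify the exported graph (constant folding, shape inference).
model = onnx.load("./output/subnet.onnx")
simplified, ok = simplify(model)
assert ok, "onnx-simplifier could not validate the simplified model"
onnx.save(simplified, "./output/subnet_sim.onnx")

# Smoke test with onnxruntime on a dummy input (shape is an assumption).
sess = ort.InferenceSession("./output/subnet_sim.onnx")
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 112, 112).astype(np.float32)
outputs = sess.run(None, {input_name: dummy})
print([o.shape for o in outputs])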

The checkpoints of the pre-trained supernet and subnet are provided below: