Neural Architecture Search
1st lightweight NAS challenge and moving beyond

Parameter-sharing-based one-shot NAS approaches can significantly reduce training cost. However, three issues still urgently need to be solved in the development of lightweight NAS, among which the consistency issue is one of the major problems of weight-sharing NAS: the performance of a network sampled from the supernet is inconsistent with the performance of the same network trained independently. This leads to incorrect evaluation and improper ranking of candidates. Track 1 aims to narrow the performance gap between candidates evaluated with parameters extracted from the shared supernet and the same architectures trained independently. This track requires participants to submit a supernet pre-trained with their own strategy. We will then measure the gap between the performance of candidates with parameters extracted from the submitted supernet and the performance provided by NAS-Bench.

Evaluation metric for Track 1: we use Kendall's tau, a common measure of the correlation between two rankings, to evaluate this performance gap.
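As a reference, here is a minimal sketch of how a Kendall's tau score can be computed, assuming two accuracy lists for the same candidates: one from stand-alone training (NAS-Bench) and one from supernet-inherited weights. All numbers below are hypothetical; the official score is produced by the organizers' evaluation pipeline.

```python
# Minimal sketch of the Track 1 metric (hypothetical values throughout).
from scipy.stats import kendalltau

nas_bench_acc = [93.1, 91.7, 94.2, 90.5, 92.8]  # stand-alone training accuracies
supernet_acc = [71.4, 69.0, 72.9, 70.1, 70.6]   # accuracies with inherited supernet weights

# Kendall's tau compares the two rankings of the same candidates:
# 1.0 means identical ordering, -1.0 means completely reversed ordering.
tau, p_value = kendalltau(nas_bench_acc, supernet_acc)
print(f"Kendall's tau: {tau:.4f}")
```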
Click here to join!

Accurately predicting the performance of any architecture without training it is very important. With such a predictor, we can analyze in depth not only architectures with good performance but also those with poor performance, and we can also predict the optimal model structure that satisfies any hardware latency constraint. This competition provides a benchmark mapping a small sample of model structures to their accuracies. Participants can either train a predictor directly on this dataset with black-box approaches or use white-box methods for hyperparameter estimation.
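As an illustration only, the sketch below fits a simple black-box predictor on synthetic (architecture encoding, accuracy) pairs. The fixed-length encoding, the regressor choice, and the data are all assumptions standing in for the benchmark's actual format.

```python
# Minimal sketch of a black-box performance predictor (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical small-sample benchmark: 200 architectures encoded as
# 8 per-layer operation indices, with a synthetic accuracy signal
# standing in for the provided labels.
X = rng.integers(0, 4, size=(200, 8)).astype(float)
y = 0.9 - 0.02 * X.sum(axis=1) / X.shape[1] + rng.normal(0, 0.005, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
predictor = GradientBoostingRegressor().fit(X_train, y_train)

# Rank unseen architectures by predicted accuracy, without training them.
pred = predictor.predict(X_test)
print("predicted top-5 architecture indices:", np.argsort(pred)[::-1][:5])
```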

Click here to join!
There is a lot of evidence that Neural Architecture Search can produce excellent models for ML tasks on well-known datasets such as CIFAR-10 and ImageNet, where years of research have established a set of best practices for achieving good results. However, far less attention has been devoted to the "real-world" use case of NAS, where you are searching for a state-of-the-art architecture on an entirely novel task or dataset. In such a case, there is no existing set of best practices to build on, nor extensive research into optimal architectural patterns, augmentation policies, or hyperparameter selection. In essence, we are asking how well NAS algorithms work "out of the box" with little to no time for tuning.

To explore this question, we have designed this competition to evaluate NAS algorithms on unseen novel tasks and datasets, while specifically eliminating outside influences such as custom pre-training schedules, hyperparameter optimization, or data augmentation policies. We ask competitors to produce a NAS algorithm that, when given an unseen task and dataset, outputs a well-performing, robust PyTorch architecture. Finally, the results will be returned to the participants for inclusion in their papers.
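To make the task concrete, here is a minimal sketch of the kind of interface this track implies: given basic metadata for an unseen dataset, the algorithm returns a trainable PyTorch model. The function name, the metadata fields, and the placeholder backbone below are assumptions for illustration, not the official submission API.

```python
# Minimal sketch of a Track 3 style entry point (illustrative, not the official API).
import torch.nn as nn

def search_architecture(metadata: dict) -> nn.Module:
    """Return a network for an unseen task using only dataset metadata."""
    channels = metadata["input_channels"]
    num_classes = metadata["num_classes"]
    # A real entry would run its search procedure here; this placeholder
    # just sizes a small fixed backbone to the task.
    return nn.Sequential(
        nn.Conv2d(channels, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, num_classes),
    )

# Example: an unseen 3-channel, 17-class image task (hypothetical values).
model = search_architecture({"input_channels": 3, "num_classes": 17})
```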
Click here to join!