PONAS: Progressive One-shot Neural Architecture Search for Very Efficient Deployment

2021 
We propose a Progressive One-Shot Neural Architecture Search (PONAS) method to achieve very efficient model search under various hardware constraints. Given a constraint, most neural architecture search (NAS) methods either sample a set of sub-networks according to a pre-trained accuracy predictor, or adopt an evolutionary algorithm to evolve specialized networks from the supernet. Both approaches are time-consuming. Our key idea for very efficient deployment is to construct, while searching the architecture space, a table that stores the validation accuracy of all candidate blocks at all layers. For a given hardware constraint, the architecture of a specialized network can then be efficiently determined from this table by picking the candidate blocks that yield the least accuracy loss. To realize this idea, the proposed PONAS method combines the advantages of progressive NAS and one-shot methods. A two-stage training scheme, consisting of a meta training stage and a fine-tuning stage, is proposed to make the search process efficient and stable. During search, we evaluate candidate blocks at different layers and construct an accuracy table that is then used for architecture search. Comprehensive experiments verify that PONAS is extremely flexible and can find the architecture of a specialized network in around 10 seconds. On ImageNet classification, PONAS achieves 76.29% top-1 accuracy, which is comparable with the state of the art.
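The table-based deployment step can be made concrete with a small sketch. The following is a minimal illustration, not the paper's actual selection procedure: it assumes hypothetical `acc_table` and `lat_table` entries (per-layer validation accuracy and latency cost of each candidate block) and a simple greedy swap heuristic that trades the least accuracy loss per unit of latency saved until the budget is met.

```python
# Hypothetical accuracy table: acc_table[layer][block] = validation accuracy (%)
# recorded during supernet search when `block` is placed at `layer`.
# Hypothetical latency table: lat_table[layer][block] = measured latency (ms).
acc_table = [
    {"mb3_k3": 74.1, "mb6_k3": 74.6, "mb6_k5": 74.8},  # layer 0
    {"mb3_k3": 74.0, "mb6_k3": 74.5, "mb6_k5": 74.9},  # layer 1
    {"mb3_k3": 73.8, "mb6_k3": 74.4, "mb6_k5": 74.7},  # layer 2
]
lat_table = [
    {"mb3_k3": 3.0, "mb6_k3": 5.0, "mb6_k5": 6.5},
    {"mb3_k3": 2.5, "mb6_k3": 4.0, "mb6_k5": 5.5},
    {"mb3_k3": 2.0, "mb6_k3": 3.5, "mb6_k5": 5.0},
]

def search(acc_table, lat_table, latency_budget):
    """Greedy table lookup: start from the most accurate block at every
    layer, then repeatedly apply the block swap with the smallest accuracy
    loss per millisecond of latency saved until the budget is satisfied."""
    arch = [max(layer, key=layer.get) for layer in acc_table]
    total_lat = sum(lat_table[i][b] for i, b in enumerate(arch))
    while total_lat > latency_budget:
        best = None  # (loss per ms saved, layer index, replacement block)
        for i, cur in enumerate(arch):
            for blk, acc in acc_table[i].items():
                saved = lat_table[i][cur] - lat_table[i][blk]
                if saved <= 0:
                    continue  # swap does not reduce latency
                loss = acc_table[i][cur] - acc
                score = loss / saved
                if best is None or score < best[0]:
                    best = (score, i, blk)
        if best is None:
            break  # no remaining swap can reduce latency
        _, i, blk = best
        total_lat -= lat_table[i][arch[i]] - lat_table[i][blk]
        arch[i] = blk
    return arch, total_lat

arch, lat = search(acc_table, lat_table, latency_budget=12.0)
print(arch, lat)  # e.g. ['mb3_k3', 'mb6_k5', 'mb6_k3'] 12.0
```

Because deployment reduces to lookups and arithmetic over a precomputed table, no supernet evaluation or retraining is needed per constraint, which is what makes specialization on the order of seconds possible.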