NestDNN: Resource-Aware Multi-Tenant On-Device Deep Learning For Continuous Mobile Vision

Authors:
Biyi Fang, Michigan State University
Xiao Zeng, Michigan State University
Mi Zhang, Michigan State University

Introduction:

Mobile vision systems usually run multiple applications concurrently, and their available resources at runtime are dynamic. In this paper, the authors present NestDNN, a framework that takes the dynamics of runtime resources into account to enable resource-aware multi-tenant on-device deep learning for mobile vision systems.

Abstract:

Mobile vision systems such as smartphones, drones, and augmented-reality headsets are revolutionizing our lives. These systems usually run multiple applications concurrently, and their available resources at runtime are dynamic due to events such as starting new applications, closing existing applications, and application priority changes. In this paper, we present NestDNN, a framework that takes the dynamics of runtime resources into account to enable resource-aware multi-tenant on-device deep learning for mobile vision systems. NestDNN enables each deep learning model to offer flexible resource-accuracy trade-offs. At runtime, it dynamically selects the optimal resource-accuracy trade-off for each deep learning model to fit the model's resource demand to the system's available runtime resources. In doing so, NestDNN efficiently utilizes the limited resources in mobile vision systems to jointly maximize the performance of all the concurrently running applications. Our experiments show that compared to the resource-agnostic status quo approach, NestDNN achieves as much as a 4.2% increase in inference accuracy, a 2.0× increase in video frame processing rate, and a 1.7× reduction in energy consumption.
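
The core runtime decision NestDNN makes is picking one resource-accuracy variant per concurrently running model so that the combined demand fits the currently available resources. Below is a minimal sketch of that selection step, framed as a multiple-choice knapsack; the function `select_variants`, the (cost, accuracy) variant lists, and the memory budget are all illustrative assumptions, and the paper's actual scheduler also factors in inference latency and the cost of switching between variants.

```python
# A minimal sketch of NestDNN-style runtime variant selection, assuming each
# model exposes a list of descendant variants as (resource_cost, accuracy)
# pairs. All names and numbers here are illustrative, not from the paper.

def select_variants(models, budget):
    """Pick one variant per model to maximize total accuracy within `budget`.

    models: list of variant lists, one per running model; each variant is a
            (cost, accuracy) tuple with cost in integer resource units.
    budget: total resource units available (e.g., memory in MB).
    Returns (chosen_indices, total_accuracy), or None if infeasible.
    """
    # Dynamic programming over (models processed, resource units used):
    # a multiple-choice knapsack with exactly one choice per model.
    best = {0: (0.0, [])}  # used units -> (best accuracy, variant indices)
    for variants in models:
        nxt = {}
        for used, (acc, picks) in best.items():
            for idx, (cost, a) in enumerate(variants):
                total = used + cost
                if total > budget:
                    continue  # this variant does not fit alongside prior picks
                if total not in nxt or acc + a > nxt[total][0]:
                    nxt[total] = (acc + a, picks + [idx])
        best = nxt
        if not best:
            return None  # even the smallest variants exceed the budget
    acc, picks = max(best.values())
    return picks, acc


# Toy example: two concurrent models with hypothetical variants
# (memory cost in MB, top-1 accuracy), under a 60 MB budget.
detector = [(15, 0.62), (30, 0.70), (50, 0.78)]
classifier = [(10, 0.81), (25, 0.88), (40, 0.92)]
print(select_variants([detector, classifier], budget=60))
# -> ([2, 0], ~1.59): the largest detector plus the smallest classifier.
```

Framing the selection as a knapsack makes the trade-off explicit: shrinking one model's variant frees resources that can buy a more accurate variant for another model, which is how NestDNN jointly maximizes performance across all concurrently running applications.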
