FastDeepIoT: Towards Understanding And Optimizing Neural Network Execution Time On Mobile And Embedded Devices

Authors:
Shuochao Yao, University of Illinois at Urbana-Champaign
Yiran Zhao, University of Illinois at Urbana-Champaign
Huajie Shao, University of Illinois at Urbana-Champaign
Shengzhong Liu, University of Illinois at Urbana-Champaign
Dongxin Liu, University of Illinois at Urbana-Champaign
Lu Su, State University of New York at Buffalo
Tarek Abdelzaher, University of Illinois at Urbana-Champaign

Introduction:

The authors propose a novel framework, called FastDeepIoT, that uncovers the non-linear relation between neural network structure and execution time, then exploits that understanding to find network configurations that significantly improve the trade-off between execution time and accuracy on mobile and embedded devices. They evaluate FastDeepIoT using three different sensing-related tasks on two mobile devices: Nexus 5 and Galaxy Nexus.

Abstract:

Deep neural networks show great potential as solutions to many sensing application problems, but their excessive resource demand slows down execution time, posing a serious impediment to deployment on low-end devices. To address this challenge, recent literature focused on compressing neural network size to improve performance. We show that changing neural network size does not proportionally affect performance attributes of interest, such as execution time. Rather, extreme run-time nonlinearities exist over the network configuration space. Hence, we propose a novel framework, called FastDeepIoT, that uncovers the non-linear relation between neural network structure and execution time, then exploits that understanding to find network configurations that significantly improve the trade-off between execution time and accuracy on mobile and embedded devices. FastDeepIoT makes two key contributions. First, FastDeepIoT automatically learns an accurate and highly interpretable execution time model for deep neural networks on the target device. This is done without prior knowledge of either the hardware specifications or the detailed implementation of the used deep learning library. Second, FastDeepIoT informs a compression algorithm how to minimize execution time on the profiled device without impacting accuracy. We evaluate FastDeepIoT using three different sensing-related tasks on two mobile devices: Nexus 5 and Galaxy Nexus. FastDeepIoT further reduces the neural network execution time by 48% to 78% and energy consumption by 37% to 69% compared with the state-of-the-art compression algorithms.
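To give a feel for the kind of "interpretable execution time model" the abstract refers to, the sketch below fits a tree-structured piecewise-linear latency model from profiled (layer width, latency) pairs. This is only an illustration of the general idea, not FastDeepIoT's actual model: the synthetic cost function, the single split threshold (64 channels, standing in for a hypothetical cache or SIMD boundary), and all names here are assumptions for demonstration.

```python
# Hedged sketch: fit a piecewise-linear execution-time model from
# synthetic profiling data. One global linear model fails because the
# (synthetic) latency jumps when the channel count crosses a threshold,
# mimicking the run-time nonlinearities the abstract describes.

def ols_fit(xs, ys):
    """Ordinary least squares for y ≈ a*x + b (single feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Synthetic profile: latency per layer vs. channel count, with a
# slope change at an assumed hardware threshold of 64 channels.
profile = [(c, 0.5 * c if c <= 64 else 2.0 * c - 96.0)
           for c in range(8, 129, 8)]

# Tree-structured model: split at the threshold, fit a line per leaf.
left = [(c, t) for c, t in profile if c <= 64]
right = [(c, t) for c, t in profile if c > 64]
model_left = ols_fit(*zip(*left))
model_right = ols_fit(*zip(*right))

def predict_latency(c):
    """Predict latency for a layer with c channels."""
    a, b = model_left if c <= 64 else model_right
    return a * c + b
```

A model of this shape stays interpretable (each leaf is a readable linear cost formula) while capturing the sharp regime changes that make raw network size a poor proxy for execution time.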
