DeepCache: Principled Cache for Mobile Deep Vision

Authors:
Mengwei Xu, Peking University
Mengze Zhu, Peking University
Yunxin Liu, Microsoft Research
Felix Xiaozhu Lin, Purdue University
Xuanzhe Liu, Peking University

Abstract:

We present DeepCache, a principled cache design for deep learning inference in continuous mobile vision. DeepCache benefits model execution efficiency by exploiting temporal locality in input video streams. It addresses a key challenge raised by mobile vision: the cache must operate under video scene variation, while trading off among cacheability, overhead, and loss in model accuracy. At the input of a model, DeepCache discovers video temporal locality by exploiting the video's internal structure, for which it borrows proven heuristics from video compression; into the model, DeepCache propagates regions of reusable results by exploiting the model's internal structure. Notably, DeepCache eschews applying video heuristics to model internals, which are not pixels but high-dimensional, difficult-to-interpret data. Our implementation of DeepCache works with unmodified deep learning models, requires zero developer's manual effort, and is therefore immediately deployable on off-the-shelf mobile devices. Our experiments show that DeepCache saves inference execution time by 18% on average and up to 47%. DeepCache reduces system energy consumption by 20% on average.
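
To make the mechanism in the abstract concrete, the following is a minimal Python/NumPy sketch, not the authors' implementation: it borrows a codec-style block-matching search to find regions of the current frame that closely match the previous frame, then serves the cached layer outputs for those regions and recomputes only the rest. The names find_reusable_blocks and cached_layer, the block size, search window, and matching threshold are all illustrative assumptions, and a faithful version would additionally shrink each reused region by the layer's receptive-field halo, as the paper's region-propagation design requires.

    import numpy as np

    def find_reusable_blocks(prev_frame, cur_frame, block=16, search=4, thresh=4.0):
        # Codec-style block matching (hypothetical simplification): for each
        # block of the current frame, search a small window of the previous
        # frame for the closest block by mean absolute difference (MAD).
        # Blocks whose MAD falls below `thresh` are treated as reusable.
        H, W = cur_frame.shape
        matches = {}  # (y, x) of current block -> (y, x) of match in prev frame
        for by in range(0, H - block + 1, block):
            for bx in range(0, W - block + 1, block):
                cur = cur_frame[by:by + block, bx:bx + block].astype(np.float32)
                best_pos, best_mad = None, np.inf
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if y < 0 or x < 0 or y + block > H or x + block > W:
                            continue
                        cand = prev_frame[y:y + block, x:x + block].astype(np.float32)
                        mad = float(np.abs(cur - cand).mean())
                        if mad < best_mad:
                            best_pos, best_mad = (y, x), mad
                if best_mad < thresh:
                    matches[(by, bx)] = best_pos
        return matches

    def cached_layer(layer_fn, cur_input, cached_output, matches, block=16):
        # Region reuse for one layer (assumed here to preserve spatial size):
        # copy cached outputs for matched blocks, recompute only the rest.
        # A real system must also shrink each reused region by the layer's
        # receptive-field halo so that reused values remain accurate.
        H, W = cur_input.shape
        out = np.empty_like(cached_output)
        for by in range(0, H - block + 1, block):
            for bx in range(0, W - block + 1, block):
                match = matches.get((by, bx))
                if match is not None:
                    py, px = match
                    out[by:by + block, bx:bx + block] = \
                        cached_output[py:py + block, px:px + block]
                else:
                    out[by:by + block, bx:bx + block] = \
                        layer_fn(cur_input[by:by + block, bx:bx + block])
        return out

    # Toy usage: two nearly identical 64x64 frames; only one block changes,
    # so only that block is recomputed and the rest is served from the cache.
    rng = np.random.default_rng(0)
    prev = (rng.random((64, 64)) * 255).astype(np.float32)
    cur = prev.copy()
    cur[0:16, 0:16] += 50.0                  # scene change confined to one block
    matches = find_reusable_blocks(prev, cur)
    relu = lambda x: np.maximum(x, 0.0)      # stand-in for a real model layer
    out = cached_layer(relu, cur, relu(prev), matches)

On this toy input, 15 of the 16 blocks match the previous frame, so cached_layer recomputes only one block; the paper's reported savings (18% of inference time on average, up to 47%) come from applying this kind of skipped computation, with proper region propagation, across all layers of real models.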
