Near-Sensor Distributed DNN Processing for Augmented and Virtual Reality

2021 
Untethered Augmented and Virtual Reality (AR/VR) devices are an emerging compute platform with unique opportunities and challenges. AR/VR devices use an array of sensors, including multiple cameras, to understand their surroundings and provide the user with an immersive experience. To deliver this functionality and performance, AR/VR devices rely on state-of-the-art algorithms, including Deep Neural Networks (DNNs). These algorithms must operate in real time, which presents a computational challenge for a mobile system. The emergence of on-sensor compute offers a possible way to increase the processing capabilities of an AR/VR platform. In this work, we explore how to optimally map DNN models onto an AR/VR compute platform consisting of an on-sensor processor and an edge processor so as to minimize energy and latency. We explore properties of popular DNN models, the ideal network split locations, processor sizes, caching strategies, and the interactions between these design choices using a new Distributed Algorithm Simulator (DAS). Based on this study, we develop basic principles for network splitting, parameter caching, and two-processor balancing that achieve near-optimal system designs. We show that adding on-sensor processing to the existing Quest 2 VR platform can reduce MobileNetV3 inference energy by 64.6%. Finally, we demonstrate, on a representative AR/VR platform, how the minimum-energy configuration changes under practical design constraints of memory size and silicon area, as well as the impact of future memory technologies.
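The core optimization the abstract describes, choosing where to split a DNN between the on-sensor processor and the edge processor, can be sketched as a search over split points. The following is a minimal illustration, not the paper's DAS simulator; all function names and energy numbers are hypothetical, and the model charges each candidate split the on-sensor compute energy for the layers before it, the link energy for the activation that crosses the split, and the edge compute energy for the layers after it.

```python
def best_split(sensor_energy, edge_energy, act_bytes, e_per_byte):
    """Pick the layer split minimizing total inference energy.

    sensor_energy[i], edge_energy[i]: energy to run layer i on each processor.
    act_bytes[k]: bytes crossing the sensor-edge link when splitting after
    layer k (act_bytes[0] is the raw input, i.e. run everything on the edge).
    e_per_byte: link transfer energy per byte.
    Returns (total_energy, split_index); layers [0, k) run on-sensor.
    """
    n = len(sensor_energy)
    candidates = []
    for k in range(n + 1):
        e = (sum(sensor_energy[:k])          # on-sensor compute
             + act_bytes[k] * e_per_byte     # activation transfer
             + sum(edge_energy[k:]))         # edge compute
        candidates.append((e, k))
    return min(candidates)

# Made-up 3-layer example: the layer-1 activation is small, so splitting
# there avoids shipping the large raw input while keeping most compute
# on the more efficient edge processor.
energy, split = best_split(sensor_energy=[4, 4, 4],
                           edge_energy=[1, 1, 1],
                           act_bytes=[100, 10, 50, 1],
                           e_per_byte=0.1)
```

This toy model already exhibits the behavior the paper studies: the optimal split sits at a layer with a small activation footprint, and it shifts as the relative efficiency of the two processors or the link cost changes.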