On Exploring Image Resizing for Optimizing Criticality-based Machine Perception

2021 
On-board computing capacity remains a key bottleneck in modern machine inference pipelines that run on embedded hardware, such as aboard autonomous drones or cars. To mitigate this bottleneck, recent work proposed an architecture for segmenting input frames of complex modalities, such as video, and prioritizing downstream machine perception tasks based on the criticality of the respective segments of the perceived scene. Criticality-based prioritization allows limited machine resources (of lower-end embedded GPUs) to be spent more judiciously on tracking more important objects first. This paper explores a novel dimension in criticality-based prioritization of machine perception, namely the role of criticality-dependent image resizing as a way to improve the trade-off between perception quality and timeliness. Given an assessment of criticality (e.g., an object’s distance from the autonomous car), the scheduler is allowed to choose from several image resizing options (and related inference models) before passing the resized images to the perception module. Experiments on an AI-powered embedded platform with a real-world driving dataset demonstrate significant improvements in the trade-off between perception accuracy and response time when the proposed resizing algorithm is used. The improvement is attributed to two advantages of the proposed scheme: (i) improved preferential treatment of more critical objects by reducing time spent on less critical ones, and (ii) improved image batching within the GPU, thanks to resizing, leading to better resource utilization.
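
The criticality-to-resize mapping sketched in the abstract can be illustrated as follows. This is a minimal sketch only, assuming distance-based criticality, three hypothetical resize tiers, and example thresholds; the names (ObjectCrop, ResizeTier, schedule_crops) and values are illustrative assumptions, not the paper's actual algorithm or API.

```python
# Minimal sketch of criticality-dependent resizing (illustrative only).
# Assumption: criticality is derived from estimated object distance, and
# three resize tiers (with matching model input sizes) are available.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObjectCrop:
    crop_id: int
    distance_m: float                 # estimated distance from the ego vehicle
    bbox: Tuple[int, int, int, int]   # (x, y, w, h) in the source frame

@dataclass
class ResizeTier:
    name: str
    side_px: int                      # square input size fed to the perception model

# Hypothetical tiers: closer (more critical) objects get larger inputs.
TIERS = [
    ResizeTier("high", 224),          # nearby objects, highest criticality
    ResizeTier("mid", 128),
    ResizeTier("low", 64),            # distant objects, lowest criticality
]

def criticality(crop: ObjectCrop) -> float:
    """Simple distance-based criticality: nearer objects score higher."""
    return 1.0 / max(crop.distance_m, 1.0)

def pick_tier(crit: float) -> ResizeTier:
    """Map a criticality score onto a resize tier (thresholds are assumed)."""
    if crit >= 0.05:                  # roughly closer than 20 m
        return TIERS[0]
    if crit >= 0.02:                  # roughly closer than 50 m
        return TIERS[1]
    return TIERS[2]

def schedule_crops(crops: List[ObjectCrop]) -> List[Tuple[ObjectCrop, ResizeTier]]:
    """Order crops by descending criticality and attach a resize tier to each,
    so that same-size crops can later be batched together on the GPU."""
    ranked = sorted(crops, key=criticality, reverse=True)
    return [(c, pick_tier(criticality(c))) for c in ranked]

if __name__ == "__main__":
    frame_objects = [
        ObjectCrop(0, distance_m=12.0, bbox=(100, 200, 80, 120)),
        ObjectCrop(1, distance_m=95.0, bbox=(400, 180, 20, 30)),
        ObjectCrop(2, distance_m=40.0, bbox=(250, 190, 40, 60)),
    ]
    for crop, tier in schedule_crops(frame_objects):
        print(f"object {crop.crop_id}: resize to {tier.side_px}px ({tier.name} tier)")
```

Grouping crops by tier before inference is what enables the batching benefit noted in point (ii): crops resized to a common resolution can be stacked into a single GPU batch, while the criticality ordering preserves the preferential treatment of nearby objects in point (i).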