Efficiency and productivity for decision making on low-power heterogeneous CPU+GPU SoCs

2020 
Markov decision processes (MDPs) provide a formal framework for a computer to make decisions autonomously and intelligently when the effects of its actions are not deterministic. This formalism has had tremendous success in many disciplines; however, its implementation on platforms with scarce computing capabilities and power, as is the case in robotics or autonomous driving, is still limited. To solve this computationally complex problem efficiently under these constraints, high-performance accelerator hardware and parallelized software come to the rescue. In particular, in this work we evaluate off-line-tuned static and dynamic versus adaptive heterogeneous scheduling strategies for executing value iteration (a core procedure in many decision-making methods, such as reinforcement learning and task planning) on a low-power heterogeneous CPU+GPU SoC that draws only 10–15 W. Our experimental results show that CPU+GPU heterogeneous strategies considerably reduce both computation time and energy: they are up to 54% (61%) faster and 57% (65%) more energy-efficient with respect to the multicore TBB (GPU-only OpenCL) implementation. Additionally, we explore the impact of raising the abstraction level of the programming model to ease the programming effort. To that end, we compare the TBB+OpenCL and TBB+oneAPI implementations of our heterogeneous schedulers, observing that the oneAPI versions require up to 5× less programming effort and incur only 3–8% overhead if the scheduling strategy is selected carefully.
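For readers unfamiliar with the core procedure the paper accelerates, the following is a minimal, sequential sketch of value iteration for a finite MDP. It is not the authors' parallel CPU+GPU implementation; the transition tensor `P`, reward layout `R`, and function name are illustrative assumptions.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Sequential value iteration sketch (not the paper's parallel code).

    P: transition tensor of shape (A, S, S), P[a, s, s'] = Pr(s' | s, a)
    R: reward matrix of shape (A, S), R[a, s] = reward for action a in state s
       (hypothetical layout chosen for illustration)
    Returns the optimal value function V and a greedy policy.
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Bellman backup: Q[a, s] = R[a, s] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)          # greedy improvement over actions
        if np.max(np.abs(V_new - V)) < tol:
            break                      # converged to within tolerance
        V = V_new
    return V_new, Q.argmax(axis=0)
```

The per-state Bellman backups inside each sweep are independent of one another, which is what makes the procedure amenable to the heterogeneous CPU+GPU partitioning strategies evaluated in the paper.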