Collaborative Deep Sensing by Dynamically Fusing Multiple Models

2021 
Smart activity sensing has gained increasing attention with the development of sensing devices and recognition techniques. In multi-modal sensing scenarios such as the smart home, fusing the results of multiple models brings opportunities for more comprehensive and accurate recognition, along with the challenge of coordinating a collection of models under strict resource limitations. In this paper, we first model the multi-modal sensing problem with strict orthogonal resource constraints. Then, given a pre-trained model library, we propose two online decision methods, one for fixed and one for changeable resource limitations, that optimize recognition accuracy by dynamically selecting models and fusing their predictions. Specifically, using reward feedback in an actor-critic scheme, we handle fixed resources and accuracy optimization in one shot. For changeable resources, we decouple resource allocation from model evaluation to support model portability with comparable accuracy. Three types of sensing devices and nine recognition models are investigated in our work. Experiments show that our method improves recognition accuracy over a single-modality model, and achieves higher accuracy at lower resource cost than an end-to-end multi-modal model.
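The abstract summarizes rather than specifies the decision procedure, so the following is only a minimal sketch of the fixed-budget case it describes: a policy that selects a subset of pre-trained models under a hard resource constraint, fuses their predictions, and learns from reward feedback. The Bernoulli selection policy, the running-average baseline standing in for the critic, the averaged-softmax fusion rule, and all model names, costs, and accuracies are illustrative assumptions, not the paper's method.

```python
# Hedged sketch (not the paper's algorithm): select models under a fixed
# resource budget, fuse their softmax outputs, and update the selection
# policy from accuracy reward with a learned baseline.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model library: (name, resource cost, standalone accuracy).
# All values are invented for illustration.
LIBRARY = [("audio_net", 3.0, 0.70), ("video_net", 5.0, 0.80),
           ("imu_net", 1.0, 0.60), ("env_net", 2.0, 0.65)]
BUDGET, N_CLASSES = 6.0, 4

def toy_predict(acc, label):
    """Stand-in for one model's softmax: peaked on the true class with
    probability `acc`, otherwise on a random wrong class."""
    guess = label if rng.random() < acc else int(rng.integers(N_CLASSES))
    probs = np.full(N_CLASSES, 0.3 / (N_CLASSES - 1))
    probs[guess] = 0.7
    return probs

theta = np.zeros(len(LIBRARY))        # actor: per-model selection logits
baseline, lr, beta = 0.0, 0.5, 0.05   # critic simplified to a running mean

for step in range(2000):
    label = int(rng.integers(N_CLASSES))
    p = 1.0 / (1.0 + np.exp(-theta))        # selection probabilities
    select = rng.random(len(LIBRARY)) < p   # sample a model subset
    # Enforce the hard resource constraint: drop least-preferred models
    # until the subset fits the budget (a simplification of a constrained
    # decision process).
    for i in np.argsort(p):
        if sum(c for (_, c, _), s in zip(LIBRARY, select) if s) <= BUDGET:
            break
        select[i] = False
    if not select.any():
        select[np.argmax(p)] = True         # always run at least one model
    # Fuse the selected models by averaging their softmax outputs.
    fused = np.mean([toy_predict(a, label)
                     for (_, _, a), s in zip(LIBRARY, select) if s], axis=0)
    reward = float(np.argmax(fused) == label)
    # Advantage-weighted score-function update of the selection policy.
    advantage = reward - baseline
    theta += lr * advantage * (select.astype(float) - p)
    baseline += beta * (reward - baseline)

print("learned selection probs:",
      dict(zip([n for n, _, _ in LIBRARY],
               np.round(1.0 / (1.0 + np.exp(-theta)), 2))))
```

Run as-is, the policy learns to favor the model subsets whose fused prediction is most often correct while respecting the budget; in the actual system, `toy_predict` would be replaced by real per-modality recognizers and the baseline by a trained critic.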