Poster: Cross Labelling and Learning Unknown Activities Among Multimodal Sensing Data

2019 
One of the major challenges to fully enjoying the power of machine learning is the need for high-quality labelled data. To tap into the gold mine of data generated by IoT devices, which has unprecedented volume and value, we discover and leverage the hidden connections among the multimodal data collected by various sensing devices. Data from different modalities can complement and learn from each other, but it is challenging to fuse multimodal data without knowing what they perceive (and thus their correct labels). In this work, we propose MultiSense, a paradigm for automatically mining potential perception, cross-labelling the data of each modality, and then improving the learning models over the multimodal data set. We design innovative solutions for segmenting, aligning, and fusing multimodal data from different sensors. We implement our framework and conduct comprehensive evaluations on a rich data set. Our results demonstrate that MultiSense significantly improves data usability and the power of the learning models.
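The abstract does not spell out how cross-labelling works, but the described pipeline (segment, align, propagate labels across modalities) can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' actual method: it assumes time-stamped feature segments per modality and an sklearn-style classifier with `predict_proba`; the names `align_by_window`, `cross_label`, and the confidence threshold are all hypothetical.

```python
import numpy as np

def align_by_window(ts_a, ts_b, window=0.5):
    """Pair each segment in modality A with the nearest-in-time segment
    in modality B, keeping only pairs within `window` seconds."""
    pairs = []
    for i, t in enumerate(ts_a):
        j = int(np.argmin(np.abs(ts_b - t)))
        if abs(ts_b[j] - t) <= window:
            pairs.append((i, j))
    return pairs

def cross_label(model_a, x_a, ts_a, x_b, ts_b, threshold=0.9):
    """Propagate confident predictions from modality A's model as
    labels for the time-aligned, unlabelled modality-B segments."""
    probs = model_a.predict_proba(x_a)            # per-segment class probabilities
    labels = probs.argmax(axis=1)
    keep = probs.max(axis=1) >= threshold         # propagate only confident labels
    x_out, y_out = [], []
    for i, j in align_by_window(ts_a, ts_b):
        if keep[i]:
            x_out.append(x_b[j])
            y_out.append(labels[i])
    # The returned pairs form a new labelled training set for modality B,
    # which can in turn be used to retrain and improve its model.
    return np.array(x_out), np.array(y_out)
```

Under these assumptions, each modality's model bootstraps labels for the other, which matches the paper's stated goal of improving the learning models over the fused multimodal data.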