CARTMAN: Complex Activity Recognition Using Topic Models for Feature Generation from Wearable Sensor Data

2021 
The recognition of complex activities such as "having dinner" or "cooking" from wearable sensor data is an important problem in various healthcare, security, and context-aware mobile and ubiquitous computing applications. In contrast to simple activities such as walking, which involve a single, indivisible repeated action, recognizing complex activities such as "having dinner" is harder because such activities may be composed of multiple interleaved or concurrent simple activities with a different ordering each time. Most prior work has focused on recognizing simple activities, used hand-crafted features, or did not perform classification with a state-of-the-art neural network model. In this paper, we propose CARTMAN, a complex activity recognition method that uses Latent Dirichlet Allocation (LDA) topic models to generate smartphone sensor features that capture the latent representation of complex activities. These LDA features are then classified using a DeepConvLSTM neural network with self-attention. DeepConvLSTM auto-learns the spatio-temporal features from the sensor data, while the self-attention layer identifies and focuses on the predictive points within the time-series sensor data. Our CARTMAN approach outperforms the current state-of-the-art complex activity models and baseline models by 6-23% in macro and weighted F1-scores.
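The LDA feature-generation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each sensor window has already been discretized into a bag of symbolic "sensor words", and the window sizes, vocabulary size, and topic count below are illustrative choices.

```python
# Sketch: LDA topic features from discretized wearable-sensor windows.
# Assumptions (not from the paper): synthetic count data, 50-symbol
# vocabulary, 8 topics, 200 activity windows.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Bag-of-words count matrix: rows = activity windows,
# columns = vocabulary of quantized sensor symbols.
n_windows, vocab_size, n_topics = 200, 50, 8
counts = rng.integers(0, 5, size=(n_windows, vocab_size))

# Fit LDA; the per-window topic distributions become the features
# that a downstream classifier (DeepConvLSTM in the paper) consumes.
lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
features = lda.fit_transform(counts)  # shape: (n_windows, n_topics)
```

Each row of `features` is a probability distribution over latent topics, giving a compact, fixed-length representation of the window's mixture of underlying simple activities.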