Deep Dilated Convolution on Multimodality Time Series for Human Activity Recognition

2018 
Convolutional Neural Networks (CNNs) are capable of automatically learning feature representations, so CNN-based recognition algorithms have become an alternative method for human activity recognition. Although a general convolution operation followed by pooling can expand the receptive field for feature extraction, it causes information loss in the feature representation. Because dilated convolutions can expand the receptive field exponentially, without pooling and without changing the size of the feature map, and thus incur no such information loss, we propose D2CL, a novel deep learning framework for human activity recognition using multimodal wearable sensors. The framework combines dilated convolutional neural networks and recurrent neural networks. First, following previous work, we add a general convolutional layer that maps inputs into a hidden space to improve the capability for nonlinear representation. Next, stacked dilated convolutional networks automatically learn inter-sensor and intra-sensor feature representations from the hidden space. Then, given these learned features, two RNNs are applied to model their latent temporal dependencies. Finally, a softmax classifier at the topmost layer recognizes activities. To evaluate the performance of D2CL on activity recognition, we select two open datasets, OPPORTUNITY and PAMAP2, for training and testing. Results show that the proposed model achieves higher classification performance than the state-of-the-art DeepConvLSTM.
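The abstract's central claim can be illustrated concretely: a stack of dilated convolutions with exponentially increasing dilation rates grows the receptive field exponentially with depth, while keeping stride 1 (so no time steps are discarded, unlike pooling). The sketch below, a minimal pure-Python illustration, implements a 1-D dilated convolution and the standard receptive-field formula for a stride-1 stack; the specific kernel size and dilation rates (3 and 1, 2, 4, 8) are assumptions for illustration, not values taken from the paper.

```python
def dilated_conv1d(x, w, dilation):
    # 'valid' 1-D dilated convolution (cross-correlation): the kernel taps
    # are spaced `dilation` steps apart, so the kernel spans more of the
    # input without subsampling it.
    span = (len(w) - 1) * dilation
    return [sum(w[j] * x[i + j * dilation] for j in range(len(w)))
            for i in range(len(x) - span)]

def receptive_field(kernel_size, dilations):
    # Receptive field of a stack of stride-1 convolutions: each layer
    # widens it by (kernel_size - 1) * dilation input steps.
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# A dilation-2 difference filter reaches 4 steps across a 6-sample signal.
print(dilated_conv1d([1, 2, 3, 4, 5, 6], [1, 0, -1], dilation=2))  # [-4, -4]

# Exponentially increasing dilations (assumed rates) vs. an undilated
# stack of the same depth: 31 input steps covered instead of 9.
print(receptive_field(3, [1, 2, 4, 8]))   # 31
print(receptive_field(3, [1, 1, 1, 1]))   # 9
```

With four layers the dilated stack already covers 31 time steps per output, which is why such stacks can model long sensor windows without the pooling layers that would shrink the feature map.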