Activity Recognition from Multi-modal Sensor Data Using a Deep Convolutional Neural Network
2018
Multi-modal data extracted from different sensors in a smart home can be fused to build models that recognize the daily living activities of residents. This paper proposes a Deep Convolutional Neural Network (CNN) to perform activity recognition using multi-modal data collected from a smart residential home. The dataset contains accelerometer data (the three perpendicular components of acceleration and the strength of the accelerometer signal received by four receivers), video data (15 time series describing the 2D and 3D center of mass and bounding box, extracted from an RGB-D camera), and Passive Infra-Red (PIR) sensor data. The performance of the CNN is compared with that of a Deep Belief Network (DBN), which uses Restricted Boltzmann Machines to pre-train the network. Experimental results show that a CNN with two pairs of convolutional and max-pooling layers achieves higher classification accuracy than the DBN: when trained on the classes with a large number of samples, the DBN achieved 65.97% classification accuracy, whereas the CNN achieved 75.33%. These results demonstrate the challenges of dealing with multi-modal data and highlight the importance of having a sufficient number of samples per class for adequately training and testing deep learning models.
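To make the described architecture concrete, the following is a minimal sketch of a 1D CNN with two convolution + max-pooling pairs over fused multi-channel sensor windows, in the spirit of the model the abstract describes. The channel count (here 23: 3 acceleration components + 4 receiver signal strengths + 15 video time series + 1 PIR stream), window length, class count, and all layer sizes are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: a 1D CNN with two conv + max-pooling pairs for
# multi-modal activity recognition. All dimensions below are assumed
# for illustration; the paper does not specify them in the abstract.
import torch
import torch.nn as nn

class ActivityCNN(nn.Module):
    def __init__(self, in_channels=23, num_classes=10, window_len=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),   # first convolution + max-pooling pair
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),   # second convolution + max-pooling pair
        )
        # Two pooling layers halve the time axis twice: window_len // 4.
        self.classifier = nn.Linear(64 * (window_len // 4), num_classes)

    def forward(self, x):
        # x: (batch, channels, time) -- sensor streams fused as channels
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Usage example: a batch of 8 windows, 23 fused channels, 128 time steps.
model = ActivityCNN()
logits = model(torch.randn(8, 23, 128))  # -> (8, 10) class scores
```

Treating each synchronized sensor stream as an input channel is one common way to fuse multi-modal time-series data for a CNN; the paper may segment or fuse the modalities differently.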