Multi-Task Learning for Location Prediction with Deep Multi-Model Ensembles

2019 
Location prediction plays an essential role in location-based service applications, supporting tasks such as traffic management, tourist attraction recommendation, and route planning. The movement of objects is usually affected by many complex factors such as time, space, and user personalization, so comprehensively accounting for these factors when predicting an object's next location is quite challenging. Current research usually uses a recurrent neural network (RNN) to predict the next location, focusing on temporal and spatial information and user movement patterns. Although prior models have achieved good results in location prediction, they omit two main factors: fully capturing complex temporal and spatial characteristics, and the impact of related tasks on location prediction. To address these problems, we propose a Multi-task and Multi-model framework for Location prediction (MMLoc). In brief, we use a CNN to extract spatial features, focusing on capturing the spatial association between the locations visited by a moving object, and an LSTM to extract the sequential and temporal attributes of those locations. The outputs of the CNN and LSTM are then integrated to form a multi-model component. The features obtained by the multi-model component are fed into the multi-task component, in which more significant features are learned from the associated tasks to enrich the model. We performed experiments on real datasets, and the results demonstrate that our proposed model is superior to current state-of-the-art models in accuracy, recall, precision, and F1-score.
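The abstract only sketches the architecture, so the following is a minimal, hypothetical PyTorch illustration of the described idea: a CNN branch for spatial association, an LSTM branch for sequential/temporal dependencies, fusion of the two into a multi-model representation, and multiple task heads trained jointly. All layer sizes, the auxiliary task (next time slot), and the loss weighting are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class MMLocSketch(nn.Module):
    """Hypothetical sketch of the MMLoc idea: a CNN branch for spatial
    correlations, an LSTM branch for sequential dependencies, fused
    features feeding multiple task heads (main task: next location;
    assumed auxiliary task: next time slot)."""

    def __init__(self, num_locations, emb_dim=64, hidden_dim=128, num_time_slots=24):
        super().__init__()
        self.embed = nn.Embedding(num_locations, emb_dim)
        # CNN branch: 1-D convolution over the embedded trajectory to
        # capture local spatial association between visited locations.
        self.cnn = nn.Sequential(
            nn.Conv1d(emb_dim, hidden_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        # LSTM branch: models the ordering and temporal dependencies
        # of the same trajectory.
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # Task-specific heads share the fused representation (multi-task).
        fused_dim = hidden_dim * 2
        self.next_loc_head = nn.Linear(fused_dim, num_locations)    # main task
        self.time_slot_head = nn.Linear(fused_dim, num_time_slots)  # auxiliary task (assumed)

    def forward(self, traj):                      # traj: (batch, seq_len) location ids
        x = self.embed(traj)                      # (batch, seq_len, emb_dim)
        spatial = self.cnn(x.transpose(1, 2)).squeeze(-1)  # (batch, hidden_dim)
        _, (h_n, _) = self.lstm(x)
        temporal = h_n[-1]                        # (batch, hidden_dim)
        fused = torch.cat([spatial, temporal], dim=-1)      # multi-model fusion
        return self.next_loc_head(fused), self.time_slot_head(fused)


if __name__ == "__main__":
    model = MMLocSketch(num_locations=500)
    traj = torch.randint(0, 500, (8, 20))         # batch of 8 trajectories of length 20
    loc_logits, time_logits = model(traj)
    # Joint loss over the main and auxiliary task (the 0.5 weight is illustrative).
    loss = nn.functional.cross_entropy(loc_logits, torch.randint(0, 500, (8,))) \
         + 0.5 * nn.functional.cross_entropy(time_logits, torch.randint(0, 24, (8,)))
    print(loc_logits.shape, time_logits.shape, loss.item())
```

The sketch keeps the two branches and the shared fused representation explicit; how MMLoc actually fuses the branches and which auxiliary tasks it uses would need to be taken from the full paper.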