Learning Temporal Point Processes Via Reinforcement Learning

Authors:
Shuang Li, Georgia Institute of Technology
Benjamin Xiao, Ant Financial
Shixiang Zhu, Georgia Institute of Technology
Nan Du, Google Brain
Yao Xie, Georgia Institute of Technology
Le Song, Ant Financial & Georgia Institute of Technology

Introduction:

Social goods, such as healthcare, smart cities, and information networks, often produce ordered event data in continuous time. To alleviate the risk of model misspecification in MLE, the authors propose to generate samples from the generative model and monitor the quality of those samples during training until the samples and the real data are indistinguishable.
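For background (not a formula taken from this paper), the standard MLE objective being contrasted here maximizes the log-likelihood of an event sequence t_1 < ... < t_n observed on [0, T] under a hand-specified conditional intensity \lambda_\theta:

\ell(\theta) = \sum_{i=1}^{n} \log \lambda_\theta(t_i \mid \mathcal{H}_{t_i}) - \int_0^T \lambda_\theta(t \mid \mathcal{H}_t)\, dt

Maximizing this requires committing to a parametric form of \lambda_\theta in advance, which is exactly the model-misspecification risk the proposed RL approach is meant to avoid.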

Abstract:

Social goods, such as healthcare, smart city, and information networks, often produce ordered event data in continuous time. The generative processes of these event data can be very complex, requiring flexible models to capture their dynamics. Temporal point processes offer an elegant framework for modeling event data without discretizing the time. However, the existing maximum-likelihood-estimation (MLE) learning paradigm requires hand-crafting the intensity function beforehand and cannot directly monitor the goodness-of-fit of the estimated model in the process of training. To alleviate the risk of model-misspecification in MLE, we propose to generate samples from the generative model and monitor the quality of the samples in the process of training until the samples and the real data are indistinguishable. We take inspiration from reinforcement learning (RL) and treat the generation of each event as the action taken by a stochastic policy. We parameterize the policy as a flexible recurrent neural network and gradually improve the policy to mimic the observed event distribution. Since the reward function is unknown in this setting, we uncover an analytic and nonparametric form of the reward function using an inverse reinforcement learning formulation. This new RL framework allows us to derive an efficient policy gradient algorithm for learning flexible point process models, and we show that it performs well on both synthetic and real data.
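To make the "event generation as a stochastic policy" idea concrete, the sketch below is a minimal, hypothetical illustration and not the authors' exact architecture, reward, or training procedure: a GRU-based policy emits inter-event gaps from an exponential distribution whose rate is predicted from the hidden state, and a REINFORCE-style policy-gradient step uses a placeholder reward in place of the learned reward that the paper recovers via inverse RL.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: an RNN-based stochastic policy for a temporal point process.
# Each "action" is the next inter-event gap, sampled from an exponential distribution
# whose rate is produced from the RNN hidden state.
class EventPolicy(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.rnn = nn.GRUCell(input_size=1, hidden_size=hidden_size)
        self.rate_head = nn.Linear(hidden_size, 1)

    def forward(self, prev_gap, hidden):
        # Update the hidden state with the previously generated gap.
        hidden = self.rnn(prev_gap, hidden)
        # Softplus keeps the exponential rate strictly positive.
        rate = nn.functional.softplus(self.rate_head(hidden)) + 1e-6
        return rate, hidden

def sample_sequence(policy, seq_len=20):
    """Roll out one event sequence, keeping the log-probability of each action."""
    hidden = torch.zeros(1, policy.rnn.hidden_size)
    prev_gap = torch.zeros(1, 1)
    gaps, log_probs = [], []
    for _ in range(seq_len):
        rate, hidden = policy(prev_gap, hidden)
        dist = torch.distributions.Exponential(rate)
        gap = dist.sample()
        log_probs.append(dist.log_prob(gap).squeeze())
        gaps.append(gap.squeeze())
        prev_gap = gap
    return torch.stack(gaps), torch.stack(log_probs)

# REINFORCE-style update. The reward below is a placeholder for illustration only;
# in the paper the reward is recovered nonparametrically via inverse RL so that
# generated sequences are pushed toward the observed event distribution.
policy = EventPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
gaps, log_probs = sample_sequence(policy)
reward = -torch.abs(gaps.mean() - 1.0)        # placeholder reward signal
loss = -(log_probs.sum() * reward.detach())   # policy-gradient surrogate loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The exponential output distribution is chosen here only to keep the sketch short; the paper's point is that the policy (and hence the implied intensity) can be made far more flexible than a hand-crafted parametric form.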
