Sequential Attend, Infer, Repeat: Generative Modelling Of Moving Objects

Authors:
Adam Kosiorek, University of Oxford
Hyunjik Kim
Yee Whye Teh, University of Oxford and DeepMind
Ingmar Posner, University of Oxford

Abstract:

We present Sequential Attend, Infer, Repeat (SQAIR), an interpretable deep generative model for image sequences. It can reliably discover and track objects through the sequence; it can also conditionally generate future frames, thereby simulating the expected motion of objects. This is achieved by explicitly encoding object numbers, locations and appearances in the latent variables of the model. SQAIR retains all strengths of its predecessor, Attend, Infer, Repeat (AIR; Eslami et al., 2016), including unsupervised learning, made possible by inductive biases present in the model structure. We use a moving multi-MNIST dataset to show limitations of AIR in detecting overlapping or partially occluded objects, and show how SQAIR overcomes them by leveraging temporal consistency of objects. Finally, we also apply SQAIR to real-world pedestrian CCTV data, where it learns to reliably detect, track and generate walking pedestrians with no supervision.
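As a rough illustration (not the authors' implementation), the latent structure the abstract describes, with per-object presence, location and appearance variables carried across frames via propagation and discovery steps, can be sketched in plain Python. All names and parameters below are hypothetical:

```python
import random
from dataclasses import dataclass

@dataclass
class ObjectLatent:
    present: bool          # z_pres: does the object exist in this frame?
    where: tuple           # z_where: (x, y, scale) bounding-box parameters
    what: list             # z_what: appearance code for a glimpse decoder

def propagate(prev_objects, survival_prob=0.9):
    """Propagation step: each existing object survives with some probability,
    keeping its appearance and (here, unchanged) location across frames."""
    kept = []
    for obj in prev_objects:
        if obj.present and random.random() < survival_prob:
            kept.append(ObjectLatent(True, obj.where, obj.what))
    return kept

def discover(num_new, appearance_dim=4):
    """Discovery step: explain remaining pixels with newly instantiated objects."""
    return [ObjectLatent(True,
                         (random.random(), random.random(), 0.3),
                         [random.gauss(0, 1) for _ in range(appearance_dim)])
            for _ in range(num_new)]

# One SQAIR-style step per frame: propagate old objects, then discover new ones.
random.seed(0)
frame1 = discover(num_new=2)
frame2 = propagate(frame1) + discover(num_new=1)
print(len(frame1), len(frame2))
```

This separation is what gives the temporal consistency mentioned above: an object tracked by propagation keeps its appearance code, while discovery only has to account for objects not yet explained.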
