End-to-End Differentiable Physics For Learning And Control

Authors:
Filipe de Avila Belbute-Peres, Carnegie Mellon University
Kevin Smith, MIT
Kelsey Allen, MIT
Josh Tenenbaum, MIT
J. Zico Kolter, Carnegie Mellon University / Bosch Center for AI

Introduction:

The authors present a differentiable physics engine that can be integrated as a module in deep neural networks for end-to-end learning.

Abstract:

We present a differentiable physics engine that can be integrated as a module in deep neural networks for end-to-end learning. As a result, structured physics knowledge can be embedded into larger systems, allowing them, for example, to match observations by performing precise simulations, while achieving high sample efficiency. Specifically, in this paper we demonstrate how to perform backpropagation analytically through a physical simulator defined via a linear complementarity problem. Unlike traditional finite-difference methods, these gradients are exact and cheap to compute, which allows for greater flexibility of the engine. Through experiments in diverse domains, we highlight the system's ability to learn physical parameters from data, efficiently match and simulate observed visual behavior, and readily enable control via gradient-based planning methods. Code for the engine and experiments is included with the paper.
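
The key idea is to treat the simulator's contact dynamics, posed as a linear complementarity problem (LCP), as a differentiable layer: the forward pass solves the LCP, and the backward pass differentiates its optimality conditions analytically rather than by finite differences. The sketch below illustrates this with implicit differentiation through a standard-form LCP in PyTorch. It is not the authors' released engine: the projected Gauss-Seidel solver, the active-set treatment in the backward pass, and the names lcp_solve and LCPFunction are illustrative assumptions.

```python
# Minimal sketch: analytic backpropagation through an LCP
#   find z >= 0  such that  w = M z + q >= 0  and  z^T w = 0.
# Forward: solve the LCP numerically. Backward: on the active set
# A = {i : z_i > 0} the solution satisfies M_AA z_A + q_A = 0, so the
# implicit function theorem gives exact gradients via one linear solve.
# (Illustrative only; solver, tolerance, and names are assumptions.)
import torch


def lcp_solve(M, q, iters=500):
    """Projected Gauss-Seidel: a simple (not fast) LCP solver for SPD M."""
    z = torch.zeros_like(q)
    for _ in range(iters):
        for i in range(q.shape[0]):
            r = q[i] + M[i] @ z - M[i, i] * z[i]  # residual excluding z_i
            z[i] = torch.clamp(-r / M[i, i], min=0.0)
    return z


class LCPFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, M, q):
        z = lcp_solve(M.detach(), q.detach())
        ctx.save_for_backward(M, q, z)
        return z

    @staticmethod
    def backward(ctx, grad_z):
        M, q, z = ctx.saved_tensors
        dM, dq = torch.zeros_like(M), torch.zeros_like(q)
        A = (z > 1e-8).nonzero(as_tuple=True)[0]  # active-set indices
        if A.numel() > 0:
            M_A = M[A][:, A]
            # From M_AA z_A + q_A = 0:
            #   dL/dq_A       = -M_AA^{-T} dL/dz_A
            #   dL/dM_AA[i,j] = (dL/dq_A)_i * z_A[j]
            lam = torch.linalg.solve(M_A.T, grad_z[A])
            dq[A] = -lam
            dM[A.unsqueeze(1), A] = -lam.unsqueeze(1) * z[A].unsqueeze(0)
        return dM, dq


# Usage: gradients of a scalar loss on the LCP solution w.r.t. M and q.
M = torch.tensor([[2.0, 0.5], [0.5, 1.0]], requires_grad=True)
q = torch.tensor([-1.0, 0.3], requires_grad=True)
z = LCPFunction.apply(M, q)
z.sum().backward()
print(z, M.grad, q.grad)
```

Note the cost asymmetry that motivates the analytic approach: the backward pass above amounts to a single linear solve on the active set, whereas finite differencing would require re-solving the LCP once per perturbed parameter.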
