Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning

2021 
Inducing causal relationships from observations is a classic problem in machine learning. Most work in causality starts from the premise that the causal variables themselves are observed or have known semantics. However, for AI agents such as robots trying to make sense of their environment, the only observables are low-level variables like pixels in images. To generalize well, an agent must induce high-level variables, particularly those that are causal or are affected by causal variables. A central goal for AI and causality is thus the joint discovery of abstract representations and causal structure. In this work, we systematically evaluate an agent's ability to learn the underlying causal structure. We note that existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs with many confounding factors. Hence, to facilitate research on learning representations of high-level variables as well as the causal structure among them, we present a suite of RL environments designed to systematically probe a method's ability to identify those variables and that structure. We evaluate various representation learning algorithms from the literature and find that explicitly incorporating structure and modularity in the model can help causal induction in model-based reinforcement learning.
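Evaluating causal discovery requires comparing a learned causal graph against the environment's ground-truth graph. The abstract does not name a metric, but a standard choice in the causal-discovery literature is the structural Hamming distance (SHD): the number of edge additions, deletions, and reversals separating two DAGs. The sketch below is only an illustration of that metric, not this paper's evaluation code; graphs are represented as binary adjacency matrices.

```python
import numpy as np

def structural_hamming_distance(true_adj: np.ndarray, learned_adj: np.ndarray) -> int:
    """Edge additions, deletions, or reversals needed to turn the
    learned DAG into the ground-truth DAG (each reversal counts once)."""
    diff = np.abs(true_adj - learned_adj)
    # A reversed edge shows up as two mismatched entries, (i, j) and (j, i);
    # subtract one of each such pair so a reversal is counted once.
    reversed_pairs = np.logical_and(diff, diff.T).sum() // 2
    return int(diff.sum() - reversed_pairs)

# Example: ground-truth chain 0 -> 1 -> 2, with the learned graph
# reversing the edge 0 -> 1 (one structural error).
true_g = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [0, 0, 0]])
learned_g = np.array([[0, 0, 0],
                      [1, 0, 1],
                      [0, 0, 0]])
```

A lower SHD means the induced graph is structurally closer to the true causal graph; a perfect recovery scores zero.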
References: 0
Citations: 2

1. Anirudh Goyal, Aniket Didolkar, Nan Rosemary Ke, Charles Blundell, Philippe Beaudoin, Nicolas Heess, Michael C. Mozer, Yoshua Bengio. "Neural Production Systems." arXiv: Artificial Intelligence, 2021. https://arxiv.org/abs/2103.01937v1
   Abstract: Visual environments are structured, consisting of distinct objects or entities. These entities have properties -- both visible and latent -- that determine the manner in which they interact with one another. To partition images into entities, deep-learning researchers have proposed structural inductive biases such as slot-based architectures. To model interactions among entities, equivariant graph neural nets (GNNs) are used, but these are not particularly well suited to the task for two reasons. First, GNNs do not predispose interactions to be sparse, as relationships among independent entities are likely to be. Second, GNNs do not factorize knowledge about interactions in an entity-conditional manner. As an alternative, we take inspiration from cognitive science and resurrect a classic approach, production systems, which consist of a set of rule templates that are applied by binding placeholder variables in the rules to specific entities. Rules are scored on their match to entities, and the best fitting rules are applied to update entity properties. In a series of experiments, we demonstrate that this architecture achieves a flexible, dynamic flow of control and serves to factorize entity-specific and rule-based information. This disentangling of knowledge achieves robust future-state prediction in rich visual environments, outperforming state-of-the-art methods using GNNs, and allows for the extrapolation from simple (few object) environments to more complex environments.

2. Matthew J. Vowels, Necati Cihan Camgöz, Richard Bowden. "D'ya like DAGs? A Survey on Structure Learning and Causal Discovery." arXiv: Learning, 2021. https://arxiv.org/abs/2103.02582
   Abstract: Causal reasoning is a crucial part of science and human intelligence. In order to discover causal relationships from data, we need structure discovery methods. We provide a review of background theory and a survey of methods for structure discovery. We primarily focus on modern, continuous optimization methods, and provide reference to further resources such as benchmark datasets and software packages. Finally, we discuss the assumptive leap required to take us from structure to causality.