On the Role of Actions and Machine Learning in Artificial Agent Perception

2021 
Automation is the medium by which the human species can free itself from the burden of tasks it has already solved. Such tasks are omnipresent in our daily lives, at home and in professional contexts. A great quest in research is to build agents that can act and reason in the real world, automating those solved tasks. To do so, agents have to build a perception of their environment, just as humans do. Directly programming such agents is infeasible because of the complexity of the world and its interactions. That is why learning-based approaches have been prevalent in research for the past 20 years. While supervised learning of algorithms using labeled data has provided numerous useful applications, having agents that perceive the world as well as biological agents do would require prohibitive amounts of labeled data. Research has thus opted for unsupervised or weakly-supervised approaches for building software algorithms that learn from data and can then be embedded in real robots that solve tasks in the real world.

Among Machine Learning approaches, several sub-fields each tackle a different aspect of the perception that agents should have. State Representation Learning (SRL) focuses on learning representations of what agents experience; it tries to mimic the human ability to summarize complex scenes into compositional objects and concepts. Continual Learning (CL) aims at solving the infamous catastrophic forgetting problem of neural networks, which forget what they have learned when presented with new data. Humans do not suffer from this problem, as we have memory and selective forgetting mechanisms that allow us to learn continually throughout our lives. Finally, Reinforcement Learning (RL) aims at learning to solve a task by maximizing the reward associated with it, a mechanism that is also present among biological agents.

On the other hand, there are also more original approaches that do not necessarily reach the same performance but are based on promising paradigms that could allow breakthroughs. Developmental Robotics (Dev-Rob) is a sub-field of robotics which aims at developing biologically inspired methods for learning on real robots. There are also what we shall call in this manuscript the Embodied Agent (EA) approaches: theoretical and practical considerations based on theories of perception developed in psychology. In these theories, the role of actions is crucial in the development of perception. We will use this as a basis for most of our contributions.

In this thesis we contribute to these sub-fields of research by developing theoretical insights and applied algorithms which aim at creating agents with deeper levels of perception of their bodies and their environment. Specifically, we develop two novel approaches for Continual SRL and Continual RL, with applications to real robots. We extend a theory on disentanglement for Representation Learning by showing the crucial role of actions in learning. Finally, we propose a novel learning mechanism for embodied agents based on the sensory commutativity of action sequences: we take inspiration from EA theories and develop theoretical insights as well as learning algorithms for object detection and self-body discovery.
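As a brief illustration of the sensory-commutativity idea mentioned above, here is a minimal sketch, not code from the thesis: an agent applies two action sequences in both orders from the same starting state and compares the resulting observations; pairs of sequences whose order does not change the sensory outcome are treated as commuting. The environment interface (`reset_to`, `step`) and the tolerance are assumptions for the example.

```python
import numpy as np

def commutes(env, state, seq_a, seq_b, tol=1e-3):
    """Check whether two action sequences commute in sensory space.

    Hypothetical interface: env.reset_to(state) restores a starting state,
    and env.step(action) returns an observation (e.g. a flattened image).
    The sequences commute if applying them in either order from the same
    state yields approximately the same final observation.
    """
    # Apply seq_a then seq_b from the starting state
    env.reset_to(state)
    for action in list(seq_a) + list(seq_b):
        obs_ab = env.step(action)

    # Apply seq_b then seq_a from the same starting state
    env.reset_to(state)
    for action in list(seq_b) + list(seq_a):
        obs_ba = env.step(action)

    # Compare the two final observations
    diff = np.linalg.norm(np.asarray(obs_ab) - np.asarray(obs_ba))
    return diff < tol
```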