Distributed on-line reinforcement learning in a swarm of sterically interacting robots

2021 
While naturally occurring swarms thrive when crowded, physical interactions in robotic swarms are either avoided or carefully controlled, thus limiting their operational density. Designing behavioral strategies under such circumstances remains a challenge, even though it may offer an opportunity for exploring morpho-functional self-organized behaviors. In this paper, we explicitly consider dense swarms of robots where physical interactions are inevitable. We demonstrate experimentally that an a priori minor difference in the mechanical design of the robots leads to important differences in their dynamical behaviors when they operate in crowded environments. We design Morphobots, which are Kilobots augmented with a 3D-printed exoskeleton. The exoskeleton not only significantly improves the motility and stability of the Kilobots, but also makes it possible to physically encode two contrasting dynamical behaviors in response to an external force or a collision. This difference translates into distinct performances during self-organized aggregation when addressing a phototactic task. Having characterized the dynamical mechanism at the root of these differences, we implement a decentralized on-line evolutionary reinforcement learning algorithm in a swarm of Morphobots. We demonstrate the efficiency of the learning and show that it reduces the dependency on the morphology. We present a kinetic model that links the reward function to an effective phototactic policy. Our results are relevant for the deployment of robust robot swarms in real environments, where robots are bound to collide and to be exposed to external forces.
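The abstract does not detail the learning algorithm, but a decentralized on-line evolutionary scheme of this kind can be sketched in a few lines. The toy model below is an assumption, not the paper's implementation: each robot holds a single policy parameter, its reward is a hypothetical stand-in for phototactic performance (distance of the parameter to an unknown optimum), and whenever two robots meet, the lower-reward robot copies the other's parameter with Gaussian mutation. No central controller is involved.

```python
import random

def embodied_evolution(n_robots=20, epochs=500, optimum=0.7, sigma=0.05, seed=1):
    """Minimal sketch of distributed on-line evolutionary learning.

    Hypothetical setup: one scalar policy parameter per robot; the
    reward -abs(p - optimum) stands in for phototactic performance.
    At each step two random robots "meet", compare rewards, and the
    loser adopts a mutated copy of the winner's parameter.
    """
    rng = random.Random(seed)
    params = [rng.uniform(0.0, 1.0) for _ in range(n_robots)]

    def reward(p):
        # Toy reward: closer to the (unknown to the robots) optimum is better.
        return -abs(p - optimum)

    for _ in range(epochs):
        i, j = rng.sample(range(n_robots), 2)  # a random pairwise encounter
        if reward(params[i]) < reward(params[j]):
            i, j = j, i  # ensure robot i holds the better policy
        # Loser copies the winner's parameter, with Gaussian mutation.
        params[j] = params[i] + rng.gauss(0.0, sigma)
    return params
```

Despite relying only on local pairwise comparisons, the swarm's parameters concentrate around the optimum, illustrating how a reward function can shape an effective policy without centralized coordination.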