Multiagent Reinforcement Learning for Swarm Confrontation Environments

2019 
Swarm confrontation has long been an active research topic and has attracted much attention. Previous research focuses on hand-crafted rules to improve the intelligence of the swarm, which does not scale to complex scenarios. Multi-agent reinforcement learning has been applied to similar confrontation tasks; however, many of these works control all entities in a swarm with a centralized method, which makes it hard to meet the real-time requirements of practical systems. Recently, OpenAI proposed the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm, which supports centralized training with decentralized execution in multi-agent environments. We evaluate this method in our constructed swarm confrontation environment and find that it struggles with complex scenarios. We propose two improved training methods, scenario-transfer training and self-play training, which greatly enhance the performance of MADDPG. Experimental results show that scenario-transfer training accelerates convergence by 50%, and self-play training increases the winning rate of MADDPG from 42% to 96%.