Inequity Aversion Improves Cooperation In Intertemporal Social Dilemmas

Authors:
Edward Hughes (DeepMind)
Joel Leibo (DeepMind)
Matthew Phillips (DeepMind)
Karl Tuyls (DeepMind)
Edgar Dueñez-Guzman (DeepMind)
Antonio García Castañeda (DeepMind)
Iain Dunning (DeepMind)
Tina Zhu (DeepMind)
Kevin McKee (DeepMind)
Raphael Koster (DeepMind)
Heather Roff (DeepMind)
Thore Graepel (DeepMind)

Abstract:

Groups of humans are often able to find ways to cooperate with one another in complex, temporally extended social dilemmas. Models based on behavioral economics are only able to explain this phenomenon for unrealistic stateless matrix games. Recently, multi-agent reinforcement learning has been applied to generalize social dilemma problems to temporally and spatially extended Markov games. However, this has not yet generated an agent that learns to cooperate in social dilemmas as humans do. A key insight is that many, but not all, human individuals have inequity-averse social preferences. This promotes a particular resolution of the matrix game social dilemma wherein inequity-averse individuals are personally pro-social and punish defectors. Here we extend this idea to Markov games and show that it promotes cooperation in several types of sequential social dilemma, via a profitable interaction with policy learnability. In particular, we find that inequity aversion improves temporal credit assignment for the important class of intertemporal social dilemmas. These results help explain how large-scale cooperation may emerge and persist.
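The mechanism described in the abstract builds on the Fehr-Schmidt model of inequity aversion, in which an agent's utility is its own payoff minus penalties for disadvantageous inequity (envy) and advantageous inequity (guilt). The sketch below shows one way such a subjective reward could be computed per timestep in a temporally extended game, using smoothed reward traces so that payoff comparisons reflect recent history rather than a single step. The function names, the smoothing scheme, and the parameter values (lmbda, alpha, beta) are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    # Exponentially smoothed per-agent reward traces. In a temporally
    # extended game, comparing instantaneous rewards is too noisy; a
    # smoothed trace reflects recent history. lmbda is an assumption.
    def smoothed_rewards(prev_trace, rewards, lmbda=0.95):
        return (lmbda * np.asarray(prev_trace, dtype=float)
                + np.asarray(rewards, dtype=float))

    # Fehr-Schmidt-style subjective reward for agent i. alpha scales
    # disadvantageous inequity ("envy": others' traces exceed mine) and
    # beta scales advantageous inequity ("guilt": my trace exceeds
    # others'). The default values are placeholders, not tuned settings.
    def inequity_averse_reward(trace, extrinsic, i, alpha=5.0, beta=0.05):
        e = np.asarray(trace, dtype=float)
        n = len(e)
        envy = np.maximum(e - e[i], 0.0).sum() / (n - 1)
        guilt = np.maximum(e[i] - e, 0.0).sum() / (n - 1)
        return extrinsic[i] - alpha * envy - beta * guilt

    # Example: agent 0 collects a reward while agents 1 and 2 get nothing;
    # the guilt term reduces agent 0's subjective reward (0.95 < 1.0).
    trace = smoothed_rewards(np.zeros(3), [1.0, 0.0, 0.0])
    print(inequity_averse_reward(trace, [1.0, 0.0, 0.0], i=0))

The two penalty terms map onto the behaviors the abstract describes: the guilt term gives advantaged agents a direct incentive to act pro-socially, while the envy term gives disadvantaged agents an incentive to punish defectors.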
