I work on dynamic games, market games, and other problems with large, complex, and interesting strategy spaces. I use dynamic programming, nonlinear optimization, revealed preference, reinforcement learning, and economic experiments.
I fully characterize the outcomes of a wide class of model-free reinforcement learning algorithms, such as Q-learning, in the prisoner's dilemma. I study behavior in the limit where players explore their options sufficiently and eventually stop experimenting.
Whether the players learn to cooperate or defect can be determined in closed form from the relationship between the learning rate and the payoffs of the game.
The results generalize to asymmetric learners and to many experimentation rules, with implications for algorithmic collusion.
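The setting above can be sketched in code. The following is a minimal, illustrative simulation of two independent Q-learners in a repeated prisoner's dilemma with exploration that decays toward zero; the payoff values, learning rate, discount factor, and exploration schedule are all hypothetical choices for illustration, not the paper's specification.

```python
import random

# Actions: 0 = cooperate (C), 1 = defect (D).
# Illustrative payoff matrix: (my action, opponent's action) -> my payoff.
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 4, (1, 1): 1}

def simulate(alpha=0.1, gamma=0.95, episodes=20000, seed=0):
    """Two Q-learners, each conditioning on last round's action pair."""
    rng = random.Random(seed)
    # Q[player][state][action]; state = last round's (action0, action1).
    Q = [{(a, b): [0.0, 0.0] for a in (0, 1) for b in (0, 1)} for _ in range(2)]
    state = (0, 0)
    for t in range(episodes):
        # Exploration rate decays linearly, then stays near zero:
        # players "explore sufficiently and eventually stop experimenting".
        eps = max(0.01, 1.0 - t / (0.8 * episodes))
        acts = []
        for p in range(2):
            if rng.random() < eps:
                acts.append(rng.randrange(2))        # experiment
            else:
                q = Q[p][state]
                acts.append(0 if q[0] >= q[1] else 1)  # greedy
        next_state = (acts[0], acts[1])
        for p in range(2):
            my, other = acts[p], acts[1 - p]
            r = PAYOFF[(my, other)]
            best_next = max(Q[p][next_state])
            # Standard Q-learning update.
            Q[p][state][my] += alpha * (r + gamma * best_next - Q[p][state][my])
        state = next_state
    # Greedy play once experimentation has (effectively) stopped.
    return tuple(0 if Q[p][state][0] >= Q[p][state][1] else 1 for p in range(2))

outcome = simulate()
print(outcome)  # (0, 0) means mutual cooperation; (1, 1) mutual defection
```

Varying `alpha` relative to the payoff entries changes which outcome the learners lock into, which is the relationship the closed-form characterization makes precise.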