Investigating emergent communication through language games and multi-agent reinforcement learning

Paul Van Eecke, Katrien Beuls and Jérôme Botoko Ekila

Learning emergent communication is a topic of great interest to the computational linguistics community, as it provides a path towards robust, flexible and adaptive language processing in computational systems. Today, computational models of emergent communication are studied through two main methodological paradigms: multi-agent reinforcement learning (MARL) and the language game paradigm. While the two paradigms share the same main objectives and employ strikingly similar methods, interaction between their research communities has so far been surprisingly limited. This can to a large extent be ascribed to their use of different terminologies and experimental designs, which hinders the detection and interpretation of each other's results and progress.

In this talk, we aim to remedy this situation by (i) formulating the challenge of re-conceptualising the language game experimental paradigm in the framework of multi-agent reinforcement learning, and (ii) providing both an alignment between the terminologies of the two paradigms and a MARL-based reformulation of a canonical language game experiment. Tackling this challenge will enable future language game experiments to benefit from the rapid and promising methodological advances in the MARL community, while future MARL experiments on learning emergent communication will benefit from the insights and results gained through language game experiments. We strongly believe that this cross-pollination has the potential to lead to major breakthroughs in the modelling of how human-like languages can emerge and evolve in multi-agent systems.
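To make the notion of a canonical language game concrete, the sketch below simulates a minimal naming game: in each interaction a speaker names a topic object, a hearer interprets the name, and both agents update their lexicons by lateral inhibition, so that a shared vocabulary emerges across the population. This is only an illustrative sketch under our own assumptions; the class and function names (`Agent`, `play_game`) and the score parameters (0.5 initial score, 0.1 updates) are hypothetical choices for illustration, not details taken from the experiment discussed in the talk.

```python
import random

random.seed(42)

OBJECTS = ["obj-a", "obj-b", "obj-c"]


class Agent:
    """Holds a lexicon mapping each object to candidate words with scores."""

    def __init__(self):
        self.lexicon = {obj: {} for obj in OBJECTS}

    def name(self, obj):
        """Speaker role: use the highest-scoring word, inventing one if needed."""
        words = self.lexicon[obj]
        if not words:
            words[f"w{random.randrange(10**6)}"] = 0.5  # invent a new form
        return max(words, key=words.get)

    def interpret(self, word):
        """Hearer role: return the object this word best names, or None."""
        best, best_score = None, 0.0
        for obj, words in self.lexicon.items():
            score = words.get(word, 0.0)
            if score > best_score:
                best, best_score = obj, score
        return best

    def align(self, obj, word, success):
        """Lateral inhibition: reward the used pairing, punish its competitors."""
        words = self.lexicon[obj]
        if success:
            words[word] = min(1.0, words.get(word, 0.0) + 0.1)
            for w in words:
                if w != word:
                    words[w] = max(0.0, words[w] - 0.1)
        else:
            words[word] = max(0.0, words.get(word, 0.0) - 0.1)


def play_game(speaker, hearer):
    """One naming game: speaker names a topic, hearer interprets, both align."""
    topic = random.choice(OBJECTS)
    word = speaker.name(topic)
    success = hearer.interpret(word) == topic
    speaker.align(topic, word, success)
    if success:
        hearer.align(topic, word, True)
    else:
        hearer.lexicon[topic][word] = 0.5  # failed game: hearer adopts the word
    return success


agents = [Agent() for _ in range(5)]
history = []
for _ in range(2000):
    speaker, hearer = random.sample(agents, 2)
    history.append(play_game(speaker, hearer))

recent_success = sum(history[-300:]) / 300
print(f"communicative success over last 300 games: {recent_success:.2f}")
```

Reformulated in MARL terms, each interaction is an episode in which the speaker's utterance is an action, the hearer's interpretation is a response to an observation, and communicative success plays the role of a shared reward signal driving the score updates.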