Active Control of Complexity Growth in Language Games

Abstract: Social conventions are learned mostly at a young age, yet they differ from other learning domains, such as sensorimotor skills. The first people to set a convention simply picked one arbitrary option among several: a side of the road to drive on, the design of an electric plug, or a new word. As a consequence, when a new convention is being established in a population of interacting individuals, many competing options can arise, leading to growing complexity if many parallel inventions occur. How do we deal with this issue? Humans often exert active control over their learning situations, for example by selecting activities that are neither too complex nor too simple. In domains such as sensorimotor learning, this behavior has been shown to help people learn faster, better, and with fewer examples. Could such mechanisms also have an impact on the negotiation of social conventions? A particular example of a social convention is the lexicon: the words we associate with given meanings. Computational models of language emergence, called Language Games, have shown that a population of agents can build a common language through pairwise interactions alone. In particular, the Naming Game model focuses on the formation of a lexicon mapping words to meanings, and exhibits a typical burst of complexity before agents start discarding options and reach a final consensus. In this thesis, we introduce active learning and active control of complexity growth in the Naming Game, in the form of a topic choice policy: agents choose the meaning they want to talk about in each interaction. Several strategies are introduced, with different impacts on both the time needed to converge to a consensus and the amount of memory needed by individual agents. Firstly, we artificially constrain the memory of agents to avoid the local complexity burst. A few strategies are presented, some of which converge as fast as the standard case. Secondly, we formalize what agents need to optimize, based on a representation of the average state of the population. A couple of strategies inspired by this notion keep memory usage low without imposing hard constraints, while also yielding faster convergence. We then show that the resulting dynamics are close to an optimal behavior, expressed analytically as a lower bound on convergence time. Finally, we designed an online user experiment to collect data on how humans behave in the same setting; it shows that they do follow an active topic choice policy and do not choose topics at random. Contributions of this thesis also include a classification of existing Naming Game models and an open-source framework to simulate them.
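To make the setting concrete, the sketch below is a minimal Naming Game interaction loop with a pluggable topic choice policy, in the spirit described above. It is not the thesis's open-source framework, and the names (Agent, random_topic, active_topic, run) and the particular "active" heuristic are hypothetical stand-ins for illustration only, not the strategies studied in the thesis.

```python
import random
from collections import defaultdict

N_AGENTS = 20
N_MEANINGS = 10


class Agent:
    """Minimal Naming Game agent: for each meaning, a set of candidate words."""

    def __init__(self):
        self.lexicon = defaultdict(set)  # meaning -> set of known words

    def memory_size(self):
        # Total number of stored word-meaning associations (a simple complexity measure).
        return sum(len(words) for words in self.lexicon.values())


def random_topic(speaker, meanings):
    # Baseline policy: the speaker picks the topic uniformly at random.
    return random.choice(meanings)


def active_topic(speaker, meanings):
    # Illustrative "active" policy (an assumption, not one of the thesis strategies):
    # prefer meanings the speaker has already started negotiating.
    known = [m for m in meanings if speaker.lexicon[m]]
    return random.choice(known) if known else random.choice(meanings)


def interaction(speaker, hearer, meanings, choose_topic):
    topic = choose_topic(speaker, meanings)
    if not speaker.lexicon[topic]:
        # Invention: the speaker creates a brand-new word for this meaning.
        speaker.lexicon[topic].add(f"w{random.getrandbits(32):08x}")
    word = random.choice(sorted(speaker.lexicon[topic]))
    if word in hearer.lexicon[topic]:
        # Success: both agents keep only the winning word (alignment).
        speaker.lexicon[topic] = {word}
        hearer.lexicon[topic] = {word}
    else:
        # Failure: the hearer memorises the new word as a candidate.
        hearer.lexicon[topic].add(word)


def converged(agents, meanings):
    # Converged when every meaning has exactly one word, shared by all agents.
    ref = agents[0].lexicon
    return (all(len(ref[m]) == 1 for m in meanings)
            and all(a.lexicon[m] == ref[m] for a in agents for m in meanings))


def run(choose_topic, n_steps=200_000, seed=0):
    random.seed(seed)
    agents = [Agent() for _ in range(N_AGENTS)]
    meanings = list(range(N_MEANINGS))
    peak_memory = 0
    for t in range(n_steps):
        speaker, hearer = random.sample(agents, 2)
        interaction(speaker, hearer, meanings, choose_topic)
        peak_memory = max(peak_memory, max(a.memory_size() for a in agents))
        if converged(agents, meanings):
            return t, peak_memory
    return n_steps, peak_memory


if __name__ == "__main__":
    for name, policy in [("random topic", random_topic), ("active topic", active_topic)]:
        steps, peak = run(policy)
        print(f"{name}: converged after {steps} interactions, peak agent memory {peak}")
```

Running the two policies side by side illustrates the quantities discussed in the abstract: the number of interactions needed to reach consensus and the peak number of associations an agent has to store along the way.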
