An insightful way to model collective behaviors in a network is to assume that the single units are rational entities acting to selfishly optimize their own utility. This is the basic assumption of classical game theory: an efficient and widely used modeling set-up for socio-economic networks and, more recently, also for engineering systems. In the latter, the units' utilities are designed in such a way as to achieve desirable global behaviors (mechanism design). Formally, each unit is equipped with a utility function that maps its own state, as well as the states of its neighbors (possibly of the entire network), into a value. As time evolves, units modify their state at random times in order to possibly increase their own utility value. This action depends on the information available to the single units.

When full information is available, a unit can perform the so-called 'best response', which amounts to choosing the state currently maximizing its utility. For certain classes of games (e.g., potential games), classical results imply convergence to so-called Nash equilibria: configurations in which no unit has any incentive to change its own state. When full information on the current global state configuration is not available to the single units, other dynamics are of interest. Particularly relevant are those based on imitation mechanisms, whereby units compare their utility with that of their neighbors and, on this basis, may decide to copy the state of a better-performing neighbor. The analysis of such dynamics is much more complex than that of best response, and convergence to Nash equilibria is not guaranteed even for potential games. An intrinsic difficulty of imitation dynamics is that once a state disappears from the network, it will never be played by any unit again.
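As a concrete illustration, the sketch below simulates asynchronous imitation dynamics for a two-action coordination game on a ring network. The payoffs, the graph, and the update rule are illustrative assumptions, not a specific model from the literature.

```python
import random

def payoff(a, b):
    # Illustrative coordination game: a unit earns 1 per neighbor it matches.
    return 1.0 if a == b else 0.0

def utility(state, i, neighbors):
    # Utility of unit i: sum of pairwise payoffs against its neighbors.
    return sum(payoff(state[i], state[j]) for j in neighbors[i])

def imitation_step(state, neighbors, rng):
    # A randomly activated unit compares utilities with a random neighbor
    # and copies the neighbor's state if the neighbor performs better.
    i = rng.randrange(len(state))
    j = rng.choice(neighbors[i])
    if utility(state, j, neighbors) > utility(state, i, neighbors):
        state[i] = state[j]

rng = random.Random(0)
n = 20
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring graph
state = [rng.randrange(2) for _ in range(n)]  # random initial states in {0, 1}
for _ in range(2000):
    imitation_step(state, neighbors, rng)
```

Note that the update is purely imitative: if at some point every unit holds the same state, no further change is possible, which is exactly the absorbing behavior described above.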
This shows that, in general, even if the underlying game possesses a unique Nash equilibrium, convergence to it is not guaranteed from every initial condition. Under the assumption that every unit can possibly meet any other unit (fully mixed network) and that the number of units grows large, imitation dynamics, as well as many other evolutionary game dynamics, can be described through continuous 'Eulerian-type' variables (fractions of units in a certain state) governed by a deterministic system of ordinary differential equations (ODEs). Below we present specific thesis topics:
Stability analysis of evolutionary dynamics in mean-field deterministic models. The goal of the thesis is to analyze the stability of the solutions of evolutionary game dynamics modeled through ODEs in relation to the properties of the underlying game. More specifically, it aims at
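A standard example of the deterministic mean-field models in question is the replicator equation dx_i/dt = x_i((Ax)_i - x^T A x). The sketch below integrates it with a simple Euler scheme for an assumed 2x2 coordination game; the payoff matrix, initial condition, and step size are illustrative choices.

```python
import numpy as np

# Illustrative 2x2 coordination game: two strict (pure) Nash equilibria,
# so the replicator flow has two stable rest points at the simplex vertices.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def replicator_rhs(x):
    f = A @ x            # fitness of each state against the population mix
    phi = x @ f          # population-average fitness
    return x * (f - phi) # replicator vector field (sums to zero on the simplex)

x = np.array([0.6, 0.4])  # initial fractions of units in each state
dt = 0.01
for _ in range(5000):     # forward Euler up to time t = 50
    x = x + dt * replicator_rhs(x)
```

With a majority of units initially in state 0, the fraction x[0] is driven toward 1: the flow selects one of the two equilibria depending on the initial condition, illustrating why stability must be studied equilibrium by equilibrium.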
Stability analysis of evolutionary dynamics in mean-field stochastic models. In fully mixed networks, for time ranges that are large with respect to the size of the network, the deterministic ODE model for evolutionary dynamics is, in general, no longer an accurate approximation of the real behavior of the network. The correct description in this case is through a classical continuous-time Markov process. The goal of the thesis is to analyze the behavior of the Markov process and its relation with the solution of the corresponding deterministic ODE model. In particular, it aims at
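A minimal sketch of the finite-N stochastic counterpart, assuming a fully mixed population with two states, unit activation rate 1 per unit, and an illustrative Fermi-type imitation probability (all parameters are assumptions made for the example):

```python
import math
import random

def simulate(n_units, k0, t_max, beta=2.0, seed=0):
    # Continuous-time Markov chain on k = number of units in state 0.
    rng = random.Random(seed)
    k, t = k0, 0.0
    while 0 < k < n_units and t < t_max:
        t += rng.expovariate(n_units)  # total activation rate is n_units
        # Activated unit i samples a uniformly random unit j to compare with.
        i_state = 0 if rng.random() < k / n_units else 1
        j_state = 0 if rng.random() < k / n_units else 1
        if i_state == j_state:
            continue
        # Illustrative coordination payoffs: fraction of matching units.
        pay = {0: k / n_units, 1: 1 - k / n_units}
        gap = pay[j_state] - pay[i_state]
        # Fermi rule: imitate with probability increasing in the payoff gap.
        if rng.random() < 1.0 / (1.0 + math.exp(-beta * gap)):
            k += 1 if j_state == 0 else -1
    # k = 0 and k = n_units are absorbing: a vanished state never reappears.
    return k / n_units

frac = simulate(n_units=200, k0=140, t_max=50.0)
```

For moderate times and large N the empirical fraction tracks the mean-field ODE, but over long time ranges the chain is eventually absorbed at one of the monomorphic configurations, which is precisely the deviation from the deterministic picture that the topic addresses.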
Mutation and fixation probabilities for imitation dynamics. Imitation dynamics yield non-ergodic models: when all units in the network are in the same state, no new state can ever appear through imitation. In many contexts it is interesting to consider the possibility of spontaneous mutation, namely the possibility for all units (or for a part of them) to spontaneously change state without interacting with other units. This is a sort of exploration drift, which is useful in mechanism design to prevent the system from getting trapped in certain undesirable configurations. It is also quite natural when modeling, for instance, biological networks, where mutation is a rather natural assumption and imitation can be interpreted as an invasion by fitter units. When a small mutation term is added to the imitation dynamics, the Markov process becomes ergodic and thus possesses a unique stationary probability measure. Interesting phenomena are captured in the double limit process, when the noise vanishes and the size of the population grows large. Remarkably, the two limits may not commute in general. One can consider the so-called 'small noise limit' (vanishing noise with fixed population size) or, vice versa, the 'large scale limit' (population size growing large with fixed noise). The goal of the thesis is to study these two limit regimes in connection with the underlying game. In particular, it aims at
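The ergodicity argument can be made concrete on a birth-death chain for the number of units in a given state, with illustrative imitation rates plus a mutation rate eps. For eps > 0 every transition rate is positive, so the chain is irreducible and its unique stationary distribution can be computed from detailed balance; the specific rate functions below are assumptions chosen for the sketch.

```python
import numpy as np

def stationary(N, eps, beta=2.0):
    # Birth-death chain on k = 0..N units in state 0, x = k/N.
    x = np.arange(N + 1) / N
    # Imitation requires a mixed population (factor x*(1-x)) and is biased
    # toward the fitter state (illustrative exponential fitness); mutation
    # contributes eps-rate flips in both directions and makes the chain ergodic.
    up = x * (1 - x) * np.exp(beta * x) + eps * (1 - x)        # k -> k + 1
    down = x * (1 - x) * np.exp(beta * (1 - x)) + eps * x      # k -> k - 1
    # Detailed balance: pi_{k+1} / pi_k = up_k / down_{k+1}; work in logs
    # to avoid overflow for large N.
    log_pi = np.zeros(N + 1)
    for k in range(N):
        log_pi[k + 1] = log_pi[k] + np.log(up[k]) - np.log(down[k + 1])
    pi = np.exp(log_pi - log_pi.max())
    return pi / pi.sum()

pi_small = stationary(N=100, eps=1e-3)  # small mutation, fixed population size
```

For small eps the stationary measure concentrates near the monomorphic configurations k = 0 and k = N, and which of the two carries most of the mass (and how this depends on the order of the small-noise and large-scale limits) is exactly the kind of question the topic investigates.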
References:
M.J. Osborne and A. Rubinstein, "A Course in Game Theory", MIT Press, 1994.
D. Monderer and L. Shapley, "Potential Games", Games and Economic Behavior, 14, pp. 124-143, 1996.
J. Hofbauer and K. Sigmund, "Evolutionary Game Dynamics", Bulletin of the American Mathematical Society, 40 (4), pp. 479-519, 2003.
W.H. Sandholm, "Population Games and Evolutionary Dynamics", MIT Press, 2010.
D. Fudenberg and L.A. Imhof, "Monotone Imitation Dynamics in Large Populations", Journal of Economic Theory, 140 (1), pp. 229-245, 2008.
Type of mathematics involved: Graphs, Probability, Markov chains, Differential equations.