There are many directions for possible extensions:

- more complex games (e.g., more agents, options, and outcomes)
- a more diverse and complex range of interactions
- allowing for uncertainty (e.g., in the range of options, their likelihoods, and the payoff matrices)
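To make the last extension concrete, here is a minimal sketch (names and payoff values are our own, not from the chapter) of a 2×2 Prisoner's Dilemma whose payoffs are perturbed by random noise. Simulation then recovers the expected payoffs, showing that moderate uncertainty leaves the game's average strategic structure intact:

```python
import random

# Hypothetical baseline payoffs for the row player in a Prisoner's Dilemma
# (C = cooperate, D = defect); values are illustrative assumptions.
base_payoffs = {("C", "C"): 3, ("C", "D"): 0,
                ("D", "C"): 5, ("D", "D"): 1}

def noisy_payoff(actions, sd=0.5):
    """Return the row player's payoff, perturbed by Gaussian noise."""
    return base_payoffs[actions] + random.gauss(0, sd)

def expected_payoff(actions, n=10_000):
    """Estimate the expected payoff of an action pair by simulation."""
    return sum(noisy_payoff(actions) for _ in range(n)) / n

# On average, the noise washes out: expected payoffs stay near their
# baseline values, so defection still dominates in expectation.
for pair in base_payoffs:
    print(pair, round(expected_payoff(pair), 2))
```

More interesting cases arise when the noise is large relative to the payoff differences, or when the players are themselves uncertain about which payoff matrix they are facing.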
Social situations are often messier than non-social ones, but the corresponding models need not be more complex.
In this chapter, the analysis of strategic games, population dynamics, and social networks illustrated some starting points for such models.
More detailed models will primarily require a clear conceptual grasp of the research questions they are meant to address.
Each main topic of this chapter can easily fill entire books or careers.
Here are some pointers to sources of inspiration and ideas:
Axelrod & Hamilton (1981) is a classic on cooperation and the assessment of strategies in a tournament.
Nowé et al. (2012) provide an introduction to game theory and illustrate how reinforcement learning (RL) can be applied to repeated games and Markov games.
Szita (2012) further explores the connections between RL and games.
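As a taste of the RL-and-games connection explored in these sources, here is a hedged sketch (our own toy example, not taken from the cited chapters) of an epsilon-greedy action-value learner playing a repeated Prisoner's Dilemma against a fixed opponent. Against an unconditional cooperator, the learner's value estimate for defection should come to exceed that for cooperation:

```python
import random

# Illustrative payoff matrix for the row player (assumed values).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def learn_vs_fixed(opponent_action="C", episodes=2000,
                   alpha=0.1, epsilon=0.1):
    """Epsilon-greedy value learning against a fixed opponent strategy."""
    q = {"C": 0.0, "D": 0.0}  # estimated value of each own action
    for _ in range(episodes):
        # Explore with probability epsilon, otherwise exploit.
        if random.random() < epsilon:
            action = random.choice(["C", "D"])
        else:
            action = max(q, key=q.get)
        reward = PAYOFF[(action, opponent_action)]
        q[action] += alpha * (reward - q[action])  # incremental update
    return q

q = learn_vs_fixed("C")
# Defecting against a cooperator earns 5 per round vs. 3 for cooperating,
# so q["D"] should exceed q["C"] after learning.
```

Richer settings, such as opponents who condition on history (e.g., tit-for-tat) or state-based Markov games, require the full machinery discussed by Nowé et al. (2012) and Szita (2012).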
Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211(4489), 1390–1396. https://doi.org/10.1126/science.7466396
Nowé, A., Vrancx, P., & De Hauwere, Y.-M. (2012). Game theory and multi-agent reinforcement learning. In M. Wiering & M. van Otterlo (Eds.), Reinforcement learning: State-of-the-art (pp. 441–470). Springer. https://doi.org/10.1007/978-3-642-27645-3_14
Page, S. E. (2018). The model thinker: What you need to know to make data work for you. Basic Books.
Szita, I. (2012). Reinforcement learning in games. In M. Wiering & M. van Otterlo (Eds.), Reinforcement learning: State-of-the-art (pp. 539–577). Springer. https://doi.org/10.1007/978-3-642-27645-3_17
Wasserman, S., & Faust, K. (1994). Social network analysis: Methods and applications. Cambridge University Press. https://doi.org/10.1017/CBO9780511815478