How agents reason about other agents' beliefs, goals, and strategies — from k-level thinking to neural Theory of Mind and LLM-based recursive reasoning
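The k-level idea above is usually illustrated with the p-beauty contest: a level-0 player guesses the midpoint of the range, and each level-k player best-responds to a crowd assumed to be level-(k-1). A minimal sketch (parameter names are illustrative):

```python
def level_k_guess(k, p=2.0 / 3.0, level0=50.0):
    """Level-k guess in a p-beauty contest over [0, 100].

    Level-0 guesses the midpoint; a level-k player assumes everyone
    else is level-(k-1) and best-responds by guessing p times their
    guess.  Guesses shrink geometrically toward the equilibrium of 0.
    """
    guess = level0
    for _ in range(k):
        guess *= p  # best response to a level-(k-1) population
    return guess
```

Human subjects typically behave like level-1 or level-2 reasoners (guesses near 33 or 22), far from the Nash equilibrium of 0, which is why bounded recursion like this is a useful opponent model.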
How agents learn faster and more robustly by training on the right task at the right time — from hand-crafted curricula to adversarial environment generation
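One common automatic-curriculum strategy between those two extremes is learning-progress-based task selection: sample tasks in proportion to how fast performance on them is changing. A minimal sketch (the task names and the idea of using absolute progress as the weight are illustrative assumptions, not a specific published recipe):

```python
def curriculum_weights(progress):
    """Turn per-task learning progress into sampling weights.

    `progress` maps task name -> recent change in success rate.
    Tasks where performance is changing fastest (in either direction)
    get sampled most; mastered or impossible tasks, where progress is
    near zero, are sampled least.
    """
    magnitudes = {task: abs(p) for task, p in progress.items()}
    total = sum(magnitudes.values()) or 1.0  # avoid division by zero
    return {task: m / total for task, m in magnitudes.items()}
```

A trainer would draw the next task from this distribution each episode, so the curriculum tracks the frontier of the agent's competence automatically.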
How mean field theory lets you solve game-theoretic problems with millions of agents by replacing individual interactions with a statistical summary of the crowd
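The statistical-summary idea can be sketched with a two-route congestion game: instead of tracking millions of individual drivers, each driver reacts only to x, the fraction of the population on route 0. The costs and the fictitious-play-style averaging below are illustrative assumptions:

```python
def mean_field_equilibrium(base=(1.0, 2.0), congestion=3.0, steps=200):
    """Find the mean-field equilibrium of a two-route congestion game.

    Each of infinitely many drivers sees only the population summary x
    (fraction on route 0), never any individual opponent.  We iterate a
    best response to the crowd and average over time; x converges to the
    split where both routes cost the same.
    """
    x = 0.5  # initial population fraction on route 0
    for t in range(steps):
        cost0 = base[0] + congestion * x          # congestion cost, route 0
        cost1 = base[1] + congestion * (1.0 - x)  # congestion cost, route 1
        best = 1.0 if cost0 < cost1 else 0.0      # best response to the crowd
        x += (best - x) / (t + 2)                 # running average of responses
    return x
```

For these payoffs the equilibrium split solves 1 + 3x = 2 + 3(1 - x), i.e. x = 2/3, and the same fixed-point structure scales to arbitrarily many agents because only the summary x ever enters the computation.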
How cooperative game theory and Shapley values provide a mathematically principled way to assign credit among collaborating agents, with practical Python implementations and connections to modern LLM agent teams
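The Shapley value averages each agent's marginal contribution over every order in which the team could have been assembled. An exact (exponential-time) sketch, with a toy two-agent team whose roles and payoffs are invented for illustration:

```python
from itertools import permutations


def shapley_values(players, value):
    """Exact Shapley values for a characteristic-function game.

    `value` maps a frozenset of players to the coalition's worth.
    Each player's Shapley value is their marginal contribution
    averaged over all join orders of the grand coalition.
    """
    shap = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            shap[p] += value(frozenset(coalition)) - before
    return {p: total / len(orders) for p, total in shap.items()}


def team_value(coalition):
    # Hypothetical LLM agent team: a coder alone ships something,
    # a reviewer alone ships nothing, together they ship more.
    if {"coder", "reviewer"} <= coalition:
        return 10.0
    if "coder" in coalition:
        return 6.0
    return 0.0
```

Here the coder gets (6 + 10) / 2 = 8 and the reviewer (4 + 0) / 2 = 2, and the shares sum exactly to the grand coalition's worth of 10 (the efficiency axiom). Exact enumeration is n! in the number of agents, so practical systems sample permutations instead.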
How to specify complex, multi-step tasks for AI agents using finite-state automata called reward machines, enabling non-Markovian rewards and compositional task structure
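A reward machine is just a finite-state automaton over high-level event labels, with rewards attached to its transitions. A minimal sketch for a "get the key, then open the door" task (the state names and event labels are illustrative):

```python
class RewardMachine:
    """Minimal reward machine for a key-then-door task.

    The automaton's state tracks task progress, so reward can depend
    on history (non-Markovian in the environment state): reaching the
    door only pays off if the key event happened first.
    """

    def __init__(self):
        # (machine state, event label) -> (next machine state, reward)
        self.delta = {
            ("u0", "key"): ("u1", 0.0),
            ("u1", "door"): ("u_acc", 1.0),
        }
        self.state = "u0"

    def step(self, event):
        # Unlisted (state, event) pairs are self-loops with zero reward.
        next_state, reward = self.delta.get((self.state, event), (self.state, 0.0))
        self.state = next_state
        return reward
```

Touching the door before the key yields nothing, while the same event after the key yields 1.0: the machine state, not the environment state, carries the task memory, and larger tasks compose by wiring more such automaton states together.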
How successor representations — the elegant middle ground between model-free and model-based RL — enable fast adaptation and transfer across tasks
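The successor representation M(s, s') is the expected discounted future occupancy of s' starting from s, learnable by TD just like a value function. A minimal tabular sketch, assuming one-hot states and a fixed policy generating the transitions:

```python
def sr_td_update(M, s, s_next, gamma=0.95, alpha=0.1):
    """One TD update of a tabular successor representation.

    M[s][s2] estimates the expected discounted number of future visits
    to s2 starting from s.  The TD target is the immediate occupancy
    indicator plus the discounted SR of the next state.
    """
    n = len(M)
    for s2 in range(n):
        indicator = 1.0 if s2 == s else 0.0
        target = indicator + gamma * M[s_next][s2]
        M[s][s2] += alpha * (target - M[s][s2])
    return M


def values(M, reward):
    """V(s) = sum over s2 of M[s][s2] * reward(s2).

    Because dynamics (M) and reward are factored apart, swapping in a
    new reward vector re-prices every state instantly, with no
    relearning of the dynamics -- the source of fast transfer.
    """
    return [sum(m * r for m, r in zip(row, reward)) for row in M]
```

This factorization is what places the SR between the two classic regimes: like model-free methods it caches long-run statistics rather than one-step models, but like model-based methods it adapts immediately when the reward changes.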