Learn how agents can master complex tasks from pre-collected experience logs without ever touching a live environment, using conservative Q-learning, implicit Q-learning, and the Decision Transformer
Explore Karl Friston's Free Energy Principle: a unified theory where agents minimize surprise through belief updating and action, offering an alternative foundation to reward-based reinforcement learning
How AI agents generate, execute, and refine code as a reasoning medium, from classical program synthesis to modern REPL-based agent loops and SWE-bench architectures
How AI agents can learn continuously across tasks and environments without overwriting what they already know — the science and practice of lifelong machine learning
Explore multi-agent reinforcement learning: how multiple RL agents learn simultaneously, coordinate under uncertainty, and produce emergent strategies in cooperative, competitive, and mixed-motive settings
Understand how AI agents escape the curse of shortsightedness by learning reusable subgoals and temporally extended actions through the Options Framework
Learn how DSPy reframes prompt engineering as a compilation problem, letting agents automatically discover better instructions, few-shot examples, and reasoning strategies through optimization
How AI agents can move beyond correlation to understand cause and effect, enabling more robust planning, better tool use, and reliable interventions in the real world