TUTORIALS
Multi-objective optimization and its applications for Agent Systems
Responsible Mechanism Design
Don’t Trust Your Agents, Verify Them: Strategic Verification with VITAMIN
A Concise Introduction to LLM-based Multi-agent Systems
Domain Model Learning for Automated Planning
Tutorial on Optimization Techniques for Agent Coordination
Formal Methods for Safe Reinforcement Learning
Neuro-Symbolic Decision Making for Autonomous Agents
A Decade of Sparse Training: Why Do We Still Stick to Dense Training?
Approaches for Explainability in Autonomous Agents: From Intuitive Post-hoc Methods to Causal Understanding
Multi-objective optimization and its applications for Agent Systems
Monday, A.M.
Responsible Mechanism Design
Monday, P.M.
Don’t Trust Your Agents, Verify Them: Strategic Verification with VITAMIN
Monday, A.M.
A Concise Introduction to LLM-based Multi-agent Systems
Monday, P.M.
Domain Model Learning for Automated Planning
Tuesday, A.M.
Tutorial on Optimization Techniques for Agent Coordination
In this tutorial, we will discuss three fundamental approaches that have been proposed in the Multi-Agent Systems (MAS) literature to tackle coordination problems: one based on Distributed Constraint Optimization Problems (DCOPs), one based on Decentralized Auctions (DA), and one based on Coalition Formation (CF).
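To give a flavour of the auction-based approach, the sketch below implements a minimal single-round decentralized auction for task allocation; the bidding rule (negative travel cost) and the agent and task data are illustrative assumptions, not material from the tutorial.

```python
# Minimal sketch of auction-based task allocation (illustrative only;
# the bidding rule and data are assumed, not taken from the tutorial).
from math import dist

def run_auctions(agents, tasks):
    """Allocate each task to the highest bidder, one auction round per task.

    agents: dict mapping agent name -> (x, y) position
    tasks:  list of (x, y) task locations
    """
    allocation = {name: [] for name in agents}
    for task in tasks:
        # Each agent bids the negative of its travel cost, so the
        # closest agent submits the highest bid and wins the task.
        bids = {name: -dist(pos, task) for name, pos in agents.items()}
        winner = max(bids, key=bids.get)
        allocation[winner].append(task)
    return allocation

if __name__ == "__main__":
    agents = {"a1": (0.0, 0.0), "a2": (5.0, 5.0)}
    tasks = [(1.0, 1.0), (4.0, 6.0), (0.0, 2.0)]
    print(run_auctions(agents, tasks))
```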
Monday, P.M.
Formal Methods for Safe Reinforcement Learning
The goal of the tutorial is to present the emerging relationship between Reinforcement Learning (RL) and Formal Methods (FM).
The topic is an active area of research that builds a bridge between the RL and verification communities. In a world where the safety of AI systems is increasingly critical, bridging the gap between these communities is a crucial step toward developing safe, reliable, and trustworthy AI.
Reinforcement learning aims to optimize the behaviour of an agent in an unknown environment. However, in real-life scenarios, agents often have to avoid critical risks such as breaking laws or injuring humans; constrained RL was developed for this purpose. We introduce the basics of (constrained) RL, presenting the classic formalism and briefly surveying state-of-the-art algorithms. We then show that some natural temporal specifications cannot be captured by that classic formalism, and thus introduce Linear Temporal Logic (LTL), a logic widely used to describe system specifications, as an alternative way to express objectives and constraints in RL.
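For concreteness, the standard constrained-MDP objective can be written as follows, alongside an example of the kind of LTL specification that lies outside it; the cost signal c, threshold d, and the particular formula are the usual textbook illustrations rather than the tutorial's own material.

```latex
% Constrained MDP: maximize expected return subject to a bound on expected cost.
\max_{\pi}\; \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t,a_t)\Big]
\quad \text{s.t.} \quad
\mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{\infty} \gamma^{t}\, c(s_t,a_t)\Big] \le d

% An LTL specification this formalism does not capture directly:
% "always avoid unsafe states, and visit the goal infinitely often".
\varphi \,=\, \mathbf{G}\,\lnot\mathit{unsafe} \;\wedge\; \mathbf{G}\,\mathbf{F}\,\mathit{goal}
```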
We introduce the basics of LTL and show how to solve the model-checking and synthesis problems for LTL specifications. Finally, we show how to use verification techniques to develop algorithms for RL with LTL objectives and rewards. Several state-of-the-art methods will thus be introduced (e.g., constrained MDPs, shielding, etc.), with a focus on those that yield formal guarantees. A range of real-life scenarios will be used throughout the tutorial to illustrate the core concepts of RL and LTL, and to show how to apply the techniques and methods developed. These examples range from robot motion planning and the design of a controller for water tanks to developing an AI for games such as Pac-Man.
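As a taste of one of the methods named above, here is a minimal sketch of shielding, where a shield intercepts each action the learner proposes and substitutes a safe one whenever the proposal would violate the safety property G ¬unsafe; the grid world, the set of unsafe cells, and the random stand-in policy are all illustrative assumptions.

```python
# Minimal sketch of shielding (illustrative assumptions throughout):
# a shield overrides any proposed action that would enter an unsafe
# cell, enforcing the LTL safety property G !unsafe at runtime.
import random

UNSAFE = {(1, 1), (2, 3)}  # assumed unsafe cells
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step(state, action):
    dx, dy = ACTIONS[action]
    return (state[0] + dx, state[1] + dy)

def shield(state, proposed):
    """Return the proposed action if safe, otherwise a safe alternative."""
    if step(state, proposed) not in UNSAFE:
        return proposed
    safe = [a for a in ACTIONS if step(state, a) not in UNSAFE]
    return random.choice(safe)  # assumes some safe action always exists

state = (0, 0)
for _ in range(10):
    proposed = random.choice(list(ACTIONS))  # stand-in for an RL policy
    state = step(state, shield(state, proposed))
    assert state not in UNSAFE  # the shield guarantees G !unsafe
```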
Tuesday, P.M.
Neuro-Symbolic Decision Making for Autonomous Agents
Participants will explore how symbolic task knowledge can be represented in various ways, from reward machines to structured logic programs, enabling declarative representations of actions, constraints, and preferences for decision making.
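To make the reward-machine representation concrete, the sketch below encodes a two-state reward machine for a toy "get the key, then open the door" task; the propositions, transitions, and reward values are illustrative assumptions rather than an example drawn from the tutorial.

```python
# Minimal sketch of a reward machine (illustrative): a finite-state
# machine over abstract propositions that issues rewards as the agent
# progresses through the task "get the key, then open the door".

TRANSITIONS = {
    # (machine_state, proposition) -> (next_state, reward)
    ("start",   "key"):  ("has_key", 0.1),
    ("has_key", "door"): ("done",    1.0),
}

def rm_step(rm_state, true_props):
    """Advance the reward machine on the propositions true this step."""
    for prop in true_props:
        if (rm_state, prop) in TRANSITIONS:
            return TRANSITIONS[(rm_state, prop)]
    return rm_state, 0.0  # no labelled event: stay put, zero reward

# Example trace: the agent sees nothing, then the key, then the door.
state, total = "start", 0.0
for props in [set(), {"key"}, set(), {"door"}]:
    state, r = rm_step(state, props)
    total += r
print(state, total)  # -> done 1.1
```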
The core of the tutorial presents a critical overview of leading neuro-symbolic reinforcement learning (NeSyRL) frameworks that integrate these symbolic abstractions into RL algorithms, producing autonomous agents that balance interpretability, safe generalization, and data efficiency. Practical examples from both single- and multi-agent scenarios will complement the theoretical discussion, equipping attendees with methods and tools for neuro-symbolic decision making.
Finally, the tutorial will highlight current trends and open challenges that are shaping the future of this rapidly evolving research field.
Tuesday, A.M.
A Decade of Sparse Training: Why Do We Still Stick to Dense Training?
Tuesday, P.M.
Approaches for Explainability in Autonomous Agents: From Intuitive Post-hoc Methods to Causal Understanding
As autonomous agents increasingly rely on complex decision-making mechanisms that are hard to interpret, explainability is often proposed as a prerequisite for Trustworthy and Responsible AI and for effective human–agent interaction. Despite significant progress in the field of Explainable Artificial Intelligence (XAI), explainability in agents remains conceptually fragmented: generic post-hoc explanation techniques, such as feature importance methods, coexist with agent-level explanation approaches that rely on internal representations of decision-making, such as policies, rewards, goals, and intentions, often without a clear understanding of how these approaches relate to one another.
This tutorial provides a structured, agent-centric introduction to explainability. It examines how intuitive post-hoc explanation techniques such as SHAP and LIME work, with practical hands-on examples, discusses why they are widely used in agent settings, and identifies why they often fail to provide causal insight into agent behaviour. Building on this analysis, the tutorial critically examines how causality constrains what explanations can meaningfully provide, and how alternative explanation approaches address these limitations.
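By way of preview, a hands-on example in this spirit might look like the sketch below, which fits a small classifier and attributes its predictions to input features with SHAP; the toy dataset and model choice are our assumptions, and the tutorial's own examples may well differ (SHAP output shapes also vary across library versions).

```python
# Sketch of a post-hoc feature-importance explanation with SHAP
# (toy data and model are assumptions for illustration; exact SHAP
# output shapes vary across shap versions).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
import shap

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Each entry attributes one instance's prediction to the four features;
# large absolute values mark the features that drove the prediction.
print(shap_values)
```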
Furthermore, this tutorial surveys representative families of explanation methods used in autonomous agents, including feature-based post-hoc explanations, contrastive and counterfactual explanations, and intention-oriented approaches. For each family, the tutorial presents practical examples and discusses underlying assumptions, explanatory scope, and practical trade-offs, highlighting why different methods suit different kinds of agent behaviours and settings. Rather than advocating for a particular framework, we discuss formal approaches that can be used to clarify how these explanation methods relate to one another.
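As a concrete illustration of the contrastive/counterfactual family, the naive sketch below searches for the smallest single-feature change that flips a classifier's prediction; real counterfactual methods optimise distance and plausibility far more carefully, and the dataset, model, and search procedure here are all assumptions for illustration.

```python
# Naive counterfactual search (illustrative sketch): perturb one
# feature at a time and report the smallest shift that changes the
# model's predicted class.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

def one_feature_counterfactual(model, x, steps=50, max_shift=3.0):
    """Smallest single-feature shift that changes the predicted class."""
    base = model.predict(x.reshape(1, -1))[0]
    best = None
    for i in range(len(x)):
        for delta in np.linspace(-max_shift, max_shift, steps):
            cf = x.copy()
            cf[i] += delta
            if model.predict(cf.reshape(1, -1))[0] != base:
                if best is None or abs(delta) < abs(best[1]):
                    best = (i, delta)
    return best  # (feature index, shift) or None if nothing flips it

print(one_feature_counterfactual(model, X[0].copy()))
```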
In summary, the goal of this tutorial is to provide participants with a clear understanding of what different explanation methods for agents can and cannot explain, when they are appropriate, and how the different approaches can be compared.