TUTORIALS

Multi-objective optimization and its applications for Agent Systems
Responsible Mechanism Design
Don’t Trust Your Agents, Verify Them: Strategic Verification with VITAMIN
A Concise Introduction to LLM-based Multi-agent Systems
Domain Model Learning for Automated Planning


Tutorial on Optimization Techniques for Agent Coordination
Formal Methods for Safe Reinforcement Learning
Neuro-Symbolic Decision Making for Autonomous Agents
A Decade of Sparse Training: Why Do We Still Stick to Dense Training?
Approaches for Explainability in Autonomous Agents: From Intuitive Post-hoc Methods to Causal Understanding

Multi-objective optimization and its applications for Agent Systems

This tutorial deals with Pareto-based multi-objective optimization and its applications to single- and multi-agent systems. Pareto optimality is a well-known approach to supporting decision-making when objectives conflict. It aims to find the Pareto-optimal set and its associated front: the set of solutions that cannot be improved in any objective without deteriorating performance in at least one other objective. The resulting Pareto front exposes the performance trade-offs associated with the Pareto-optimal solutions. This half-day tutorial aims to introduce participants to: a) the fundamentals of Pareto optimality, b) several leading algorithms for solving such problems, c) applications to agent systems, d) the advantages of this approach, and e) open research needs.
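
As a minimal illustration of the definition above, the following sketch computes the Pareto-optimal set of a small candidate set by direct dominance checks. It assumes all objectives are maximized and the points are distinct; the candidate values are invented for illustration, and practical solvers (e.g. evolutionary algorithms such as NSGA-II) scale far beyond this brute-force check.

```python
def pareto_front(points):
    """Return the Pareto-optimal subset of `points`, assuming every
    objective is to be maximized and points are distinct."""
    front = []
    for p in points:
        # p is dominated if some other point q is at least as good in
        # every objective and strictly better in at least one
        dominated = any(
            q != p and all(qi >= pi for qi, pi in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Two conflicting objectives, e.g. (task reward, battery saved)
candidates = [(3, 1), (2, 2), (1, 3), (1, 1), (2, 1)]
print(pareto_front(candidates))  # → [(3, 1), (2, 2), (1, 3)]
```

The three surviving points form the Pareto front: improving one objective in any of them requires giving up the other.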

Monday, A.M.

Responsible Mechanism Design

Collective decision-making processes have a common pitfall: when things go awry, it is usually hard to identify a single person who should be blamed for the harmful outcome of a collective decision. Could collective decision-making processes be designed to avoid this? This question is at the core of responsible mechanism design, a new interdisciplinary area of research at the intersection of artificial intelligence, game theory, logic, and philosophy, to be introduced in this tutorial.

Monday, P.M.

Don’t Trust Your Agents, Verify Them: Strategic Verification with VITAMIN

This tutorial introduces VITAMIN, a modular and extensible framework for the strategic verification of multi-agent systems. It provides participants with a clear understanding of strategic logics and agent models, and hands-on experience with modelling and model checking strategic and temporal properties. The tutorial targets both newcomers and experienced researchers interested in practical strategic verification, with a focus on extensibility in logics and models.

Monday, A.M.

A Concise Introduction to LLM-based Multi-agent Systems

Large language models (LLMs) are increasingly used as core components of multi-agent systems, enabling new forms of interaction, coordination, and collective decision-making. This tutorial provides a concise, agent-centric introduction to LLM-based multi-agent systems (MAS), focusing on the interplay between LLMs and classical multi-agent principles. It highlights how multi-agent mechanisms—such as debate, role specialization, coordination protocols, and strategic interaction—can address key limitations of single LLM agents, including brittle reasoning and lack of robustness, as well as how LLMs can serve as powerful infrastructure for communication and coordination in multi-agent settings. Emphasis is placed on connections to core AAMAS concepts such as decision-making, incentives, interaction dynamics, and emergent behavior, offering a unified perspective on this rapidly evolving research area.

Monday, P.M.

Domain Model Learning for Automated Planning

This tutorial covers key concepts, methods, and algorithms for learning domain models for planning. We will also present available frameworks and tools for domain model learning. Participants will gain a basic understanding of the area, allowing them to see how it can fit their own research agendas or applications, and how to start doing research in it.

Tuesday, A.M.

Tutorial on Optimization Techniques for Agent Coordination

Teams of agents often have to coordinate their decisions in a distributed manner to achieve both individual and shared goals. Examples include service-oriented computing, sensor network problems, and smart device coordination problems. Such problems can be formalized and solved in different ways, but in general the multi-agent coordination process is non-trivial and NP-hard.
In this Tutorial on Optimization Techniques for Agent Coordination, we will discuss three fundamental approaches that have been proposed in the Multi-Agent Systems (MAS) literature to tackle coordination problems: one based on Distributed Constraint Optimization Problems (DCOPs), one based on Decentralized Auctions (DA), and one based on Coalition Formation (CF).
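
To make the DCOP formulation concrete, here is a toy instance with three hypothetical agents, binary domains, and two illustrative binary constraints, solved by centralized brute force. The agents and cost functions are invented for this sketch; the distributed algorithms covered in the tutorial (e.g. DPOP, Max-Sum) solve such problems without this exhaustive centralized enumeration.

```python
from itertools import product

# Toy DCOP: each agent picks a value from its domain; binary
# constraints assign a cost to each pair of choices.
domains = {"a1": [0, 1], "a2": [0, 1], "a3": [0, 1]}

def cost(assign):
    # hypothetical constraints: a1 and a2 prefer to differ,
    # while a2 and a3 prefer to agree
    c = 0
    c += 0 if assign["a1"] != assign["a2"] else 2
    c += 0 if assign["a2"] == assign["a3"] else 1
    return c

agents = list(domains)
best = min(
    (dict(zip(agents, vals)) for vals in product(*domains.values())),
    key=cost,
)
print(best, cost(best))  # an optimal joint assignment with cost 0
```

Even in this three-agent example the joint search space is the product of all domains, which is why the exponential blow-up motivates the distributed and approximate techniques discussed in the tutorial.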

Monday, P.M.

Formal Methods for Safe Reinforcement Learning

The goal of the tutorial is to present the emerging relationship between Reinforcement Learning (RL) and Formal Methods (FM).

The topic is an active area of research that builds a bridge between the RL and verification communities. In a world where the safety of AI systems is becoming more and more critical, bridging the gap between these communities is a crucial step toward developing safe, reliable, and trustworthy AI.

Reinforcement learning aims to optimize the behaviour of an agent in an unknown environment. However, in real-life scenarios, agents often have to avoid critical risks such as breaking laws or injuring humans; constrained RL was developed for this purpose. We introduce the basics of (constrained) RL, giving the classic formalism and quickly surveying state-of-the-art algorithms. We then show that some natural temporal specifications cannot be captured by that classic formalism, and introduce Linear Temporal Logic (LTL), a logic widely used to describe system specifications, as an alternative way to express objectives and constraints in RL.

We introduce the basics of LTL and show how to solve the model-checking and synthesis problems for LTL specifications. Finally, we show how to use verification techniques to develop algorithms for RL with LTL objectives and rewards. Several state-of-the-art methods will be introduced (e.g., constrained MDPs, shielding), with a focus on those that yield formal guarantees. A range of real-life scenarios will be used throughout the tutorial to illustrate the core concepts of RL and LTL and to show how to apply the techniques presented, ranging from robot motion planning and designing a controller for water tanks to developing an AI for games like Pac-Man.
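
As a rough illustration of the shielding idea mentioned above, the sketch below intercepts an agent's chosen action and overrides it whenever it would violate a safety specification ("never enter a hazard cell"). The gridworld, hazard set, and fallback rule are hypothetical placeholders, not a method from the tutorial itself.

```python
# Shield for a toy gridworld: the safety spec forbids entering any
# hazard cell, so unsafe actions are replaced before execution.
HAZARDS = {(1, 1)}
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def shield(state, action):
    """Return `action` if it satisfies the spec, else a safe alternative."""
    def next_of(a):
        dx, dy = ACTIONS[a]
        return (state[0] + dx, state[1] + dy)

    if next_of(action) not in HAZARDS:
        return action
    # override with any action the specification allows
    safe = [a for a in ACTIONS if next_of(a) not in HAZARDS]
    return safe[0]

print(shield((1, 0), "up"))     # (1, 1) is a hazard, so "up" is overridden
print(shield((0, 0), "right"))  # safe, passed through unchanged
```

Methods with formal guarantees construct such shields automatically from the LTL specification and a model of the environment, rather than hand-coding the check as done here.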

Tuesday, P.M.

Neuro-Symbolic Decision Making for Autonomous Agents

This tutorial provides a theoretical and practical introduction to neurosymbolic decision making, with a particular focus on neurosymbolic reinforcement learning (NeSyRL) for autonomous agents. This emerging paradigm combines the strengths of symbolic reasoning (expressive abstraction and generalization) with the adaptability of deep RL under uncertainty.

Participants will explore how symbolic task knowledge can be represented in various ways, from reward machines to structured logic programs, enabling declarative representations of actions, constraints, and preferences for decision making.
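
As a deliberately minimal illustration of one such representation, the following sketch encodes the hypothetical task "reach the goal, but only after picking up the key" as a reward machine: a finite-state machine over high-level events that emits reward on its transitions. The states and events are invented for this example.

```python
# Minimal reward-machine sketch. Unlisted (state, event) pairs
# leave the machine state unchanged and emit zero reward.
TRANSITIONS = {
    # (machine state, event) -> (next state, reward)
    ("u0", "key"):  ("u1", 0.0),
    ("u1", "goal"): ("u2", 1.0),  # reward only once the key is held
}

def step(state, event):
    return TRANSITIONS.get((state, event), (state, 0.0))

state, total = "u0", 0.0
for event in ["goal", "key", "goal"]:  # reaching the goal early earns nothing
    state, r = step(state, event)
    total += r
print(state, total)  # → u2 1.0
```

The machine makes the task's temporal structure explicit to the learner: the non-Markovian objective over raw observations becomes Markovian once the machine state is part of the agent's state.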

The core of the tutorial presents a critical overview of leading NeSyRL frameworks that integrate these symbolic abstractions into RL algorithms, producing autonomous agents that balance interpretability, safe generalization, and data efficiency. Practical examples from both single- and multi-agent scenarios will complement the theoretical discussion, equipping attendees with methods and tools for neurosymbolic decision making.
Finally, the tutorial will highlight current trends and open challenges that are shaping the future of this rapidly evolving research field.

Tuesday, A.M.

A Decade of Sparse Training: Why Do We Still Stick to Dense Training?

This tutorial targets researchers, practitioners, and advanced students in machine learning who seek to reduce the computational and energy costs of training large neural networks without sacrificing performance. Participants will learn the theory, algorithms, and system-level aspects of dynamic sparse training (DST) across supervised learning, reinforcement learning, generative Artificial Intelligence (AI), and truly sparse implementations. The tutorial combines algorithmic foundations with practical demonstrations using open-source code and commodity hardware, offering an innovative end-to-end perspective that connects cutting-edge research to executable systems. By the end, attendees will understand DST’s performance-efficiency trade-offs, gain hands-on experience, and join a dedicated Slack community to continue discussions, share results, and collaborate on advancing DST toward sustainable, Green AI.

Tuesday, P.M.

Approaches for Explainability in Autonomous Agents: From Intuitive Post-hoc Methods to Causal Understanding

As autonomous agents increasingly rely on complex decision-making mechanisms that are hard to interpret, explainability is often proposed as a prerequisite for Trustworthy and Responsible AI and for achieving effective human–agent interaction. Despite significant progress in the field of Explainable Artificial Intelligence (XAI), explainability in agents remains conceptually fragmented: generic post-hoc explanation techniques such as feature importance methods coexist with agent-level explanation approaches that rely on internal representations of decision-making, such as policies, rewards, goals, and intentions, often without a clear understanding of how these approaches relate to each other.

This tutorial provides a structured, agent-centric introduction to explainability. It examines how intuitive post-hoc explanation techniques such as SHAP and LIME work, with practical hands-on examples; discusses why they are widely used in agent settings; and identifies why they often fail to provide causal insight into agent behaviour. Building on this analysis, the tutorial critically examines how causality constrains what explanations can meaningfully provide, and how alternative explanation approaches address these limitations.
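
To convey the flavour of such post-hoc feature-importance techniques, the sketch below uses a simple perturbation scheme on a toy black-box model: shuffle one input feature at a time and measure the drop in accuracy. This is only the underlying intuition, not SHAP or LIME themselves, which compute more principled attributions; the model and data are placeholders.

```python
import random

random.seed(0)

def model(x):  # toy "black box": only feature 0 actually matters
    return 1 if x[0] > 0.5 else 0

X = [[random.random(), random.random()] for _ in range(200)]
y = [model(x) for x in X]

def accuracy(data):
    return sum(model(x) == t for x, t in zip(data, y)) / len(y)

base = accuracy(X)
drops = []
for f in range(2):
    # destroy the information in feature f by shuffling its column
    col = [x[f] for x in X]
    random.shuffle(col)
    perturbed = [x[:f] + [v] + x[f + 1:] for x, v in zip(X, col)]
    drops.append(base - accuracy(perturbed))
print(drops)  # large drop for feature 0, no drop for feature 1
```

The sketch also hints at the limitation discussed above: the shuffle test reports which inputs the model is sensitive to, not why the agent acted as it did, which is exactly the gap that causal and intention-oriented explanations aim to fill.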

Furthermore, this tutorial surveys representative families of explanation methods used in autonomous agents, including feature-based post-hoc explanations, contrastive and counterfactual explanations, and intention-oriented approaches. For each family, the tutorial presents practical examples and discusses underlying assumptions, explanatory scope, and practical trade-offs, highlighting why different methods apply to different kinds of agent behaviours and settings. Rather than advocating for a particular framework, we discuss formal approaches that can be used to clarify how these explanation methods relate to each other.

In summary, the goal of this tutorial is to provide participants with a clear understanding of what different explanation methods for agents can and cannot explain, when they are appropriate, and how the different approaches can be compared.

Tuesday, A.M.