Keynote Speakers
Konstantinos Sagonas
Senior Lecturer/Associate Professor
Department of Information Technology; Division of Computer Systems
Uppsala University
Title
Optimal Algorithms for Stateless Model Checking
Abstract
Stateless model checking (SMC) is a fully automatic testing and verification technique with low memory requirements. It checks concurrent programs for safety errors (e.g., program crashes or assertion violations) by systematically exploring all possible thread schedulings. SMC becomes effective when coupled with Dynamic Partial Order Reduction (DPOR), a runtime technique that introduces an equivalence on schedulings and reduces the amount of exploration. DPOR algorithms that are optimal are particularly effective: they guarantee to explore exactly one execution from each equivalence class, thereby avoiding redundant work and sometimes achieving exponential reduction over non-optimal techniques.
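The equivalence the abstract refers to can be illustrated on a toy example. The sketch below (a hypothetical three-event program, not from the talk) enumerates all thread schedulings that respect program order and then groups them into equivalence classes, using the standard observation that a Mazurkiewicz trace is determined by the relative order of its dependent events; an optimal DPOR algorithm would explore exactly one scheduling per class.

```python
from itertools import permutations

# Hypothetical toy program: thread 0 writes x; thread 1 writes y, then reads x.
# Each event is (thread_id, variable, operation).
EVENTS = [(0, "x", "write"), (1, "y", "write"), (1, "x", "read")]

def dependent(a, b):
    # Two events conflict iff they are in the same thread (program order)
    # or touch the same variable with at least one write.
    return a[0] == b[0] or (a[1] == b[1] and "write" in (a[2], b[2]))

def schedulings(events):
    # All interleavings that respect each thread's program order.
    for perm in permutations(events):
        if all(perm.index(a) < perm.index(b)
               for i, a in enumerate(events)
               for b in events[i + 1:] if a[0] == b[0]):
            yield perm

def trace(sched):
    # Fingerprint of the equivalence class: the set of ordered
    # dependent pairs fully determines the Mazurkiewicz trace.
    return frozenset((a, b) for i, a in enumerate(sched)
                     for b in sched[i + 1:] if dependent(a, b))

all_scheds = list(schedulings(EVENTS))
classes = {trace(s) for s in all_scheds}
print(len(all_scheds), "interleavings,", len(classes), "equivalence classes")
# → 3 interleavings, 2 equivalence classes
```

Here the write of y commutes with the write of x, so two of the three interleavings are equivalent; only the order of the two accesses to x matters.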
This invited talk will present a variety of recently proposed techniques for stateless model checking, focusing on DPOR algorithms that provide (different kinds of) optimality guarantees. We will review the challenges involved in designing such algorithms, and the time and space guarantees that some of these algorithms provide. Last but not least, we will present SMC tools implementing these algorithms, and some results from code bases to which these tools have been applied.
Bio
Konstantinos Sagonas received the PhD degree in computer science from Stony Brook University in 1996. He is a faculty member in the School of Electrical and Computer Engineering of NTUA, Greece, and at the Department of Information Technology of Uppsala University, Sweden. His research interests include programming languages and systems, concurrency and distribution, and techniques and tools for the effective analysis and testing of programs.
Min Zhang
Full Professor
Software Engineering Institute
East China Normal University
Title
Safeguarding Deep Reinforcement Learning Systems via Formal Methods: From Safety-by-Design to Runtime Assurance
Abstract
Deep neural networks (DNNs) have shown remarkable potential in decision-making and control for deep reinforcement learning (DRL) systems, yet their complexity and lack of transparency make it challenging to ensure the safety of the systems they operate in. Drawing on our experiences, we argue that formal methods are crucial in training AI models that are not only robust but also certifiable, ensuring system safety at every stage from training to deployment. We demonstrate that integrating formal methods is essential to provide a comprehensive safety guarantee for DRL systems throughout their lifecycle.
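The "runtime assurance" half of the title refers to a well-known architecture in which a learned controller is wrapped by a monitor that falls back to a verified controller whenever the proposed action would leave the safe region. The sketch below is a minimal, hypothetical instance (the one-dimensional dynamics, policy, and safety bound are stand-ins, not from the talk):

```python
# Minimal runtime-assurance ("shield") sketch with hypothetical components.

def learned_policy(state):
    # Stand-in for a DRL controller: always moves forward.
    return +1.0

def step(state, action):
    # Assumed known one-dimensional dynamics: position += action.
    return state + action

def safe(state):
    # Safety invariant to enforce: a position limit.
    return state <= 5.0

def fallback(state):
    # Verified fallback controller: hold position.
    return 0.0

def shielded_action(state):
    # Runtime assurance: accept the learned action only if the model
    # predicts the next state stays safe; otherwise use the fallback.
    a = learned_policy(state)
    return a if safe(step(state, a)) else fallback(state)

state = 0.0
for _ in range(8):
    state = step(state, shielded_action(state))
    assert safe(state)  # the shield keeps every visited state safe
print(state)
# → 5.0
```

The learned policy alone would cross the limit after five steps; the shield intervenes from then on, so the system saturates at the boundary while remaining safe throughout.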
Bio
Min Zhang is a full professor at the Software Engineering Institute, East China Normal University. He earned his Ph.D. from the Japan Advanced Institute of Science and Technology in 2011 and joined East China Normal University in 2014. From 2019 to 2021, he served as a senior visiting professor at Nice University. His research interests primarily focus on formal methods for safety-critical systems, including real-time and AI-empowered systems. Recently, he has concentrated on the formal verification of deep neural networks and intelligent systems. He has co-authored over 80 papers, published in top-tier conferences such as CAV, TACAS, ASE, ICSE, NeurIPS, and CVPR.