Keynote Speakers

Prof. Don Syme

GitHub

Title

The Agentic Repository Automation Revolution

Abstract

Agentic repository automation revolutionizes how we build software: adding Continuous AI (CAI) to CI/CD. The future of development is individuals and teams equipped with automated agentic workflows (repository-bound agents) scouting ahead, cleaning up behind, proactively improving and relentlessly validating code. AI slop is solved by automated AI code improvement, repository maintenance is solved by automated AI repository assistants, and difficult, neglected software engineering such as performance engineering and formal verification becomes more tractable.

This talk will cover repository automation, demonstrated with GitHub Agentic Workflows, and its incredible applications to:
* Automated Code Improvement
* Automated Performance Improvement
* Automated Repository Maintenance
* Automated Formal Verification
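
To make the "CAI on top of CI/CD" idea concrete, a repository-bound agent can be scheduled like any other automation in a repository. The sketch below uses standard GitHub Actions syntax, but the agent step itself (`repo-agent`, its flags, and its task prompt) is a hypothetical illustration, not the actual GitHub Agentic Workflows format:

```yaml
# Hypothetical Continuous AI workflow: a nightly, repository-bound agent
# that proposes maintenance changes as pull requests for human review.
name: nightly-repo-maintenance
on:
  schedule:
    - cron: "0 3 * * *"   # run every night at 03:00 UTC
permissions:
  contents: write
  pull-requests: write
jobs:
  maintain:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Illustrative agent invocation (hypothetical CLI): scan for dead code,
      # flaky tests, and stale docs, then open a pull request with fixes.
      - run: repo-agent run --task "routine repository maintenance" --open-pr
```

The key design point is that the agent's output lands as an ordinary pull request, so existing CI validation and human review still gate every change.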

Bio

Don Syme is a Principal Researcher at GitHub Next and Visiting Professor at King's College London, specializing in AI-assisted software development and AI automation. He works with GitHub to make better developer technologies, and, through that, make people more productive and happier. In previous work he was the designer of the F# language and the co-originator of async/await, and he has contributed to both the C# design and GitHub Copilot.

Prof. Paolo Tonella

Università della Svizzera italiana (USI)

Title

AI Testing: What have we learned? Where are we heading?

Abstract

In the last 10 years, the field of AI testing has grown exponentially in the software engineering community. The general idea has been to transfer software testing principles, approaches, and techniques to AI systems and deep learning models. Remarkable examples include test adequacy criteria, test generation, selection and prioritization, fault localization and repair, and mutation testing. While such transfer has been generally successful, it has also exposed a number of foundational issues in AI testing for which we still lack a convincing solution. In this talk, I will discuss these issues, starting from the relationship between model-level and system-level AI testing, to then consider the problem of adequacy assessment, i.e., deciding whether a test set is strong enough to exercise the AI system under test. Then, I will consider the nature of faults and failures affecting AI models, as well as the ways in which they can be repaired, pointing out the main discrepancies with faults, failures, and repair in traditional software systems. I will conclude with an outlook on the future of software development, in which LLMs are expected to be increasingly used as code generators. I will discuss quality assessment for code produced by LLMs and the way quality assessment is likely to be impacted by novel software development paradigms, such as vibe and agentic coding.

Bio

Paolo Tonella is Professor of Software Engineering and Director of the Software Institute at the Faculty of Informatics, Università della Svizzera italiana (USI), in Lugano, Switzerland. He has been Honorary Professor at University College London, UK, and Head of Software Engineering at Fondazione Bruno Kessler, Trento, Italy. He held an ERC Advanced Grant as Principal Investigator of the project PRECRIME (Self-assessment Oracles for Anticipatory Testing). He has written over 200 peer-reviewed conference papers and over 100 journal papers; his H-index (according to Google Scholar) is 73, with over 17k citations. In 2011 he received the MIP (Most Influential Paper) award for his ICSE 2001 paper "Analysis and Testing of Web Applications". He is or has been on the editorial boards of TOSEM, TSE, and EMSE, and was Program Co-Chair of ISSTA 2025 and ESEC/FSE 2023. His current research interests are in software testing, in particular approaches to ensure the dependability of machine learning-based systems, automated testing of cyber-physical systems, and security testing.