Dear all,
The next talk in the IARCS Verification Seminar Series will be given by
Shahaf Bassan, a senior PhD student in the Katz Lab at the Hebrew
University of Jerusalem specializing in explainable AI. The talk is
scheduled for Tuesday, June 10, at 1900 hrs IST (add to Google calendar
<https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=NTRxaDRxdG…>
).
The details of the talk can be found on our webpage (
https://fmindia.cmi.ac.in/vss/), and also appended to the body of this
email.
The Verification Seminar Series, an initiative by the Indian Association
for Research in Computing Science (IARCS), is a monthly, online
talk-series, broadly in the area of Formal Methods and Programming
Languages, with applications in Verification and Synthesis. The aim of this
talk-series is to provide a platform for Formal Methods researchers to
interact regularly. In addition, we hope that it will make it easier for
researchers to explore newer problems/areas and collaborate on them, and
for younger researchers to start working in these areas.
All are welcome to join.
Best regards,
Akash, Deepak, Madhukar, Srivathsan
=============================================================
Title: “Formal XAI”: Can we formally explain ML models?
Meeting Link:
https://us02web.zoom.us/j/89164094870?pwd=eUFNRWp0bHYxRVpwVVNoVUdHU0djQT09
(Meeting ID: 891 6409 4870, Passcode: 082194)
Abstract:
The goal of explainability is to make sense of the decisions made by
black-box ML models. Unfortunately, many existing explanation methods are
heuristic, which makes them unreliable. In this talk, I will present our
work on developing techniques that provide explanations with formal
guarantees, ensuring their trustworthiness. These techniques often rely on
formal verification, particularly neural network verification tools. In
addition, we examine these explanations from a theoretical perspective,
studying the computational challenges they pose and exploring ways to build
practical tools that address these challenges and enable the generation of
reliable explanations for ML models.
Bio: Shahaf Bassan is a senior PhD student in the Katz Lab at the Hebrew
University of Jerusalem specializing in explainable AI. His research
focuses on developing explanation techniques with formally provable
guarantees, at the intersection of explainability, formal verification, and
ML theory. His work spans both theoretical foundations and practical
applications. Shahaf has presented his research at leading conferences in
formal verification (e.g., TACAS) and machine learning (e.g., ICML, ICLR).
His research goal is to enhance trust in ML models by providing
trustworthy, verifiable explanations.