Dear all,
The next talk in the IARCS Verification Seminar Series will be given by
Siddhartha Prasad, a PhD student in Computer Science at Brown University
advised by Shriram Krishnamurthi. The talk is scheduled for Tuesday, March
10, at 1900 hrs IST (add to Google calendar
<https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=NzZhaGpqZ3AzODV1YnBxYmY4bXJtbHY1a20gdnNzLmlhcmNzQG0&tmsrc=vss.iarcs%40gmail.com>
).
The details of the talk can be found on our webpage (
https://fmindia.cmi.ac.in/vss/), and also appended to the body of this
email.
The Verification Seminar Series, an initiative by the Indian Association
for Research in Computing Science (IARCS), is a monthly, online
talk-series, broadly in the area of Formal Methods and Programming
Languages, with applications in Verification and Synthesis. The aim of this
talk-series is to provide a platform for Formal Methods researchers to
interact regularly. In addition, we hope it will make it easier for
researchers to explore new problems and areas, to collaborate on them, and
for younger researchers to start working in these areas.
All are welcome to join.
Best regards,
Organizers, IARCS Verification Seminar Series
=============================================================
Title: Eventually, Understanding: Operationalizing Misconceptions in LTL
Meeting Link:
https://us02web.zoom.us/j/89164094870?pwd=eUFNRWp0bHYxRVpwVVNoVUdHU0djQT09
(Meeting ID: 891 6409 4870, Passcode: 082194)
Abstract:
Linear Temporal Logic (LTL) sits at the heart of verification and
synthesis, yet even experienced users routinely misunderstand what formulas
actually say. Over several years, we have studied LTL from a human-factors
perspective across classrooms and controlled experiments, building
instruments to surface the misconceptions people hold about its semantics
and use. These studies show that misconceptions are not isolated mistakes
but systematic misunderstandings that persist across levels of expertise
and directly affect specification quality.
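To give a flavor of the kind of misunderstanding at stake, a classic textbook confusion (our illustration here, not necessarily one drawn from the speaker's studies) is reading "eventually (p and q)" as "eventually p and eventually q". A minimal finite-trace checker makes the difference concrete:

```python
# Illustrative sketch: F(p ∧ q) vs. (F p ∧ F q) on a finite trace.
# This is not part of LTL Tutor; it only demonstrates the distinction.

def holds_F(pred, trace, i=0):
    """F pred: pred holds at some position j >= i of the trace."""
    return any(pred(trace, j) for j in range(i, len(trace)))

def atom(a):
    """Atomic proposition: a is true at position j."""
    return lambda trace, j: a in trace[j]

def conj(p, q):
    """Conjunction: both p and q hold at position j."""
    return lambda trace, j: p(trace, j) and q(trace, j)

# p and q each occur somewhere, but never at the same position:
trace = [{"p"}, {"q"}, set()]

f_p_and_q = holds_F(conj(atom("p"), atom("q")), trace)              # F (p ∧ q)
fp_and_fq = holds_F(atom("p"), trace) and holds_F(atom("q"), trace)  # F p ∧ F q

print(f_p_and_q)   # False: p and q never hold simultaneously
print(fp_and_fq)   # True: each holds at some point
```

The two readings disagree on this trace, which is exactly the kind of gap between a formula's intended and actual semantics that the studies above surface.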
This talk centers those misconceptions as the object of design. We present
LTL Tutor, a lightweight, curriculum-agnostic system that operationalizes
these insights into targeted practice with immediate, actionable feedback.
The tutor generates exercises grounded in common misconceptions, explains
errors in terms of the learner's answer versus the correct semantics, and
uses a concept-based mutation engine to adaptively drill the ideas each
learner struggles with. It is designed to fit into existing courses without
requiring changes to teaching style, while also supporting independent
learners with minimal prior instruction. By tying feedback to concrete
errors and surfacing the underlying misconceptions over time, the tutor
turns several years of research on how people misunderstand LTL into a
practical, deployable tool for both classrooms and self-guided study.
Bio: Siddhartha Prasad is a PhD student in Computer Science at Brown
University, advised by Shriram Krishnamurthi. His research takes a
programming-languages approach to improving how people express intent and
reason about program behavior, drawing on ideas from formal methods,
human-computer interaction, and cognitive science. He is especially
interested in how models of human cognition can inform the design of
languages, semantics, and interactive tools for understanding complex
computational structures. Previously, he was a software engineer at
Microsoft, where he worked on both Windows and Azure. He says that his
research interests are informed by his time as an engineer. In his own
words, "I have written code that doesn't do what I want it to, and I want
to spare everyone else the indignity."