Thursday May 28th, 3PM Paris/France. (Playback)
Andrew Sogokon (University of Southampton, UK)
Sound Continuous Invariant Generation (Slides)
Continuous invariants are important in deductive verification of hybrid and continuous systems. Just as discrete invariants make it possible to reason about the correctness of discrete systems without unrolling their loops forever, continuous invariants make it possible to reason about differential equations without having to solve them. Automatic generation of continuous invariants remains one of the biggest practical challenges on the way to automating formal safety proofs for hybrid systems. This talk will focus on a recently developed tool (Pegasus - http://pegasus.keymaeraX.org) for generating continuous invariants automatically; it will describe the methods Pegasus employs for generating continuous invariants, along with some architectural aspects of its integration with the KeYmaera X proof assistant.
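To make the analogy with loop invariants concrete, here is a minimal sketch (my own illustration, not Pegasus itself) of checking a candidate continuous invariant with SymPy: the circle x² + y² − 1 = 0 is invariant for the rotation system x' = −y, y' = x, because the Lie derivative of the candidate along the vector field vanishes identically.

```python
# Minimal invariant check for an ODE system (illustrative sketch).
import sympy as sp

x, y = sp.symbols("x y")
field = {x: -y, y: x}          # the ODE: x' = -y, y' = x
candidate = x**2 + y**2 - 1    # candidate invariant p = 0

# Lie derivative of p along the field: (dp/dx) * x' + (dp/dy) * y'
lie = sum(sp.diff(candidate, v) * rhs for v, rhs in field.items())

# If the Lie derivative is identically zero, every solution starting
# on p = 0 stays on p = 0, so p = 0 is a continuous invariant.
print(sp.simplify(lie))  # 0
```

Real invariant generators have to *discover* such candidates and handle inequational invariants (where the sign of the Lie derivative on the boundary matters), which is where the hard automation problem lies.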
Talks in the following weeks
Thursday June 4th, 3PM Paris/France. (Webinar link)
Joël Ouaknine (Max Planck Institute for Software Systems, Germany)
Thursday June 11th, 3PM Paris/France. (Webinar link)
Jean Goubault-Larrecq (ENS Paris-Saclay, France)
A probabilistic and non-deterministic call-by-push-value language
Denotational semantics of higher-order functional languages have long been known, but adding probabilistic choice leads to somewhat incongruous difficulties. Some of those difficulties have been circumvented recently, by forsaking traditional domain theory, using probabilistic coherence spaces, or quasi-Borel predomains. I will nonetheless work with good old domain theory, and I hope to convince you that those difficulties, which I will explain, disappear with a simple extension of Paul Blain Levy's call-by-push-value. That extension incorporates probabilistic choice, but also (demonic) non-deterministic choice. The epitome of this work is a full abstraction result, in the presence of a parallel if and a statistical termination tester operator - without which full abstraction fails. (Work presented at LICS'19.)
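As a toy illustration (my own, not Levy's calculus or the talk's semantics) of what it means to combine probabilistic choice with demonic non-determinism: a program can be read as denoting a *set* of termination probabilities, with the demonic scheduler free to pick the worst one.

```python
# Toy model: programs denote sets of termination probabilities.

def coin(p, left, right):
    # probabilistic choice: behave like `left` with probability p,
    # like `right` with probability 1 - p
    return {p * a + (1 - p) * b for a in left for b in right}

def amb(left, right):
    # demonic non-deterministic choice: the adversary may pick either branch
    return left | right

TERM = {1.0}   # the program that always terminates
LOOP = {0.0}   # the program that always diverges

# The adversary can avoid the fair coin entirely and force divergence:
prog = amb(coin(0.5, TERM, LOOP), LOOP)
print(min(prog))  # demonic termination probability: 0.0
```

The actual denotational story in the talk is far richer (domains of continuous valuations combined with a powerdomain construction), but the sketch shows why the two effects interact: the set of achievable probabilities, not a single number, is the natural denotation.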
Thursday June 18th, 3PM Paris/France. (Webinar link)
Caterina Urban (Inria, France)
Perfectly Parallel Fairness Certification of Neural Networks
There is growing concern that machine-learning models, which currently assist or even automate decision making, reproduce, and in the worst case reinforce, biases present in their training data. The development of tools and techniques for certifying the fairness of these models, or for describing their biased behavior, is therefore critical. In this work, we propose a perfectly parallel static analysis for certifying causal fairness of feed-forward neural networks used for classification of tabular data. When certification succeeds, our approach provides definite guarantees; otherwise, it describes and quantifies the biased behavior. We design the analysis to be sound, in practice also exact, and configurable in terms of scalability and precision, thereby enabling pay-as-you-go certification. We implement our approach in an open-source tool and demonstrate its effectiveness on models trained with popular datasets.
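A minimal sketch of the property being certified (an assumed toy setup with random weights, not the paper's static analysis): a classifier is causally fair on an input if flipping only the sensitive attribute, keeping everything else fixed, never changes the predicted class.

```python
# Causal-fairness audit of a tiny feed-forward classifier (sketch).
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 2)), rng.normal(size=2)

def predict(x):
    h = np.maximum(x @ W1 + b1, 0.0)      # ReLU hidden layer
    return int(np.argmax(h @ W2 + b2))    # predicted class

def causally_fair(x, sensitive_idx=0):
    # Compare the prediction on x with its counterfactual twin,
    # obtained by flipping the (binary) sensitive attribute.
    twin = x.copy()
    twin[sensitive_idx] = 1.0 - twin[sensitive_idx]
    return predict(x) == predict(twin)

# Audit a small grid of non-sensitive feature values for both
# values of the sensitive attribute.
biased = [np.array([a, f1, f2])
          for a in (0.0, 1.0)
          for f1 in np.linspace(0, 1, 5)
          for f2 in np.linspace(0, 1, 5)
          if not causally_fair(np.array([a, f1, f2]))]
print(f"{len(biased)} biased inputs found out of 50")
```

Enumerating inputs like this only works for tiny discrete grids; the point of a sound static analysis is to obtain the same verdict symbolically over the whole (continuous) input space, in one pass per input partition, which is what makes the approach "perfectly parallel".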
Your proposals are welcome!