• Paper Submission:
    May 27, 2021 (extended from May 20, 2021)

• Author Notification:
    June 17, 2021 (extended from June 10, 2021)

• Workshop:
    July 19, 2021



All times are in PT (UTC-7).
<SYNT 2021>
8:00-8:05 Welcome
Session 1
Session Chair: Dave Parker  
08:05-09:05 Keynote: Enabling Provable Security at Scale
Neha Rungta (Amazon Web Services)

The cloud model can be viewed as a contract between customers and providers: customers program against the cloud model, and the cloud provider faithfully implements the model. In this talk, we focus on the problem of access control. Many users are moving sensitive workloads to the cloud and require that their access control policies comply with their governance requirements and security best practices. In this talk, I will present an overview of Zelkova, an SMT-based analysis engine, for verification of Amazon Web Services (AWS) access control policies that govern permissions across entire applications in the cloud. It uses traditional formal verification techniques such as language transformation and off-the-shelf SMT solvers. Zelkova can be queried to explore the properties of policies, e.g., whether some resource is "publicly" accessible, and is leveraged in features such as Amazon S3 Block Public Access, AWS Config Managed Rules, and Amazon Macie. Our experience shows that to leverage formal policy analysis, users must have sufficient technical sophistication to realize the criteria important to them and be able to formalize the properties of interest as Zelkova queries. In 2019, we launched a publicly available service, IAM Access Analyzer, that leverages Zelkova to automatically identify resources such as S3 buckets and IAM roles that are shared with an external entity. We help users understand whether their policy is correct by abstracting the policy into a compact set of positive and declarative statements that precisely summarize who has access to a resource. Users can review the summary to decide whether the policy grants access according to their intentions.
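The kind of question Zelkova answers can be illustrated with a toy model. The sketch below is NOT Zelkova's actual SMT encoding (which compiles AWS policy documents into SMT formulas); it is a minimal, assumed illustration that answers a "publicly accessible?" query by enumeration over a tiny abstract request space, where the real engine would discharge the same question with an SMT solver over all possible requests.

```python
from itertools import product

# Abstract request space: who is asking, and for what action.
PRINCIPALS = ["account-owner", "trusted-partner", "anonymous"]
ACTIONS = ["s3:GetObject", "s3:PutObject"]

def policy_allows(principal, action):
    """A hypothetical bucket policy: the owner can do anything,
    and anyone may read (the classic 'public bucket' mistake)."""
    if principal == "account-owner":
        return True
    if action == "s3:GetObject":
        return True  # effectively Principal: "*" on GetObject
    return False

def publicly_accessible(allows):
    """A Zelkova-style query, answered here by brute force rather than SMT:
    does there exist an allowed request from an untrusted principal?"""
    return [(p, a) for p, a in product(PRINCIPALS, ACTIONS)
            if p == "anonymous" and allows(p, a)]

print(publicly_accessible(policy_allows))
# A non-empty witness list means the bucket is publicly readable.
```

A witness like `("anonymous", "s3:GetObject")` plays the role of the counterexample an SMT solver would return for the negated safety property.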

09:05-09:25 DeepSynth: Automata Synthesis for Automatic Task Segmentation in Deep Reinforcement Learning
Mohammadhosein Hasanbeig, Natasha Jeppu, Alessandro Abate, Tom Melham and Daniel Kroening

This paper proposes DeepSynth, a method for effective training of deep Reinforcement Learning (RL) agents when the reward is sparse and non-Markovian, but at the same time progress towards the reward requires achieving an unknown sequence of high-level objectives. Our method employs a novel algorithm for synthesis of compact automata to uncover this sequential structure automatically. We synthesise a human-interpretable automaton from trace data collected by exploring the environment. The state space of the environment is then enriched with the synthesised automaton so that the generation of a control policy by deep RL is guided by the discovered structure encoded in the automaton. The proposed approach is able to cope with both high-dimensional, low-level features and unknown sparse non-Markovian rewards. We have evaluated DeepSynth's performance in a set of experiments that includes the Atari game Montezuma's Revenge. Compared to existing approaches, we obtain a reduction of two orders of magnitude in the number of iterations required for policy synthesis, and also a significant improvement in scalability.
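The state-space enrichment described above can be sketched in miniature. The automaton, events, and task below are assumed for illustration (the paper synthesises the automaton from exploration traces; here it is written by hand): the RL state is paired with the state of an automaton tracking progress through a sequence of high-level objectives.

```python
# A hand-written stand-in for a synthesised automaton over high-level
# events, tracking the sequence "get-key" -> "open-door" -> "reach-goal".
AUTOMATON = {  # (automaton state, event) -> next automaton state
    (0, "get-key"): 1,
    (1, "open-door"): 2,
    (2, "reach-goal"): 3,  # state 3: task complete
}

def step_automaton(q, event):
    """Advance the automaton; out-of-order or irrelevant events self-loop."""
    return AUTOMATON.get((q, event), q)

def enrich(env_state, q):
    """Product state handed to the deep-RL agent: the automaton component
    exposes the discovered sequential structure to the policy."""
    return (env_state, q)

q = 0
for event in ["open-door", "get-key", "open-door", "reach-goal"]:
    q = step_automaton(q, event)
print(q)  # reaches 3 only after the objectives occur in the right order
```

Because the automaton state distinguishes "door opened before the key" from "door opened after the key", a sparse reward tied to the accepting state becomes Markovian over the enriched product state.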

09:25-09:45 Automatic Synthesis of Experiment Designs from Probabilistic Environment Specifications
Craig Innes, Yordan Hristov, Georg Kamaras, Subramanian Ramamoorthy

This paper presents an extension to the probabilistic programming language ProbRobScene, allowing users to automatically synthesize uniform experiment designs directly from environment specifications. We demonstrate its effectiveness on a number of environment specification snippets from tabletop manipulation, and show that our method generates reliably low-discrepancy designs.

09:45-10:00 Break
Session 2
Session Chair: Suguman Bansal   
10:00-10:40 Invited talk: Game-Based Predictive Runtime Monitoring
Sriram Sankaranarayanan (Colorado)

In this talk, we will present some recent results on predictive runtime monitoring approaches for cyber-physical systems using games. Runtime monitors check whether a system under deployment adheres to key temporal logic specifications. In the context of "autonomous" cyber-physical systems these include safety properties such as maintaining a minimum separation (collision avoidance), maintaining velocities within some safe bounds, or remaining inside a specific geographic region (geofencing). However, runtime monitoring of safety properties often requires predicting whether a safety property will be violated in a sufficiently long future time horizon to mitigate against the risk of an impending violation. In the talk, we will describe viability monitoring that checks whether the controller has a strategy to maintain a given safety property over a given future time horizon. Such a monitor naturally leads to a stochastic safety game played between the controller and an unknown stochastic environment. We present a fast approach to viability monitoring for the special case of linear stochastic systems. We will also discuss the potential advantages and disadvantages of viability monitoring. Moving beyond simple linear systems, we present some ongoing work on viability monitoring for cyber-physical systems with human operators that involve added complexities such as nonlinear vehicle models and complex safety properties.
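As a finite toy rendition of viability monitoring (assumed for illustration; the talk's fast algorithm targets linear stochastic systems, not tabulated models), one can compute by backward induction the best probability the controller can keep the system inside a safe set over a bounded horizon, against a stochastic environment:

```python
# Toy model: a vehicle that is "far" from or "near" an obstacle, or crashed.
SAFE = {"near", "far"}  # "crash" is the unsafe state

# TRANSITIONS[state][action] = list of (probability, next_state):
# the controller picks the action, the environment resolves the outcome.
TRANSITIONS = {
    "far":  {"cruise":   [(1.0, "far")],
             "approach": [(0.8, "near"), (0.2, "far")]},
    "near": {"brake":    [(0.9, "far"), (0.1, "crash")],
             "cruise":   [(0.5, "near"), (0.5, "crash")]},
    "crash": {"stay":    [(1.0, "crash")]},
}

def viable_prob(state, horizon):
    """Max probability, over controller strategies, of staying in SAFE
    for `horizon` more steps -- the value of the stochastic safety game."""
    if state not in SAFE:
        return 0.0
    if horizon == 0:
        return 1.0
    return max(
        sum(p * viable_prob(nxt, horizon - 1) for p, nxt in outcomes)
        for outcomes in TRANSITIONS[state].values()
    )

print(viable_prob("near", 2))  # braking beats cruising: 0.9
```

A runtime monitor would raise an alarm when this value drops below a threshold, i.e., when no controller strategy keeps the violation risk acceptably low over the look-ahead horizon.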

10:40-10:55 SYNTCOMP results
Guillermo A. Pérez
10:55-11:10 Modernising Strix
Philipp Meyer and Salomon Sickert

We describe the architectural changes applied to Strix, a tool for LTL reactive synthesis, that were made in preparation for SYNTCOMP 2021. We replace the specialised translation from linear temporal logic (LTL) to deterministic parity automata (DPW) (as described in [8]) by a simpler and more general translation based on the recent ∆2-normalisation for LTL by [13] and Zielonka split trees. Further, we make use of a new parity game solving algorithm by [14]. These changes simplify the overall design, put the tool on a cleaner theoretical foundation, and improve performance.

11:10-11:25 Almost-Symbolic Synthesis via Delta-2-Normalisation for Linear Temporal Logic
Remco Abraham, Tom van Dijk and Salomon Sickert

The classic approach to synthesis of reactive systems from linear temporal logic (LTL) specifications involves the translation of the specification to a deterministic omega-automaton and computing a winning strategy for the corresponding game with an omega-regular winning condition. Unfortunately, this procedure has an unavoidable double-exponential blow-up in the worst case and suffers from the state-explosion problem. To address this state-explosion problem in practice, we propose an almost-symbolic version of this classic idea that performs the following steps: (1) normalisation of the specification into a Boolean combination of formulas from a simple fragment of LTL, (2) translation of each simple subformula into a deterministic automaton, (3) encoding of each automaton into a binary decision diagram (BDD), (4) construction of a parity automaton (and thus game) by operations on the BDD, (5) symbolic computation of a winning strategy, and finally (6) extraction of a symbolic controller. We prototype this approach in the tool Otus, compare it against Strix, the winner of SYNTCOMP 2018-2020, on the SYNTCOMP benchmarks, and identify several specifications where Otus outperforms Strix.

11:25-11:40 Acacia-Bonsai: A Modern Implementation of Downset-Based LTL Realizability
Michaël Cadilhac and Guillermo A. Pérez

We describe our implementation of downset-based algorithms used to solve the realizability problem for linear temporal logic (LTL). These algorithms were introduced by Filiot et al. in the 2010s and implemented in the tools Acacia and Acacia+ in C and Python. We identify degrees of freedom in the original algorithms and provide a complete rewriting of Acacia in C++20 articulated around genericity and leveraging modern techniques for better performance. These techniques include compile-time specialization of the algorithms, the use of SIMD registers to store vectors, and several preprocessing steps, some relying on efficient Binary Decision Diagram (BDD) libraries.
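The downsets at the heart of these algorithms are represented by their maximal elements, i.e., an antichain. A minimal pure-Python sketch of that data structure follows (assumed for illustration; Acacia-Bonsai's actual implementation is C++20 with SIMD-backed vectors and several alternative representations):

```python
def dominates(u, v):
    """u >= v componentwise."""
    return all(a >= b for a, b in zip(u, v))

class Downset:
    """A downward-closed set of integer vectors, stored as the
    antichain of its maximal elements."""

    def __init__(self):
        self.maximal = []

    def insert(self, v):
        v = tuple(v)
        # If v is already dominated, the downset is unchanged.
        if any(dominates(m, v) for m in self.maximal):
            return False
        # Otherwise remove elements v dominates, then add v.
        self.maximal = [m for m in self.maximal if not dominates(v, m)]
        self.maximal.append(v)
        return True

d = Downset()
d.insert((1, 3))
d.insert((2, 2))
d.insert((0, 1))          # dominated by (1, 3): absorbed, antichain unchanged
print(sorted(d.maximal))  # [(1, 3), (2, 2)]
```

The componentwise `dominates` check is exactly the hot loop the paper accelerates with SIMD registers: comparing fixed-width integer vectors in parallel.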

11:40-11:55 Improvements to LTLsynt
Florian Renkin, Philipp Schlehuber-Caissier, Alexandre Duret-Lutz and Adrien Pommellet

We summarize LTLsynt's evolution since 2018.

11:55-12:10 Break
Session 3
Session Chair: Sebastian Junges  
12:10-12:50 Invited talk: Using Program Synthesis to Build Compilers
Alvin Cheung (UC Berkeley)

Domain-specific languages (DSLs) are prevalent across many application domains. Such languages let developers easily express computations using high-level abstractions that result in performant implementations. To leverage DSLs, however, application developers need to master the DSL’s syntax and manually rewrite existing code. Compilers can aid in this effort, but building them requires designing and implementing syntax transformation rules, which is often a tedious and error-prone process. In this talk, I will discuss how we view compilation as program synthesis: given source code to compile, our goal is to synthesize a program written in the target DSL that is the most performant (in terms of execution time, etc.) and is semantically equivalent to the input. As searching for all possible programs in the target language is intractable, I will describe verified lifting, where we first "lift" the source code into a high-level summary that captures the semantics of the input, and subsequently generate executable code in the target language from the summary. Both of these steps are largely driven by synthesis. I will argue that this approach makes compilers much easier to construct and maintain, and will illustrate verified lifting using a number of DSL compilers we have constructed. This includes Dexter, which translates C++ image-processing code into the Halide DSL, with the translated code now shipping with Adobe Photoshop.

Bio: Alvin Cheung is an assistant professor in UC Berkeley's EECS Dept. His research focuses on designing new techniques to solve programming systems problems. Alvin's research has been recognized through multiple early career awards such as the US Presidential Early Career Award for Scientists and Engineers and the Sloan Fellowship, along with a number of best paper and demo awards, most recently with a best paper award at CHI 2021.
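The lift-then-generate pipeline can be caricatured in a few lines. Everything below is an assumed toy (a far cry from Dexter's synthesis-driven lifting into Halide): a source loop is "lifted" to a tiny declarative summary, the summary is validated against the source on sample inputs (standing in for the formal equivalence check), and target-DSL text is emitted from the summary alone.

```python
# Source program to be compiled: an imperative sum-of-squares loop.
SOURCE = """
out = 0
for x in data:
    out += x * x
"""

# Candidate high-level summary in a tiny hypothetical IR:
# a map followed by a reduce.
SUMMARY = ("reduce", "+", ("map", "square"))

def run_source(data):
    env = {"data": data}
    exec(SOURCE, env)          # execute the original imperative code
    return env["out"]

def run_summary(summary, data):
    _reduce, _plus, (_map, fn) = summary
    mapped = [x * x for x in data] if fn == "square" else list(data)
    return sum(mapped)         # our only reduce operator is "+"

def emit_target(summary):
    """Code generation: pretty-print the summary in a pretend target DSL."""
    return "reduce(+, map(square, data))"

sample = [1, 2, 3]
assert run_source(sample) == run_summary(SUMMARY, sample)  # validation step
print(emit_target(SUMMARY))
```

In verified lifting proper, the summary is found by synthesis and proved equivalent to the source for all inputs; only the last step, generating target code from the summary, resembles a conventional compiler backend.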

12:50-13:10 Synthesis of Compact Strategies for Coordination Programs
Kedar Namjoshi and Nisarg Patel

A coordination program guides the actions of independent agents towards a desired joint behavior. Examples include controllers for IoT devices or robot teams, and orchestrators of microservices. Coordination strategies are an attractive target for program synthesis, as the desired behavior can be easy to specify but difficult to implement. A practically important but often-unstated requirement is that a coordination program should not initiate unnecessary actions, which do not contribute to meeting the specification. A strategy that satisfies this requirement while meeting the specification is called “compact.” This paper introduces and formalizes the notion of compactness, and demonstrates that existing methods for program synthesis may produce non-compact strategies. The synthesis of compact strategies is carried out with a specification transformation which guarantees that the winning strategies for the transformed specification are precisely the compact strategies for the original. Following this transformation, existing synthesis methods may be applied to construct compact strategies, as is demonstrated with a prototype implementation.

13:10-13:30 Compositional Reinforcement Learning from Logical Specifications
Kishor Jothimurugan, Suguman Bansal, Osbert Bastani and Rajeev Alur

We study the problem of learning control policies for complex tasks given by logical specifications. Typically, these approaches automatically generate a reward function from a given specification and use a suitable reinforcement learning algorithm to learn a policy that maximizes the expected reward. These approaches, however, scale poorly to complex tasks that require high-level planning. In this work, we develop a compositional learning approach, called DiRL, that leverages the specification to decompose the task into a high-level planning problem and a set of simpler reinforcement learning tasks. An evaluation of DiRL on a challenging control benchmark with continuous state and action spaces demonstrates that it outperforms state-of-the-art baselines.

13:30-13:45 Update on the SyGuS-IF standard
Andrew Reynolds
13:45-13:50 Closing
</SYNT 2021>