Verification of Distributed Systems

May 6--8, 2018
in conjunction with NETYS 2018 (May 9--11, 2018)
Essaouira, Morocco

Context and goal

Modern computer systems rely crucially on concurrency and distribution to meet increasingly high performance requirements. The rise of new technologies such as multi-core processors and cloud computing has pushed concurrent and distributed programming into the mainstream. However, the development of concurrent and distributed systems that are both performant and consistent is a huge challenge, from both the conceptual and practical viewpoints. Researchers and practitioners in the fields of databases, cloud computing, parallel programming, concurrency, programming languages, and verification have independently tackled this challenge, focusing on the unique problems arising in their respective domains. We believe that there is significant potential for synergy in bringing together researchers from these diverse areas to build upon each other's insights and techniques. The main goal of this workshop is to provide an opportunity for such a synergy and to foster collaboration between the participants. Participation in the workshop is by invitation only. The workshop follows a tradition of workshops on related topics; see, e.g., FRIDA 2016 held in Marrakech, Morocco, May 17, 2016, and CCDP 2015 held in Agadir, Morocco, May 11--13, 2015.

Topics of interest:

A central theme of the workshop will be the issue of correctness in the development of performant concurrent and distributed systems. In this workshop, we would like to explore the different correctness notions that are used in this context and to understand the relationship between them. We would like to investigate methods for specifying, verifying, and testing systems against these notions. Here are some suggested topics that are of interest to this workshop:

Organizers

Dates and venue

The workshop will take place on May 6--8, 2018. The venue is the same as for NETYS 2018.

Registration

Participants in the workshop must register using the same procedure as for NETYS 2018.

Accommodation

See http://netys.net/accommodation/

List of confirmed participants

Parosh Aziz Abdulla Uppsala
Mohamed Faouzi Atig Uppsala
Nikolaj Bjorner Microsoft Research
Ahmed Bouajjani Paris Diderot
Sylvain Conchon Paris Sud
Giorgio Delzanno Genova
Cezara Dragoi Inria, ENS Paris
Carla Ferreira Lisboa
Chris Hawblitzel Microsoft Research
Ranjit Jhala San Diego
Christoph Kirsch Salzburg
Igor Konnov Vienna
Bernhard Kragl IST Austria
Rupak Majumdar MPI-SWS
Ruzica Piskac Yale
Gustavo Petri Paris Diderot
Andreas Podelski Freiburg
Zvonimir Rakamaric Utah
Sergio Rajsbaum Mexico
Marc Shapiro Inria, Paris UPMC
Thomas Wies New York
Josef Widder Vienna
Yunyun Zhu Uppsala

Program:

SUNDAY, May 6th
09:00 – 09:45 Marc Shapiro — Just-Right Consistency: As available as possible, consistent when necessary
09:45 – 10:30 Gustavo Petri — Invariant Verification for State-based CRDTs
10:30 – 11:00 Coffee Break
11:00 – 11:45 Thomas Wies — Flow Interfaces: Compositional Abstractions for Concurrent Data Structures
11:45 – 12:30 Nikolaj Bjorner — Validating Distributed Systems, a view from the trenches
12:30 – 14:30 Lunch
14:30 – 15:15 Carla Ferreira — Developing techniques for highly available cloud applications
15:15 – 16:00 Ruzica Piskac — An Auditing Language for Preventing Correlated Failures in the Cloud
16:00 – 16:30 Coffee Break
16:30 – 17:15 Parosh Abdulla — Replacing Store Buffers by Load Buffers in Total Store Ordering
17:15 – 18:00 Mohamed Faouzi Atig — Context-bounded analysis for POWER

MONDAY, May 7th
09:00 – 09:45 Sergio Rajsbaum — Indistinguishability: from basic computer science concepts to modern distributed computing approaches
09:45 – 10:30 Igor Konnov — Synthesizing Distributed Algorithms with Parameterized Threshold Guards
10:30 – 11:00 Coffee Break
11:00 – 11:45 Christoph Kirsch — Selfie: Towards Minimal Symbolic Execution
11:45 – 12:30 Zvonimir Rakamaric — Data Race Detection for Industry-Scale OpenMP Applications
12:30 – 14:30 Lunch
14:30 – 15:15 Chris Hawblitzel — Verifying Secure Communication using F* and Z3
15:15 – 16:00 Sylvain Conchon — Parameterized Model Checking Modulo Weak Memory Models
16:00 – 16:30 Coffee Break
16:30 – 17:15 Giorgio Delzanno — Validation of Internet of Things Frameworks
17:15 – 18:00 Ranjit Jhala — Proof by Logical Evaluation
20:00 – 23:00 Banquet

TUESDAY, May 8th
09:00 – 09:45 Josef Widder — Communication-closed layers as paradigm for distributed systems
09:45 – 10:30 Cezara Dragoi — Communication-closed asynchronous protocols
10:30 – 11:00 Coffee Break
11:00 – 11:45 Bernhard Kragl — Synchronizing the asynchronous
11:45 – 12:30 Ahmed Bouajjani — Checking robustness against asynchrony
12:30 – 14:30 Lunch

Abstracts:

- Marc Shapiro

When accessing distributed data, the CAP theorem forces a choice between strong consistency (CP) and availability and responsiveness (AP) when the network can partition. To address this issue, we take an application-driven approach, Just-Right Consistency (JRC). The idea is to define a consistency model that remains as available as allowed by the application's specific invariants, and synchronises only when strictly necessary.

To preserve application safety, the JRC approach leverages common algorithmic patterns for maintaining invariants. The "ordered updates" and "atomic grouping" patterns do not require synchronisation and are thus compatible with availability. "Checking a data precondition" is CAP-sensitive, but does not require systematic synchronisation: if two updates do not negate each other's precondition, they may safely execute concurrently. They must synchronise only if one might negate the precondition of the other.
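
As a toy illustration of the precondition pattern (not taken from the talk), the sketch below checks, over a finite sample of states, whether two updates on a bank balance with the invariant balance >= 0 can run without synchronisation: a credit can never negate a debit's precondition, while two debits can negate each other's. All names and numbers are invented for the example; tools like CISE establish such conditions for all states rather than a sample.

    # Toy illustration (not from the talk) of the CAP-sensitive
    # "checking a data precondition" pattern: two updates may run
    # concurrently only if neither can negate the other's precondition.

    def credit(amount):
        # precondition: always true; effect: balance increases
        return {"pre": lambda bal: True, "eff": lambda bal: bal + amount}

    def debit(amount):
        # precondition: enough funds; effect: balance decreases
        return {"pre": lambda bal: bal >= amount, "eff": lambda bal: bal - amount}

    def may_run_concurrently(u1, u2, states):
        # Sample-based check: u1 must not invalidate u2's precondition,
        # and vice versa, from any state where both preconditions hold.
        for bal in states:
            if u1["pre"](bal) and u2["pre"](bal):
                if not u2["pre"](u1["eff"](bal)) or not u1["pre"](u2["eff"](bal)):
                    return False
        return True

    states = range(0, 200)
    print(may_run_concurrently(credit(50), debit(30), states))  # True: a credit cannot break a debit's precondition
    print(may_run_concurrently(debit(60), debit(70), states))   # False: together they could violate balance >= 0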

We developed several tools to support the JRC approach. Antidote is an open-source, highly-available and highly-scalable geo-replicated CRDT data store that guarantees transactional causal consistency (TCC). Static analysis tools (CISE, CEC and Repliss) verify that an application running under TCC preserves its invariants and help generate extra synchronisation conditions when necessary. A SQL-like language lets developers express some common classes of invariants readily.

This research is supported in part by European FP7 project SyncFree <http://syncfree.lip6.fr/>, by European H2020 project LightKone <http://LightKone.eu/>, and by ANR project RainbowFS <http://RainbowFS.lip6.fr/>.

Joint work with Valter Balegas, Annette Bieniusa, Christopher Meiklejohn, and Nuno Preguiça.

- Gustavo Petri

In this talk I will address the problem of verifying program invariants for state-based Conflict-free Replicated Data Types (CRDTs). Our methodology is based on 1) checking that the state of the data structure forms a semi-lattice, which is sufficient to enforce the CRDT properties, and 2) using a simplified form of Rely/Guarantee reasoning to check the correctness of the invariants. The proof obligations can be discharged using conventional SMT solvers, and we are developing a tool to that end.
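
As a concrete, minimal example of the semi-lattice requirement (a sketch, not the authors' tool), the following grow-only counter is a state-based CRDT whose join is the pointwise maximum: join is commutative, associative and idempotent, so replicas converge regardless of merge order.

    # Minimal sketch (not the authors' tool): a state-based G-Counter.
    # Its states form a join-semilattice under pointwise max, which is the
    # property that guarantees replicas converge after exchanging states.

    class GCounter:
        def __init__(self, n_replicas):
            self.counts = [0] * n_replicas

        def increment(self, replica_id):
            self.counts[replica_id] += 1

        def join(self, other):
            # Least upper bound: pointwise maximum. Join is commutative,
            # associative, and idempotent, so merge order does not matter.
            merged = GCounter(len(self.counts))
            merged.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]
            return merged

        def value(self):
            return sum(self.counts)

    # Two replicas update concurrently, then merge in either order.
    a, b = GCounter(2), GCounter(2)
    a.increment(0); a.increment(0)
    b.increment(1)
    assert a.join(b).counts == b.join(a).counts == [2, 1]
    assert a.join(b).join(b).counts == [2, 1]   # idempotent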

This talk is based on joint work with Sreeja Nair and Marc Shapiro.

- Thomas Wies

Concurrent separation logics have helped to significantly simplify correctness proofs for concurrent data structures. However, a recurring problem in such proofs is that data structure abstractions that work well in the sequential setting are much harder to reason about in a concurrent setting due to complex sharing and overlays. To solve this problem, we propose a novel approach to abstracting regions in the heap by encoding the data structure invariant into a local condition on each individual node. This condition may depend on a quantity associated with the node that is computed as a fixpoint over the entire heap graph. We refer to this quantity as a flow. Flows can encode both structural properties of the heap (e.g. the reachable nodes from the root form a tree) as well as data invariants (e.g. sortedness). We then introduce the notion of a flow interface, which expresses the relies and guarantees that a heap region imposes on its context to maintain the local flow invariant with respect to the global heap.

Our main technical result is that flow interfaces provide a new semantic model for separation logic assertions that admits general implementation-agnostic proof rules for reasoning about concurrent data structures. We have used our approach to obtain linearizability proofs for concurrent dictionary algorithms that cannot be easily expressed using the abstraction mechanisms provided by existing program logics.
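
To make the idea of a flow concrete, here is a toy, explicit-graph sketch (not the paper's formalism, and assuming an acyclic heap so the naive fixpoint terminates): the flow counts paths from a designated root, so the global property "reachable from the root" becomes the local condition flow(n) >= 1.

    # Toy illustration: a "flow" is a quantity per node computed as a
    # fixpoint over the whole heap graph; here it counts paths from a
    # designated root, so flow(n) >= 1 encodes reachability locally.
    # The naive fixpoint below assumes an acyclic heap graph.

    def compute_flow(nodes, edges, root):
        # edges: set of (src, dst) pairs; the root gets an external inflow of 1.
        flow = {n: 0 for n in nodes}
        changed = True
        while changed:
            changed = False
            for n in nodes:
                new = (1 if n == root else 0) + sum(flow[s] for (s, d) in edges if d == n)
                if new != flow[n]:
                    flow[n] = new
                    changed = True
        return flow

    nodes = {"r", "a", "b", "c"}
    edges = {("r", "a"), ("a", "b")}          # c is unreachable garbage
    flow = compute_flow(nodes, edges, "r")
    print(flow)                               # c gets flow 0, all others >= 1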

- Nikolaj Bjorner

Before there was SMT, there were distributed systems, but not the way you may think until reading the next sentence. In this talk I will describe industrial validation experiences from 2000-2006 in the area of distributed file systems at a startup and in the Microsoft Core File Systems group. With emphasis on the Distributed File System Replication product, DFSR, the talk describes selected validation experiences using model checking and emulation combined with partial order reduction.

- Carla Ferreira

Building trustworthy cloud applications is inherently complex and error prone, and requires developers with a high level of expertise. In this talk we will discuss sound development techniques that leverage recent theoretical advances to safely minimise, or altogether eschew, coordination.

- Ruzica Piskac

Today's cloud systems heavily rely on redundancy for reliability. Nevertheless, as cloud systems become ever more structurally complex, independent infrastructure components may unwittingly share deep dependencies. These unexpected common dependencies may result in correlated failures that undermine redundancy efforts. The state-of-the-art efforts, e.g., post-failure forensics, not only lead to prolonged failure recovery time in the face of structurally complex systems, but also fail to avoid expensive service downtime. In this talk, we present a series of work towards preventing correlated failures not only in a single cloud data center but also across multiple cloud providers. We show how our system helps the data center operators proactively audit correlated failure risks through: 1) automatically collecting dependencies, 2) constructing a fault graph to model the target system stacks, and 3) analyzing the fault graph to identify potential risks.
To ensure the practicality, efficiency, and accuracy of our approach, we further equip our system with a domain-specific auditing language framework, a set of high-performance auditing primitives based on SAT/SMT solvers, and an automatic correlated failure risk repair engine. This presentation is based on our OOPSLA'17 paper.
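
As a rough, much-simplified sketch of the idea (not the OOPSLA'17 system, which uses fault graphs and SAT/SMT reasoning), the code below collects the transitive infrastructure dependencies of each replica and reports small sets of shared components whose failure would take down all replicas at once. The component names are hypothetical.

    # Hedged sketch (not the authors' system): model service replicas and the
    # infrastructure components they transitively depend on, then look for
    # small sets of shared components whose failure takes down every replica,
    # i.e. a correlated failure hiding behind apparent redundancy.

    from itertools import combinations

    # deps: component -> components it directly depends on (hypothetical example)
    deps = {
        "replica1": {"switch1", "power_a"},
        "replica2": {"switch2", "power_a"},
        "switch1": {"aggregation_switch"},
        "switch2": {"aggregation_switch"},
    }

    def closure(node):
        seen, stack = set(), [node]
        while stack:
            n = stack.pop()
            for d in deps.get(n, set()):
                if d not in seen:
                    seen.add(d)
                    stack.append(d)
        return seen

    def correlated_risks(replicas, max_size=2):
        closures = [closure(r) for r in replicas]
        shared = set.intersection(*closures)
        # Any subset of shared components up to max_size is a risk candidate.
        return [set(c) for k in range(1, max_size + 1) for c in combinations(sorted(shared), k)]

    print(correlated_risks(["replica1", "replica2"]))
    # e.g. [{'aggregation_switch'}, {'power_a'}, {'aggregation_switch', 'power_a'}]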

- Parosh Abdulla

We consider the verification of concurrent programs running under the classical TSO (Total Store Order) memory model. The model allows "write to read" relaxation corresponding to the addition of an unbounded store buffer (that contains pending store operations) between each processor and the main memory. In this talk, we introduce a novel semantics which we call the dual TSO semantics that replaces each store buffer by a load buffer that contains pending load operations. The flow of information is reversed, i.e., store operations are performed by the processes atomically on the main memory, while values of variables are propagated from the memory to the load buffers of the processes. We show that the two semantics are equivalent in the sense that a program will reach identical sets of states when run under the two semantics. Furthermore, we present a simple and effective reachability algorithm for checking safety properties of programs running under the dual semantics.
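
A minimal sketch of the classical (store-buffer) TSO semantics the talk starts from, not of the dual load-buffer semantics itself: in the standard litmus test, both processes can read 0 because their stores are still buffered, an outcome that sequential consistency forbids.

    # Minimal sketch of the classic store-buffer litmus test under TSO:
    # each process first writes to its own store buffer, then loads the
    # other variable from main memory. Under SC at least one load returns 1;
    # under TSO both may return 0.

    from collections import deque

    memory = {"x": 0, "y": 0}
    buffers = {1: deque(), 2: deque()}   # per-process FIFO store buffers

    def store(pid, var, val):
        buffers[pid].append((var, val))  # the write is buffered, not yet visible

    def load(pid, var):
        # TSO: a load first checks the process's own buffer (latest write wins)
        for v, val in reversed(buffers[pid]):
            if v == var:
                return val
        return memory[var]

    def flush(pid):
        while buffers[pid]:
            var, val = buffers[pid].popleft()
            memory[var] = val

    # One TSO-allowed interleaving: both stores are still buffered when the loads run.
    store(1, "x", 1)
    store(2, "y", 1)
    r1 = load(1, "y")   # reads 0 from memory
    r2 = load(2, "x")   # reads 0 from memory
    flush(1); flush(2)
    print(r1, r2)       # 0 0 -- impossible under sequential consistency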

- Mohamed Faouzi Atig

We present an under-approximate reachability analysis algorithm for programs running under the POWER memory model, in the spirit of the work on context-bounded analysis initiated by Qadeer et al. in 2005 for detecting bugs in concurrent programs (supposed to be running under the classical SC model). To that end, we first introduce a new notion of context-bounding that is suitable for reasoning about computations under POWER. Then, we provide a polynomial size reduction of the context-bounded state reachability problem under POWER to the same problem under SC. By leveraging the standard model checker CBMC, we have implemented a prototype tool and applied it on a set of benchmarks, showing the feasibility of our approach.
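
As a hedged illustration of plain context bounding under SC (the notion the talk adapts to POWER), the sketch below enumerates interleavings of two fixed process traces while allowing at most K context switches; it is not the polynomial reduction described in the talk.

    # Hedged sketch of plain context-bounded exploration under SC: explore
    # interleavings of per-process action sequences while allowing at most
    # k switches between processes.

    def schedules(traces, k, current=None, switches=0, prefix=()):
        # traces: list of remaining action lists, one per process
        if all(not t for t in traces):
            yield prefix
            return
        for pid, t in enumerate(traces):
            if not t:
                continue
            extra = 0 if (current is None or pid == current) else 1
            if switches + extra > k:
                continue
            rest = [list(x) for x in traces]
            action = rest[pid].pop(0)
            yield from schedules(rest, k, pid, switches + extra, prefix + (action,))

    p1 = ["p1:w(x,1)", "p1:r(y)"]
    p2 = ["p2:w(y,1)", "p2:r(x)"]
    bounded = set(schedules([p1, p2], k=1))
    total = set(schedules([p1, p2], k=10))
    print(len(bounded), "of", len(total), "interleavings explored with 1 context switch")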

- Sergio Rajsbaum

The talk will present an introduction to indistinguishability, arguing that it is a fundamental notion in computer science that lies behind many of the limitations we encounter and that implies lower bounds and impossibility results. Indeed, various fields have been concerned with indistinguishability from their own perspectives: automata theory, learning, artificial intelligence, complexity, computability, formal methods, semantics, knowledge, cryptography, distributed computing. As soon as two objects interact, indistinguishability issues arise from the inability to tell apart distinct situations. For instance, in an asynchronous distributed system, a process is unable to distinguish whether another process is slow or has crashed. Distributed computing is the paramount area that studies interactions, which is why indistinguishability is so pervasive there.

- Igor Konnov

Fault-tolerant distributed algorithms are notoriously hard to get right. In this talk we discuss our automated method that helps in that process: the designer provides specifications (the problem to be solved) and a sketch of a distributed algorithm that keeps arithmetic details unspecified. Our tool ByMC then automatically fills the missing parts.

Fault-tolerant distributed algorithms are typically parameterized, that is, they are designed to work for all numbers of processes n and all assumed numbers of faults t, provided some resilience condition holds, e.g., n > 3t. Despite this, most existing literature on automated synthesis of fault-tolerant distributed algorithms considers only the non-parameterized case, where the values of the parameters n and t are fixed to small values, e.g., n = 6. Our technique automatically synthesizes distributed algorithms for all parameter values. We focus on threshold-guarded distributed algorithms, where actions are taken only if a sufficiently large number of messages is received, e.g., more than t or n/2. Both expressions can be derived by choosing the right values for the coefficients a, b, and c in the threshold sketch a*n + b*t + c.
Our method takes as input a sketch of an asynchronous threshold-based fault-tolerant distributed algorithm -- where the guards are missing exact coefficients -- and then iteratively picks the values for the coefficients. It combines recent progress in parameterized model checking of distributed algorithms with counterexample-guided synthesis. Besides theoretical results on termination of the synthesis procedure, we give an experimental evaluation showing that our method can synthesize several distributed algorithms from the literature, e.g., Byzantine reliable broadcast and Byzantine one-step consensus.
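
A toy guess-and-check sketch of the coefficient search (not ByMC, and using a finite sample of parameter values rather than a parameterized proof): candidate coefficients (a, b, c) are kept only if the guard "received > a*n + b*t + c" cannot be satisfied by the t faulty processes alone whenever n > 3t.

    # Toy guess-and-check sketch (not ByMC): keep coefficients (a, b, c) for
    # which the guard "received > a*n + b*t + c" forces at least one message
    # from a correct process, checked on sampled n, t with n > 3t.

    from fractions import Fraction as F

    def guard_ok(a, b, c):
        # Reject guards that could be passed by the t faulty senders alone,
        # i.e. where the threshold drops below t for some sampled n, t.
        for t in range(0, 5):
            for n in range(3 * t + 1, 3 * t + 6):
                if a * n + b * t + c < t:
                    return False
        return True

    candidates = [(a, b, c)
                  for a in (F(0), F(1, 2), F(1))
                  for b in (F(-1), F(0), F(1))
                  for c in (F(-1), F(0), F(1))
                  if guard_ok(a, b, c)]
    print(candidates[:3])   # includes (0, 1, 0), i.e. the classic "received > t" guard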

The talk is based on the joint work with Marijana Lazic, Josef Widder, and Roderick Bloem.

- Christoph Kirsch

Selfie is a self-contained 64-bit, 10-KLOC implementation of (1) a self-compiling compiler written in a tiny subset of C called C* targeting a tiny subset of 64-bit RISC-V called RISC-U, (2) a self-executing RISC-U emulator, (3) a self-hosting hypervisor that virtualizes the emulated RISC-U machine, and (4) a prototypical symbolic execution engine that executes RISC-U code symbolically. Selfie can compile, execute, and virtualize itself any number of times in a single invocation of the system given adequate resources. There is also a simple linker, disassembler, debugger, and profiler. C* supports only two data types, uint64_t and uint64_t*, and RISC-U features just 14 instructions, in particular for unsigned arithmetic only, which significantly simplifies reasoning about correctness. Selfie was originally developed just for educational purposes but has recently become a research platform as well. In this talk, we demonstrate the capabilities of the system and discuss our ongoing effort in designing a minimal symbolic execution engine.

This is joint work with A. Abyaneh, M. Aigner, S. Arming, C. Barthel, S. Bauer, T. Hütter, A. Kollert, M. Lippautz, C. Mayer, P. Mayer, C. Moesl, S. Oblasser, C. Poncelet, S. Seidl, A. Sokolova, and M. Widmoser. Web: <http://selfie.cs.uni-salzburg.at>

- Zvonimir Rakamaric

In high performance computing, on-node parallelism is of extreme importance for maintaining scalability across large clusters of thousands of distributed multicore nodes. While MPI is predominantly used to exploit the parallelism across nodes, OpenMP has become a standard choice for exploiting on-node shared-memory parallelism. However, data races lurk as one of the most common and insidious sources of bugs in shared-memory programming models, and OpenMP programs are not immune to them. For the past several years, we have been working on developing effective tools for detecting data races in industry-scale OpenMP applications. Our tools achieve high accuracy while maintaining low runtime and memory overheads even on large applications. We achieved this by exploiting the structured parallelism of OpenMP, combining static and dynamic analysis, and carefully optimizing our tools for performance. The tools we released, namely Archer and Sword, have been used by developers in national labs to find deep-seated data races in industry-scale OpenMP applications that had been eluding them for months. In this talk, I will describe Archer and Sword, and our experience and lessons learned in building them.

- Chris Hawblitzel

The Everest project aims to verify a high-performance implementation of TLS (Transport Layer Security), including the TLS protocol and the cryptographic primitives underlying the protocol. The verification makes heavy use of an SMT solver (Z3), but programs and proofs aren't written in Z3 directly -- they are written in a higher-level language that generates queries to Z3. For Everest's assembly language code, we've recently switched from using Dafny to using F* as our higher-level language for expressing proofs. Although both Dafny and F* generate queries to Z3, F* gives programmers the flexibility to customize the generated queries. Using this flexibility, we embed a domain-specific language into F* and prove in F* that the custom query generation for this language is sound. We then use the domain-specific language to verify assembly-language implementations of cryptographic primitives used in TLS, such as AES-GCM and Poly1305. The custom queries are considerably faster than the default queries generated by Dafny and F*. This sort of customization may be useful to other domain-specific verification languages, such as those for parallel and distributed systems.
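
For flavour, here is a minimal, hypothetical bitvector query of the kind such toolchains ultimately discharge with Z3, written against the z3 Python bindings (assumed installed) and unrelated to the actual Everest or F* code.

    # Hypothetical, minimal example of the kind of bitvector query a verifier
    # might discharge with Z3 (requires the z3-solver package); it is not
    # taken from the Everest/F* development.

    from z3 import BitVec, Solver, Not, LShR

    x = BitVec("x", 64)
    n = BitVec("n", 64)

    # Claim: rotating a 64-bit word left by n (0 <= n < 64) and then
    # right by n gives back the original word.
    rotl = (x << n) | LShR(x, 64 - n)
    rotr = LShR(rotl, n) | (rotl << (64 - n))

    s = Solver()
    s.add(n >= 0, n < 64, Not(rotr == x))
    print(s.check())   # unsat: no counterexample, so the claim holds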

- Sylvain Conchon

In this talk, I will present a modular framework for model checking parameterized weak memory array-based transition systems. Our approach follows the Model Checking Modulo Theories (MCMT) architecture of Ghilardi and Ranise. It is based on a combination of three main components: a backward reachability algorithm, an SMT solver and a theory of weak memory. We have implemented this framework in Cubicle-W, an extension of the Cubicle model checker. The architecture of our tool allows us to change the underlying memory model seamlessly (TSO, PSO, …). Our first experiments with a TSO-like memory model look promising.
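
As a much-simplified, explicit-state illustration of the backward reachability component alone (Cubicle-W itself works symbolically on parameterized array-based systems), the sketch below saturates the set of states that can reach a bad state and declares the system safe if no initial state is included.

    # Much-simplified illustration (explicit-state, not Cubicle-W's symbolic
    # algorithm): backward reachability starts from the bad states and keeps
    # adding predecessors until a fixpoint; the system is safe iff no initial
    # state is ever added.

    def backward_reachable(transitions, bad):
        # transitions: set of (src, dst) pairs over a finite state space
        frontier, reached = set(bad), set(bad)
        while frontier:
            preds = {s for (s, d) in transitions if d in frontier} - reached
            reached |= preds
            frontier = preds
        return reached

    # Tiny invented example: the error state is only reachable from "broken".
    transitions = {("idle", "trying"), ("trying", "critical"),
                   ("critical", "idle"), ("broken", "double_critical")}
    bad = {"double_critical"}
    init = {"idle"}
    print((init & backward_reachable(transitions, bad)) == set())   # True: safe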

- Giorgio Delzanno

I will present current work on the formal specification and validation of Node.js, an asynchronous programming framework that is suitable for building SDKs for low-cost devices and is widely applied in the development of IoT applications.
More specifically, I will discuss foundational issues related to the operational semantics of Node.js applications, as well as validation methods based on testing and run-time verification, developed in a research project funded by the University of Genova.

- Ranjit Jhala

SMT solvers' ability to reason about equality, arithmetic, strings, sets and maps has transformed program analysis and model checking. However, SMT crucially relies on queries being restricted to the above theories, which precludes the specification and verification of properties of user-defined functions.

In this talk, we will describe Proof by Logical Evaluation (PLE), a new method that enables automatic reasoning about user-defined functions by showing how to mimic computation within SMT logics. The key idea is to represent functions in a guarded form and repeatedly unfold function calls under enabled guards. We formalize a notion of an equational proof and show that PLE is complete, i.e., it is guaranteed to find an equational proof if one exists. Furthermore, we show that PLE corresponds to a universal (or must) abstraction of the concrete semantics of the user-defined functions and hence terminates, yielding a precise and predictable means of automatically reasoning about user-defined functions.
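
A toy sketch of the unfolding idea (not the actual PLE implementation): the user-defined function is kept as guarded equations, calls are unfolded only when their guard is decided, and the unfolding steps form the equational proof.

    # Toy sketch of guarded unfolding: the function
    #   sum(n) = if n <= 0 then 0 else n + sum(n - 1)
    # is unfolded one enabled equation at a time, producing an equational
    # chain of the kind an SMT solver could check step by step.

    def unfold_sum(n, steps=None):
        steps = [] if steps is None else steps
        if n <= 0:                                        # base guard decided
            steps.append(f"sum({n}) = 0")
            return 0, steps
        steps.append(f"sum({n}) = {n} + sum({n - 1})")    # recursive guard decided
        value, steps = unfold_sum(n - 1, steps)
        return n + value, steps

    value, proof = unfold_sum(3)
    print("\n".join(proof))
    print(f"hence sum(3) = {value}")   # equational proof that sum(3) = 6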

- Josef Widder

Distributed computations are characterized by a partial order over events: two concurrent events at different processes may be re-ordered without changing the outcome of the computation. For systems that are composed of so-called communication-closed layers, this partial-order argument has been used to reduce the reasoning about distributed systems to a specific sequential form. I will discuss existing techniques for communication-closed layers, and applications to automated verification of state-of-the-art distributed systems.

- Cezara Dragoi

Communication-closed round-based models are a particular type of synchronous models that simplify the verification of fault-tolerant distributed systems.
We present a sound method to check that an asynchronous protocol is communication-closed. The verification conditions implied by this method can be automatically discharged using off-the-shelf SMT solvers or static analysers.
Provided that an asynchronous protocol is communication-closed, we define a code-to-code translation into the Heard-Of computational model, which is a communication-closed round-based model.
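
As a small illustration of the target model (a sketch, not the authors' translation), the round structure of Heard-Of style executions can be mimicked as follows: in each round every process sends, then receives exactly the messages from its heard-of set, with asynchrony and faults folded into the choice of those sets.

    # Toy sketch of a Heard-Of style round: process p receives exactly the
    # round's messages from the processes in its heard-of set HO(p), then
    # updates its state; asynchrony and faults are modelled by the HO sets.

    def run_round(states, ho, send, update):
        # send(p, state) -> message; update(p, state, received_msgs) -> new state
        msgs = {p: send(p, s) for p, s in states.items()}
        return {p: update(p, s, [msgs[q] for q in ho[p] if q in msgs])
                for p, s in states.items()}

    # One round of "adopt the minimum value you heard of".
    states = {1: 5, 2: 3, 3: 8}
    ho = {1: {1, 2}, 2: {2, 3}, 3: {1, 2, 3}}   # HO sets model message loss/delay
    new_states = run_round(states, ho,
                           send=lambda p, s: s,
                           update=lambda p, s, rcvd: min([s] + rcvd))
    print(new_states)   # {1: 3, 2: 3, 3: 3}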

- Bernhard Kragl

Synchronous programs are easy to specify because the side effects of an operation are finished by the time the invocation of the operation returns to the caller. Asynchronous programs, on the other hand, are difficult to specify because there are side effects due to pending computation scheduled as a result of the invocation of an operation. They are also difficult to verify because of the large number of possible interleavings of concurrent asynchronous computation threads. We present synchronization, a new proof rule that simplifies the verification of asynchronous programs by introducing the fiction, for proof purposes, that asynchronous operations complete synchronously. Synchronization summarizes an asynchronous computation as an immediate atomic effect. Modular verification is enabled via pending asynchronous calls (e.g., to callback procedures) in atomic summaries and a complementary proof rule that eliminates pending asynchronous calls when components and their specifications are composed. Our implementation in the context of a multi-layer refinement verification methodology showed promising utility on a collection of benchmark programs.

- Ahmed Bouajjani

Concurrent systems are in general used by their clients under strong assumptions on their visible behaviors. This allows a modular design approach: at the level of the client, these assumptions make it possible to reason in an abstract way about the behaviors of the invoked systems. For instance, the users of a shared memory may assume that the implementation of the memory is sequentially consistent, which means that it behaves according to the standard interleaving model where write/read operations are considered to be atomic and immediately visible to all parallel users. In another context, the clients of web services may consider that their requests are handled atomically in a serial way, and the clients of a network may assume that their communications are synchronous, etc. However, implementations of concurrent systems tend to parallelize operations and to relax atomicity and synchrony in order to increase the throughput of the system. This leads in general to a weakening of the guarantees of these implementations w.r.t. strong consistency models/concurrency semantics. In this talk, we will address the issue of checking that a given client program is robust against this kind of relaxation, i.e., that the observable behaviors of the client are the same under both the strong and the relaxed semantics. Robustness corresponds to a correctness criterion that ensures the preservation of all properties that can be proved assuming the strong semantics. We show that this property can be checked efficiently in several cases by linear reductions to state reachability problems. We consider in particular the cases of event-driven programs with asynchronous procedure calls, and message-passing programs communicating through FIFO channels.