Modularity for decidability of deductive verification with applications to distributed systems

Proof automation can substantially increase productivity in the formal verification of complex systems. However, the unpredictability of automated provers in handling quantified formulas presents a major hurdle to the usability of these tools. We propose to solve this problem not by improving the provers, but by using a modular proof methodology that allows us to produce decidable verification conditions. Decidability greatly improves the predictability of proof automation, resulting in a more practical verification approach. We apply this methodology to develop verified implementations of distributed protocols, demonstrating its effectiveness.
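The methodology keeps verification conditions in a decidable fragment of first-order logic (the EPR fragment, in which formulas quantify only over uninterpreted sorts and use only relations and constants), so the solver's answer is reliably yes or no rather than "unknown". The snippet below is a minimal illustrative sketch in Python with z3py, not the paper's toolchain (which is built on the Ivy language): it checks that a mutual-exclusion invariant of a toy lock protocol is preserved by a "grant" transition, with the names and encoding chosen here purely for illustration.

```python
from z3 import *

# Toy lock protocol: holds(N) says node N holds the lock.  The transition
# grant(g) gives the lock to g when nobody holds it.  We check that the
# mutual-exclusion invariant is inductive.  All formulas quantify only over
# the uninterpreted sort Node and use only relations and constants, so the
# negated verification condition falls in the decidable EPR fragment.
Node = DeclareSort('Node')
holds = Function('holds', Node, BoolSort())             # pre-state relation
holds_next = Function('holds_next', Node, BoolSort())   # post-state relation
n, n1, n2 = Consts('n n1 n2', Node)
g = Const('g', Node)                                    # node receiving the lock

def mutex(r):
    # at most one node holds the lock
    return ForAll([n1, n2], Implies(And(r(n1), r(n2)), n1 == n2))

grant = And(ForAll([n], Not(holds(n))),                 # enabled: lock is free
            ForAll([n], holds_next(n) == (n == g)))     # effect: only g holds it

s = Solver()
s.add(mutex(holds), grant, Not(mutex(holds_next)))
print(s.check())   # unsat: the invariant is preserved by the grant transition
```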

Paper


Modularity for decidability of deductive verification with applications to distributed systems (Marcelo Taube, Giuliano Losa, Kenneth L. McMillan, Oded Padon, Mooly Sagiv, Sharon Shoham, James R. Wilcox and Doug Woos), In Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2018, Philadelphia, PA, USA, June 18-22, 2018. [bibtex] [pdf] [doi]

Duality

Duality is a solver for Constrained Horn Clauses based on Craig Interpolation and Lazy Annotation. A CHC solver can be used for many purposes, including the verification of recursive programs.

Lazy Annotation is a method of software model checking that performs a backtracking search for a symbolic counterexample. When the search backtracks, the program is annotated with a learned fact, derived as a Craig interpolant, that constrains future search. In this sense, the method is closely analogous to conflict-driven clause learning in SAT solvers.

Getting Duality

Duality was integrated as a Horn Logic solver in the Z3 theorem prover up to version 4.7.1. Releases of Z3 containing Duality can be obtained from the Z3 Github site. To use Duality for a Horn logic problem in the SMTLIB2 format, use the Z3 command-line option engine=duality.
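As an illustration of the kind of problem a CHC solver handles, the sketch below encodes the safety of a simple counter loop as constrained Horn clauses over an unknown invariant, using Z3's Python API rather than an SMTLIB2 file. The encoding is a generic example rather than one taken from the Duality distribution, and current Z3 releases answer it with the Spacer engine; the older releases mentioned above also shipped Duality.

```python
from z3 import *

# Safety of the loop "x := 0; while x < 10: x := x + 1" encoded as constrained
# Horn clauses over an unknown invariant inv.  SolverFor("HORN") selects Z3's
# Horn-clause engine.
s = SolverFor("HORN")
x = Int('x')
inv = Function('inv', IntSort(), BoolSort())

s.add(ForAll([x], Implies(x == 0, inv(x))))                   # initiation
s.add(ForAll([x], Implies(And(inv(x), x < 10), inv(x + 1))))  # consecution
s.add(ForAll([x], Implies(inv(x), x <= 10)))                  # safety property

print(s.check())   # sat: some interpretation of inv is an inductive invariant
print(s.model())   # the synthesized invariant, e.g. x <= 10
```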

Paper


Lazy Annotation Revisited (Kenneth L. McMillan), In Computer Aided Verification – 26th International Conference, CAV 2014, Held as Part of the Vienna Summer of Logic, VSL 2014, Vienna, Austria, July 18-22, 2014. Proceedings, 2014. [bibtex] [pdf] [doi]

Formal specification and testing of QUIC

QUIC is a new Internet secure transport protocol currently in the process of IETF standardization. It is intended as a replacement for the TLS/TCP stack and will be the basis of HTTP/3, the next official version of the hypertext transfer protocol. As a result, it is likely, in the near future, to carry a substantial fraction of traffic on the Internet. We describe our experience applying a methodology of compositional specification-based testing to QUIC. We develop a formal specification of the wire protocol, and use this specification to generate automated randomized testers for implementations of QUIC. The testers effectively take one role of the QUIC protocol, interacting with the other role to generate full protocol executions, and verifying that the implementations conform to the formal specification. This form of testing generates significantly more diverse stimuli and stronger correctness criteria than interoperability testing, the primary method used to date to validate QUIC and its implementations. As a result, numerous implementation errors have been found. These include some vulnerabilities at the protocol and implementation levels, such as an off-path denial of service scenario and an information leak similar to the “heartbleed” vulnerability in OpenSSL.
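The testers described above are generated from the formal specification itself; purely as an illustration of their shape, the hypothetical Python sketch below plays the client role against a server implementation. Everything named here, in particular random_client_packet, spec_allows, and the abstract protocol state, is a placeholder introduced for this sketch, not an artifact of the actual tool.

```python
import random
import socket

# Hypothetical sketch of the testing loop: the tester plays the client role,
# sends randomized stimuli, and checks every server response against a
# predicate derived from the formal specification.

def random_client_packet(state):
    """Return random bytes that the specification permits the client to send
    in the current abstract protocol state (placeholder)."""
    ...

def spec_allows(state, packet):
    """Return True if the specification permits the server to send `packet`
    in the current abstract protocol state (placeholder)."""
    ...

def run_test(server_addr, iterations=1000, seed=0):
    random.seed(seed)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # QUIC runs over UDP
    sock.settimeout(0.5)
    state = {}                           # abstract state tracked by the tester
    for _ in range(iterations):
        sock.sendto(random_client_packet(state), server_addr)
        try:
            response, _ = sock.recvfrom(65535)
        except socket.timeout:
            continue                     # no response; send the next stimulus
        assert spec_allows(state, response), "implementation violated the specification"
```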

Paper

Formal specification and testing of QUIC (Kenneth L. McMillan and Lenore D. Zuck), In Proceedings of ACM Special Interest Group on Data Communication (SIGCOMM’19), ACM, 2019. [bibtex] [pdf] [slides] [video]