Keynote Talk: Safety Verification of Deep Neural Networks

Marta Kwiatkowska
University of Oxford (UK)

Biography:

Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. Prior to this she was Professor in the School of Computer Science at the University of Birmingham, Lecturer at the University of Leicester and Assistant Professor at the Jagiellonian University in Cracow, Poland. Kwiatkowska has made fundamental contributions to the theory and practice of model checking for probabilistic systems, focusing on automated techniques for verification and synthesis from quantitative specifications. She led the development of the PRISM model checker (www.prismmodelchecker.org), the leading software tool in the area and winner of the HVC Award 2016. Probabilistic model checking has been adopted in many diverse fields, including distributed computing, wireless networks, security, robotics, game theory, systems biology, DNA computing and nanotechnology, with genuine flaws found and corrected in real-world protocols. Kwiatkowska was awarded an honorary doctorate from KTH Royal Institute of Technology in Stockholm in 2014 and the Royal Society Milner Medal in 2018. Her recent work was supported by the ERC Advanced Grant VERIWARE "From software verification to 'everyware' verification" and the EPSRC Programme Grant on Mobile Autonomy. She is a Fellow of ACM and Member of Academia Europaea.

Abstract:

Deep neural networks have achieved impressive experimental results in image classification, but can, surprisingly, be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. This lecture will describe progress in developing a novel automated verification framework for deep neural networks to ensure the safety of their classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions, which should not affect the classification. The techniques work directly with the network code and, in contrast to existing methods, can offer guarantees that adversarial examples are found if they exist. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques for searching for adversarial examples.


Dates (New)

February 04, 2018: Abstract submission deadline

February 11, 2018: Paper submission deadline

March 26, 2018: Acceptance notification

April 04, 2018: Camera ready copy due

Proceedings

Springer LNCS

Revised selected papers will be published as post-proceedings in Springer's Lecture Notes in Computer Science (LNCS) series.

Partners & Sponsors

NETYS