Time: Saturday, July 4, 2020, 20:00-21:00
Venue: Tencent Meeting, ID: 991 217 002
Abstract: Deep neural networks (DNNs) have been applied in safety-critical domains such as self-driving cars, aircraft collision avoidance systems, and malware detection. In such scenarios, it is important to give a safety guarantee for the robustness property, namely that outputs are invariant under small perturbations of the inputs. For this purpose, several algorithms and tools have been developed recently. In this talk, we present PRODeep, a platform for robustness verification of DNNs. PRODeep incorporates constraint-based, abstraction-based, and optimisation-based robustness checking algorithms. Its modular architecture enables easy comparison of different algorithms. With experimental results, we illustrate the use of the tool and the easy combination of these techniques.
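To make the robustness property concrete, the following is a minimal sketch (not PRODeep's API) of the abstraction-based flavour of checking: interval bound propagation through a tiny two-layer ReLU network. The network weights, the input point, the epsilon, and all function names here are illustrative assumptions, not part of the tool.

```python
# Minimal sketch of abstraction-based robustness checking via interval
# bound propagation (IBP). NOT PRODeep's API; all weights and names are
# illustrative assumptions for a tiny 2-layer ReLU network.

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through the affine map y = W x + b.

    For each output neuron, the lower bound picks lo[j] where the weight
    is non-negative and hi[j] where it is negative (and vice versa for
    the upper bound).
    """
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it can be applied to each bound directly."""
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]

def certify_robust(x, eps, W1, b1, W2, b2, label):
    """Return True if the predicted label provably cannot change for any
    input in the L-infinity ball of radius eps around x (sound but
    incomplete: False means 'could not certify', not 'attack exists')."""
    lo = [xi - eps for xi in x]
    hi = [xi + eps for xi in x]
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = relu_interval(lo, hi)
    lo, hi = interval_affine(lo, hi, W2, b2)
    # Robust if the target logit's lower bound beats every other upper bound.
    return all(lo[label] > hi[k] for k in range(len(lo)) if k != label)

# Toy network (identity hidden layer, then a difference of the two inputs).
W1, b1 = [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]
W2, b2 = [[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]

print(certify_robust([1.0, 0.0], 0.1, W1, b1, W2, b2, label=0))  # True
print(certify_robust([1.0, 0.0], 1.0, W1, b1, W2, b2, label=0))  # False
```

A small perturbation budget is certified, while a large one is not: the interval abstraction over-approximates the reachable outputs, so the check is sound but may fail to certify inputs that are in fact robust. Constraint-based methods (e.g. SMT/MILP encodings) trade this imprecision for higher cost, which is exactly the kind of comparison a modular platform makes easy.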
Speaker bio: Lijun Zhang is a research professor at the State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, and a visiting professor at Shenzhen University. Before that, he was an associate professor in the Language-Based Technology section of DTU Compute, Technical University of Denmark, and earlier a postdoctoral researcher at the University of Oxford. He received a Diploma degree and a PhD (Dr.-Ing.) from Saarland University. His research interests include probabilistic models, simulation reduction, decision algorithms for probabilistic simulation preorders, abstraction, and model checking. Recently, he has been working on combining automata learning techniques with model checking. He leads the development of the model checker IscasMC.