Verification here concerns understanding the bounds of safe operation of learning-enabled components (LECs). The approach requires developing theoretical analysis and numerical computation techniques that estimate polygonal approximations of an LEC's outputs given bounds on its inputs. These methods also contribute to the design of runtime reachable-set computations that calculate safe operating regions for an autonomous vehicle over a given bounded time interval. Please see the following publications to learn more about this research area.
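As an illustration of the kind of runtime reachable-set computation described above, the sketch below propagates an axis-aligned box through a discretized linear system over a bounded horizon. The dynamics matrix, step size, and function names are illustrative placeholders, not drawn from any of the publications below.

```python
import numpy as np

def box_step(lb, ub, A, dt):
    """One discrete-time step of x' = (I + dt*A) x on a box, splitting the
    step matrix by sign so the result over-approximates the true image."""
    M = np.eye(len(lb)) + dt * A
    Mp, Mn = np.maximum(M, 0.0), np.minimum(M, 0.0)
    return Mp @ lb + Mn @ ub, Mp @ ub + Mn @ lb

def reach_over_horizon(lb, ub, A, dt, steps):
    """Boxes enclosing the state at each step of a bounded time horizon."""
    boxes = [(lb, ub)]
    for _ in range(steps):
        lb, ub = box_step(lb, ub, A, dt)
        boxes.append((lb, ub))
    return boxes
```

The sign-split of the step matrix is the standard interval-arithmetic trick for mapping a box through a linear map soundly.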
Publications
D. M. Lopez, T. T. Johnson, S. Bak, H.-D. Tran, and K. Hobbs, Evaluation of Neural Network Verification Methods for Air to Air Collision Avoidance, AIAA Journal of Air Transportation (JAT), Oct. 2022.
@article{lopez2022jat,
author = {Lopez, Diego Manzanas and Johnson, Taylor T. and Bak, Stanley and Tran, Hoang-Dung and Hobbs, Kerianne},
journal = {AIAA Journal of Air Transportation (JAT)},
title = {Evaluation of Neural Network Verification Methods for Air to Air Collision Avoidance},
year = {2022},
month = oct,
tag = {v},
pdf = {research/lopez2022jat.pdf}
}
D. Manzanas Lopez, P. Musau, N. P. Hamilton, and T. T. Johnson, Reachability Analysis of a General Class of Neural Ordinary Differential Equations, in Formal Modeling and Analysis of Timed Systems, Cham, 2022, pp. 258–277.
@inproceedings{manzanas2022formats,
author = {Manzanas Lopez, Diego and Musau, Patrick and Hamilton, Nathaniel P. and Johnson, Taylor T.},
editor = {Bogomolov, Sergiy and Parker, David},
title = {Reachability Analysis of a General Class of Neural Ordinary Differential Equations},
booktitle = {Formal Modeling and Analysis of Timed Systems},
year = {2022},
tag = {v},
publisher = {Springer International Publishing},
address = {Cham},
pages = {258--277},
isbn = {978-3-031-15839-1}
}
Continuous deep learning models, referred to as Neural Ordinary Differential Equations (Neural ODEs), have received considerable attention over the last several years. Despite their burgeoning impact, there is a lack of formal analysis techniques for these systems. In this paper, we consider a general class of neural ODEs with varying architectures and layers, and introduce a novel reachability framework that allows for the formal analysis of their behavior. The methods developed for the reachability analysis of neural ODEs are implemented in a new tool called NNVODE. Specifically, our work extends an existing neural network verification tool to support neural ODEs. We demonstrate the capabilities and efficacy of our methods by analyzing a set of benchmarks that include neural ODEs used for classification and in control and dynamical systems, including, where possible, a comparison with existing software tools from the continuous-time reachability literature.
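For readers unfamiliar with the object being analyzed: a neural ODE is simply an ODE whose right-hand side is a neural network. The toy sketch below (a one-layer tanh network integrated with fixed-step Euler; all weights illustrative) shows the kind of system NNVODE's reachability methods reason about, not NNVODE's API.

```python
import numpy as np

def neural_ode_rhs(x, W, b):
    """A tiny 'neural ODE': dx/dt is given by a one-layer tanh network."""
    return np.tanh(W @ x + b)

def simulate(x0, W, b, t_final, dt=0.01):
    """Fixed-step Euler integration of the neural ODE from x0."""
    x = np.array(x0, dtype=float)
    for _ in range(int(round(t_final / dt))):
        x = x + dt * neural_ode_rhs(x, W, b)
    return x
```

Reachability analysis replaces the single point `x0` here with a set of initial states and bounds all resulting trajectories.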
X. Yang, T. Yamaguchi, H.-D. Tran, B. Hoxha, T. T. Johnson, and D. Prokhorov, Neural Network Repair with Reachability Analysis, in Formal Modeling and Analysis of Timed Systems, Cham, 2022, pp. 221–236.
@inproceedings{yang2022formats,
author = {Yang, Xiaodong and Yamaguchi, Tom and Tran, Hoang-Dung and Hoxha, Bardh and Johnson, Taylor T. and Prokhorov, Danil},
editor = {Bogomolov, Sergiy and Parker, David},
title = {Neural Network Repair with Reachability Analysis},
booktitle = {Formal Modeling and Analysis of Timed Systems},
year = {2022},
tag = {v},
publisher = {Springer International Publishing},
address = {Cham},
pages = {221--236},
isbn = {978-3-031-15839-1}
}
Safety is a critical concern for the next generation of autonomy that is likely to rely heavily on deep neural networks for perception and control. This paper proposes a method to repair unsafe ReLU DNNs in safety-critical systems using reachability analysis. Our repair method uses reachability analysis to calculate the unsafe reachable domain of a DNN, and then uses a novel loss function to construct its distance to the safe domain during the retraining process. Since subtle changes of the DNN parameters can cause unexpected performance degradation, we also present a minimal repair approach where the DNN deviation is minimized. Furthermore, we explore applications of our method to repair DNN agents in deep reinforcement learning (DRL) with seamless integration with learning algorithms. Our method is evaluated on the ACAS Xu benchmark and a rocket lander system against the state-of-the-art method ART. Experimental results show that our repair approach can generate provably safe DNNs on multiple safety specifications with negligible performance degradation, even in the absence of training data (Code is available online at https://github.com/Shaddadi/veritex.git).
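The repair loss in the paper is built from the reachable set's distance to the safe domain. As a much-simplified, hedged stand-in, the snippet below penalizes sampled outputs by their hinge distance to a safe half-space {y : y <= c}; the function name and half-space specification are illustrative, not the paper's actual loss.

```python
import numpy as np

def safety_repair_loss(outputs, safe_upper):
    """Hinge-style distance from network outputs to the safe half-space
    {y : y <= safe_upper}; zero iff every output is already safe."""
    violation = np.maximum(outputs - safe_upper, 0.0)
    return float(np.mean(violation))
```

Adding such a term to the training objective drives retraining toward networks whose outputs re-enter the safe domain, which is the intuition behind the reachability-guided repair above.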
T. Bao, S. Chen, T. T. Johnson, P. Givi, S. Sammak, and X. Jia, Physics Guided Neural Networks for Spatio-temporal Super-resolution of Turbulent Flows, in The 38th Conference on Uncertainty in Artificial Intelligence, 2022.
@inproceedings{bao2022physics,
title = {Physics Guided Neural Networks for Spatio-temporal Super-resolution of Turbulent Flows},
author = {Bao, Tianshu and Chen, Shengyu and Johnson, Taylor T. and Givi, Peyman and Sammak, Shervin and Jia, Xiaowei},
booktitle = {The 38th Conference on Uncertainty in Artificial Intelligence},
year = {2022},
tag = {v},
url = {https://openreview.net/forum?id=S98VJL8jcxq}
}
N. Hamilton, P. K. Robinette, and T. T. Johnson, Training Agents to Satisfy Timed and Untimed Signal Temporal Logic Specifications with Reinforcement Learning, in Software Engineering and Formal Methods, Cham, 2022, pp. 190–206.
@inproceedings{hamilton2022sefm,
author = {Hamilton, Nathaniel and Robinette, Preston K. and Johnson, Taylor T.},
editor = {Schlingloff, Bernd-Holger and Chai, Ming},
title = {Training Agents to Satisfy Timed and Untimed Signal Temporal Logic Specifications with Reinforcement Learning},
booktitle = {Software Engineering and Formal Methods},
year = {2022},
tag = {v},
publisher = {Springer International Publishing},
address = {Cham},
pages = {190--206},
isbn = {978-3-031-17108-6}
}
Reinforcement Learning (RL) depends critically on how reward functions are designed to capture intended behavior. However, traditional approaches are unable to represent temporal behavior, such as “do task 1 before doing task 2.” Even when they can represent temporal behavior, these reward functions are handcrafted by researchers and often require long hours of trial and error to shape just right to get the desired behavior. In these cases, the desired behavior is already known; the problem is generating a reward function to train the RL agent to satisfy that behavior. To address this issue, we present our approach for automatically converting timed and untimed specifications into a reward function, which has been implemented as the tool STLGym. In this work, we show how STLGym can be used to train RL agents to satisfy specifications better than traditional approaches and to refine learned behavior to better match the specification.
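The specification-to-reward conversion rests on the quantitative (robustness) semantics of Signal Temporal Logic, in which a trace earns a signed score that is positive exactly when the specification is satisfied. A minimal sketch for two untimed operators over a scalar trace follows; the function names are illustrative and this is not STLGym's interface.

```python
def robustness_eventually(trace, c):
    """Quantitative semantics of F(x > c): positive iff some sample exceeds c."""
    return max(x - c for x in trace)

def robustness_always(trace, c):
    """Quantitative semantics of G(x > c): positive iff every sample exceeds c."""
    return min(x - c for x in trace)
```

Using such robustness scores directly as episode rewards lets an RL agent be trained toward satisfying the specification rather than a hand-shaped proxy.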
N. Hamilton, P. Musau, D. M. Lopez, and T. T. Johnson, Zero-Shot Policy Transfer in Autonomous Racing: Reinforcement Learning vs Imitation Learning, in 2022 IEEE International Conference on Assured Autonomy (ICAA), 2022, pp. 11–20.
@inproceedings{hamilton2022icaa,
author = {Hamilton, Nathaniel and Musau, Patrick and Lopez, Diego Manzanas and Johnson, Taylor T.},
booktitle = {2022 IEEE International Conference on Assured Autonomy (ICAA)},
title = {Zero-Shot Policy Transfer in Autonomous Racing: Reinforcement Learning vs Imitation Learning},
year = {2022},
tag = {v},
pages = {11--20},
doi = {10.1109/ICAA52185.2022.00011},
pdf = {research/hamilton2022icaa.pdf}
}
P. Musau, N. Hamilton, D. M. Lopez, P. Robinette, and T. T. Johnson, On Using Real-Time Reachability for the Safety Assurance of Machine Learning Controllers, in 2022 IEEE International Conference on Assured Autonomy (ICAA), 2022, pp. 1–10.
@inproceedings{musau2022icaa,
author = {Musau, Patrick and Hamilton, Nathaniel and Lopez, Diego Manzanas and Robinette, Preston and Johnson, Taylor T.},
booktitle = {2022 IEEE International Conference on Assured Autonomy (ICAA)},
title = {On Using Real-Time Reachability for the Safety Assurance of Machine Learning Controllers},
year = {2022},
tag = {v},
pages = {1--10},
doi = {10.1109/ICAA52185.2022.00010},
pdf = {research/musau2022icaa.pdf}
}
B. Serbinowski and T. T. Johnson, BehaVerify: Verifying Temporal Logic Specifications for Behavior Trees, in Software Engineering and Formal Methods, Cham, 2022, pp. 307–323.
@inproceedings{serbinowski2022sefm,
author = {Serbinowski, Bernard and Johnson, Taylor T.},
editor = {Schlingloff, Bernd-Holger and Chai, Ming},
title = {BehaVerify: Verifying Temporal Logic Specifications for Behavior Trees},
booktitle = {Software Engineering and Formal Methods},
year = {2022},
publisher = {Springer International Publishing},
address = {Cham},
tag = {v},
pages = {307--323},
isbn = {978-3-031-17108-6}
}
Behavior Trees, which originated in video games as a method for controlling NPCs but have since gained traction within the robotics community, are a framework for describing the execution of a task. BehaVerify is a tool that creates a nuXmv model from a py_tree. For composite nodes, which are standardized, this process is automatic and requires no additional user input. A wide variety of leaf nodes are automatically supported and require no additional user input, but customized leaf nodes require additional user input to be correctly modeled; BehaVerify can provide a template to make this easier. BehaVerify is able to create a nuXmv model with over 100 nodes, and nuXmv was able to verify various non-trivial LTL properties on this model, both directly and via counterexample. The model in question features parallel, selector, and sequence nodes. A comparison with models based on BTCompiler indicates that the models created by BehaVerify perform better.
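For readers unfamiliar with the composite nodes mentioned above, the sketch below gives the usual tick semantics of sequence and selector nodes that a tool like BehaVerify must model; this is a minimal rendering of the standard semantics, not BehaVerify's internal representation.

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

def tick_sequence(children, blackboard):
    """Sequence: tick children left to right; return the first non-success
    status, and succeed only if every child succeeds."""
    for child in children:
        status = child(blackboard)
        if status != SUCCESS:
            return status
    return SUCCESS

def tick_selector(children, blackboard):
    """Selector (fallback): return the first non-failure status, and fail
    only if every child fails."""
    for child in children:
        status = child(blackboard)
        if status != FAILURE:
            return status
    return FAILURE
```

Because these semantics are fixed, a verifier can translate composite nodes to a model checker's input language mechanically; only custom leaf nodes need user-supplied models.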
H.-D. Tran, W. Xiang, and T. T. Johnson, Verification Approaches for Learning-Enabled Autonomous Cyber–Physical Systems, IEEE Design & Test, vol. 39, no. 1, pp. 24–34, 2022.
@article{TranIEEE2022,
author = {Tran, Hoang-Dung and Xiang, Weiming and Johnson, Taylor T.},
journal = {IEEE Design \& Test},
title = {Verification Approaches for Learning-Enabled Autonomous Cyber–Physical Systems},
year = {2022},
volume = {39},
tag = {v},
number = {1},
pages = {24--34},
doi = {10.1109/MDAT.2020.3015712}
}
Editor’s notes: Neural network control systems are often at the heart of autonomous systems. The authors classify existing verification methods for these systems and advocate the necessity of integrating verification techniques in the training process to enhance robustness. — Selma Saidi, TU Dortmund
T. T. Johnson, D. M. Lopez, L. Benet, M. Forets, S. Guadalupe, C. Schilling, R. Ivanov, T. J. Carpenter, J. Weimer, and I. Lee, ARCH-COMP21 Category Report: Artificial Intelligence and Neural Network Control Systems (AINNCS) for Continuous and Hybrid Systems Plants, in 8th International Workshop on Applied Verification of Continuous and Hybrid Systems (ARCH21), 2021, vol. 80, pp. 90–119.
@inproceedings{Johnson_ARCH_COMP21_Category_Report_Artificial,
author = {Johnson, Taylor T. and Lopez, Diego Manzanas and Benet, Luis and Forets, Marcelo and Guadalupe, Sebasti{\'a}n and Schilling, Christian and Ivanov, Radoslav and Carpenter, Taylor J. and Weimer, James and Lee, Insup},
title = {ARCH-COMP21 Category Report: Artificial Intelligence and Neural Network Control Systems (AINNCS) for Continuous and Hybrid Systems Plants},
booktitle = {8th International Workshop on Applied Verification of Continuous and Hybrid Systems (ARCH21)},
editor = {Frehse, Goran and Althoff, Matthias},
series = {EPiC Series in Computing},
volume = {80},
tag = {v},
pages = {90--119},
year = {2021},
publisher = {EasyChair},
bibsource = {EasyChair, https://easychair.org},
issn = {2398-7340},
url = {https://easychair.org/publications/paper/Jq4h},
doi = {10.29007/kfk9}
}
This report presents the results of a friendly competition for formal verification of continuous and hybrid systems with artificial intelligence (AI) components. Specifically, machine learning (ML) components in cyber-physical systems (CPS), such as feedforward neural networks used as feedback controllers in closed-loop systems are considered, which is a class of systems classically known as intelligent control systems, or in more modern and specific terms, neural network control systems (NNCS). We more broadly refer to this category as AI and NNCS (AINNCS). The friendly competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2021. In the third edition of this AINNCS category at ARCH-COMP, three tools have been applied to solve seven different benchmark problems, (in alphabetical order): JuliaReach, NNV, and Verisig. JuliaReach is a new participant in this category, Verisig participated previously in 2019 and NNV has participated in all previous competitions. This report is a snapshot of the current landscape of tools and the types of benchmarks for which these tools are suited. Due to the diversity of problems, lack of a shared hardware platform, and the early stage of the competition, we are not ranking tools in terms of performance, yet the presented results combined with 2020 results probably provide the most complete assessment of current tools for safety verification of NNCS.
T. T. Johnson, ARCH-COMP21 Repeatability Evaluation Report, in 8th International Workshop on Applied Verification of Continuous and Hybrid Systems (ARCH21), 2021, vol. 80, pp. 153–160.
@inproceedings{Johnson_ARCH_COMP21_Repeatability_Evaluation_Report,
author = {Johnson, Taylor T.},
title = {ARCH-COMP21 Repeatability Evaluation Report},
booktitle = {8th International Workshop on Applied Verification of Continuous and Hybrid Systems (ARCH21)},
editor = {Frehse, Goran and Althoff, Matthias},
series = {EPiC Series in Computing},
volume = {80},
tag = {v},
pages = {153--160},
year = {2021},
publisher = {EasyChair},
bibsource = {EasyChair, https://easychair.org},
issn = {2398-7340},
url = {https://easychair.org/publications/paper/cfpN},
doi = {10.29007/zqdx}
}
This report presents the results of the repeatability evaluation for the 5th International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP’21). The competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2021, affiliated with the 7th IFAC Conference on Analysis and Design of Hybrid Systems (ADHS’21). In its fifth edition, seventeen tools submitted artifacts through a Git repository for the repeatability evaluation, applied to solve benchmark problems for seven competition categories. The majority of participants adhered to the requirements for this year’s repeatability evaluation, namely to submit scripts to automatically install and execute tools in containerized virtual environments (specifically Dockerfiles to execute within Docker), and several categories used performance evaluation information from a common execution platform. The repeatability results represent a snapshot of the current landscape of tools and the types of benchmarks for which they are particularly suited and for which others may repeat their analyses. Due to the diversity of problems in verification of continuous and hybrid systems, as well as basing on standard practice in repeatability evaluations, we evaluate the tools with pass and/or failing being repeatable.
S. Bak, C. Liu, and T. T. Johnson, The Second International Verification of Neural Networks Competition (VNN-COMP 2021): Summary and Results, CoRR, vol. abs/2109.00498, 2021.
@article{Bak2021VNNCOMP,
author = {Bak, Stanley and Liu, Changliu and Johnson, Taylor T.},
title = {The Second International Verification of Neural Networks Competition {(VNN-COMP} 2021): Summary and Results},
journal = {CoRR},
volume = {abs/2109.00498},
year = {2021},
url = {https://arxiv.org/abs/2109.00498},
eprinttype = {arXiv},
eprint = {2109.00498},
tag = {v},
timestamp = {Mon, 20 Sep 2021 16:29:41 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2109-00498.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
This report summarizes the second International Verification of Neural Networks Competition (VNN-COMP 2021), held as a part of the 4th Workshop on Formal Methods for ML-Enabled Autonomous Systems that was collocated with the 33rd International Conference on Computer-Aided Verification (CAV). Twelve teams participated in this competition. The goal of the competition is to provide an objective comparison of the state-of-the-art methods in neural network verification, in terms of scalability and speed. Along this line, we used standard formats (ONNX for neural networks and VNNLIB for specifications), standard hardware (all tools are run by the organizers on AWS), and tool parameters provided by the tool authors. This report summarizes the rules, benchmarks, participating tools, results, and lessons learned from this competition.
W. Xiang, H.-D. Tran, X. Yang, and T. T. Johnson, Reachable Set Estimation for Neural Network Control Systems: A Simulation-Guided Approach, IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 5, pp. 1821–1830, 2021.
@article{XiangTNNLS2021,
author = {Xiang, Weiming and Tran, Hoang-Dung and Yang, Xiaodong and Johnson, Taylor T.},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
title = {Reachable Set Estimation for Neural Network Control Systems: A Simulation-Guided Approach},
year = {2021},
volume = {32},
tag = {v},
number = {5},
pages = {1821--1830},
doi = {10.1109/TNNLS.2020.2991090}
}
The vulnerability of artificial intelligence (AI) and machine learning (ML) against adversarial disturbances and attacks significantly restricts their applicability in safety-critical systems including cyber-physical systems (CPS) equipped with neural network components at various stages of sensing and control. This article addresses the reachable set estimation and safety verification problems for dynamical systems embedded with neural network components serving as feedback controllers. The closed-loop system can be abstracted in the form of a continuous-time sampled-data system under the control of a neural network controller. First, a novel reachable set computation method in adaptation to simulations generated out of neural networks is developed. The reachability analysis of a class of feedforward neural networks called multilayer perceptrons (MLPs) with general activation functions is performed in the framework of interval arithmetic. Then, in combination with reachability methods developed for various dynamical system classes modeled by ordinary differential equations, a recursive algorithm is developed for over-approximating the reachable set of the closed-loop system. The safety verification for neural network control systems can be performed by examining the emptiness of the intersection between the over-approximation of reachable sets and unsafe sets. The effectiveness of the proposed approach has been validated with evaluations on a robotic arm model and an adaptive cruise control system.
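The interval-arithmetic analysis of MLPs described in the abstract can be sketched as follows: split each weight matrix by sign so that a box of inputs maps to a sound box over-approximation of each layer's output. The weights below are illustrative; the paper refines this basic scheme with simulation guidance.

```python
import numpy as np

def interval_affine(lb, ub, W, b):
    """Propagate the box [lb, ub] through x -> W @ x + b soundly."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lb + W_neg @ ub + b, W_pos @ ub + W_neg @ lb + b

def interval_relu(lb, ub):
    """ReLU is monotone, so it maps the box bounds directly."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

def reach_mlp(lb, ub, layers):
    """Over-approximate the output range of a ReLU MLP on a box input."""
    for i, (W, b) in enumerate(layers):
        lb, ub = interval_affine(lb, ub, W, b)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lb, ub = interval_relu(lb, ub)
    return lb, ub
```

Safety verification then reduces to checking that the output box does not intersect the unsafe set, exactly the emptiness test described above.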
H.-D. Tran, N. Pal, P. Musau, D. M. Lopez, N. Hamilton, X. Yang, S. Bak, and T. T. Johnson, Robustness Verification of Semantic Segmentation Neural Networks Using Relaxed Reachability, in Computer Aided Verification, Cham, 2021, pp. 263–286.
@inproceedings{Tran2021CAVSegmentation,
author = {Tran, Hoang-Dung and Pal, Neelanjana and Musau, Patrick and Lopez, Diego Manzanas and Hamilton, Nathaniel and Yang, Xiaodong and Bak, Stanley and Johnson, Taylor T.},
editor = {Silva, Alexandra and Leino, K. Rustan M.},
title = {Robustness Verification of Semantic Segmentation Neural Networks Using Relaxed Reachability},
booktitle = {Computer Aided Verification},
year = {2021},
publisher = {Springer International Publishing},
address = {Cham},
tag = {v},
pages = {263--286},
isbn = {978-3-030-81685-8}
}
This paper introduces robustness verification for semantic segmentation neural networks (in short, semantic segmentation networks [SSNs]), building on and extending recent approaches for robustness verification of image classification neural networks. Despite recent progress in developing verification methods for specifications such as local adversarial robustness in deep neural networks (DNNs) in terms of scalability, precision, and applicability to different network architectures, layers, and activation functions, robustness verification of semantic segmentation has not yet been considered. We address this limitation by developing and applying new robustness analysis methods for several segmentation neural network architectures, specifically by addressing reachability analysis of up-sampling layers, such as transposed convolution and dilated convolution. We consider several definitions of robustness for segmentation, such as the percentage of pixels in the output that can be proven robust under different adversarial perturbations, and a robust variant of intersection-over-union (IoU), the typical performance evaluation measure for segmentation tasks. Our approach is based on a new relaxed reachability method, allowing users to select the percentage of the linear programming problems (LPs) solved when constructing the reachable set, via a relaxation factor. The approach is implemented within NNV, then applied and evaluated on segmentation datasets, such as a multi-digit variant of MNIST known as M2NIST. Thorough experiments show that by using transposed convolution for up-sampling and average-pooling for down-sampling, combined with minimizing the number of ReLU layers in the SSNs, we can obtain SSNs with not only high accuracy (IoU), but also that are more robust to adversarial attacks and amenable to verification. Additionally, using our new relaxed reachability method, we can significantly reduce the verification time for neural networks whose ReLU layers dominate the total analysis time, even in classification tasks.
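To make the robustness measures concrete, the sketch below computes plain IoU of two pixel masks and a pessimistic "robust" variant in which any pixel not proven robust counts as misclassified. The paper's exact robust-IoU definition may differ, so take this as a hedged reading rather than the paper's formula.

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union of two boolean pixel masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def robust_iou(pred, gt, robust_mask):
    """Pessimistic IoU: predicted pixels not proven robust are dropped,
    so the score is a lower bound on IoU under the considered perturbations."""
    return iou(np.logical_and(pred, robust_mask), gt)
```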
J. A. Rosenfeld, B. P. Russo, R. Kamalapurkar, and T. T. Johnson, The Occupation Kernel Method for Nonlinear System Identification, arXiv: Optimization and Control, vol. abs/1909.11792, 2021.
@article{Rosenfeld2019TheOK,
title = {The Occupation Kernel Method for Nonlinear System Identification},
author = {Rosenfeld, Joel A. and Russo, Benjamin P. and Kamalapurkar, Rushikesh and Johnson, Taylor T.},
journal = {arXiv: Optimization and Control},
volume = {abs/1909.11792},
year = {2021},
url = {https://arxiv.org/abs/1909.11792},
eprinttype = {arXiv},
eprint = {1909.11792},
tag = {v},
timestamp = {Sat, 23 Jan 2021 01:20:54 +0100}
}
This manuscript presents a novel approach to nonlinear system identification leveraging densely defined Liouville operators and a new "kernel" function, dubbed an occupation kernel, that represents an integration functional over a reproducing kernel Hilbert space (RKHS). The manuscript thoroughly explores the concept of occupation kernels in the context of RKHSs of continuous functions, and establishes Liouville operators over RKHSs, where several dense domains are found for specific examples of this unbounded operator. The combination of these two concepts allows for the embedding of a dynamical system into an RKHS, where function-theoretic tools may be leveraged for the examination of such systems. This framework allows trajectories of a nonlinear dynamical system to be treated as a fundamental unit of data for a nonlinear system identification routine. The approach is demonstrated to identify parameters of a dynamical system accurately, while also exhibiting a certain robustness to noise.
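In symbols, the central object can be stated as follows (a standard rendering consistent with the abstract; the notation is ours): for a trajectory $\theta\colon[0,T]\to\mathbb{R}^n$ and an RKHS $H$ of continuous functions, the occupation kernel $\Gamma_\theta \in H$ is the representer of integration along $\theta$,

```latex
\left\langle f, \Gamma_{\theta} \right\rangle_{H}
  \;=\; \int_{0}^{T} f(\theta(t))\,dt
  \qquad \text{for all } f \in H,
```

so the trajectory itself becomes an element of $H$ against which candidate dynamics can be tested.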
X. Yang, O. A. Beg, M. Kenigsberg, and T. T. Johnson, A Framework for Identification and Validation of Affine Hybrid Automata from Input-Output Traces, ACM Trans. Cyber-Phys. Syst., Jun. 2021.
@article{Yang2021,
author = {Yang, Xiaodong and Beg, Omar Ali and Kenigsberg, Matthew and Johnson, Taylor T.},
title = {A Framework for Identification and Validation of Affine Hybrid Automata from Input-Output Traces},
year = {2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
issn = {2378-962X},
url = {https://doi.org/10.1145/3470455},
doi = {10.1145/3470455},
note = {Just Accepted},
tag = {v},
journal = {ACM Trans. Cyber-Phys. Syst.},
month = jun,
keywords = {System identification, Automata learning, Hybrid systems}
}
Automata-based modeling of hybrid and cyber-physical systems (CPS) is an important formal abstraction amenable to algorithmic analysis of its dynamic behaviors, such as in verification, fault identification, and anomaly detection. However, for realistic systems, especially industrial ones, identifying hybrid automata is challenging, due in part to inferring hybrid interactions, which involves inference of both continuous behaviors, such as through classical system identification, as well as discrete behaviors, such as through automata (e.g., L*) learning. In this paper, we propose and evaluate a framework for inferring and validating models of deterministic hybrid systems with linear ordinary differential equations (ODEs) from input/output execution traces. The framework contains algorithms for the approximation of continuous dynamics in discrete modes, estimation of transition conditions, and the inference of automata mode merging. The algorithms are capable of clustering trace segments and estimating their dynamic parameters, and meanwhile, deriving guard conditions that are represented by multiple linear inequalities. Finally, the inferred model is automatically converted to the format of the original system for the validation. We demonstrate the utility of this framework by evaluating its performance in several case studies as implemented through a publicly available prototype software framework called HAutLearn and compare it with a membership-based algorithm.
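The continuous-dynamics estimation step inside such a framework can be sketched as an ordinary least-squares fit of affine dynamics to one trace segment, using forward differences as derivative estimates. This is an illustrative simplification, not HAutLearn's actual algorithm.

```python
import numpy as np

def fit_affine_dynamics(xs, dt):
    """Least-squares fit of x_dot = A x + c from a sampled trajectory,
    with forward differences as derivative estimates."""
    xs = np.asarray(xs, dtype=float)
    xdot = (xs[1:] - xs[:-1]) / dt                        # derivative estimates
    X = np.hstack([xs[:-1], np.ones((len(xs) - 1, 1))])   # regressors [x, 1]
    theta, *_ = np.linalg.lstsq(X, xdot, rcond=None)
    n = xs.shape[1]
    return theta[:n].T, theta[n]                          # A, c
```

In a full identification pipeline, segments are first clustered by mode; a fit like this is then performed per mode, and guard conditions are estimated at the segment boundaries.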
H.-D. Tran, D. M. Lopez, X. Yang, P. Musau, L. V. Nguyen, W. Xiang, S. Bak, and T. T. Johnson, Demo: The Neural Network Verification (NNV) Tool, in 2020 IEEE Workshop on Design Automation for CPS and IoT (DESTION), 2020, pp. 21–22.
@inproceedings{TranDestion2020,
author = {Tran, Hoang-Dung and Lopez, Diego Manzanas and Yang, Xiaodong and Musau, Patrick and Nguyen, Luan Viet and Xiang, Weiming and Bak, Stanley and Johnson, Taylor T.},
booktitle = {2020 IEEE Workshop on Design Automation for CPS and IoT (DESTION)},
title = {Demo: The Neural Network Verification (NNV) Tool},
year = {2020},
tag = {v},
pages = {21--22},
doi = {10.1109/DESTION50928.2020.00010}
}
NNV (Neural Network Verification) is a framework for the verification of deep neural networks (DNNs) and learning-enabled cyber-physical systems (CPS) inspired by a collection of reachability algorithms that make use of a variety of set representations such as the star set. NNV supports exact and over-approximate reachability algorithms used to verify the safety and robustness of feed-forward neural networks (FFNNs). These two analysis schemes are also used for learning enabled CPS, i.e., closed-loop systems, and particularly in neural network control systems with linear models and FFNN controllers with piecewise-linear activation functions. Additionally, NNV supports over-approximate analysis for nonlinear plant models by combining the star set analysis used for FFNNs with the zonotope-based analysis for nonlinear plant dynamics provided by CORA. This demo paper demonstrates NNV’s capabilities by considering a case study of the verification of a learning-enabled adaptive cruise control system.
T. T. Johnson, D. M. Lopez, P. Musau, H.-D. Tran, E. Botoeva, F. Leofante, A. Maleki, C. Sidrane, J. Fan, and C. Huang, ARCH-COMP20 Category Report: Artificial Intelligence and Neural Network Control Systems (AINNCS) for Continuous and Hybrid Systems Plants, in ARCH20. 7th International Workshop on Applied Verification of Continuous and Hybrid Systems (ARCH20), 2020, vol. 74, pp. 107–139.
@inproceedings{Johnson_ARCH_COMP20_Category_Report_Artificial,
author = {Johnson, Taylor T. and Lopez, Diego Manzanas and Musau, Patrick and Tran, Hoang-Dung and Botoeva, Elena and Leofante, Francesco and Maleki, Amir and Sidrane, Chelsea and Fan, Jiameng and Huang, Chao},
title = {ARCH-COMP20 Category Report: Artificial Intelligence and Neural Network Control Systems (AINNCS) for Continuous and Hybrid Systems Plants},
booktitle = {ARCH20. 7th International Workshop on Applied Verification of Continuous and Hybrid Systems (ARCH20)},
editor = {Frehse, Goran and Althoff, Matthias},
series = {EPiC Series in Computing},
volume = {74},
tag = {v},
pages = {107--139},
year = {2020},
publisher = {EasyChair},
bibsource = {EasyChair, https://easychair.org},
issn = {2398-7340},
url = {https://easychair.org/publications/paper/Jvwg},
doi = {10.29007/9xgv}
}
This report presents the results of a friendly competition for formal verification of continuous and hybrid systems with artificial intelligence (AI) components. Specifically, machine learning (ML) components in cyber-physical systems (CPS), such as feedforward neural networks used as feedback controllers in closed-loop systems are considered, which is a class of systems classically known as intelligent control systems, or in more modern and specific terms, neural network control systems (NNCS). We more broadly refer to this category as AI and NNCS (AINNCS). The friendly competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2020. In the second edition of this AINNCS category at ARCH-COMP, four tools have been applied to solve seven different benchmark problems, (in alphabetical order): NNV, OVERT, ReachNN*, and VenMAS. This report is a snapshot of the current landscape of tools and the types of benchmarks for which these tools are suited. Due to the diversity of problems, lack of a shared hardware platform, and the early stage of the competition, we are not ranking tools in terms of performance, yet the presented results probably provide the most complete assessment of current tools for safety verification of NNCS.
H.-D. Tran, X. Yang, D. Manzanas Lopez, P. Musau, L. V. Nguyen, W. Xiang, S. Bak, and T. T. Johnson, NNV: The Neural Network Verification Tool for Deep Neural Networks and Learning-Enabled Cyber-Physical Systems, in Computer Aided Verification, Cham, 2020, pp. 3–17.
@inproceedings{TranCAV2020,
author = {Tran, Hoang-Dung and Yang, Xiaodong and Manzanas Lopez, Diego and Musau, Patrick and Nguyen, Luan Viet and Xiang, Weiming and Bak, Stanley and Johnson, Taylor T.},
editor = {Lahiri, Shuvendu K. and Wang, Chao},
title = {NNV: The Neural Network Verification Tool for Deep Neural Networks and Learning-Enabled Cyber-Physical Systems},
booktitle = {Computer Aided Verification},
year = {2020},
publisher = {Springer International Publishing},
address = {Cham},
tag = {v},
pages = {3--17},
isbn = {978-3-030-53288-8}
}
This paper presents the Neural Network Verification (NNV) software tool, a set-based verification framework for deep neural networks (DNNs) and learning-enabled cyber-physical systems (CPS). The crux of NNV is a collection of reachability algorithms that make use of a variety of set representations, such as polyhedra, star sets, zonotopes, and abstract-domain representations. NNV supports both exact (sound and complete) and over-approximate (sound) reachability algorithms for verifying safety and robustness properties of feed-forward neural networks (FFNNs) with various activation functions. For learning-enabled CPS, such as closed-loop control systems incorporating neural networks, NNV provides exact and over-approximate reachability analysis schemes for linear plant models and FFNN controllers with piecewise-linear activation functions, such as ReLUs. For similar neural network control systems (NNCS) that instead have nonlinear plant models, NNV supports over-approximate analysis by combining the star set analysis used for FFNN controllers with zonotope-based analysis for nonlinear plant dynamics building on CORA. We evaluate NNV using two real-world case studies: the first is safety verification of ACAS Xu networks, and the second deals with the safety verification of a deep learning-based adaptive cruise control system.
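The star set representation at the core of NNV can be sketched as a set {c + Va : a in a box}: it is closed under affine maps and admits a cheap interval enclosure. The version below uses a box predicate for simplicity; NNV's stars allow general linear-constraint predicates.

```python
import numpy as np

class Star:
    """A star set {c + V @ a : a_lb <= a <= a_ub} (box predicate only)."""

    def __init__(self, c, V, a_lb, a_ub):
        self.c, self.V = np.asarray(c, float), np.asarray(V, float)
        self.a_lb, self.a_ub = np.asarray(a_lb, float), np.asarray(a_ub, float)

    def affine(self, W, b):
        """Image under x -> W x + b is again a star: shift the center and
        transform the basis; the predicate is unchanged."""
        return Star(W @ self.c + b, W @ self.V, self.a_lb, self.a_ub)

    def box(self):
        """Tight interval enclosure, from the signs of the basis matrix."""
        Vp, Vn = np.maximum(self.V, 0.0), np.minimum(self.V, 0.0)
        return (self.c + Vp @ self.a_lb + Vn @ self.a_ub,
                self.c + Vp @ self.a_ub + Vn @ self.a_lb)
```

Exactness for ReLU networks comes from splitting a star at each neuron's activation boundary, which this box-predicate sketch omits.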
T. T. Johnson, ARCH-COMP20 Repeatability Evaluation Report, in ARCH20. 7th International Workshop on Applied Verification of Continuous and Hybrid Systems (ARCH20), 2020, vol. 74, pp. 175–183.
@inproceedings{Johnson_ARCH_COMP20_Repeatability_Evaluation_Report,
author = {Johnson, Taylor T.},
tag = {v},
title = {ARCH-COMP20 Repeatability Evaluation Report},
booktitle = {ARCH20. 7th International Workshop on Applied Verification of Continuous and Hybrid Systems (ARCH20)},
editor = {Frehse, Goran and Althoff, Matthias},
series = {EPiC Series in Computing},
volume = {74},
pages = {175--183},
year = {2020},
publisher = {EasyChair},
bibsource = {EasyChair, https://easychair.org},
issn = {2398-7340},
url = {https://easychair.org/publications/paper/3W11},
doi = {10.29007/8dp4}
}
This report presents the results of the repeatability evaluation for the 4th International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP’20). The competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2020, affiliated with the IFAC World Congress. In its fourth edition, twenty-eight tools submitted artifacts through a Git repository for the repeatability evaluation and were applied to solve benchmark problems for seven competition categories. The majority of participants adhered to the requirements for this year’s repeatability evaluation, namely to submit scripts that automatically install and execute tools in containerized virtual environments (specifically, Dockerfiles to execute within Docker), and several categories used performance evaluation information from a common execution platform. The repeatability results represent a snapshot of the current landscape of tools and the types of benchmarks for which they are particularly suited and for which others may repeat their analyses. Given the diversity of problems in verification of continuous and hybrid systems, and following standard practice in repeatability evaluations, we evaluate each tool simply as passing or failing the repeatability requirements.
H.-D. Tran, S. Bak, W. Xiang, and T. T. Johnson, Verification of Deep Convolutional Neural Networks Using ImageStars, in Computer Aided Verification, Cham, 2020, pp. 18–42.
@inproceedings{TranCAV2020ImageStars,
author = {Tran, Hoang-Dung and Bak, Stanley and Xiang, Weiming and Johnson, Taylor T.},
editor = {Lahiri, Shuvendu K. and Wang, Chao},
title = {Verification of Deep Convolutional Neural Networks Using ImageStars},
booktitle = {Computer Aided Verification},
year = {2020},
publisher = {Springer International Publishing},
address = {Cham},
tag = {v},
pages = {18--42},
isbn = {978-3-030-53288-8}
}
Convolutional Neural Networks (CNN) have redefined state-of-the-art in many real-world applications, such as facial recognition, image classification, human pose estimation, and semantic segmentation. Despite their success, CNNs are vulnerable to adversarial attacks, where slight changes to their inputs may lead to sharp changes in their output in even well-trained networks. Set-based analysis methods can detect or prove the absence of bounded adversarial attacks, which can then be used to evaluate the effectiveness of neural network training methodology. Unfortunately, existing verification approaches have limited scalability in terms of the size of networks that can be analyzed. In this paper, we describe a set-based framework that successfully deals with real-world CNNs, such as VGG16 and VGG19, that have high accuracy on ImageNet. Our approach is based on a new set representation called the ImageStar, which enables efficient exact and over-approximative analysis of CNNs. ImageStars perform efficient set-based analysis by combining operations on concrete images with linear programming (LP). Our approach is implemented in a tool called NNV, and can verify the robustness of VGG networks with respect to a small set of input states, derived from adversarial attacks, such as the DeepFool attack. The experimental results show that our approach is less conservative and faster than existing zonotope and polytope methods.
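The LP step that ImageStar analysis relies on, bounding a network output over a parameterized set of images, can be illustrated on a toy example (a minimal sketch, not NNV's implementation; the 3-pixel "image", generators, and logit weights below are made up for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# A toy "ImageStar": flattened 3-pixel image x = anchor + a1*g1 + a2*g2,
# with predicate bounds -eps <= a_i <= eps (a bounded noise attack).
anchor = np.array([0.5, 0.2, 0.8])
G = np.array([[1.0, 0.0],      # generator images (one per column)
              [1.0, 1.0],
              [0.0, 1.0]])
eps = 0.1
bounds = [(-eps, eps)] * 2

# Range of one logit w.x + b over the whole set, via two small LPs on the
# predicate variables; the image dimension never enters the LP.
w = np.array([1.0, -2.0, 0.5]); b = 0.0
obj = w @ G                                   # logit = w.anchor + b + obj @ a
lo = linprog(obj, bounds=bounds).fun + w @ anchor + b
hi = -linprog(-obj, bounds=bounds).fun + w @ anchor + b
```

If the interval `[lo, hi]` for the correct-class logit stays above the intervals of the other logits, robustness over the entire perturbation set is proved at once, rather than per sampled image.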
D. M. Lopez, P. Musau, N. Hamilton, H.-D. Tran, and T. T. Johnson, Case Study: Safety Verification of an Unmanned Underwater Vehicle, in 2020 IEEE Security and Privacy Workshops (SPW), 2020, pp. 189–195.
@inproceedings{ManzanasSPW2020,
author = {Lopez, Diego Manzanas and Musau, Patrick and Hamilton, Nathaniel and Tran, Hoang-Dung and Johnson, Taylor T.},
booktitle = {2020 IEEE Security and Privacy Workshops (SPW)},
title = {Case Study: Safety Verification of an Unmanned Underwater Vehicle},
year = {2020},
volume = {},
number = {},
pages = {189--195},
tag = {v},
doi = {10.1109/SPW50608.2020.00047}
}
This manuscript evaluates the safety of a neural network controller that seeks to ensure that an Unmanned Underwater Vehicle (UUV) does not collide with a static object in its path. To achieve this, we utilize methods that can determine the exact output reachable set of all the UUV’s components through the use of star-sets. The star-set is a computationally efficient set representation adept at characterizing large input spaces. It supports cheap and efficient computation of affine mapping operations and intersections with half-spaces. The system under consideration in this work represents a more complex system than Neural Network Control Systems (NNCS) previously considered in other works, and consists of a total of four components. Our experimental evaluation uses four different scenarios to show that our star-set based methods are scalable and can be efficiently used to analyze the safety of real-world cyber-physical systems (CPS).
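The two cheap star-set operations the abstract highlights, affine mapping and halfspace intersection, can be sketched in a few lines (an illustrative sketch assuming the representation {c + V a : C a <= d}; the `Star` class here is hypothetical, not NNV's actual API):

```python
import numpy as np

class Star:
    """Star set {c + V @ a : C @ a <= d} (illustrative, not NNV's API)."""
    def __init__(self, c, V, C, d):
        self.c, self.V, self.C, self.d = c, V, C, d

    def affine_map(self, A, b):
        # x -> A x + b maps a star to a star with the same predicate C a <= d.
        return Star(A @ self.c + b, A @ self.V, self.C, self.d)

    def intersect_halfspace(self, H, g):
        # {x : H x <= g} just adds rows (H V) a <= g - H c to the predicate.
        return Star(self.c, self.V,
                    np.vstack([self.C, H @ self.V]),
                    np.concatenate([self.d, g - H @ self.c]))

# The unit box as a star: c = 0, V = I, |a_i| <= 1.
c = np.zeros(2); V = np.eye(2)
C = np.vstack([np.eye(2), -np.eye(2)]); d = np.ones(4)
box = Star(c, V, C, d)
rotated = box.affine_map(np.array([[0.0, -1.0], [1.0, 0.0]]), np.array([1.0, 0.0]))
```

Both operations only touch the center, basis, and predicate matrices, which is why they scale to the large input spaces mentioned above; no vertex enumeration is ever needed.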
U. Yu, Combining Reachable Set Computation with Neuron Coverage, PhD thesis, 2020.
@phdthesis{yuthesis2020,
author = {Yu, Ulysses},
year = {2020},
title = {Combining Reachable Set Computation with Neuron Coverage},
journal = {ProQuest Dissertations and Theses},
pages = {42},
tag = {v},
note = {Copyright - Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works; Last updated - 2022-02-02},
keywords = {Machine Learning; Verification; Deep Neural Networks; Computer science; Artificial intelligence; 0984:Computer science; 0800:Artificial intelligence},
isbn = {9798557010726},
language = {English},
url = {http://proxy.library.vanderbilt.edu/login?url=https://www.proquest.com/dissertations-theses/combining-reachable-set-computation-with-neuron/docview/2474960877/se-2?accountid=14816}
}
As the use of Deep Neural Networks (DNNs) continues to increase in the field of Cyber-Physical Systems (CPS), we need more methods of guaranteeing their safety. Two recent areas of research into the safety of DNNs have been reachability analysis and neuron coverage. Reachability analysis takes a high-dimensional input set, such as hyperrectangles or polytopes, and propagates it through the network to create an output set, and certain properties are tested on the output set to verify the safety of the network. Neuron coverage takes an input set and measures what proportion of the overall network’s logic is tested by that set, with the goal of finding corner cases in DNNs. In my thesis, I first propose a method for computing the volume of the star input set, a representation of polytopes used in reachability analysis. By computing the volume, we can get a general idea of how much of the overall input space is tested by a given input set. Secondly, I propose a method for computing neuron coverage for reachable sets. Currently, neuron coverage is only used to test individual inputs, as opposed to continuous input sets, but in this thesis we extend the definition so that we can compute neuron coverage on sets such as the star set. We implement these techniques in the Neural Network Verification (NNV) toolbox and test them on examples such as the Aircraft Collision Avoidance System for unmanned aircraft (ACAS Xu).
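One simple way to lift neuron coverage from single inputs to sets, sketched here with interval arithmetic rather than star sets (a hedged illustration of the idea, with a made-up 2-input, 4-neuron layer; the thesis works with star sets and exact bounds):

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    """Propagate an input box through x -> W x + b (standard interval arithmetic)."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

# Hypothetical 2-4 ReLU layer; a neuron counts as covered by the input set
# if its pre-activation can become positive somewhere in the set.
W = np.array([[1.0, -1.0], [2.0, 0.5], [-1.0, -1.0], [0.2, 0.2]])
b = np.array([0.0, -5.0, 0.0, -1.0])
lo, hi = interval_affine(W, b, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
coverage = float(np.mean(hi > 0))   # fraction of neurons activatable over the set
```

A single set thus certifies coverage for a continuum of inputs at once, whereas classical neuron coverage would have to sample them individually.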
D. M. Lopez, P. Musau, H.-D. Tran, S. Dutta, T. J. Carpenter, R. Ivanov, and T. T. Johnson, ARCH-COMP19 Category Report: Artificial Intelligence and Neural Network Control Systems (AINNCS) for Continuous and Hybrid Systems Plants, in ARCH19. 6th International Workshop on Applied Verification of Continuous and Hybrid Systems, 2019, vol. 61, pp. 103–119.
@inproceedings{Manzanas_ARCH_COMP19_Category_Report_Artificial,
author = {Lopez, Diego Manzanas and Musau, Patrick and Tran, Hoang-Dung and Dutta, Souradeep and Carpenter, Taylor J. and Ivanov, Radoslav and Johnson, Taylor T.},
title = {ARCH-COMP19 Category Report: Artificial Intelligence and Neural Network Control Systems (AINNCS) for Continuous and Hybrid Systems Plants},
booktitle = {ARCH19. 6th International Workshop on Applied Verification of Continuous and Hybrid Systems},
editor = {Frehse, Goran and Althoff, Matthias},
series = {EPiC Series in Computing},
volume = {61},
tag = {v},
pages = {103--119},
year = {2019},
publisher = {EasyChair},
bibsource = {EasyChair, https://easychair.org},
issn = {2398-7340},
url = {https://easychair.org/publications/paper/BFKs},
doi = {10.29007/rgv8}
}
This report presents the results of a friendly competition for formal verification of continuous and hybrid systems with artificial intelligence (AI) components. Specifically, we consider machine learning (ML) components in cyber-physical systems (CPS), such as feedforward neural networks used as feedback controllers in closed-loop systems, a class of systems classically known as intelligent control systems or, in more modern and specific terms, neural network control systems (NNCS). For future iterations, we refer to this category more broadly as AI and NNCS (AINNCS). The friendly competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2019. In this first edition of the AINNCS category at ARCH-COMP, three tools were applied to solve five different benchmark problems (in alphabetical order): NNV, Sherlock, and Verisig. This report is a snapshot of the current landscape of tools and the types of benchmarks for which these tools are suited. Because of the diversity of problems, and because this is the first iteration of the category, we do not rank tools by performance; nevertheless, the presented results arguably provide the most complete assessment to date of tools for the safety verification of NNCS.
T. T. Johnson, ARCH-COMP19 Repeatability Evaluation Report, in ARCH19. 6th International Workshop on Applied Verification of Continuous and Hybrid Systems, 2019, vol. 61, pp. 162–169.
@inproceedings{Johnson_ARCH_COMP19_Repeatability_Evaluation_Report,
author = {Johnson, Taylor T.},
title = {ARCH-COMP19 Repeatability Evaluation Report},
booktitle = {ARCH19. 6th International Workshop on Applied Verification of Continuous and Hybrid Systems},
editor = {Frehse, Goran and Althoff, Matthias},
series = {EPiC Series in Computing},
volume = {61},
pages = {162--169},
year = {2019},
publisher = {EasyChair},
tag = {v},
bibsource = {EasyChair, https://easychair.org},
issn = {2398-7340},
url = {https://easychair.org/publications/paper/xvBM},
doi = {10.29007/wbl3}
}
This report presents the results of the repeatability evaluation for the 3rd International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP’19). The competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2019, affiliated with the Cyber-Physical Systems and Internet of Things (CPS-IoT Week’19). In its third edition, twenty-five tools submitted artifacts through a Git repository for the repeatability evaluation and were applied to solve benchmark problems for eight competition categories. The majority of participants adhered to new requirements for this year’s repeatability evaluation, namely to submit scripts that automatically install and execute tools in containerized virtual environments (specifically, Dockerfiles to execute within Docker). The repeatability results represent a snapshot of the current landscape of tools and the types of benchmarks for which they are particularly suited and for which others may repeat their analyses. Given the diversity of problems in verification of continuous and hybrid systems, and following standard practice in repeatability evaluations, we evaluate each tool simply as passing or failing the repeatability requirements.
H.-D. Tran, D. Manzanas Lopez, P. Musau, X. Yang, L. V. Nguyen, W. Xiang, and T. T. Johnson, Star-Based Reachability Analysis of Deep Neural Networks, in Formal Methods – The Next 30 Years: Third World Congress, FM 2019, Porto, Portugal, October 7–11, 2019, Proceedings, Berlin, Heidelberg, 2019, pp. 670–686.
@inproceedings{Tran2019FM,
author = {Tran, Hoang-Dung and Manzanas Lopez, Diego and Musau, Patrick and Yang, Xiaodong and Nguyen, Luan Viet and Xiang, Weiming and Johnson, Taylor T.},
title = {Star-Based Reachability Analysis of Deep Neural Networks},
year = {2019},
isbn = {978-3-030-30941-1},
tag = {v},
publisher = {Springer-Verlag},
address = {Berlin, Heidelberg},
url = {https://doi.org/10.1007/978-3-030-30942-8_39},
doi = {10.1007/978-3-030-30942-8_39},
booktitle = {Formal Methods – The Next 30 Years: Third World Congress, FM 2019, Porto, Portugal, October 7–11, 2019, Proceedings},
pages = {670--686},
numpages = {17},
location = {Porto, Portugal}
}
This paper proposes novel reachability algorithms for both exact (sound and complete) and over-approximation (sound) analysis of deep neural networks (DNNs). The approach uses star sets as a symbolic representation of sets of states, which are known in short as stars and provide an effective representation of high-dimensional polytopes. Our star-based reachability algorithms can be applied to several problems in analyzing the robustness of machine learning methods, such as safety and robustness verification of DNNs. The star-based reachability algorithms are implemented in a software prototype called the neural network verification (NNV) tool that is publicly available for evaluation and comparison. Our experiments show that when verifying ACAS Xu neural networks on a multi-core platform, our exact reachability algorithm is on average about 19 times faster than Reluplex, a satisfiability modulo theory (SMT)-based approach. Furthermore, our approach can visualize the precise behavior of DNNs because the reachable states are computed in the method. Notably, in the case that a DNN violates a safety property, the exact reachability algorithm can construct a complete set of counterexamples. Our star-based over-approximate reachability algorithm is on average 118 times faster than Reluplex on the verification of properties for ACAS Xu networks, even without exploiting the parallelism that comes naturally in our method. Additionally, our over-approximate reachability is much less conservative than DeepZ and DeepPoly, recent approaches utilizing zonotopes and other abstract domains that fail to verify many properties of ACAS Xu networks due to their conservativeness. Moreover, our star-based over-approximate reachability algorithm obtains better robustness bounds in comparison with DeepZ and DeepPoly when verifying the robustness of image classification DNNs.
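The exact scheme's key step is splitting a star on the sign of one neuron's pre-activation; a minimal sketch of that split rule follows (stars as plain `(c, V, C, d)` tuples, ignoring the feasibility pruning and layer orchestration a real tool performs):

```python
import numpy as np

def step_relu(c, V, C, d, i):
    """Exact ReLU on neuron i of the star {c + V a : C a <= d}: returns two
    stars covering the cases x_i >= 0 and x_i < 0 (a sketch of the split
    rule only; a real implementation discards infeasible branches)."""
    pos = (c.copy(), V.copy(),
           np.vstack([C, -V[i]]), np.append(d, c[i]))     # -V_i a <= c_i
    cneg, Vneg = c.copy(), V.copy()
    cneg[i] = 0.0; Vneg[i] = 0.0                          # project x_i to 0
    neg = (cneg, Vneg,
           np.vstack([C, V[i]]), np.append(d, -c[i]))     #  V_i a <= -c_i
    return [pos, neg]

# One-dimensional star x = a with |a| <= 1, split at neuron 0.
stars = step_relu(np.zeros(1), np.eye(1),
                  np.array([[1.0], [-1.0]]), np.ones(2), 0)
```

Applying this per neuron, layer by layer, yields the exact (possibly exponential) set of output stars; the over-approximate variant instead merges the two cases into a single star at the cost of precision.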
H.-D. Tran, P. Musau, D. Manzanas Lopez, X. Yang, L. V. Nguyen, W. Xiang, and T. T. Johnson, Parallelizable Reachability Analysis Algorithms for Feed-Forward Neural Networks, in 2019 IEEE/ACM 7th International Conference on Formal Methods in Software Engineering (FormaliSE), 2019, pp. 51–60.
@inproceedings{Tran2019Formalise,
author = {Tran, Hoang-Dung and Musau, Patrick and Manzanas Lopez, Diego and Yang, Xiaodong and Nguyen, Luan Viet and Xiang, Weiming and Johnson, Taylor T.},
booktitle = {2019 IEEE/ACM 7th International Conference on Formal Methods in Software Engineering (FormaliSE)},
title = {Parallelizable Reachability Analysis Algorithms for Feed-Forward Neural Networks},
year = {2019},
volume = {},
number = {},
tag = {v},
pages = {51--60},
doi = {10.1109/FormaliSE.2019.00012}
}
Artificial neural networks (ANN) have displayed considerable utility in a wide range of applications such as image processing, character and pattern recognition, self-driving cars, evolutionary robotics, and non-linear system identification and control. While ANNs can carry out complicated tasks efficiently, they are susceptible to unpredictable and errant behavior due to irregularities that emanate from their complex non-linear structure. As a result, there have been reservations about incorporating them into safety-critical systems. In this paper, we present a reachability analysis method for feed-forward neural networks (FNN) that employ rectified linear units (ReLUs) as activation functions. The crux of our approach relies on three reachable-set computation algorithms, namely exact schemes, lazy-approximate schemes, and mixing schemes. The exact scheme computes an exact reachable set for an FNN, while the lazy-approximate and mixing schemes generate an over-approximation of the exact reachable set. All schemes are designed to run efficiently on parallel platforms to reduce computation time and enhance scalability. Our methods are implemented in a toolbox called NNV and are evaluated using a set of benchmarks that consist of realistic neural networks with sizes that range from tens to a thousand neurons. Notably, NNV successfully computes and visualizes the exact reachable sets of the real-world ACAS Xu deep neural networks (DNNs), which are a variant of a family of novel airborne collision detection systems known as ACAS System X, using a representation of tens to hundreds of polyhedra.
H.-D. Tran, F. Cai, D. Manzanas Lopez, P. Musau, T. T. Johnson, and X. Koutsoukos, Safety Verification of Cyber-Physical Systems with Reinforcement Learning Control, ACM Trans. Embed. Comput. Syst., vol. 18, no. 5s, Oct. 2019.
@article{TranTECS2019,
author = {Tran, Hoang-Dung and Cai, Feiyang and Manzanas Lopez, Diego and Musau, Patrick and Johnson, Taylor T. and Koutsoukos, Xenofon},
title = {Safety Verification of Cyber-Physical Systems with Reinforcement Learning Control},
year = {2019},
issue_date = {October 2019},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {18},
number = {5s},
tag = {v},
issn = {1539-9087},
url = {https://doi.org/10.1145/3358230},
doi = {10.1145/3358230},
journal = {ACM Trans. Embed. Comput. Syst.},
month = oct,
articleno = {105},
numpages = {22},
keywords = {verification, reinforcement learning, Formal methods}
}
This paper proposes a new forward reachability analysis approach to verify safety of cyber-physical systems (CPS) with reinforcement learning controllers. The foundation of our approach lies in two efficient, exact and over-approximate reachability algorithms for neural network control systems using star sets, an efficient representation of polyhedra. Using these algorithms, we determine the initial conditions for which a safety-critical system with a neural network controller is safe by incrementally searching for a critical initial condition at which the safety of the system can no longer be established. Our approach produces tight over-approximation error and is computationally efficient, which allows its application to practical CPS with learning-enabled components (LECs). We implement our approach in NNV, a recent verification tool for neural networks and neural network control systems, and evaluate its advantages and applicability by verifying safety of a practical Advanced Emergency Braking System (AEBS) with a reinforcement learning (RL) controller trained using the deep deterministic policy gradient (DDPG) method. The experimental results show that our new reachability algorithms are much less conservative than existing polyhedra-based approaches. We successfully determine the entire region of initial conditions of the AEBS with the RL controller for which the safety of the system is guaranteed, while a polyhedra-based approach cannot prove the safety properties of the system.
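The incremental search for a critical initial condition can be pictured as a bisection over one initial-condition parameter (a generic sketch under the assumption of a monotone safe/unsafe boundary; `is_safe` stands in for a full reachability run, and the 12.5 threshold is invented for the example):

```python
def critical_initial_condition(is_safe, lo, hi, tol=1e-3):
    """Bisect for the boundary between safe and unsafe initial conditions,
    assuming is_safe(lo) holds and is_safe(hi) does not. In the paper's
    setting, is_safe(v) would run reachability from initial condition v."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_safe(mid):
            lo = mid          # safe region extends at least to mid
        else:
            hi = mid
    return lo                 # largest initial value shown safe

# Toy stand-in: pretend reachability proves safety for initial speeds < 12.5.
boundary = critical_initial_condition(lambda v0: v0 < 12.5, 0.0, 30.0)
```

Each `is_safe` query is expensive (a full reachable-set computation), so the logarithmic number of queries that bisection needs is what makes mapping out the safe region practical.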
W. Xiang, H.-D. Tran, and T. T. Johnson, Nonconservative Lifted Convex Conditions for Stability of Discrete-Time Switched Systems Under Minimum Dwell-Time Constraint, IEEE Transactions on Automatic Control, vol. 64, no. 8, pp. 3407–3414, 2019.
@article{Xiang2019TAC,
author = {Xiang, Weiming and Tran, Hoang-Dung and Johnson, Taylor T.},
journal = {IEEE Transactions on Automatic Control},
title = {Nonconservative Lifted Convex Conditions for Stability of Discrete-Time Switched Systems Under Minimum Dwell-Time Constraint},
year = {2019},
volume = {64},
tag = {v},
number = {8},
pages = {3407--3414},
doi = {10.1109/TAC.2018.2879585}
}
In this note, a novel concept called the virtual clock, defined by an artificial timer over a finite cycle, is introduced for stability analysis of discrete-time switched linear systems under a minimum dwell-time constraint. Two necessary and sufficient conditions associated with a virtual clock of sufficient length are proposed to ensure global uniform asymptotic stability of discrete-time switched linear systems. For the two nonconservative stability criteria, the lifted version maintains convexity in the system matrices. Based on the lifted convex conditions, extensions to ℓ2-gain computation and H∞ control problems are presented in the sequel. In particular, a novel virtual-clock-dependent controller is designed, which outperforms the traditional mode-dependent and common-gain controllers. Several numerical examples are provided to illustrate the theoretic results.
S. Bak, H.-D. Tran, and T. T. Johnson, Numerical Verification of Affine Systems with up to a Billion Dimensions, in Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control, New York, NY, USA, 2019, pp. 23–32.
@inproceedings{Bak2019HSCC,
author = {Bak, Stanley and Tran, Hoang-Dung and Johnson, Taylor T.},
title = {Numerical Verification of Affine Systems with up to a Billion Dimensions},
year = {2019},
isbn = {9781450362825},
tag = {v},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3302504.3311792},
doi = {10.1145/3302504.3311792},
booktitle = {Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control},
pages = {23--32},
numpages = {10},
keywords = {hybrid systems, formal verification, reachability},
location = {Montreal, Quebec, Canada},
series = {HSCC '19}
}
Affine systems reachability is the basis of many verification methods. With further computation, methods exist to reason about richer models with inputs, nonlinear differential equations, and hybrid dynamics. As such, the scalability of affine systems verification is a prerequisite to scalable analysis for more complex systems. In this paper, we improve the scalability of affine systems verification, in terms of the number of dimensions (variables) in the system. The reachable states of affine systems can be written in terms of the matrix exponential, and safety checking can be performed at specific time steps with linear programming. Unfortunately, for large systems with many state variables, this direct approach requires an intractable amount of memory while using an intractable amount of computation time. We overcome these challenges by combining several methods that leverage common problem structure. Memory is reduced by exploiting initial states that are not full-dimensional and safety properties (outputs) over a few linear projections of the state variables. Computation time is saved by using numerical simulations to compute only projections of the matrix exponential relevant for the verification problem. Since large systems often have sparse dynamics, we use fast Krylov-subspace simulation methods based on the Arnoldi or Lanczos iterations. Our implementation produces accurate counter-examples when properties are violated and, in the extreme case with sufficient problem structure, is shown to analyze a system with one billion real-valued state variables.
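The Krylov-subspace simulation idea, approximating exp(tA)v without ever forming exp(tA), can be sketched with a dense Arnoldi iteration (a minimal illustration; a real implementation uses only sparse matrix-vector products and much larger n, and the random system below is purely synthetic):

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm_v(A, v, t, m=20):
    """Approximate exp(t A) @ v via an m-step Arnoldi iteration: project A
    onto the Krylov subspace span{v, Av, ..., A^(m-1) v}, exponentiate the
    small m x m Hessenberg matrix, and lift the result back."""
    n = len(v); beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # happy breakdown: exact subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) / 10
v = rng.standard_normal(50)
approx = krylov_expm_v(A, v, 1.0, m=30)
```

Only the m x m Hessenberg matrix is ever exponentiated, which is what keeps memory linear in n and makes billion-dimensional systems conceivable when A is sparse and the outputs are a few projections.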
H.-D. Tran, L. V. Nguyen, P. Musau, W. Xiang, and T. T. Johnson, Decentralized Real-Time Safety Verification for Distributed Cyber-Physical Systems, in Formal Techniques for Distributed Objects, Components, and Systems, Cham, 2019, pp. 261–277.
@inproceedings{Tran2019Decentralized,
author = {Tran, Hoang-Dung and Nguyen, Luan Viet and Musau, Patrick and Xiang, Weiming and Johnson, Taylor T.},
editor = {P{\'e}rez, Jorge A. and Yoshida, Nobuko},
title = {Decentralized Real-Time Safety Verification for Distributed Cyber-Physical Systems},
booktitle = {Formal Techniques for Distributed Objects, Components, and Systems},
year = {2019},
publisher = {Springer International Publishing},
address = {Cham},
pages = {261--277},
tag = {v},
isbn = {978-3-030-21759-4}
}
Safety-critical distributed cyber-physical systems (CPSs) have been found in a wide range of applications. Notably, they have displayed a great deal of utility in intelligent transportation, where autonomous vehicles communicate and cooperate with each other via a high-speed communication network. Such systems require an ability to identify maneuvers in real-time that cause dangerous circumstances and ensure the implementation always meets safety-critical requirements. In this paper, we propose a real-time decentralized safety verification approach for a distributed multi-agent CPS with the underlying assumption that all agents are time-synchronized with a low degree of error. In the proposed approach, each agent periodically computes its local reachable set and exchanges this reachable set with the other agents with the goal of verifying the system safety. Our method, implemented in Java, takes advantages of the timing information and the reachable set information that are available in the exchanged messages to reason about the safety of the whole system in a decentralized manner. Any particular agent can also perform local safety verification tasks based on their local clocks by analyzing the messages it receives. We applied the proposed method to verify, in real-time, the safety properties of a group of quadcopters performing a distributed search mission.
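The exchanged-reachable-set idea reduces, in its simplest form, to pairwise intersection checks on the sets each agent broadcasts (a schematic with axis-aligned boxes and invented quadcopter data; the paper's Java implementation exchanges richer sets plus timing information):

```python
def boxes_intersect(a, b):
    """Axis-aligned boxes given as [(lo, hi), ...] per dimension."""
    return all(al <= bh and bl <= ah for (al, ah), (bl, bh) in zip(a, b))

# Each agent broadcasts its locally computed reachable box for the next
# period; any receiver can flag a potential conflict with a pairwise check.
reach = {
    "quad1": [(0.0, 1.0), (0.0, 1.0), (2.0, 3.0)],
    "quad2": [(0.8, 2.0), (0.5, 1.5), (2.5, 3.5)],
    "quad3": [(5.0, 6.0), (5.0, 6.0), (2.0, 3.0)],
}
conflicts = [(i, j) for i in reach for j in reach
             if i < j and boxes_intersect(reach[i], reach[j])]
```

Because every agent receives every broadcast, each one can run this check locally against its own clock, which is exactly what makes the verification decentralized rather than routed through a coordinator.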
W. Xiang, D. M. Lopez, P. Musau, and T. T. Johnson, Reachable Set Estimation and Verification for Neural Network Models of Nonlinear Dynamic Systems, in Safe, Autonomous and Intelligent Vehicles, H. Yu, X. Li, R. M. Murray, S. Ramesh, and C. J. Tomlin, Eds. Cham: Springer International Publishing, 2019, pp. 123–144.
@inbook{Xiang2019SafeAutonomous,
author = {Xiang, Weiming and Lopez, Diego Manzanas and Musau, Patrick and Johnson, Taylor T.},
editor = {Yu, Huafeng and Li, Xin and Murray, Richard M. and Ramesh, S. and Tomlin, Claire J.},
title = {Reachable Set Estimation and Verification for Neural Network Models of Nonlinear Dynamic Systems},
booktitle = {Safe, Autonomous and Intelligent Vehicles},
year = {2019},
publisher = {Springer International Publishing},
address = {Cham},
tag = {v},
pages = {123--144},
isbn = {978-3-319-97301-2},
doi = {10.1007/978-3-319-97301-2_7},
url = {https://doi.org/10.1007/978-3-319-97301-2_7}
}
Neural networks have been widely used to solve complex real-world problems. Due to the complex, nonlinear, non-convex nature of neural networks, formal safety and robustness guarantees for the behaviors of neural network systems are crucial for their applications in safety-critical systems. In this paper, the reachable set estimation and safety verification problems for Nonlinear Autoregressive-Moving Average (NARMA) models in the forms of neural networks are addressed. The neural networks involved in the model are a class of feed-forward neural networks called Multi-Layer Perceptrons (MLPs). By partitioning the input set of an MLP into a finite number of cells, a layer-by-layer computation algorithm is developed for reachable set estimation of each individual cell. The union of estimated reachable sets of all cells forms an over-approximation of the reachable set of the MLP. Furthermore, an iterative reachable set estimation algorithm based on reachable set estimation for MLPs is developed for NARMA models. The safety verification can be performed by checking the existence of non-empty intersections between unsafe regions and the estimated reachable set. Several numerical examples are provided to illustrate the approach.
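The partition-and-propagate scheme described above can be sketched for a single ReLU layer over a one-dimensional input set (an illustrative sketch using interval arithmetic per cell; the weights and cell count are made up, and the chapter's algorithm handles multi-layer NARMA models):

```python
import numpy as np

def relu_interval(W, b, lo, hi):
    """One ReLU layer on an input box, by standard interval arithmetic."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return (np.maximum(Wp @ lo + Wn @ hi + b, 0),
            np.maximum(Wp @ hi + Wn @ lo + b, 0))

def reach_by_cells(W, b, lo, hi, k):
    """Split the interval [lo, hi] (1-D input for brevity) into k cells,
    push each cell through the layer, and union the per-cell output boxes:
    the finer the partition, the tighter the over-approximation."""
    edges = np.linspace(lo, hi, k + 1)
    outs = [relu_interval(W, b, edges[i:i+1], edges[i+1:i+2])
            for i in range(k)]
    return (np.min([o[0] for o in outs], axis=0),
            np.max([o[1] for o in outs], axis=0))

W = np.array([[2.0], [-1.0]]); b = np.array([-1.0, 0.5])
lo_out, hi_out = reach_by_cells(W, b, -1.0, 1.0, k=8)
```

Safety verification then amounts to checking that this union of output boxes has empty intersection with the unsafe region, exactly as the abstract describes.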
H.-D. Tran, L. V. Nguyen, P. Musau, W. Xiang, and T. T. Johnson, Real-Time Verification for Distributed Cyber-Physical Systems, CoRR, vol. abs/1909.09087, 2019.
@article{Tran2019RealTime,
author = {Tran, Hoang{-}Dung and Nguyen, Luan Viet and Musau, Patrick and Xiang, Weiming and Johnson, Taylor T.},
title = {Real-Time Verification for Distributed Cyber-Physical Systems},
journal = {CoRR},
volume = {abs/1909.09087},
year = {2019},
tag = {v},
url = {http://arxiv.org/abs/1909.09087},
eprinttype = {arXiv},
eprint = {1909.09087},
timestamp = {Sat, 23 Jan 2021 01:20:54 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-09087.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
Safety-critical distributed cyber-physical systems (CPSs) have been found in a wide range of applications. Notably, they have displayed a great deal of utility in intelligent transportation, where autonomous vehicles communicate and cooperate with each other via a high-speed communication network. Such systems require an ability to identify maneuvers in real-time that cause dangerous circumstances and ensure the implementation always meets safety-critical requirements. In this paper, we propose a real-time decentralized reachability approach for safety verification of a distributed multi-agent CPS with the underlying assumption that all agents are time-synchronized with a low degree of error. In the proposed approach, each agent periodically computes its local reachable set and exchanges this reachable set with the other agents with the goal of verifying the system safety. Our method, implemented in Java, takes advantages of the timing information and the reachable set information that are available in the exchanged messages to reason about the safety of the whole system in a decentralized manner. Any particular agent can also perform local safety verification tasks based on their local clocks by analyzing the messages it receives. We applied the proposed method to verify, in real-time, the safety properties of a group of quadcopters performing a distributed search mission.
J. A. Rosenfeld, R. Kamalapurkar, B. Russo, and T. T. Johnson, Occupation Kernels and Densely Defined Liouville Operators for System Identification, in 2019 IEEE 58th Conference on Decision and Control (CDC), 2019.
@inproceedings{Rosenfeld_2019,
doi = {10.1109/cdc40024.2019.9029337},
url = {https://doi.org/10.1109%2Fcdc40024.2019.9029337},
year = {2019},
month = dec,
publisher = {{IEEE}},
tag = {v},
author = {Rosenfeld, Joel A. and Kamalapurkar, Rushikesh and Russo, Benjamin and Johnson, Taylor T.},
title = {Occupation Kernels and Densely Defined Liouville Operators for System Identification},
booktitle = {2019 {IEEE} 58th Conference on Decision and Control ({CDC})}
}
W. Xiang, P. Musau, A. A. Wild, D. M. Lopez, N. Hamilton, X. Yang, J. Rosenfeld, and T. T. Johnson, Verification for machine learning, autonomy, and neural networks survey, arXiv preprint arXiv:1810.01989, 2018.
@article{xiang2018verification,
title = {Verification for machine learning, autonomy, and neural networks survey},
author = {Xiang, Weiming and Musau, Patrick and Wild, Ayana A and Lopez, Diego Manzanas and Hamilton, Nathaniel and Yang, Xiaodong and Rosenfeld, Joel and Johnson, Taylor T},
journal = {arXiv preprint arXiv:1810.01989},
year = {2018},
tag = {v},
preprint = {https://arxiv.org/abs/1810.01989}
}
This survey presents an overview of verification techniques for autonomous systems, with a focus on safety-critical autonomous cyber-physical systems (CPS) and subcomponents thereof. Autonomy in CPS is enabled by recent advances in artificial intelligence (AI) and machine learning (ML), through approaches such as deep neural networks (DNNs) embedded in so-called learning enabled components (LECs) that accomplish tasks from classification to control. Recently, the formal methods and formal verification community has developed methods to characterize behaviors in these LECs, with the eventual goal of formally verifying specifications for LECs, and this article presents a survey of many of these recent approaches.
W. Xiang, H.-D. Tran, and T. T. Johnson, Output reachable set estimation and verification for multilayer neural networks, IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 11, pp. 5777–5783, 2018.
@article{xiang2018output,
title = {Output reachable set estimation and verification for multilayer neural networks},
author = {Xiang, Weiming and Tran, Hoang-Dung and Johnson, Taylor T},
journal = {IEEE Transactions on Neural Networks and Learning Systems},
volume = {29},
tag = {v},
number = {11},
pages = {5777--5783},
year = {2018},
publisher = {IEEE}
}
T. T. Johnson, ARCH-COMP18 Repeatability Evaluation Report, in ARCH18. 5th International Workshop on Applied Verification of Continuous and Hybrid Systems, 2018, vol. 54, pp. 128–134.
@inproceedings{Johnson_ARCH_COMP18_Repeatability_Evaluation_Report,
author = {Johnson, Taylor T.},
title = {ARCH-COMP18 Repeatability Evaluation Report},
booktitle = {ARCH18. 5th International Workshop on Applied Verification of Continuous and Hybrid Systems},
editor = {Frehse, Goran},
series = {EPiC Series in Computing},
volume = {54},
pages = {128--134},
tag = {v},
year = {2018},
publisher = {EasyChair},
bibsource = {EasyChair, https://easychair.org},
issn = {2398-7340},
url = {https://easychair.org/publications/paper/9J6v},
doi = {10.29007/n9t3}
}
This report presents the results of the repeatability evaluation for the 2nd International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP’18). The competition took place as part of the workshop Applied Verification for Continuous and Hybrid Systems (ARCH) in 2018. In its second edition, twenty-five tools submitted artifacts for the repeatability evaluation and were applied to solve benchmark problems across seven competition categories. The repeatability results represent a snapshot of the current landscape of tools and the types of benchmarks for which they are particularly suited and for which others may repeat their analyses. Due to the diversity of problems in the verification of continuous and hybrid systems, and following standard practice in repeatability evaluations, we evaluate the tools as passing and/or failing to be repeatable.
D. M. Lopez, T. Johnson, H.-D. Tran, S. Bak, X. Chen, and K. L. Hobbs, Verification of Neural Network Compression of ACAS Xu Lookup Tables with Star Set Reachability, in AIAA Scitech 2021 Forum, 2021.
@inbook{Manzanas2021AIAA,
author = {Lopez, Diego Manzanas and Johnson, Taylor and Tran, Hoang-Dung and Bak, Stanley and Chen, Xin and Hobbs, Kerianne L.},
title = {Verification of Neural Network Compression of ACAS Xu Lookup Tables with Star Set Reachability},
booktitle = {AIAA Scitech 2021 Forum},
year = {2021},
chapter = {},
tag = {v},
pages = {},
doi = {10.2514/6.2021-0995},
url = {https://arc.aiaa.org/doi/abs/10.2514/6.2021-0995},
eprint = {https://arc.aiaa.org/doi/pdf/10.2514/6.2021-0995}
}
Video presentation: https://doi.org/10.2514/6.2021-0995.vid
Neural network approximations have become attractive to compress data for automation and autonomy algorithms for use on storage-limited and processing-limited aerospace hardware. However, unless these neural network approximations can be exhaustively verified to be safe, they cannot be certified for use on aircraft. This manuscript evaluates the safety of a neural network approximation of the unmanned Airborne Collision Avoidance System (ACAS Xu). First, a set of ACAS Xu closed-loop benchmarks is introduced, based on a well-known open-loop benchmark, that are challenging for current verification tools to analyze due to the complexity and high-dimensional plant dynamics. Additionally, the switching and classification-based nature of the ACAS Xu neural network system adds another challenge for existing analysis methods. Experimental evaluation shows selected scenarios where the safety of the ownship aircraft’s neural network action selection is assessed with respect to an intruder aircraft over time in a closed-loop control evaluation. Set-based analysis of the closed-loop benchmarks is performed with the Star Set representation using both the NNV tool and the nnenum tool, demonstrating that set-based analysis is becoming increasingly feasible for the verification of this class of systems.
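To give a flavor of what "set-based analysis" of a neural network means, the sketch below propagates an input box through a tiny ReLU network using plain interval arithmetic. This is a much coarser abstraction than the star sets used by NNV and nnenum (intervals lose all correlations between variables), and the weights are invented for illustration — nothing here is taken from the ACAS Xu networks.

```python
# Hedged sketch: interval bound propagation through a small ReLU network,
# a coarser stand-in for the star-set reachability used in the paper.
# Matrices are plain lists of rows; all weights are made up.

def interval_affine(W, b, lo, hi):
    """Propagate the box [lo, hi] through x -> Wx + b, componentwise tight."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # A positive weight pulls the lower bound from lo, a negative one from hi.
        out_lo.append(bias + sum(w * (l if w >= 0 else h)
                                 for w, l, h in zip(row, lo, hi)))
        out_hi.append(bias + sum(w * (h if w >= 0 else l)
                                 for w, l, h in zip(row, lo, hi)))
    return out_lo, out_hi


def interval_relu(lo, hi):
    """ReLU is monotone, so it maps bound to bound."""
    return [max(l, 0.0) for l in lo], [max(h, 0.0) for h in hi]


def output_bounds(layers, lo, hi):
    """layers: list of (W, b) pairs; ReLU between layers, linear final layer."""
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(W, b, lo, hi)
        if i < len(layers) - 1:
            lo, hi = interval_relu(lo, hi)
    return lo, hi
```

A verifier built on such bounds would check that, for every box of sensor inputs, the bounded network outputs cannot select an unsafe advisory; star sets tighten the same computation by tracking linear constraints instead of independent intervals.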