Assurance monitoring is concerned with developing mechanisms that help ascertain confidence in, and the suitability of, a Learning-Enabled Component (LEC) while it is being used within a Cyber-Physical System (CPS) such as an autonomous car. Such mechanisms are needed because online conditions may differ from the training distribution, and the low average error achieved during training is not necessarily a good measure of the correctness of the learning-enabled controller online. Our team has developed a number of assurance monitors that can be chosen depending on the architecture of the LEC and the learning approach being used. See the publications listed below for more details.
Publications
F. Cai, Z. Zhang, J. Liu, and X. Koutsoukos, Open Set Recognition using Vision Transformer with an Additional Detection Head, arXiv preprint arXiv:2203.08441, 2022.
@misc{cai2022open,
doi = {10.48550/arXiv.2203.08441},
url = {https://arxiv.org/abs/2203.08441},
author = {Cai, Feiyang and Zhang, Zhenkai and Liu, Jie and Koutsoukos, Xenofon},
keywords = {Computer Vision and Pattern Recognition (cs.CV), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
title = {Open Set Recognition using Vision Transformer with an Additional Detection Head},
publisher = {arXiv},
tag = {am},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
S. Ramakrishna, Z. RahimiNasab, G. Karsai, A. Easwaran, and A. Dubey, Efficient Out-of-Distribution Detection Using Latent Space of β-VAE for Cyber-Physical Systems, ACM Trans. Cyber-Phys. Syst., 2021.
@article{ramakrishna2021tcps,
author = {Ramakrishna, Shreyas and RahimiNasab, Zahra and Karsai, Gabor and Easwaran, Arvind and Dubey, Abhishek},
tag = {am},
title = {Efficient Out-of-Distribution Detection Using Latent Space of {\beta}-VAE for Cyber-Physical Systems},
journal = {ACM Trans. Cyber-Phys. Syst.},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
year = {2021},
preprint = {https://arxiv.org/abs/2108.11800},
eprinttype = {arXiv}
}
Deep Neural Networks are actively being used in the design of autonomous Cyber-Physical Systems (CPSs). The advantage of these models is their ability to handle high-dimensional state spaces and learn compact surrogate representations of the operational state space. However, the sampled observations used for training the model may never cover the entire state space of the physical environment, and as a result, the system will likely operate in conditions that do not belong to the training distribution. Such conditions are referred to as Out-of-Distribution (OOD). Detecting OOD conditions at runtime is critical for the safety of CPS. In addition, it is desirable to identify the context or the feature(s) that are the source of the OOD condition in order to select an appropriate control action that mitigates its consequences. In this paper, we study this problem as a multi-labeled time-series OOD detection problem over images, where OOD is defined both sequentially across short time windows (change points) and across the training data distribution. A common approach to solving this problem is the use of multi-chained one-class classifiers. However, this approach is expensive for CPSs that have limited computational resources and require short inference times. Our contribution is an approach to design and train a single β-Variational Autoencoder detector with a partially disentangled latent space that is sensitive to variations in image features. We use the feature-sensitive latent variables in the latent space to detect OOD images and to identify the feature(s) most likely responsible for the OOD condition. We demonstrate our approach using an autonomous vehicle in the CARLA simulator and the real-world automotive dataset nuImages.
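The runtime check at the core of this approach can be illustrated in a few lines: score each monitored latent dimension of the β-VAE by its KL divergence from the standard-normal prior and compare it against thresholds calibrated on in-distribution data. The following is a simplified sketch, not the paper's implementation (which builds inductive conformal detection on top of such scores); the encoder outputs mu and logvar, the per-dimension thresholds, and the set of monitored latent dimensions are all assumed to be available from a trained, partially disentangled β-VAE.

import numpy as np

def kl_per_latent(mu, logvar):
    # KL divergence of each latent dimension N(mu, sigma^2) from the
    # standard-normal prior N(0, 1), computed dimension-wise.
    return 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)

def detect_ood(mu, logvar, thresholds, monitored_dims):
    # Flag an image as OOD and report which monitored latent dimensions
    # (and hence which image features) are most likely responsible.
    kl = kl_per_latent(mu, logvar)
    violated = [d for d in monitored_dims if kl[d] > thresholds[d]]
    return len(violated) > 0, violated

# Hypothetical example with an 8-dimensional latent space in which
# latents 1 and 7 were found to be sensitive to specific image features.
mu = np.array([0.1, 2.5, -0.2, 0.0, 0.3, -0.1, 0.05, 1.8])
logvar = np.zeros(8)
thresholds = np.full(8, 1.0)
is_ood, dims = detect_ood(mu, logvar, thresholds, monitored_dims=[1, 7])
print(is_ood, dims)  # -> True, [1, 7]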
F. Cai, A. I. Ozdagli, N. Potteiger, and X. Koutsoukos, Inductive Conformal Out-of-distribution Detection based on Adversarial Autoencoders, in 2021 IEEE International Conference on Omni-Layer Intelligent Systems (COINS), 2021, pp. 1–6.
@inproceedings{cai2021inductive,
title = {Inductive Conformal Out-of-distribution Detection based on Adversarial Autoencoders},
author = {Cai, Feiyang and Ozdagli, Ali I and Potteiger, Nicholas and Koutsoukos, Xenofon},
booktitle = {2021 IEEE International Conference on Omni-Layer Intelligent Systems (COINS)},
pages = {1--6},
tag = {am},
year = {2021},
organization = {IEEE}
}
F. Cai, A. I. Ozdagli, and X. Koutsoukos, Detection of Dataset Shifts in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression, arXiv preprint arXiv:2104.06613, 2021.
@article{cai2021detection,
title = {Detection of Dataset Shifts in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression},
author = {Cai, Feiyang and Ozdagli, Ali I and Koutsoukos, Xenofon},
journal = {arXiv preprint arXiv:2104.06613},
tag = {am},
preprint = {https://arxiv.org/abs/2104.06613},
year = {2021}
}
Cyber-physical systems (CPSs) use learning-enabled components (LECs) extensively to cope with various complex tasks in high-uncertainty environments. However, dataset shifts between the training and testing phases may render the LECs ineffective, causing them to make large-error predictions and, further, compromising the safety of the overall system. In our paper, we first provide formal definitions for different types of dataset shifts in learning-enabled CPS. Then, we propose an approach to effectively detect dataset shifts for regression problems. Our approach is based on inductive conformal anomaly detection and utilizes a variational autoencoder for regression, which enables the approach to take both the LEC input and output into consideration when detecting dataset shifts. Additionally, in order to improve the robustness of detection, layer-wise relevance propagation (LRP) is incorporated into the approach. We demonstrate our approach using an advanced emergency braking system implemented in an open-source simulator for self-driving cars. The evaluation results show that our approach can detect different types of dataset shifts with a small number of false alarms, while the execution time is smaller than the sampling period of the system.
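The detection machinery described here follows the standard inductive conformal anomaly detection recipe: compute a nonconformity score for each test input, convert it into a p-value against a held-out calibration set, and aggregate a window of p-values with a martingale that grows when many small p-values occur. A minimal sketch follows; the nonconformity score here is a generic stand-in (in the paper it is derived from the VAE for regression and refined with LRP), and the power martingale with a fixed epsilon is one common betting function, not necessarily the paper's exact choice.

import numpy as np

def p_value(test_score, calibration_scores):
    # ICAD p-value: fraction of calibration nonconformity scores at least
    # as large as the test score (with add-one smoothing).
    n = len(calibration_scores)
    return (np.sum(calibration_scores >= test_score) + 1.0) / (n + 1.0)

def power_martingale(p_values, epsilon=0.5):
    # Grows quickly when the window contains many small p-values,
    # i.e., when inputs no longer conform to the training data.
    p = np.asarray(p_values)
    return float(np.prod(epsilon * p ** (epsilon - 1.0)))

# calibration_scores would be nonconformity scores (e.g., reconstruction
# error) computed on a held-out in-distribution calibration set.
rng = np.random.default_rng(0)
calibration_scores = rng.exponential(scale=1.0, size=500)
window_scores = rng.exponential(scale=3.0, size=10)   # shifted inputs
window_p = [p_value(s, calibration_scores) for s in window_scores]
alarm = power_martingale(window_p) > 20.0   # threshold tuned for false alarms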
D. Boursinos and X. Koutsoukos, Assurance monitoring of learning-enabled cyber-physical systems using inductive conformal prediction based on distance learning, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, vol. 35, no. 2, pp. 251–264, 2021.
@article{boursinos_koutsoukos_2021,
title = {Assurance monitoring of learning-enabled cyber-physical systems using inductive conformal prediction based on distance learning},
volume = {35},
doi = {10.1017/S089006042100010X},
number = {2},
tag = {am},
journal = {Artificial Intelligence for Engineering Design, Analysis and Manufacturing},
publisher = {Cambridge University Press},
author = {Boursinos, Dimitrios and Koutsoukos, Xenofon},
year = {2021},
pages = {251--264}
}
D. Boursinos and X. Koutsoukos, Reliable Probability Intervals For Classification Using Inductive Venn Predictors Based on Distance Learning, in 2021 IEEE International Conference on Omni-Layer Intelligent Systems (COINS), 2021, pp. 1–7.
@inproceedings{boursinos2021reliable,
author = {Boursinos, Dimitrios and Koutsoukos, Xenofon},
booktitle = {2021 IEEE International Conference on Omni-Layer Intelligent Systems (COINS)},
title = {Reliable Probability Intervals For Classification Using Inductive Venn Predictors Based on Distance Learning},
year = {2021},
tag = {am},
pages = {1--7},
doi = {10.1109/COINS51742.2021.9524144}
}
F. Cai and X. Koutsoukos, Real-time out-of-distribution detection in learning-enabled cyber-physical systems, in 2020 ACM/IEEE 11th International Conference on Cyber-Physical Systems (ICCPS), 2020, pp. 174–183.
@inproceedings{cai2020real,
title = {Real-time out-of-distribution detection in learning-enabled cyber-physical systems},
author = {Cai, Feiyang and Koutsoukos, Xenofon},
booktitle = {2020 ACM/IEEE 11th International Conference on Cyber-Physical Systems (ICCPS)},
pages = {174--183},
tag = {am},
year = {2020},
organization = {IEEE}
}
D. Boursinos and X. Koutsoukos, Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems, in 2020 IEEE Security and Privacy Workshops (SPW), 2020, pp. 228–233.
@inproceedings{boursinos2020trusted,
author = {Boursinos, Dimitrios and Koutsoukos, Xenofon},
booktitle = {2020 IEEE Security and Privacy Workshops (SPW)},
title = {Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems},
year = {2020},
tag = {am},
pages = {228--233},
doi = {10.1109/SPW50608.2020.00053}
}
D. Boursinos and X. Koutsoukos, Assurance Monitoring of Cyber-Physical Systems with Machine Learning Components, in Thirteenth International Tools and Methods of Competitive Engineering Symposium (TMCE 2020), 2020.
@inproceedings{boursinos2020assurance,
author = {Boursinos, Dimitrios and Koutsoukos, Xenofon},
title = {Assurance Monitoring of Cyber-Physical Systems with Machine Learning Components},
booktitle = {Thirteenth International Tools and Methods of Competitive Engineering Symposium (TMCE 2020)},
year = {2020},
tag = {am},
archiveprefix = {arXiv},
eprint = {2001.05014},
preprint = {https://arxiv.org/abs/2001.05014},
primaryclass = {cs.LG}
}
Machine learning components such as deep neural networks are used extensively in Cyber-Physical Systems (CPS). However, they may introduce new types of hazards that can have disastrous consequences and need to be addressed for engineering trustworthy systems. Although deep neural networks offer advanced capabilities, they must be complemented by engineering methods and practices that allow effective integration in CPS. In this paper, we investigate how to use the conformal prediction framework for assurance monitoring of CPS with machine learning components. In order to handle high-dimensional inputs in real-time, we compute nonconformity scores using embedding representations of the learned models. By leveraging conformal prediction, the approach provides well-calibrated confidence and can allow monitoring that ensures a bounded small error rate while limiting the number of inputs for which an accurate prediction cannot be made. Empirical evaluation results using the German Traffic Sign Recognition Benchmark and a robot navigation dataset demonstrate that the error rates are well-calibrated while the number of alarms is small. The method is computationally efficient, and therefore, the approach is promising for assurance monitoring of CPS.
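A sketch of the set-prediction step may help make the monitoring logic concrete. Below, the nonconformity of a test input with respect to a candidate label is its average distance to the k nearest calibration embeddings of that label; this k-NN score is a standard embedding-based choice, and the function names and data layout are illustrative assumptions rather than the paper's exact formulation.

import numpy as np

def knn_nonconformity(z, label_embeddings, k=5):
    # Nonconformity w.r.t. one label: mean distance from the test
    # embedding z to its k nearest calibration embeddings of that label.
    d = np.linalg.norm(label_embeddings - z, axis=1)
    return float(np.sort(d)[:k].mean())

def icp_prediction_set(z, calib_scores, embeddings, epsilon=0.05):
    # Keep every label whose conformal p-value exceeds the significance
    # level epsilon; epsilon bounds the long-run error rate.
    prediction_set = []
    for label, scores in calib_scores.items():
        a = knn_nonconformity(z, embeddings[label])
        p = (np.sum(scores >= a) + 1.0) / (len(scores) + 1.0)
        if p > epsilon:
            prediction_set.append(label)
    return prediction_set

The assurance monitor then acts on the size of the set: a singleton is accepted as a confident prediction, while an empty set or a set with multiple labels raises an alarm. Conformal validity keeps the error rate of accepted predictions bounded by epsilon while limiting the number of rejected inputs.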
D. Boursinos and X. Koutsoukos, Improving Prediction Confidence in Learning-Enabled Autonomous Systems, in Dynamic Data Driven Applications Systems, Cham, 2020, pp. 217–224.
@inproceedings{boursinos2020improving,
author = {Boursinos, Dimitrios and Koutsoukos, Xenofon},
editor = {Darema, Frederica and Blasch, Erik and Ravela, Sai and Aved, Alex},
title = {Improving Prediction Confidence in Learning-Enabled Autonomous Systems},
booktitle = {Dynamic Data Driven Applications Systems},
year = {2020},
tag = {am},
publisher = {Springer International Publishing},
address = {Cham},
pages = {217--224},
isbn = {978-3-030-61725-7},
doi = {10.1007/978-3-030-61725-7_26}
}
Autonomous systems make extensive use of learning-enabled components such as deep neural networks (DNNs) for prediction and decision making. In this paper, we utilize a feedback loop between learning-enabled components used for classification and the sensors of an autonomous system in order to improve the confidence of the predictions. We design a classifier using Inductive Conformal Prediction (ICP) based on a triplet network architecture in order to learn representations that can be used to quantify the similarity between test and training examples. The method allows computing confident set predictions with an error rate predefined using a selected significance level. A feedback loop that queries the sensors for a new input is used to further refine the predictions and increase the classification accuracy. The method is computationally efficient, scalable to high-dimensional inputs, and can be executed in a feedback loop with the system in real time. The approach is evaluated using a traffic sign recognition dataset, and the results show that the error rate is reduced.
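The feedback loop itself is simple enough to sketch: while the conformal set predictor returns an ambiguous answer, query the sensors for a fresh input and try again. The helper names below (get_sensor_frame, embed, set_predictor) are hypothetical placeholders; embed stands for the triplet-network encoder and set_predictor for an ICP set predictor like the one sketched earlier on this page.

def classify_with_feedback(set_predictor, embed, get_sensor_frame, max_queries=5):
    # Query the sensors for new inputs until the conformal set predictor
    # returns exactly one label or the query budget runs out.
    for _ in range(max_queries):
        z = embed(get_sensor_frame())
        labels = set_predictor(z)
        if len(labels) == 1:
            return labels[0]    # confident, unambiguous prediction
    return None                 # still ambiguous: defer or raise an alarm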
F. Cai, J. Li, and X. Koutsoukos, Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression, in 2020 IEEE Security and Privacy Workshops (SPW), 2020, pp. 208–214.
@inproceedings{Cai2020,
author = {Cai, Feiyang and Li, Jiani and Koutsoukos, Xenofon},
booktitle = {2020 IEEE Security and Privacy Workshops (SPW)},
title = {Detecting Adversarial Examples in Learning-Enabled Cyber-Physical Systems using Variational Autoencoder for Regression},
year = {2020},
tag = {am},
pages = {208--214},
doi = {10.1109/SPW50608.2020.00050}
}
Learning-enabled components (LECs) are widely used in cyber-physical systems (CPS) since they can handle the uncertainty and variability of the environment and increase the level of autonomy. However, it has been shown that LECs such as deep neural networks (DNNs) are not robust and that adversarial examples can cause the model to make a false prediction. This paper considers the problem of efficiently detecting adversarial examples in LECs used for regression in CPS. The proposed approach is based on inductive conformal prediction and uses a regression model based on a variational autoencoder. The architecture makes it possible to take both the input and the neural network prediction into consideration when detecting adversarial and, more generally, out-of-distribution examples. We demonstrate the method using an advanced emergency braking system implemented in an open-source simulator for self-driving cars, where a DNN is used to estimate the distance to an obstacle. The simulation results show that the method can effectively detect adversarial examples with a short detection delay.
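To obtain the short detection delay mentioned above, per-input evidence has to be accumulated by a stateful test rather than thresholded one sample at a time. A common construction, sketched below under the assumption that a martingale value is available at every control period (as in the ICAD sketch earlier on this page), is a CUSUM-style detector over the martingale sequence; the drift and threshold values here are illustrative, not the paper's tuned parameters.

def cusum_alarms(martingale_values, drift=1.0, threshold=10.0):
    # CUSUM-style stateful test: accumulate martingale evidence above an
    # expected in-distribution drift and alarm once it crosses a threshold.
    s, alarms = 0.0, []
    for m in martingale_values:
        s = max(0.0, s + m - drift)
        alarms.append(s > threshold)
    return alarms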