{"title":"A Strategy for Boundary Adherence and Exploration in Black-Box Testing of Autonomous Vehicles","authors":"John M. Thompson, Quentin Goss, M. Akbaş","doi":"10.1109/MOST57249.2023.00028","DOIUrl":null,"url":null,"abstract":"The validation of artificial intelligence (AI) controlled vehicles is a vexing challenge. Black box testing of decision making in these vehicles has been used to abstract out the inner complexity of the AI. In scenario-based black box testing, the AI is placed within a scenario, and the input space for that scenario is explored. The fundamental metric of “how well tested the system is for that scenario” is based on the input state space coverage. Since most of these spaces have a high number of dimensions, it is critical to sample the state space efficiently and identify performance boundaries for the vehicle under test. In this paper, we propose a boundary adherence approach for autonomous vehicle validation that can explore the boundary between targeted and non-targeted behavior. This paper significantly improves and extends our previous approach that focused on generic black-box testing of AI systems by optimizing the algorithm itself, adding new tools for exploration, and applying the strategy to scenario-based AV testing. We provide an example regression of a scenario which illustrates the ability to model boundaries after they have been explored. Further results on higher dimensions show differing adherence strategies can improve exploration efficiency and how boundary exploration focuses on more “interesting” scenarios. Upon exploring the boundary, we found that predictions can be made about whether or not the system will result in targeted behavior.","PeriodicalId":338621,"journal":{"name":"2023 IEEE International Conference on Mobility, Operations, Services and Technologies (MOST)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Mobility, Operations, Services and Technologies (MOST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MOST57249.2023.00028","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The validation of artificial intelligence (AI) controlled vehicles is a vexing challenge. Black-box testing of decision making in these vehicles has been used to abstract away the inner complexity of the AI. In scenario-based black-box testing, the AI is placed within a scenario, and the input space for that scenario is explored. The fundamental metric of “how well tested the system is for that scenario” is based on input state-space coverage. Since most of these spaces have many dimensions, it is critical to sample the state space efficiently and identify performance boundaries for the vehicle under test. In this paper, we propose a boundary adherence approach for autonomous vehicle validation that explores the boundary between targeted and non-targeted behavior. This paper significantly improves and extends our previous approach, which focused on generic black-box testing of AI systems, by optimizing the algorithm itself, adding new tools for exploration, and applying the strategy to scenario-based AV testing. We provide an example regression of a scenario that illustrates the ability to model boundaries after they have been explored. Further results in higher dimensions show that differing adherence strategies can improve exploration efficiency and that boundary exploration focuses on more “interesting” scenarios. Once the boundary has been explored, predictions can be made about whether the system will exhibit targeted behavior.
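As a minimal, hypothetical sketch of the core idea (not the algorithm described in the paper), the Python snippet below collects points on the boundary between targeted and non-targeted behavior by bisecting between random samples that straddle it. The `is_targeted` oracle, the toy failure region, and all function names are assumptions introduced only for illustration; a real harness would run a scenario simulation in place of the oracle.

```python
import numpy as np

# Hypothetical stand-in for a black-box scenario run: returns True when the
# scenario parameters x lead to the targeted behavior (e.g., a near-collision).
# The circular "failure region" below is a toy example, not a real scenario.
def is_targeted(x: np.ndarray) -> bool:
    return np.linalg.norm(x - 0.5) < 0.3

def bisect_boundary(inside, outside, oracle, tol=1e-3):
    """Shrink the segment between a targeted sample and a non-targeted sample
    until its midpoint lies within `tol` of the performance boundary."""
    while np.linalg.norm(inside - outside) > tol:
        mid = (inside + outside) / 2.0
        if oracle(mid):
            inside = mid
        else:
            outside = mid
    return (inside + outside) / 2.0

def sample_boundary(oracle, dim=2, n_points=25, seed=0):
    """Draw random pairs of scenario parameters and bisect each pair that
    straddles the boundary, collecting points near the boundary itself."""
    rng = np.random.default_rng(seed)
    boundary = []
    while len(boundary) < n_points:
        a, b = rng.random(dim), rng.random(dim)
        if oracle(a) != oracle(b):  # one targeted, one non-targeted
            inside, outside = (a, b) if oracle(a) else (b, a)
            boundary.append(bisect_boundary(inside, outside, oracle))
    return np.array(boundary)

if __name__ == "__main__":
    points = sample_boundary(is_targeted)
    # Distances to the toy region's center should cluster around its radius (0.3),
    # confirming the samples sit near the targeted / non-targeted boundary.
    print(np.linalg.norm(points - 0.5, axis=1).round(3))
```

Replacing the toy oracle with an actual simulation run, and the independent random pairs with a strategy that steps along the boundary from previously found crossings, would move this sketch toward the boundary-adherence exploration summarized above.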