2023 IEEE International Conference on Assured Autonomy (ICAA): Latest Publications

Privacy-Aware Blockchain-Based AV Parking System Registration Scheme
Alexander Haastrup, Muhammad Hataba, Ahmed B. T. Sherif, M. Elsersy
2023 IEEE International Conference on Assured Autonomy (ICAA). Pub Date: 2023-06-01. DOI: 10.1109/ICAA58325.2023.00030
Abstract: Innovation and automation remain at the forefront of every evolving technological trend, and Autonomous Vehicles (AVs) are a prime instance of this. As research continues on ways to improve the efficiency and cost-effectiveness of AVs, attention is turning to parking systems. A crucial element of an effective AV parking scheme is strong user privacy and cybersecurity, to defend against online attackers and protect users' sensitive information. In this paper, we present a privacy-preserving blockchain-based registration scheme for AV parking systems capable of achieving these imperatives. Our proposed scheme uses a modified kNN encryption technique to encrypt and match the requested parking spots of AV users with the available slots of participating parking lots, represented as vector matrices. Additionally, the blockchain provides payment fairness and transparency, ensuring transactional satisfaction for both AV users and parking lots without the need for a financial third party. Our security and privacy analysis further indicates that our proposed scheme is robust and efficient.
Citations: 0
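The scheme above matches AV requests against parking-lot offers encoded as attribute vectors. The sketch below illustrates only the plaintext vector-matching step; the encrypted matching via the modified kNN technique is out of scope here, and the attribute encoding and function names are illustrative assumptions, not the paper's API.

```python
def match_score(request, slot):
    """Inner product of binary attribute vectors (e.g. covered, EV charger,
    near exit): each attribute shared by request and slot adds 1."""
    return sum(r * s for r, s in zip(request, slot))

def best_slot(request, slots):
    """Return the id of the available slot best matching the AV's request.
    `slots` maps slot id -> attribute vector."""
    return max(slots, key=lambda sid: match_score(request, slots[sid]))
```

For example, a request `[1, 0, 1]` against slots `{"A": [1, 1, 1], "B": [0, 0, 1]}` scores A at 2 and B at 1, so A is selected. In the actual scheme both vectors would be encrypted so that the lot learns nothing beyond the match result.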
Copyright Page
2023 IEEE International Conference on Assured Autonomy (ICAA). Pub Date: 2023-06-01. DOI: 10.1109/icaa58325.2023.00003
Citations: 0
Dehallucinating Large Language Models Using Formal Methods Guided Iterative Prompting
Susmit Jha, Sumit Kumar Jha, P. Lincoln, Nathaniel D. Bastian, Alvaro Velasquez, S. Neema
2023 IEEE International Conference on Assured Autonomy (ICAA). Pub Date: 2023-06-01. DOI: 10.1109/ICAA58325.2023.00029
Abstract: Large language models (LLMs) such as ChatGPT have been trained to generate human-like responses to natural language prompts. LLMs are trained on a vast corpus of text data and can generate coherent and contextually relevant responses to a wide range of questions and statements. Despite this remarkable progress, LLMs are prone to hallucinations, making their application to safety-critical domains such as autonomous systems difficult. Hallucinations in LLMs refer to instances where the model generates responses that are not factually accurate or contextually appropriate. They can occur for a variety of reasons, such as the model's lack of real-world knowledge, the influence of biased or inaccurate training data, or the model's tendency to generate responses based on statistical patterns rather than a true understanding of the input. While these hallucinations are a nuisance in tasks such as text summarization and question answering, they can be catastrophic when LLMs are used in autonomy-relevant applications such as planning. In this paper, we focus on the application of LLMs in autonomous systems and sketch a novel self-monitoring and iterative prompting architecture that uses formal methods to automatically detect these errors in the LLM response. We exploit the dialog capability of LLMs to iteratively steer them toward responses that are consistent with our correctness specification. We report preliminary experiments that show the promise of the proposed approach on tasks such as automated planning.
Citations: 8
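The self-monitoring loop the abstract sketches can be illustrated as follows. Here `llm` and `check` are placeholder callables standing in for the model's dialog interface and the formal-methods verifier; their signatures are assumptions for illustration, not the paper's implementation.

```python
def repair_loop(prompt, llm, check, max_rounds=5):
    """Query the model, verify the response against a formal correctness
    check, and re-prompt with the counterexample until the check passes.
    `check` returns (ok, feedback) where feedback explains the violation."""
    response = llm(prompt)
    for _ in range(max_rounds):
        ok, feedback = check(response)
        if ok:
            return response
        prompt = (prompt + "\nYour previous answer violated the "
                  f"specification: {feedback}. Please revise it.")
        response = llm(prompt)
    raise RuntimeError("no specification-consistent response within budget")
```

The key design choice is that the checker's counterexample is fed back into the dialog, so each round steers the model toward a response the formal check accepts rather than sampling blindly.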
Proposed V-Model for Verification, Validation, and Safety Activities for Artificial Intelligence
Benjamin J. Schumeg, Frank Marotta, Benjamin D. Werner
2023 IEEE International Conference on Assured Autonomy (ICAA). Pub Date: 2023-06-01. DOI: 10.1109/ICAA58325.2023.00017
Abstract: The Department of Defense strives to continuously develop and acquire systems that utilize novel technologies and methods for implementing new and complex mission requirements. One of the identified technologies with high impact and benefit to the Warfighter is the integration of Artificial Intelligence (AI) and Machine Learning (ML). Current AI models and methods have added layers of complexity to achieving a satisfactory level of verification and validation (V&V), possibly resulting in elevated risks with fewer mitigations. Regardless of the type of application for AI technology within the DoD, the technology implementation must be verified and validated, and ultimately any residual risks accepted. This paper introduces a V-model concept for Artificial Intelligence and Machine Learning, including an outline of proposed activities that the development, assurance, and evaluation communities can follow. By following this proposed assessment, these organizations can increase their understanding and knowledge of the system, mitigating risk and helping to achieve justified confidence.
Citations: 0
Watchdog For Assuring COLREG Compliance of Autonomous Unmanned Surface Vessels That Include Artificial Intelligence
Joshua Prucnal, D. Scheidt
2023 IEEE International Conference on Assured Autonomy (ICAA). Pub Date: 2023-06-01. DOI: 10.1109/ICAA58325.2023.00020
Abstract: This paper discusses the progress of a research project investigating novel methods for assuring COLREGs compliance for autonomous unmanned surface vessels. The COLREGs are regulations set by the Convention on the International Regulations for Preventing Collisions at Sea, 1972. The COLREGs provide instruction for how vessels should maneuver with respect to each other to avoid collisions. The motivating example for this research is a scenario where an unmanned surface vessel (USV) that has not yet violated the COLREGs is in a multi-vessel scenario where it is subject to conflicting rules, leading to no admissible actions. The methods detailed in this paper enable a Watchdog software to recognize such situations as they evolve and take action to prevent the USV's primary controller from reaching a state where it has no admissible action.
Citations: 0
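The watchdog's core idea — veto actions that would drive the vessel into a state with no admissible (rule-compliant) action left — can be sketched with a one-step lookahead. The `admissible` and `step` callables are placeholders for the paper's COLREG rule evaluation and vessel dynamics, and the one-step horizon is a simplifying assumption.

```python
def watchdog(state, action, admissible, step, evasive):
    """Accept the primary controller's action only if its successor state
    still has at least one admissible action; otherwise substitute an
    action that preserves admissibility, or an evasive maneuver."""
    if admissible(step(state, action)):
        return action
    safe = [a for a in admissible(state) if admissible(step(state, a))]
    return safe[0] if safe else evasive(state)
```

A real watchdog would look further ahead and reason about other vessels' nondeterministic behavior; the point of the sketch is that intervention happens before the admissible set becomes empty, not after.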
Predicting Out-of-Distribution Performance of Deep Neural Networks Using Model Conformance
Ramneet Kaur, Susmit Jha, Anirban Roy, O. Sokolsky, Insup Lee
2023 IEEE International Conference on Assured Autonomy (ICAA). Pub Date: 2023-06-01. DOI: 10.1109/ICAA58325.2023.00011
Abstract: With the increasingly high interest in using Deep Neural Networks (DNNs) in safety-critical cyber-physical systems, such as autonomous vehicles, providing assurance about the safe deployment of these models becomes ever more important. Safe deployment of deep learning models in the real world, where inputs can vary from the training environment, requires characterizing the performance and the uncertainty of these models' predictions, particularly on novel and out-of-distribution (OOD) inputs. This has motivated the development of methods to predict the accuracy of DNNs in novel (unseen during training) environments. These methods, however, assume access to some labeled data from the novel environment, which is unrealistic in many real-world settings. We propose an approach for predicting the accuracy of a DNN classifier under a shift from its training distribution without assuming access to labels of the inputs drawn from the shifted distribution. We demonstrate the efficacy of the proposed approach on two autonomous driving datasets: the GTSRB dataset for image classification, and the ONCE dataset with synchronized LiDAR and camera feeds used for object detection. We show that the proposed approach applies to different modalities of the input data (camera images and LiDAR point clouds).
Citations: 1
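The abstract does not detail the model-conformance method itself. As a generic illustration of the problem it solves — label-free accuracy prediction under shift — the sketch below uses a different, simpler proxy in the average-thresholded-confidence style: calibrate a confidence threshold on labeled in-distribution data so the fraction above it matches known accuracy, then report that fraction on the unlabeled shifted data. This is not the paper's technique.

```python
def calibrate_threshold(id_confidences, id_accuracy):
    """Pick threshold t so the fraction of in-distribution confidences
    >= t equals the accuracy measured on labeled in-distribution data."""
    ranked = sorted(id_confidences, reverse=True)
    k = round(id_accuracy * len(ranked))
    return ranked[k - 1] if k > 0 else float("inf")

def predict_accuracy(confidences, threshold):
    """Label-free accuracy estimate on a (possibly shifted) dataset:
    the fraction of predictions whose confidence clears the threshold."""
    return sum(c >= threshold for c in confidences) / len(confidences)
```

The underlying assumption — that confidence remains a faithful correctness signal under shift — often fails for large shifts, which is exactly why stronger signals such as conformance are worth pursuing.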
Safe Explainable Agents for Autonomous Navigation using Evolving Behavior Trees
Nicholas Potteiger, X. Koutsoukos
2023 IEEE International Conference on Assured Autonomy (ICAA). Pub Date: 2023-06-01. DOI: 10.1109/ICAA58325.2023.00014
Abstract: Machine learning and reinforcement learning are increasingly used to solve complex tasks in autonomous systems. However, autonomous agents represented by large neural networks are not transparent, making their assurability and trustworthiness critical challenges. Large models also lack interpretability, which creates severe obstacles to trust in autonomous agents and to human-machine teaming. In this paper, we leverage the hierarchical structure of behavior trees and hierarchical reinforcement learning to develop a neurosymbolic model architecture for autonomous agents. The proposed model, referred to as Evolving Behavior Trees (EBTs), integrates the components required to represent the learning tasks as well as the switching between tasks to achieve complex long-term goals. We design an agent for autonomous navigation and evaluate the approach against a state-of-the-art hierarchical reinforcement learning method using a Maze Simulation Environment. The results show that autonomous agents represented by EBTs can be trained efficiently. The approach incorporates explicit safety constraints into the model and incurs significantly fewer safety violations during training and execution. Further, the model provides explanations for the behavior of the autonomous agent by associating the state of the executing EBT with agent actions.
Citations: 1
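As background for the EBT architecture, a minimal behavior-tree skeleton with the two standard composite nodes is sketched below. The paper's learned leaf policies and evolution mechanism are not reproduced; this shows only the tick semantics that make the hierarchy interpretable.

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node wrapping a callable that returns SUCCESS or FAILURE."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, state):
        return self.fn(state)

class Sequence:
    """Succeeds only if every child succeeds, left to right."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Succeeds on the first child that succeeds (a.k.a. selector)."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            if child.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE
```

Because execution state is just "which node is ticking," the tree itself serves as the explanation for the agent's current action, which is the property the paper exploits.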
A Safety Fallback Controller for Improved Collision Avoidance
D. Genin, Elizabeth Dietrich, Yanni Kouskoulas, A. Schmidt, Marin Kobilarov, Kapil D. Katyal, S. Sefati, Subhransu Mishra, I. Papusha
2023 IEEE International Conference on Assured Autonomy (ICAA). Pub Date: 2023-06-01. DOI: 10.1109/ICAA58325.2023.00026
Abstract: We present an implementation of a formally verified safety fallback controller for improved collision avoidance in an autonomous vehicle research platform. Our approach uses a primary trajectory planning system that aims for collision-free navigation in the presence of pedestrians and other vehicles, and a fallback controller that guards its behavior. The safety fallback controller excludes the possibility of collisions by accounting for nondeterministic uncertainty in the dynamics of the vehicle and moving obstacles, and takes over from the primary controller as necessary. We demonstrate the system in an experimental set-up that includes simulations and real-world tests with a 1/5-scale vehicle. In stressing simulation scenarios, the safety fallback controller significantly reduces the number of collisions.
Citations: 0
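The guard pattern described above follows the classic simplex arrangement: a high-performance primary controller monitored by a verified fallback. A schematic sketch is below; `predict_worst_case` and `is_safe` stand in for the paper's nondeterministic reachability analysis and are assumptions for illustration.

```python
def select_action(state, primary, fallback, predict_worst_case, is_safe):
    """Run the primary planner, but hand control to the verified fallback
    whenever some worst-case successor state could be unsafe."""
    action = primary(state)
    if all(is_safe(s) for s in predict_worst_case(state, action)):
        return action
    return fallback(state)
```

The safety argument rests entirely on the fallback and the over-approximate prediction, so the primary planner can remain an unverified learned component without compromising the collision-freedom guarantee.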
AI Forensics
Samuel Lefcourt, Gregory Falco
2023 IEEE International Conference on Assured Autonomy (ICAA). Pub Date: 2023-06-01. DOI: 10.1109/ICAA58325.2023.00023
Abstract: Artificial intelligence is now a daily topic of public discussion. Not only are intelligent systems such as autonomous vehicles taking to the streets, but recommendation algorithms are shaping human behavior. While AI offers a significant increase in efficiency, it can also indirectly cause harm to humans by replacing jobs, violating privacy, and even threatening autonomy. To keep track of cases in which AI has negative implications for a human, databases of AI incidents have been created. We extend this idea of AI incidents to include not only cases where an AI system caused real-world harm, but also cases where it introduced a benefit. Prior work in adjacent fields has defined taxonomies and standard procedures for root cause analysis, digital forensics, AI risk management, and more. Despite these frameworks, there is no established means to investigate an AI system to discover the root cause of an incident. We evaluate this body of knowledge as the basis for introducing the field of AI Forensics. AI forensics can serve as a postmortem analysis of AI incidents to discover the primary harm catalyst.
Citations: 0
Assured Point Cloud Perception
Chris R. Serrano, A. Nogin, Michael A. Warren
2023 IEEE International Conference on Assured Autonomy (ICAA). Pub Date: 2023-06-01. DOI: 10.1109/ICAA58325.2023.00025
Abstract: Existing work on verification of neural networks has largely focused on the image domain, where issues of adversarial robustness are the main concern. In this paper, we exploit the geometric nature of point cloud data that makes it a natural domain in which neural network verification technology can provide even stronger guarantees. We illustrate this in the context of estimation of surface normals by showing how neural network verification can be used to analyze correctness properties related to this task, thereby allowing proofs of correctness that provide universally quantified guarantees over positive measure sets of patches. Whereas previous applications of neural network verification to point clouds have focused on the task of classification, here we apply neural network verification to point cloud regression. Our contribution includes a novel representation of local point cloud patches invariant to point cloud density, as well as small network architectures that can be more readily analyzed by existing neural network verification tools and may be more suitable for deployment on size, weight and power constrained platforms than state-of-the-art architectures. Our approach allows a model trained only in simulation to successfully transfer to diverse real-world systems (including a US Army autonomous vehicle platform) and sensors without any additional training or fine-tuning. Applying our input representation to existing approaches achieves improved performance on unoriented surface normals in low-noise environments.
Citations: 0
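For readers unfamiliar with the regression task above, surface-normal estimation in its simplest classical form computes the normal of the plane through nearby points. A minimal sketch via a cross product of two edge vectors is below; the paper's learned, density-invariant patch representation and verified networks are not reproduced here.

```python
def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three non-collinear 3-D points,
    via the cross product of two in-plane edge vectors."""
    u = [b - a for a, b in zip(p0, p1)]
    v = [b - a for a, b in zip(p0, p2)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(c * c for c in n) ** 0.5
    return [c / length for c in n]
```

Real LiDAR patches contain many noisy points rather than three exact ones, which is why learned estimators (and least-squares plane fits) are used in practice; note also that the sign of the normal is ambiguous, matching the paper's focus on unoriented normals.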