Establishing vehicular ground truth

Pallavi Meharia, Biswajit Panja, D. Agrawal
{"title":"Establishing vehicular ground truth","authors":"Pallavi Meharia, Biswajit Panja, D. Agrawal","doi":"10.1109/VNC.2016.7835932","DOIUrl":null,"url":null,"abstract":"One of the most challenging problems faced by developers and engineers is the ability to hypothesize human behaviour. The study of user behaviour has always been an integral part of security analysis and threat detection. However, it takes on more incentive in the context of autonomous vehicles. Given such a dynamic context, quick intuitions may prove to be very misleading; resulting in misconceptions about the technology, its impact, and the nature of innovation. Considering the potential magnitude of the ramification from this technology, it is advisable to maintain caution and design a solution which accounts for all possible vulnerabilities. This works presents a novel architecture towards securing intelligent vehicles from physical roadside compromise. It has been designed with the purpose of questioning everything the vehicle is seeing, and verifying whether there is any legitimacy involved in what it's registering as being observed. With this work, an evaluation of a classification system is presented for scenarios where a vehicle maybe susceptible to physical damage. In the present study, we experimentally investigate the possibility of masquerading fake road side units (such as road signs) to override typical driving behaviour. Driving data was logged for participants who drove a vehicle in a fixed loop measuring approximately ∼1.4 miles in the city of Cincinnati. The collected data was then split into testing and training samples; wherein classifiers were trained and the model evaluated against the same. Our results indicate that by using a 80–20 split, 96% of masquerading attacks could be identified accurately.","PeriodicalId":352428,"journal":{"name":"2016 IEEE Vehicular Networking Conference (VNC)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE Vehicular Networking Conference (VNC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VNC.2016.7835932","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

One of the most challenging problems faced by developers and engineers is hypothesizing human behaviour. The study of user behaviour has always been an integral part of security analysis and threat detection, but it takes on greater urgency in the context of autonomous vehicles. In such a dynamic context, quick intuitions may prove very misleading, resulting in misconceptions about the technology, its impact, and the nature of innovation. Considering the potential magnitude of this technology's ramifications, it is advisable to proceed cautiously and design a solution that accounts for all possible vulnerabilities. This work presents a novel architecture for securing intelligent vehicles against physical roadside compromise. It is designed to question everything the vehicle sees and to verify whether what it registers as observed is legitimate. We present an evaluation of a classification system for scenarios in which a vehicle may be susceptible to physical damage. In the present study, we experimentally investigate the possibility of masquerading fake roadside units (such as road signs) to override typical driving behaviour. Driving data was logged for participants who drove a vehicle in a fixed loop of approximately 1.4 miles in the city of Cincinnati. The collected data was then split into training and testing samples; classifiers were trained on the former and the model evaluated against the latter. Our results indicate that, using an 80–20 split, 96% of masquerading attacks could be identified accurately.
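As a rough illustration of the evaluation pipeline the abstract describes, the sketch below splits logged driving data 80–20, trains a classifier, and scores attack detection on the held-out portion. The random forest, the feature layout, and the synthetic data are illustrative assumptions only; the abstract does not name the classifiers or features actually used.

```python
# Minimal sketch of the evaluation pipeline described above: an 80-20
# train/test split over logged driving samples, a classifier trained on
# the training portion, and attack-detection accuracy measured on the
# held-out portion. The synthetic features and the random forest are
# assumptions for illustration; the paper's actual setup may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Synthetic stand-in for logged driving data: each row is one observation
# (e.g. speed, braking, steering features), labelled 1 when the roadside
# unit was a masqueraded fake and 0 when it was legitimate.
n_samples, n_features = 1000, 6
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)
# Shift the feature means for the attack class so the labels are learnable.
X[y == 1] += 0.75

# 80-20 split, as in the paper's evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42, stratify=y
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

# The paper reports that roughly 96% of masquerading attacks were
# identified with this split; this toy data will give a different figure.
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")
```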