Drivers’ Understanding of Artificial Intelligence in Automated Driving Systems: A Study of a Malicious Stop Sign

Katherine R. Garcia, S. Mishler, Y. Xiao, Congjiao Wang, B. Hu, J. Still, Jing Chen

Journal of Cognitive Engineering and Decision Making, 16(1), 237–251. Published September 8, 2022. DOI: 10.1177/15553434221117001
Automated Driving Systems (ADS), like many other systems people use today, depend on successful Artificial Intelligence (AI) for safe roadway operations. In ADS, an essential function performed by AI is computer vision for detecting roadway signs from the vehicle. AI, however, is not always reliable and sometimes requires human intelligence to complete a task. For humans to collaborate with AI, it is critical to understand how humans perceive AI. In the present study, we investigated how human drivers perceive the AI’s capabilities in a driving context where a stop sign is compromised, and how knowledge, experience, and trust related to AI play a role. We found that participants with more knowledge of AI tended to trust AI more, and those who reported more experience with AI had a greater understanding of AI. Participants correctly deduced that a maliciously manipulated stop sign would be more difficult for AI to identify. Nevertheless, participants still overestimated the AI’s ability to recognize the malicious stop sign. Our findings suggest that the public does not yet have a sufficiently accurate understanding of specific AI systems, which leads them to over-trust the AI in certain conditions.
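The stop-sign manipulation studied here belongs to the broader family of adversarial examples against vision models. As a purely illustrative sketch, and not the paper’s method or stimuli, the following Python code applies the Fast Gradient Sign Method (FGSM) to a hypothetical stand-in classifier; `TinySignClassifier`, the class indices, and the `epsilon` budget are all assumptions made for illustration, not details from the study.

```python
# Illustrative sketch (not from the paper): fooling a sign classifier with
# FGSM. The model here is an untrained stand-in; a real ADS vision pipeline
# and the physical stop-sign manipulation in the study would differ.
import torch
import torch.nn as nn

class TinySignClassifier(nn.Module):
    """Hypothetical stand-in CNN; any sign classifier exposing logits works."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Nudge each pixel in the direction that increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

model = TinySignClassifier().eval()
stop_sign = torch.rand(1, 3, 32, 32)   # placeholder "stop sign" image
label = torch.tensor([0])              # class 0 = stop sign (assumed)
tampered = fgsm_perturb(model, stop_sign, label)
print(model(stop_sign).argmax(1), model(tampered).argmax(1))
```

The small per-pixel budget (`epsilon`) is the point of such attacks: the tampered sign remains plainly readable to a human, which is consistent with participants judging the manipulated sign harder for the AI, yet still overestimating how well the AI would recognize it.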