Panos Constantinides, Eric Monteiro, Lars Mathiassen

Information and Organization, Volume 34, Issue 2, Article 100502. Published 2024-01-30. DOI: 10.1016/j.infoandorg.2024.100502
Human-AI joint task performance: Learning from uncertainty in autonomous driving systems
High uncertainty tasks such as making a medical diagnosis, judging a criminal justice case, and driving in a big city have a very low margin for error because of the potentially devastating consequences for human lives. In this paper, we focus on how humans learn from uncertainty while performing a high uncertainty task with AI systems. We analyze Tesla's autonomous driving systems (ADS), a type of AI system, drawing on crash investigation reports, published reports on formal simulation tests, and YouTube recordings of informal simulation tests by amateur drivers. Our empirical analysis provides insights into how varied levels of uncertainty tolerance shape how humans learn from uncertainty, both in real time and over time, to jointly perform the driving task with Tesla's ADS. Our core contribution is a theoretical model that explains human-AI joint task performance. Specifically, we show that the interdependencies among different modes of AI use, including uncontrolled automation, limited automation, expanded automation, and controlled automation, are dynamically shaped through humans' learning from uncertainty. We discuss how humans move between these modes of AI use by increasing, reducing, or reinforcing their uncertainty tolerance. We conclude by discussing implications for the design of AI systems, for policy on delegation in joint task performance, and for the use of data to improve learning from uncertainty.
About the journal:
Advances in information and communication technologies are associated with a wide and increasing range of social consequences, experienced by individuals, work groups, organizations, interorganizational networks, and societies at large. Information technologies are implicated in all industries and in public as well as private enterprises. Understanding the relationships between information technologies and social organization is an increasingly important and urgent social and scholarly concern in many disciplinary fields. Information and Organization seeks to publish original scholarly articles on the relationships between information technologies and social organization, grounded in empirical research and relevant theory.