Humans supervising Artificial intelligence – Investigation of Designs to optimize error detection

Marvin Braun, Maike Greve, Alfred Benedikt Brendel, Lutz M. Kolbe

Journal of Decision Systems (Q2, Operations Research & Management Science). Published 4 October 2023. DOI: 10.1080/12460125.2023.2260518

ABSTRACT
Artificial Intelligence (AI) fundamentally changes the way we work by introducing new capabilities. Human tasks shift towards a supervising role in which the human confirms or disconfirms a presented decision. In this study, we use signal detection theory to investigate and explain how specific information designs influence human error-detection performance. We conducted two online experiments in the context of AI-supported information extraction and measured participants' ability to validate the extracted information. In the first experiment, we investigated the effect of information provided before the error-detection task; in the second, we manipulated the design of the information presented during the task. Both manipulations significantly affected participants' error-detection performance. Our study therefore provides important insights for developing AI-based decision support systems and contributes to the theoretical understanding of human-AI collaboration.

KEYWORDS: supervision; artificial intelligence; error detection; decision making; signal detection theory

Acknowledgments
We acknowledge that a previous version of study 2 received valuable feedback at the European Conference on Information Systems 2022.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Ethics statement
The present research constitutes a non-interventional study, specifically focused on surveys and data analysis, in which no direct intervention, manipulation, or experimentation on human participants is involved. As a result, this study falls under the category where ethical approval is not required.
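The signal-detection framing the abstract refers to is commonly operationalised by scoring each supervisor response as a hit (a true AI error flagged), a false alarm (a correct output flagged), a miss, or a correct rejection, and then deriving sensitivity d′ and response criterion c. The paper does not publish its analysis code, so the sketch below is a generic, minimal illustration of those standard measures using only the Python standard library; the function name and example rates are hypothetical, not taken from the study.

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Standard signal-detection measures from hit and false-alarm rates.

    d' = z(H) - z(FA)        sensitivity: ability to tell AI errors
                             from correct AI outputs
    c  = -(z(H) + z(FA)) / 2 criterion: overall bias towards (c < 0)
                             or against (c > 0) flagging errors
    Rates must lie strictly between 0 and 1 (apply a correction
    such as the log-linear rule before calling if they do not).
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Illustrative (made-up) supervisor who flags 85% of true AI errors
# but also flags 30% of correct extractions:
d_prime, criterion = sdt_measures(0.85, 0.30)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```

Separating d′ from c is what makes the framework useful here: an information design could raise flagging rates without improving discrimination (a shift in c only), and only d′ reveals a genuine improvement in error detection.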