Title: Exploring Trust With the AI Incident Database
Authors: Jeff C. Stanley, Stephen L. Dorton
DOI: 10.1177/21695067231198084
Journal: Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Publication date: 2023-10-25
URL: https://doi.org/10.1177/21695067231198084
Citations: 0
Abstract
Engineering trustworthy artificial intelligence (AI) is important for adoption and appropriate use, but implementing trustworthy AI systems poses challenges. It is difficult to translate trust studies from the laboratory to the field, and it is difficult to operationalize "trustworthy AI" frameworks and principles to inform the actual development of AI. We address these challenges with an approach grounded in reported incidents of trust loss "in the wild." We systematically identified 30 cases of trust loss in the AI Incident Database to gain insight into how and why humans lose trust in AI across contexts. The resulting factors could be codified into the development cycle in forms such as checklists and design patterns, helping teams manage trust in AI systems and avoid similar incidents in the future. Because it is grounded in real incidents, this approach offers concrete, actionable recommendations for teams addressing real use cases with AI systems.