Trustworthy AI: AI developers' lens to implementation challenges and opportunities
Carter Cousineau, Rozita Dara, Ataharul Chowdhury
Data and Information Management, Vol. 9, No. 2, Article 100082 (June 2025)
DOI: 10.1016/j.dim.2024.100082
URL: https://www.sciencedirect.com/science/article/pii/S2543925124000184
Citations: 0
Abstract
As organizations continue to embrace artificial intelligence (AI) systems, it is crucial to ensure that these systems can be trusted. However, a significant gap remains between research on trustworthy AI and its implementation in real-world applications. To address this issue, we explored the perspectives of AI developers and the challenges they face in creating trustworthy AI systems. This exploratory study involved interviews with 19 AI developers. We identified key challenges that AI developers face due to the immature state of trustworthy AI: an inconsistent global regulatory landscape, a lack of standardized definitions of key concepts, and limited tools and standards for practical implementation in organizations. This paper provides recommendations for organizations to invest in trustworthy AI processes and practices, including building a foundation for trustworthy AI specific to their organization, adopting an organization-wide approach to trustworthy AI culture, and providing proper data infrastructure to support AI developers in creating trustworthy AI systems. By investing in trustworthy AI practices, organizations can prepare for evolving regulations and ensure that their AI systems are reliable and trustworthy.