Mohammad Naiseh, Auste Simkute, Baraa Zieni, Nan Jiang, Raian Ali
{"title":"C-XAI:设计支持信任校准的 XAI 工具的概念框架","authors":"Mohammad Naiseh , Auste Simkute , Baraa Zieni , Nan Jiang , Raian Ali","doi":"10.1016/j.jrt.2024.100076","DOIUrl":null,"url":null,"abstract":"<div><p>Recent advancements in machine learning have spurred an increased integration of AI in critical sectors such as healthcare and criminal justice. The ethical and legal concerns surrounding fully autonomous AI highlight the importance of combining human oversight with AI to elevate decision-making quality. However, trust calibration errors in human-AI collaboration, encompassing instances of over-trust or under-trust in AI recommendations, pose challenges to overall performance. Addressing trust calibration in the design process is essential, and eXplainable AI (XAI) emerges as a valuable tool by providing transparent AI explanations. This paper introduces Calibrated-XAI (C-XAI), a participatory design framework specifically crafted to tackle both technical and human factors in the creation of XAI interfaces geared towards trust calibration in Human-AI collaboration. The primary objective of the C-XAI framework is to assist designers of XAI interfaces in minimising trust calibration errors at the design level. This is achieved through the adoption of a participatory design approach, which includes providing templates, guidance, and involving diverse stakeholders in the design process. The efficacy of C-XAI is evaluated through a two-stage evaluation study, demonstrating its potential to aid designers in constructing user interfaces with trust calibration in mind. Through this work, we aspire to offer systematic guidance to practitioners, fostering a responsible approach to eXplainable AI at the user interface level.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"17 ","pages":"Article 100076"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000027/pdfft?md5=b4038e407ec2450c0ab8e0c8949eebfe&pid=1-s2.0-S2666659624000027-main.pdf","citationCount":"0","resultStr":"{\"title\":\"C-XAI: A conceptual framework for designing XAI tools that support trust calibration\",\"authors\":\"Mohammad Naiseh , Auste Simkute , Baraa Zieni , Nan Jiang , Raian Ali\",\"doi\":\"10.1016/j.jrt.2024.100076\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Recent advancements in machine learning have spurred an increased integration of AI in critical sectors such as healthcare and criminal justice. The ethical and legal concerns surrounding fully autonomous AI highlight the importance of combining human oversight with AI to elevate decision-making quality. However, trust calibration errors in human-AI collaboration, encompassing instances of over-trust or under-trust in AI recommendations, pose challenges to overall performance. Addressing trust calibration in the design process is essential, and eXplainable AI (XAI) emerges as a valuable tool by providing transparent AI explanations. This paper introduces Calibrated-XAI (C-XAI), a participatory design framework specifically crafted to tackle both technical and human factors in the creation of XAI interfaces geared towards trust calibration in Human-AI collaboration. The primary objective of the C-XAI framework is to assist designers of XAI interfaces in minimising trust calibration errors at the design level. 
This is achieved through the adoption of a participatory design approach, which includes providing templates, guidance, and involving diverse stakeholders in the design process. The efficacy of C-XAI is evaluated through a two-stage evaluation study, demonstrating its potential to aid designers in constructing user interfaces with trust calibration in mind. Through this work, we aspire to offer systematic guidance to practitioners, fostering a responsible approach to eXplainable AI at the user interface level.</p></div>\",\"PeriodicalId\":73937,\"journal\":{\"name\":\"Journal of responsible technology\",\"volume\":\"17 \",\"pages\":\"Article 100076\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2666659624000027/pdfft?md5=b4038e407ec2450c0ab8e0c8949eebfe&pid=1-s2.0-S2666659624000027-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of responsible technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666659624000027\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of responsible technology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666659624000027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
C-XAI: A conceptual framework for designing XAI tools that support trust calibration
Recent advancements in machine learning have spurred increased integration of AI in critical sectors such as healthcare and criminal justice. The ethical and legal concerns surrounding fully autonomous AI highlight the importance of combining human oversight with AI to improve decision-making quality. However, trust calibration errors in human-AI collaboration, encompassing instances of over-trust or under-trust in AI recommendations, pose challenges to overall performance. Addressing trust calibration in the design process is essential, and eXplainable AI (XAI) emerges as a valuable tool by providing transparent AI explanations. This paper introduces Calibrated-XAI (C-XAI), a participatory design framework specifically crafted to address both technical and human factors in the creation of XAI interfaces geared towards trust calibration in human-AI collaboration. The primary objective of the C-XAI framework is to assist designers of XAI interfaces in minimising trust calibration errors at the design level. This is achieved through a participatory design approach that provides templates and guidance and involves diverse stakeholders in the design process. The efficacy of C-XAI is evaluated through a two-stage evaluation study, demonstrating its potential to aid designers in constructing user interfaces with trust calibration in mind. Through this work, we aspire to offer systematic guidance to practitioners, fostering a responsible approach to eXplainable AI at the user interface level.
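To make the notion of a trust calibration error concrete, the following is a minimal sketch (not from the paper; the `Interaction` schema and function name are hypothetical) that operationalises over-trust as accepting an incorrect AI recommendation and under-trust as rejecting a correct one, over a log of human-AI decisions:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One logged human-AI decision (hypothetical schema, for illustration only)."""
    ai_correct: bool      # was the AI recommendation actually correct?
    user_accepted: bool   # did the user follow the recommendation?

def trust_calibration_errors(log: list[Interaction]) -> dict[str, float]:
    """Rates of over-trust (accepting wrong advice) and under-trust
    (rejecting correct advice) across a set of interactions."""
    if not log:
        return {"over_trust": 0.0, "under_trust": 0.0}
    over = sum(1 for i in log if i.user_accepted and not i.ai_correct)
    under = sum(1 for i in log if not i.user_accepted and i.ai_correct)
    n = len(log)
    return {"over_trust": over / n, "under_trust": under / n}

if __name__ == "__main__":
    sample = [
        Interaction(ai_correct=True, user_accepted=True),    # calibrated trust
        Interaction(ai_correct=False, user_accepted=True),   # over-trust
        Interaction(ai_correct=True, user_accepted=False),   # under-trust
        Interaction(ai_correct=False, user_accepted=False),  # calibrated distrust
    ]
    print(trust_calibration_errors(sample))  # {'over_trust': 0.25, 'under_trust': 0.25}
```

A designer could only compute such rates retrospectively, once ground truth for each recommendation is known; the point of C-XAI, as the abstract describes it, is to reduce these errors earlier, at the interface design stage.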