{"title":"评估时间对可解释人工智能信任校准的影响","authors":"Ezekiel Bernardo, R. Seva","doi":"10.54941/ahfe1003280","DOIUrl":null,"url":null,"abstract":"Explainable Artificial Intelligence (XAI) has played a significant role in human-computer interaction. The cognitive resources it carries allow humans to understand the complex algorithm powering Artificial Intelligence (AI), virtually resolving the acceptance and adoption barrier from the lack of transparency. This resulted in more systems leveraging XAI and triggering interest and efforts to develop newer and more capable techniques. However, though the research stream is expanding, little is known about the extent of its effectiveness on end-users. Current works have only measured XAI effects on either moment time effect or compared it cross-sectionally on various types of users. Filling this out can improve the understanding of existing studies and provide practical limitations on its use for trust calibration. To address this gap, a multi-time research experiment was conducted with 103 participants to use and evaluate XAI in an image classification application for three days. Measurement that was considered is on perceived usefulness for its cognitive contribution, integral emotions for affective change, trust, and reliance, and was analyzed via covariance-based structural equation modelling. Results showed that time only moderates the path from cognitive to trust and reliance as well as trust to reliance, with its effect dampening through time. On the other hand, affective change has remained consistent in all interactions. This shows that if an AI system uses XAI over a longer time frame, prioritization should be on its affective properties (i.e., things that will trigger emotional change) rather than purely on its cognitive purpose to maximize the positive effect of XAI.","PeriodicalId":405313,"journal":{"name":"Artificial Intelligence and Social Computing","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluating the Effect of Time on Trust Calibration of Explainable Artificial Intelligence\",\"authors\":\"Ezekiel Bernardo, R. Seva\",\"doi\":\"10.54941/ahfe1003280\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Explainable Artificial Intelligence (XAI) has played a significant role in human-computer interaction. The cognitive resources it carries allow humans to understand the complex algorithm powering Artificial Intelligence (AI), virtually resolving the acceptance and adoption barrier from the lack of transparency. This resulted in more systems leveraging XAI and triggering interest and efforts to develop newer and more capable techniques. However, though the research stream is expanding, little is known about the extent of its effectiveness on end-users. Current works have only measured XAI effects on either moment time effect or compared it cross-sectionally on various types of users. Filling this out can improve the understanding of existing studies and provide practical limitations on its use for trust calibration. To address this gap, a multi-time research experiment was conducted with 103 participants to use and evaluate XAI in an image classification application for three days. 
Measurement that was considered is on perceived usefulness for its cognitive contribution, integral emotions for affective change, trust, and reliance, and was analyzed via covariance-based structural equation modelling. Results showed that time only moderates the path from cognitive to trust and reliance as well as trust to reliance, with its effect dampening through time. On the other hand, affective change has remained consistent in all interactions. This shows that if an AI system uses XAI over a longer time frame, prioritization should be on its affective properties (i.e., things that will trigger emotional change) rather than purely on its cognitive purpose to maximize the positive effect of XAI.\",\"PeriodicalId\":405313,\"journal\":{\"name\":\"Artificial Intelligence and Social Computing\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence and Social Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.54941/ahfe1003280\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence and Social Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1003280","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Evaluating the Effect of Time on Trust Calibration of Explainable Artificial Intelligence
Explainable Artificial Intelligence (XAI) has played a significant role in human-computer interaction. The cognitive support it provides allows humans to understand the complex algorithms powering Artificial Intelligence (AI), effectively resolving the acceptance and adoption barriers that stem from a lack of transparency. This has led more systems to leverage XAI and has spurred interest in, and efforts toward, developing newer and more capable techniques. However, although the research stream is expanding, little is known about the extent of XAI's effectiveness for end-users. Current works have measured XAI effects only at a single point in time or compared them cross-sectionally across different types of users. Filling this gap can deepen the understanding of existing studies and establish practical limits on XAI's use for trust calibration. To address this gap, a multi-session experiment was conducted in which 103 participants used and evaluated XAI in an image classification application over three days. The measures considered were perceived usefulness (capturing XAI's cognitive contribution), integral emotions (capturing affective change), trust, and reliance, and the data were analyzed via covariance-based structural equation modelling. Results showed that time moderates only the paths from the cognitive measure to trust and reliance, as well as from trust to reliance, with these effects dampening over time. In contrast, the effect of affective change remained consistent across all interactions. This suggests that if an AI system uses XAI over a longer time frame, priority should be placed on its affective properties (i.e., elements that trigger emotional change) rather than purely on its cognitive purpose, in order to maximize the positive effect of XAI.
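As a rough illustration of how such a time-moderated path analysis could be specified, the sketch below uses semopy, a Python package for structural equation modelling. The variable names, data file, and product-term approach to moderation are assumptions made for illustration only; the abstract does not disclose the authors' actual measurement items, model specification, or estimator settings.

```python
# Hypothetical sketch of a time-moderated trust/reliance path model.
# Column names and the CSV file are illustrative assumptions, not the
# authors' actual dataset or model.
import pandas as pd
from semopy import Model  # pip install semopy

# Long-format data: one row per participant-session, with `day`
# (1, 2, 3) serving as the time moderator.
data = pd.read_csv("xai_sessions.csv")

# Moderation via product terms: each interaction allows the session
# index to shift the strength of a structural path. (Mean-centering
# the predictors before forming products is common practice.)
data["useful_x_day"] = data["usefulness"] * data["day"]
data["emotion_x_day"] = data["emotion"] * data["day"]
data["trust_x_day"] = data["trust"] * data["day"]

model_desc = """
trust ~ usefulness + emotion + day + useful_x_day + emotion_x_day
reliance ~ trust + usefulness + emotion + day + trust_x_day + useful_x_day + emotion_x_day
"""

# A significant negative coefficient on useful_x_day or trust_x_day
# would correspond to the dampening effect the abstract reports, while
# a non-significant emotion_x_day would match the stable affective path.
model = Model(model_desc)
model.fit(data)
print(model.inspect())  # parameter estimates, standard errors, p-values
```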