Latest Publications: ACM Transactions on Interactive Intelligent Systems

RadarSense: Accurate Recognition of Mid-air Hand Gestures with Radar Sensing and Few Training Examples
IF 3.4 · CAS Tier 4, Computer Science
ACM Transactions on Interactive Intelligent Systems · Pub Date: 2023-03-31 · DOI: 10.1145/3589645 · Pages: 1-45
Arthur Sluÿters, Sébastien Lambot, Jean Vanderdonckt, Radu-Daniel Vatavu

Abstract: Microwave radars bring many benefits to mid-air gesture sensing due to their large field of view and independence from environmental conditions such as ambient light and occlusion. However, radar signals are highly dimensional and usually require complex deep-learning approaches. To understand this landscape, we report results from a systematic literature review of N = 118 scientific papers on radar sensing, unveiling a large variety of radar technologies with different operating frequencies, bandwidths, and antenna configurations, as well as various gesture recognition techniques. Although highly accurate, these techniques require a large amount of training data that depends on the type of radar, so the training results cannot be easily transferred to other radars. To address this aspect, we introduce a new gesture recognition pipeline that implements advanced full-wave electromagnetic modeling and inversion to retrieve physical characteristics of gestures that are radar independent, i.e., independent of the source, antennas, and radar-hand interactions. Inversion of the radar signals further reduces the size of the dataset by several orders of magnitude while preserving the essential information. This approach is compatible with conventional gesture recognizers, such as those based on template matching, which need only a few training examples to deliver high recognition accuracy. To evaluate our gesture recognition pipeline, we conducted user-dependent and user-independent evaluations on a dataset of 16 gesture types collected with the Walabot, a low-cost off-the-shelf array radar. We contrast these results with those obtained for the same gesture types collected with an ultra-wideband radar (a vector network analyzer with a single horn antenna) and with a computer vision sensor, respectively. Based on our findings, we suggest design implications to support future development in radar-based gesture recognition.

Citations: 3
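To make the template-matching claim concrete, here is a minimal sketch of a DTW-based nearest-neighbor recognizer, the kind of few-example matcher the abstract says the pipeline is compatible with. The 1-D feature sequences and gesture labels are toy assumptions; the paper's actual feature extraction from inverted radar signals is not reproduced here.

```python
# Hedged sketch: a template-matching gesture recognizer (DTW + 1-NN).
# Gestures are assumed to arrive as 1-D feature sequences; the real
# pipeline derives radar-independent features via electromagnetic inversion.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(sample, templates):
    """Nearest-neighbor matching against a few labeled templates per class."""
    return min(templates, key=lambda t: dtw_distance(sample, t[1]))[0]

# One or two templates per class is often enough for template matchers.
templates = [("swipe",  np.array([0., 1., 2., 3., 4.])),
             ("circle", np.array([0., 2., 0., -2., 0.]))]
print(recognize(np.array([0., 1.1, 1.9, 3.2, 4.1]), templates))  # -> "swipe"
```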
LIMEADE: From AI Explanations to Advice Taking
IF 3.4 · CAS Tier 4, Computer Science
ACM Transactions on Interactive Intelligent Systems · Pub Date: 2023-03-28 · DOI: 10.1145/3589345
Benjamin Charles Germain Lee, Doug Downey, Kyle Lo, Daniel S. Weld

Abstract: Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow an AI to take advice from humans in response to explanations are similarly useful. While both capabilities are well developed for transparent learning models (e.g., linear models and GA²Ms), and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, little attention has been given to advice methods for opaque models. This paper introduces LIMEADE, the first general framework that translates both positive and negative advice, expressed using high-level vocabulary such as that employed by post-hoc explanations, into an update to an arbitrary underlying opaque model. We demonstrate the generality of our approach with case studies on seventy real-world models across two broad domains: image classification and text recommendation. We show that our method improves accuracy compared to a rigorous baseline in the image classification domain. For the text modality, we apply our framework to a neural recommender system for scientific papers on a public website; our user study shows that our framework leads to significantly higher perceived user control, trust, and satisfaction.

Citations: 0
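The core idea, turning high-level advice into an update of an opaque model, can be illustrated with a small sketch. This is not the authors' LIMEADE implementation: the pseudo-example templating, the MLP standing in as the "opaque" model, and the toy corpus are all assumptions chosen to keep the example self-contained.

```python
# Hedged sketch: mapping advice on an interpretable term into pseudo-labeled
# examples that nudge an opaque text recommender, in the spirit of LIMEADE.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

docs = ["great paper on neural ranking", "irrelevant blog post about cooking",
        "survey of recommender systems", "celebrity gossip article"]
labels = np.array([1, 0, 1, 0])  # 1 = recommend, 0 = do not recommend

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
model.fit(X, labels)

def apply_advice(term: str, positive: bool, n_pseudo: int = 5) -> None:
    """Translate advice like 'recommend things about <term>' into pseudo-
    examples and push them through a partial update of the opaque model."""
    pseudo_docs = [f"document about {term}"] * n_pseudo  # hypothetical template
    y_pseudo = np.full(n_pseudo, 1 if positive else 0)
    model.partial_fit(vec.transform(pseudo_docs), y_pseudo)

apply_advice("recommender", positive=True)
print(model.predict(vec.transform(["new work on recommender systems"])))
```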
Crowdsourcing Thumbnail Captions: Data Collection and Validation
IF 3.4 · CAS Tier 4, Computer Science
ACM Transactions on Interactive Intelligent Systems · Pub Date: 2023-03-28 · DOI: 10.1145/3589346 · Pages: 1-28
Carlos Alejandro Aguirre, Shiye Cao, Amama Mahmood, Chien-Ming Huang

Abstract: Speech interfaces, such as personal assistants and screen readers, read image captions to users. Typically, however, only one caption is available per image, which may not be adequate for all situations (e.g., browsing large quantities of images). Long captions provide a deeper understanding of an image but require more time to listen to, whereas shorter captions may not allow for such thorough comprehension yet have the advantage of being faster to consume. We explore how to effectively collect both thumbnail captions (succinct image descriptions meant to be consumed quickly) and comprehensive captions (which allow individuals to understand visual content in greater detail). We consider text-based instructions and time-constrained methods to collect descriptions at these two levels of detail and find that a time-constrained method is the most effective for collecting thumbnail captions while preserving caption accuracy. Additionally, we verify that caption authors using this time-constrained method are still able to focus on the most important regions of an image by tracking their eye gaze. We evaluate our collected captions along human-rated axes (correctness, fluency, amount of detail, and mentions of important concepts) and discuss the potential for model-based metrics to perform large-scale automatic evaluations in the future.

Citations: 0
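The closing remark about model-based metrics suggests what a large-scale automatic evaluation could look like. The sketch below scores a thumbnail and a comprehensive caption against a reference using embedding similarity; the model name, example captions, and the use of cosine similarity as a correctness proxy are illustrative assumptions, not the authors' proposal.

```python
# Hedged sketch: a model-based metric for automatic caption evaluation.
# Embedding similarity is one plausible stand-in for human correctness ratings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

reference = "A brown dog catches a red frisbee in a park."
thumbnail = "Dog catching a frisbee."  # succinct, fast to consume
comprehensive = ("A brown dog leaps over green grass in a sunny park "
                 "to catch a red frisbee in mid-air.")

emb = model.encode([reference, thumbnail, comprehensive], convert_to_tensor=True)
for name, e in [("thumbnail", emb[1]), ("comprehensive", emb[2])]:
    score = util.cos_sim(emb[0], e).item()  # correctness proxy in [-1, 1]
    print(f"{name}: {score:.2f}")
```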
How do Users Experience Traceability of AI Systems? Examining Subjective Information Processing Awareness in Automated Insulin Delivery (AID) Systems
IF 3.4 · CAS Tier 4, Computer Science
ACM Transactions on Interactive Intelligent Systems · Pub Date: 2023-03-24 · DOI: 10.1145/3588594
Tim Schrills, Thomas Franke

Abstract: When interacting with artificial intelligence (AI) in the medical domain, users frequently face automated information processing that can remain opaque to them. For example, users with diabetes may interact daily with automated insulin delivery (AID). However, effective AID therapy requires that automated decisions be traceable for diverse users. Grounded in research on human-automation interaction, we study Subjective Information Processing Awareness (SIPA) as a key construct for researching users' experience of explainable AI. The objective of the present research was to examine how users experience differing levels of traceability of an AI algorithm. We developed a basic AID simulation to create realistic scenarios for an experiment with N = 80 participants, in which we examined the effect of three levels of information disclosure on SIPA and performance. The attributes serving as the basis for the insulin needs calculation were shown to users, who predicted the AID system's calculation over more than 60 observations. Results showed a difference in SIPA after repeated observations, associated with a general decline of SIPA ratings over time. Supporting scale validity, SIPA was strongly correlated with trust and satisfaction with explanations. The present research indicates that the effect of different levels of information disclosure may need several repetitions before it manifests. Additionally, high levels of information disclosure may lead to a miscalibration between SIPA and performance in predicting the system's results. The results indicate that, for a responsible design of XAI, system designers could utilize prediction tasks to calibrate experienced traceability.

Citations: 1
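To illustrate the prediction task the participants faced, the sketch below pairs a dose calculation with a user's prediction. The paper does not publish its simulation's formula, so the textbook carb-ratio-plus-correction model and all parameter values here are assumptions for illustration only.

```python
# Hedged sketch: a toy AID prediction task in the spirit of the study design.
# Uses a standard bolus calculation (meal + correction components), which may
# differ from the study's actual simulation.
def bolus_dose(carbs_g: float, glucose_mgdl: float,
               carb_ratio: float = 10.0,        # g of carbs covered per unit
               correction_factor: float = 50.0,  # mg/dL lowered per unit
               target_mgdl: float = 110.0) -> float:
    """Units of insulin = meal component + correction component."""
    meal = carbs_g / carb_ratio
    correction = max(glucose_mgdl - target_mgdl, 0.0) / correction_factor
    return round(meal + correction, 1)

# A participant sees the attributes (carbs, glucose) and predicts the dose;
# experienced traceability can then be compared against prediction error.
system = bolus_dose(carbs_g=60, glucose_mgdl=180)  # -> 7.4 units
user_prediction = 7.0
print(f"system: {system} U, user error: {abs(system - user_prediction):.1f} U")
```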
Conversational Context-sensitive Ad Generation with a Few Core-Queries
IF 3.4 · CAS Tier 4, Computer Science
ACM Transactions on Interactive Intelligent Systems · Pub Date: 2023-03-23 · DOI: 10.1145/3588578 · Pages: 1-37
Ryoichi Shibata, Shoya Matsumori, Yosuke Fukuchi, Tomoyuki Maekawa, Mitsuhiko Kimoto, Michita Imai

Abstract: When people are talking together in front of digital signage, advertisements that are aware of the context of the dialogue work most effectively. However, it has been challenging for computer systems to retrieve the appropriate advertisement from among the many options in large databases. Our proposed system, the Conversational Context-sensitive Advertisement generator (CoCoA), is the first attempt to apply masked word prediction to web information retrieval that takes the dialogue context into account. The novelty of CoCoA is that advertisers simply prepare a few abstract phrases, called Core-Queries, and CoCoA automatically generates a context-sensitive expression as a complete search query by using a masked word prediction technique that adds a word related to the dialogue context to one of the prepared Core-Queries. This automatic generation frees advertisers from having to come up with context-sensitive phrases to attract users' attention. Another unique point is that the modified Core-Query offers users speaking in front of the CoCoA system a list of context-sensitive advertisements. CoCoA was evaluated by crowd workers on the context sensitivity of the generated search queries against dialogue texts from multiple domains prepared in advance. The results indicated that CoCoA could present more contextual and practical advertisements than other web-retrieval systems. Moreover, CoCoA received higher evaluations in a conversation that included many travel topics, for which the Core-Queries were designed, implying that it adapted the Core-Queries to the specific ongoing context better than the compared method, without any effort on the part of the advertisers. In addition, case studies with users and advertisers revealed that the context-sensitive advertisements generated by CoCoA also affected the content of the ongoing dialogue. Specifically, since pairs unfamiliar with each other referred more frequently to the advertisements CoCoA displayed, the advertisements influenced the topics the pairs spoke about. Moreover, participants in the advertiser role recognized that some of the search queries generated by CoCoA fit the context of a conversation and that CoCoA improved the effect of the advertisement. In particular, they easily got the hang of designing a good Core-Query by observing users' responses to the advertisements retrieved with the generated search queries.

Citations: 0
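The masked-word-prediction step at the heart of CoCoA is easy to prototype with an off-the-shelf masked language model. The sketch below conditions a [MASK] on the dialogue context and splices the predicted word into a Core-Query; the model choice, prompt template, and example Core-Query are assumptions, not the authors' pipeline.

```python
# Hedged sketch: expanding an advertiser's Core-Query with dialogue-aware
# masked word prediction, in the spirit of CoCoA.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

dialogue = "I want to relax somewhere warm this winter, maybe near the sea."
core_query = "hotel deals"  # abstract phrase prepared by the advertiser

# Condition the mask on the dialogue context, then splice the predicted
# word into the Core-Query to form a complete, context-sensitive search query.
prompt = f"{dialogue} We should look for a [MASK] vacation."
for pred in fill_mask(prompt, top_k=3):
    context_word = pred["token_str"]  # e.g. "beach" (model-dependent)
    search_query = f"{context_word} {core_query}"
    print(f"{pred['score']:.3f}  {search_query}")
```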
Effects of AI and Logic-Style Explanations on Users' Decisions under Different Levels of Uncertainty
IF 3.4 · CAS Tier 4, Computer Science
ACM Transactions on Interactive Intelligent Systems · Pub Date: 2023-03-16 · DOI: 10.1145/3588320
Federico Maria Cau, H. Hauptmann, L. D. Spano, N. Tintarev

Abstract: Existing eXplainable Artificial Intelligence (XAI) techniques support people in interpreting AI advice. However, while previous work evaluates users' understanding of explanations, the factors influencing the decision support itself are largely overlooked in the literature. This paper addresses this gap by studying the impact of user uncertainty, AI correctness, and the interaction between AI uncertainty and explanation logic-styles on classification tasks. We conducted two separate studies: one asking participants to recognize handwritten digits and one asking them to classify the sentiment of reviews. To assess decision making, we analyzed task performance, agreement with the AI suggestion, and the users' reliance on the XAI interface elements. Participants made their decisions relying on three pieces of information in the XAI interface (the image or text instance, the AI prediction, and the explanation). Each participant was shown one explanation style, following one of three styles of logical reasoning (inductive, deductive, and abductive), in a between-participants design. This allowed us to study how different levels of AI uncertainty influence the effectiveness of different explanation styles. The results show that user uncertainty and AI correctness significantly affected users' classification decisions on the analyzed metrics. In both domains (images and text), users relied mainly on the instance to decide. Users were usually overconfident about their choices, and this effect was more pronounced for text. Furthermore, inductive-style explanations led to over-reliance on the AI advice in both domains; they were the most persuasive, even when the AI was incorrect. The abductive and deductive styles had more complex effects depending on the domain and the AI uncertainty level.

Citations: 1
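A brief sketch of how agreement and over-reliance are commonly operationalized in studies like this one: agreement is the share of trials where the user's decision matches the AI advice, and over-reliance is agreement on trials where the advice is wrong. The column names and toy data below are illustrative; the paper's exact metrics may differ.

```python
# Hedged sketch: agreement with AI advice, split by AI correctness.
import pandas as pd

trials = pd.DataFrame({
    "user_choice": ["cat", "dog", "dog", "cat", "dog"],
    "ai_advice":   ["cat", "dog", "cat", "cat", "dog"],
    "truth":       ["cat", "dog", "dog", "dog", "dog"],
})
trials["agreed"] = trials.user_choice == trials.ai_advice
trials["ai_correct"] = trials.ai_advice == trials.truth

# Agreement when ai_correct is False quantifies over-reliance.
print(trials.groupby("ai_correct")["agreed"].mean())
```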