Trusting the machine: Exploring participant perceptions of AI-driven summaries in virtual focus groups with and without human oversight

Ye Wang, Huan Chen, Xiaofan Wei, Cheng Chang, Xinyi Zuo
*Computers in Human Behavior: Artificial Humans*, Volume 6, Article 100198
DOI: 10.1016/j.chbah.2025.100198 · Published 2025-08-26
https://www.sciencedirect.com/science/article/pii/S2949882125000829

Abstract

This study explores the use of AI-assisted summarization as part of a proposed AI moderation assistant for virtual focus group (VFG) settings, focusing on the calibration of trust through human oversight and transparency. To understand participant perspectives, this study employed a mixed-method approach: Study 1 conducted a focus group to gather initial data for the stimulus design of Study 2, and Study 2 was an online experiment that collected both quantitative and qualitative measures of perceptions of AI summarization across three groups—a control group, and two treatment groups (with vs. without human oversight). ANOVA and AI-assisted thematic analyses were performed. The findings indicate that AI summaries, with or without human oversight, were positively received by participants. However, no notable differences were observed in participants' satisfaction with the VFG application attributable to AI summaries. Qualitative findings reveal that participants appreciate AI's efficiency in summarization but express concerns about accuracy, authenticity, and the potential for AI to lack genuine human understanding. The findings contribute to the literature on trust in AI by demonstrating that trust can be achieved through transparency. By revealing the coexistence of AI appreciation and aversion, the study offers nuanced insights into trust calibration within socially and emotionally sensitive communication contexts. These results also inform the integration of AI summarization into qualitative research workflows.
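The abstract describes a one-way, three-condition comparison (control vs. AI summary without vs. with human oversight) analyzed with ANOVA. The sketch below illustrates that analysis structure with a pure-Python one-way ANOVA; all ratings, group labels, and group sizes are invented for demonstration and do not reproduce the study's data.

```python
# Hypothetical illustration of the between-groups comparison described in the
# abstract: a one-way ANOVA on satisfaction ratings (1-7 scale) across three
# conditions. The data below are invented; only the design mirrors the study.

def one_way_anova(groups):
    """Return the F statistic and degrees of freedom for a one-way ANOVA."""
    k = len(groups)                      # number of conditions
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares: weighted squared deviations of group means
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: squared deviations from each group's mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

control  = [5, 4, 5, 6, 4, 5]   # no AI summary
ai_only  = [5, 6, 5, 4, 6, 5]   # AI summary, no human oversight
ai_human = [6, 5, 6, 5, 5, 6]   # AI summary with human oversight

f, dfb, dfw = one_way_anova([control, ai_only, ai_human])
print(f"F({dfb}, {dfw}) = {f:.2f}")
```

With data like these, a small F value would be consistent with the study's null result on satisfaction: positively received summaries, but no notable between-condition differences.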