Ye Wang, Huan Chen, Xiaofan Wei, Cheng Chang, Xinyi Zuo
Journal: Computers in Human Behavior: Artificial Humans, Volume 6, Article 100198
DOI: 10.1016/j.chbah.2025.100198
Published: 2025-08-26 (Journal Article)
Available at: https://www.sciencedirect.com/science/article/pii/S2949882125000829
Trusting the machine: Exploring participant perceptions of AI-driven summaries in virtual focus groups with and without human oversight
This study explores the use of AI-assisted summarization as part of a proposed AI moderation assistant for virtual focus group (VFG) settings, focusing on the calibration of trust through human oversight and transparency. To understand participant perspectives, the study employed a mixed-methods approach: Study 1 conducted a focus group to gather initial data for the stimulus design of Study 2, and Study 2 was an online experiment that collected both quantitative and qualitative measures of perceptions of AI summarization across three groups: a control group and two treatment groups (with vs. without human oversight). ANOVA and AI-assisted thematic analyses were performed. The findings indicate that AI summaries, with or without human oversight, were positively received by participants. However, AI summaries produced no notable differences in participants' satisfaction with the VFG application. Qualitative findings reveal that participants appreciate AI's efficiency in summarization but express concerns about accuracy, authenticity, and the potential for AI to lack genuine human understanding. The findings contribute to the literature on trust in AI by demonstrating that trust can be achieved through transparency. By revealing the coexistence of AI appreciation and aversion, the study offers nuanced insights into trust calibration within socially and emotionally sensitive communication contexts. These results also inform the integration of AI summarization into qualitative research workflows.
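The abstract reports a one-way ANOVA comparing perceptions across three conditions. As a minimal sketch of that kind of analysis, the snippet below computes the F statistic by hand for three hypothetical groups of 7-point satisfaction ratings; the data and group names are invented for illustration and are not the study's actual measurements or results.

```python
# Illustrative sketch only: a one-way ANOVA, computed by hand, over
# hypothetical satisfaction ratings for the three conditions described in
# the abstract (control, AI summary with oversight, AI summary without
# oversight). The ratings below are invented, not the study's data.

def one_way_anova(groups):
    """Return the F statistic and degrees of freedom for k groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: group means vs. the grand mean
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-group sum of squares: observations vs. their group mean
    ss_within = sum(
        (x - sum(g) / len(g)) ** 2 for g in groups for x in g
    )
    df_between, df_within = k - 1, n_total - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical 7-point satisfaction ratings, n = 8 per condition
control = [5, 6, 4, 5, 6, 5, 4, 6]
ai_with_oversight = [6, 5, 6, 5, 7, 5, 6, 5]
ai_without_oversight = [5, 6, 5, 6, 5, 6, 4, 6]

f_stat, df1, df2 = one_way_anova(
    [control, ai_with_oversight, ai_without_oversight]
)
print(f"F({df1}, {df2}) = {f_stat:.2f}")
```

A small F with a non-significant p-value in such a test would mirror the paper's finding that AI summaries produced no notable differences in satisfaction across conditions.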