Harnessing the power of AI in qualitative research: Exploring, using and redesigning ChatGPT

He Zhang (Albert), Chuhao Wu, Jingyi Xie, Yao Lyu, Jie Cai, John M. Carroll
{"title":"在定性研究中利用人工智能的力量:探索、使用和重新设计ChatGPT","authors":"He Zhang (Albert) ,&nbsp;Chuhao Wu ,&nbsp;Jingyi Xie ,&nbsp;Yao Lyu ,&nbsp;Jie Cai ,&nbsp;John M. Carroll","doi":"10.1016/j.chbah.2025.100144","DOIUrl":null,"url":null,"abstract":"<div><div>AI tools, particularly large-scale language model (LLM) based applications such as ChatGPT, have the potential to mitigate qualitative research workload. In this study, we conducted semi-structured interviews with 17 participants and held a co-design session with 13 qualitative researchers to develop a framework for designing prompts specifically crafted to support junior researchers and stakeholders interested in leveraging AI for qualitative research. Our findings indicate that improving transparency, providing guidance on prompts, and strengthening users' understanding of LLMs' capabilities significantly enhance their ability to interact with ChatGPT. By comparing researchers' attitudes toward LLM-supported qualitative analysis before and after the co-design process, we reveal that the shift from an initially negative to a positive perception is driven by increased familiarity with the LLM's capabilities and the implementation of prompt engineering techniques that enhance response transparency and, in turn, foster greater trust. This research not only highlights the importance of well-designed prompts in LLM applications but also offers reflections for qualitative researchers on the perception of AI's role. Finally, we emphasize the potential ethical risks and the impact of constructing AI ethical expectations by researchers, particularly those who are novices, on future research and AI development.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100144"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Harnessing the power of AI in qualitative research: Exploring, using and redesigning ChatGPT\",\"authors\":\"He Zhang (Albert) ,&nbsp;Chuhao Wu ,&nbsp;Jingyi Xie ,&nbsp;Yao Lyu ,&nbsp;Jie Cai ,&nbsp;John M. Carroll\",\"doi\":\"10.1016/j.chbah.2025.100144\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>AI tools, particularly large-scale language model (LLM) based applications such as ChatGPT, have the potential to mitigate qualitative research workload. In this study, we conducted semi-structured interviews with 17 participants and held a co-design session with 13 qualitative researchers to develop a framework for designing prompts specifically crafted to support junior researchers and stakeholders interested in leveraging AI for qualitative research. Our findings indicate that improving transparency, providing guidance on prompts, and strengthening users' understanding of LLMs' capabilities significantly enhance their ability to interact with ChatGPT. By comparing researchers' attitudes toward LLM-supported qualitative analysis before and after the co-design process, we reveal that the shift from an initially negative to a positive perception is driven by increased familiarity with the LLM's capabilities and the implementation of prompt engineering techniques that enhance response transparency and, in turn, foster greater trust. This research not only highlights the importance of well-designed prompts in LLM applications but also offers reflections for qualitative researchers on the perception of AI's role. 
Finally, we emphasize the potential ethical risks and the impact of constructing AI ethical expectations by researchers, particularly those who are novices, on future research and AI development.</div></div>\",\"PeriodicalId\":100324,\"journal\":{\"name\":\"Computers in Human Behavior: Artificial Humans\",\"volume\":\"4 \",\"pages\":\"Article 100144\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-03-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior: Artificial Humans\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2949882125000283\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000283","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract


AI tools, particularly large-scale language model (LLM) based applications such as ChatGPT, have the potential to mitigate qualitative research workload. In this study, we conducted semi-structured interviews with 17 participants and held a co-design session with 13 qualitative researchers to develop a framework for designing prompts specifically crafted to support junior researchers and stakeholders interested in leveraging AI for qualitative research. Our findings indicate that improving transparency, providing guidance on prompts, and strengthening users' understanding of LLMs' capabilities significantly enhance their ability to interact with ChatGPT. By comparing researchers' attitudes toward LLM-supported qualitative analysis before and after the co-design process, we reveal that the shift from an initially negative to a positive perception is driven by increased familiarity with the LLM's capabilities and the implementation of prompt engineering techniques that enhance response transparency and, in turn, foster greater trust. This research not only highlights the importance of well-designed prompts in LLM applications but also offers reflections for qualitative researchers on the perception of AI's role. Finally, we emphasize the potential ethical risks and the impact of constructing AI ethical expectations by researchers, particularly those who are novices, on future research and AI development.
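
The abstract refers to prompt engineering techniques that improve response transparency but, as an abstract, does not spell them out. Purely as an illustration (not the prompt framework developed in the paper), the sketch below shows one common pattern for LLM-assisted qualitative coding: a prompt that asks the model to return each proposed code together with the verbatim supporting quote, a rationale, and a stated confidence level, so that a researcher can audit the suggestions against the raw data. The template wording, function name, and sample excerpt are all hypothetical.

```python
# Illustrative sketch only: a generic prompt template for LLM-assisted
# qualitative coding whose output format is meant to be auditable
# (code label + verbatim quote + rationale + stated confidence).
# This is not the prompt framework developed in the paper.

CODING_PROMPT_TEMPLATE = """\
You are assisting with qualitative analysis of interview data.

Task: propose thematic codes for the excerpt below.

For each code, return:
1. A short code label.
2. The verbatim quote from the excerpt that supports it.
3. A one-sentence rationale.
4. Your confidence (low / medium / high) and what extra context would change it.

Do not invent content that is not present in the excerpt.

Excerpt:
{excerpt}
"""


def build_coding_prompt(excerpt: str) -> str:
    """Fill the template with a single interview excerpt."""
    return CODING_PROMPT_TEMPLATE.format(excerpt=excerpt.strip())


if __name__ == "__main__":
    sample = (
        "I liked using the chatbot for a first pass over my transcripts, "
        "but I always re-checked its suggested themes against the raw data."
    )
    # The resulting string would be sent to an LLM via whatever chat
    # interface or API the researcher uses; the structured, quote-backed
    # output is what makes the response easier to verify and trust.
    print(build_coding_prompt(sample))
```

Requiring quote-level evidence and an explicit uncertainty statement is one concrete way a prompt can provide the kind of transparency that, per the study's findings, helps foster researchers' trust in LLM-supported analysis.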