Ted K. Mburu, Kangxuan Rong, Campbell J. McColley, Alexandra Werth
{"title":"人工智能驱动调查问题生成的方法学基础","authors":"Ted K. Mburu, Kangxuan Rong, Campbell J. McColley, Alexandra Werth","doi":"10.1002/jee.70012","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>This study investigates the use of large language models to create adaptive, contextually relevant survey questions, aiming to enhance data quality in educational research without limiting scalability.</p>\n </section>\n \n <section>\n \n <h3> Purpose</h3>\n \n <p>We provide step-by-step methods to develop a dynamic survey instrument, driven by artificial intelligence (AI), and introduce the Synthetic Question–Response Analysis (SQRA) framework, a methodology designed to help evaluate AI-generated questions before deployment with human participants.</p>\n </section>\n \n <section>\n \n <h3> Design</h3>\n \n <p>We examine the questions generated by our survey instrument, as well as compare AI-to-AI, generated through our SQRA framework, with AI-to-human interactions. Activity theory provides a theoretical lens to examine the dynamic interactions between AI and participants, highlighting the mutual influence within the survey tool.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>We found that AI-generated questions were contextually relevant and adaptable, successfully incorporating course-specific references. However, issues such as redundant phrasing, double-barreled questions, and jargon affected the clarity of the questions. 
Although the SQRA framework exhibited limitations in replicating human response variability, its iterative refinement process proved effective in improving question quality, reinforcing the utility of this approach for enhancing AI-driven surveys.</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>While AI-driven question generation can enhance the scalability and personalization of open-ended survey prompts, more research is needed to establish best practices for high-quality educational research. The SQRA framework demonstrated practical utility for prompt refinement and initial validation of AI-generated survey content, but it is not capable of replicating human responses. We highlight the importance of iterative prompt engineering, ethical considerations, and the need for methodological advancements in the development of trustworthy AI-driven survey instruments for educational research.</p>\n </section>\n </div>","PeriodicalId":50206,"journal":{"name":"Journal of Engineering Education","volume":"114 3","pages":""},"PeriodicalIF":3.9000,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Methodological foundations for artificial intelligence-driven survey question generation\",\"authors\":\"Ted K. Mburu, Kangxuan Rong, Campbell J. 
McColley, Alexandra Werth\",\"doi\":\"10.1002/jee.70012\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>This study investigates the use of large language models to create adaptive, contextually relevant survey questions, aiming to enhance data quality in educational research without limiting scalability.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Purpose</h3>\\n \\n <p>We provide step-by-step methods to develop a dynamic survey instrument, driven by artificial intelligence (AI), and introduce the Synthetic Question–Response Analysis (SQRA) framework, a methodology designed to help evaluate AI-generated questions before deployment with human participants.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Design</h3>\\n \\n <p>We examine the questions generated by our survey instrument, as well as compare AI-to-AI, generated through our SQRA framework, with AI-to-human interactions. Activity theory provides a theoretical lens to examine the dynamic interactions between AI and participants, highlighting the mutual influence within the survey tool.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>We found that AI-generated questions were contextually relevant and adaptable, successfully incorporating course-specific references. However, issues such as redundant phrasing, double-barreled questions, and jargon affected the clarity of the questions. Although the SQRA framework exhibited limitations in replicating human response variability, its iterative refinement process proved effective in improving question quality, reinforcing the utility of this approach for enhancing AI-driven surveys.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusions</h3>\\n \\n <p>While AI-driven question generation can enhance the scalability and personalization of open-ended survey prompts, more research is needed to establish best practices for high-quality educational research. 
The SQRA framework demonstrated practical utility for prompt refinement and initial validation of AI-generated survey content, but it is not capable of replicating human responses. We highlight the importance of iterative prompt engineering, ethical considerations, and the need for methodological advancements in the development of trustworthy AI-driven survey instruments for educational research.</p>\\n </section>\\n </div>\",\"PeriodicalId\":50206,\"journal\":{\"name\":\"Journal of Engineering Education\",\"volume\":\"114 3\",\"pages\":\"\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2025-05-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Engineering Education\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/jee.70012\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Engineering Education","FirstCategoryId":"5","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/jee.70012","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Methodological foundations for artificial intelligence-driven survey question generation
Background
This study investigates the use of large language models to create adaptive, contextually relevant survey questions, aiming to enhance data quality in educational research without limiting scalability.
Purpose
We provide step-by-step methods to develop a dynamic survey instrument, driven by artificial intelligence (AI), and introduce the Synthetic Question–Response Analysis (SQRA) framework, a methodology designed to help evaluate AI-generated questions before deployment with human participants.
Design
We examine the questions generated by our survey instrument and compare AI-to-AI interactions, generated through our SQRA framework, with AI-to-human interactions. Activity theory provides a theoretical lens for examining the dynamic interactions between AI and participants, highlighting the mutual influence within the survey tool.
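The AI-to-AI loop described above can be illustrated with a minimal sketch. All names here (`sqra_cycle`, `ask`, `answer`) are hypothetical interfaces, not the study's actual implementation: one model generates a question, a persona model stands in for a participant, and each exchange is folded back into the context so the next question can adapt.

```python
from typing import Callable, List, Dict

def sqra_cycle(
    ask: Callable[[str], str],      # question-generating model (hypothetical interface)
    answer: Callable[[str], str],   # persona model standing in for a human participant
    seed_prompt: str,
    rounds: int = 3,
) -> List[Dict[str, str]]:
    """Run one synthetic question-response cycle: generate a question,
    collect a synthetic response, and feed the exchange back into the
    context for the next round."""
    transcript: List[Dict[str, str]] = []
    context = seed_prompt
    for _ in range(rounds):
        question = ask(context)
        response = answer(question)
        transcript.append({"question": question, "response": response})
        # Fold the exchange back in so the next question can adapt to it.
        context = f"{context}\nQ: {question}\nA: {response}"
    return transcript

# Stub models so the sketch runs without any API; a real pipeline would
# call an LLM for both roles.
demo = sqra_cycle(
    ask=lambda ctx: "What part of the lab confused you most?",
    answer=lambda q: "The oscilloscope triggering settings.",
    seed_prompt="Course: intro circuits lab",
    rounds=2,
)
```

In a real deployment the `answer` role would be prompted with a participant persona, which is where the framework's reported difficulty in reproducing human response variability would surface.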
Results
We found that AI-generated questions were contextually relevant and adaptable, successfully incorporating course-specific references. However, issues such as redundant phrasing, double-barreled questions, and jargon affected the clarity of the questions. Although the SQRA framework exhibited limitations in replicating human response variability, its iterative refinement process proved effective in improving question quality, reinforcing the utility of this approach for enhancing AI-driven surveys.
Conclusions
While AI-driven question generation can enhance the scalability and personalization of open-ended survey prompts, more research is needed to establish best practices for high-quality educational research. The SQRA framework demonstrated practical utility for prompt refinement and initial validation of AI-generated survey content, but it is not capable of replicating human responses. We highlight the importance of iterative prompt engineering, ethical considerations, and the need for methodological advancements in the development of trustworthy AI-driven survey instruments for educational research.