This study investigates the use of large language models to create adaptive, contextually relevant survey questions, aiming to enhance data quality in educational research while preserving scalability.
We provide step-by-step methods for developing a dynamic, artificial intelligence (AI)-driven survey instrument and introduce the Synthetic Question–Response Analysis (SQRA) framework, a methodology for evaluating AI-generated questions before they are deployed with human participants.
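As a minimal illustration of such an adaptive instrument, the sketch below shows how one AI-driven survey turn might be generated. The `llm_complete` callable, the prompt wording, and the course-context fields are assumptions for illustration, not the instrument developed in the study.

```python
# Minimal sketch of one adaptive, AI-driven survey turn.
# `llm_complete` is an assumed stand-in for any chat-completion call
# (e.g., a thin wrapper around a hosted LLM API); prompt wording is illustrative.

def generate_followup(llm_complete, course_context: str, prior_answer: str) -> str:
    """Generate one open-ended follow-up question grounded in the course
    context and the participant's previous response."""
    prompt = (
        "You are assisting with an educational research survey.\n"
        f"Course context: {course_context}\n"
        f"Participant's previous answer: {prior_answer}\n"
        "Write ONE clear, open-ended follow-up question. "
        "Avoid jargon and double-barreled phrasing."
    )
    return llm_complete(prompt).strip()
```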
We examine the questions generated by our survey instrument and compare AI-to-AI interactions, produced through the SQRA framework, with AI-to-human interactions. Activity theory provides the theoretical lens for examining the dynamic interactions between AI and participants, highlighting their mutual influence within the survey tool.
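A hedged sketch of how an SQRA-style AI-to-AI exchange could be simulated appears below; the synthetic-respondent persona, the critique prompt, and the `llm_complete` callable are illustrative assumptions rather than the framework's exact implementation.

```python
# Illustrative SQRA-style AI-to-AI round: a synthetic respondent answers the
# generated question, and a reviewer pass critiques the question for clarity.
# Persona and prompts are assumptions for illustration only.

def sqra_round(llm_complete, question: str, persona: str) -> dict:
    """Simulate one synthetic question-response exchange for pre-deployment review."""
    synthetic_answer = llm_complete(
        f"You are a student with this profile: {persona}\n"
        f"Answer the following survey question in 2-3 sentences:\n{question}"
    )
    critique = llm_complete(
        "Review the survey question below for clarity, jargon, and "
        "double-barreled phrasing, then suggest one revision.\n"
        f"Question: {question}\nSynthetic answer: {synthetic_answer}"
    )
    return {
        "question": question,
        "synthetic_answer": synthetic_answer,
        "critique": critique,
    }
```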
We found that AI-generated questions were contextually relevant and adaptable, successfully incorporating course-specific references. However, redundant phrasing, double-barreled questions, and jargon reduced question clarity. Although the SQRA framework showed limitations in replicating human response variability, its iterative refinement process proved effective in improving question quality, reinforcing the utility of the approach for enhancing AI-driven surveys.
While AI-driven question generation can enhance the scalability and personalization of open-ended survey prompts, more research is needed to establish best practices for its use in high-quality educational research. The SQRA framework demonstrated practical utility for prompt refinement and initial validation of AI-generated survey content, but it cannot replicate human responses. We highlight the importance of iterative prompt engineering, ethical considerations, and methodological advances in developing trustworthy AI-driven survey instruments for educational research.