{"title":"Parameter-Efficiently Fine-Tuning Large Language Models for Classroom Dialogue Analysis","authors":"Deliang Wang;Yaqian Zheng;Jinjiang Li;Gaowei Chen","doi":"10.1109/TLT.2025.3567995","DOIUrl":null,"url":null,"abstract":"Researchers have increasingly utilized artificial intelligence to automatically analyze classroom dialogue, aiming to provide timely feedback to teachers due to its educational significance. However, traditional machine learning and deep learning models face challenges, such as limited performance and lack of generalizability, across various dimensions of classroom dialogue and educational contexts. Recent efforts to utilize large language models (LLMs) for classroom dialogue analysis have predominantly relied on prompt engineering techniques, primarily due to the high costs associated with full fine-tuning, which has resulted in suboptimal performance and areas needing improvement. We, therefore, propose the application of parameter-efficient fine-tuning (PEFT) techniques to enhance the performance of LLMs in classroom dialogue analysis. Specifically, we utilized low-rank adaptation, a prominent PEFT technique, to fine-tune three state-of-the-art LLMs—Llama-3.2-3B, Gemma-2-9B, and Mistral-7B-v0.3—targeting the analysis of both teachers' and students' dialogic moves within K-12 mathematics lessons. The experimental results indicate that, in comparison to fully fine-tuning BERT and RoBERTa models and prompting LLMs, LLMs fine-tuned using the PEFT technique achieve superior performance. Moreover, the PEFT approach significantly reduced the number of trainable parameters within the LLMs by over 300 times and decreased their training duration. Although the training time for PEFT-tuned LLMs was still longer than that required for fully fine-tuning BERT and RoBERTa, these LLMs demonstrated specialization in this specific dimension and generalizability to other tasks and contexts. We believe that the use of PEFT techniques presents a promising direction for future research in classroom dialogue analysis.","PeriodicalId":49191,"journal":{"name":"IEEE Transactions on Learning Technologies","volume":"18 ","pages":"542-555"},"PeriodicalIF":4.9000,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Learning Technologies","FirstCategoryId":"95","ListUrlMain":"https://ieeexplore.ieee.org/document/10992249/","RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0
Abstract
Researchers have increasingly utilized artificial intelligence to automatically analyze classroom dialogue, aiming to provide timely feedback to teachers given its educational significance. However, traditional machine learning and deep learning models face challenges, such as limited performance and a lack of generalizability, across various dimensions of classroom dialogue and educational contexts. Recent efforts to apply large language models (LLMs) to classroom dialogue analysis have relied predominantly on prompt engineering, largely because of the high cost of full fine-tuning, and have consequently yielded suboptimal performance with considerable room for improvement. We therefore propose applying parameter-efficient fine-tuning (PEFT) techniques to enhance the performance of LLMs in classroom dialogue analysis. Specifically, we used low-rank adaptation (LoRA), a prominent PEFT technique, to fine-tune three state-of-the-art LLMs (Llama-3.2-3B, Gemma-2-9B, and Mistral-7B-v0.3) to analyze both teachers' and students' dialogic moves in K-12 mathematics lessons. The experimental results indicate that LLMs fine-tuned with the PEFT technique outperform fully fine-tuned BERT and RoBERTa models as well as prompted LLMs. Moreover, the PEFT approach reduced the number of trainable parameters in the LLMs by a factor of more than 300 and shortened their training time. Although training the PEFT-tuned LLMs still took longer than fully fine-tuning BERT and RoBERTa, these LLMs demonstrated both specialization in this dimension of classroom dialogue and generalizability to other tasks and contexts. We believe that PEFT techniques present a promising direction for future research in classroom dialogue analysis.
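To make the approach concrete: LoRA freezes the pretrained weights and trains only small low-rank adapter matrices injected into selected layers, which is why the number of trainable parameters drops so sharply. Below is a minimal sketch using the Hugging Face transformers and peft libraries; the checkpoint name, number of dialogic-move labels, and LoRA hyperparameters (rank, alpha, target modules) are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal LoRA fine-tuning setup for sequence classification (hypothetical values).
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "meta-llama/Llama-3.2-3B"   # assumed base checkpoint
num_labels = 8                           # assumed number of dialogic-move categories

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=num_labels
)
model.config.pad_token_id = tokenizer.pad_token_id

# LoRA: keep the base weights frozen and learn low-rank updates on the
# attention projection matrices only.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,            # classification head on top of the LLM
    r=16,                                   # rank of the low-rank update (assumed)
    lora_alpha=32,                          # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # which modules receive adapters (assumed)
)
model = get_peft_model(model, lora_config)

# Reports trainable vs. total parameters; the trainable share is typically well
# under 1%, consistent with the >300x reduction described in the abstract.
model.print_trainable_parameters()
```

The wrapped model can then be trained with a standard transformers Trainer on the labeled dialogue utterances; only the adapter weights (and the classification head) are updated during training.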
About the journal:
The IEEE Transactions on Learning Technologies covers all advances in learning technologies and their applications, including but not limited to the following topics: innovative online learning systems; intelligent tutors; educational games; simulation systems for education and training; collaborative learning tools; learning with mobile devices; wearable devices and interfaces for learning; personalized and adaptive learning systems; tools for formative and summative assessment; tools for learning analytics and educational data mining; ontologies for learning systems; standards and web services that support learning; authoring tools for learning materials; computer support for peer tutoring; learning via computer-mediated inquiry, field, and lab work; social learning techniques; social networks and infrastructures for learning and knowledge sharing; and creation and management of learning objects.