Authors: Xiao Xie, Lawrence Jun Zhang, Aaron J. Wilson
Journal: Behavioral Sciences, 15(7)
DOI: 10.3390/bs15070884
Published: 2025-06-28 (Journal Article)
Journal Impact Factor: 2.5; JCR: Q2 (Psychology, Multidisciplinary)
Comparing ChatGPT Feedback and Peer Feedback in Shaping Students' Evaluative Judgement of Statistical Analysis: A Case Study.
Higher Degree by Research (HDR) students in language and education disciplines, particularly those enrolled in thesis-only programmes, are increasingly expected to interpret complex statistical data. However, many lack the analytical skills required for independent statistical analysis, posing challenges to their research competence. This study investigated the pedagogical potential of ChatGPT-4o feedback and peer feedback in supporting students' evaluative judgement during a 14-week doctoral-level statistical analysis course at a research-intensive university. Thirty-two doctoral students were assigned to receive either ChatGPT feedback or peer feedback on a mid-term assignment. They were then required to complete written reflections. Follow-up interviews with six selected participants revealed that each feedback modality influenced their evaluative judgement differently across three dimensions: hard (accuracy-based), soft (value-based), and dynamic (process-based). While ChatGPT provided timely and detailed guidance, it offered limited support for students' confidence in verifying accuracy. Peer feedback promoted critical reflection and collaboration but varied in quality. We therefore argue that strategically combining ChatGPT feedback and peer feedback may better support novice researchers in developing statistical competence in hybrid human-AI learning environments.