"My Grade is Wrong!": A Contestable AI Framework for Interactive Feedback in Evaluating Student Essays

Shengxin Hong, Chang Cai, Sixuan Du, Haiyue Feng, Siyuan Liu, Xiuyi Fan

arXiv - CS - Human-Computer Interaction, 2024-09-11. arXiv:2409.07453
Interactive feedback, where feedback flows in both directions between teacher
and student, is more effective than traditional one-way feedback. However, it
is often too time-consuming for widespread use in educational practice. While
Large Language Models (LLMs) have potential for automating feedback, they
struggle with reasoning and interaction in an interactive setting. This paper
introduces CAELF, a Contestable AI Empowered LLM Framework for automating
interactive feedback. CAELF allows students to query, challenge, and clarify
their feedback by integrating a multi-agent system with computational
argumentation. Essays are first assessed by multiple Teaching-Assistant Agents
(TA Agents), and then a Teacher Agent aggregates the evaluations through formal
reasoning to generate feedback and grades. Students can further engage with the
feedback to refine their understanding. A case study on 500 critical thinking
essays with user studies demonstrates that CAELF significantly improves
interactive feedback, enhancing the reasoning and interaction capabilities of
LLMs. This approach offers a promising solution to overcoming the time and
resource barriers that have limited the adoption of interactive feedback in
educational settings.
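The pipeline the abstract describes (multiple TA Agents assess an essay, a Teacher Agent aggregates their verdicts, and the student can then contest the result) can be sketched in outline. This is a minimal illustration, not the paper's implementation: the heuristic scorer stands in for LLM calls, and the Teacher Agent's aggregation is simplified to a median vote with merged justifications rather than the formal computational argumentation CAELF uses. All function and class names here are hypothetical.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Assessment:
    grade: int            # score proposed by one TA agent (0-10)
    arguments: list[str]  # supporting arguments for that score

def ta_agent(essay: str, rubric_focus: str) -> Assessment:
    # Stand-in for an LLM call: each TA agent scores one rubric dimension.
    score = min(10, len(essay.split()) // 5)  # toy length-based heuristic
    return Assessment(score, [f"{rubric_focus}: scored {score}/10"])

def teacher_agent(assessments: list[Assessment]) -> tuple[int, list[str]]:
    # Aggregate the TA verdicts into one grade plus combined feedback.
    # CAELF does this via formal argumentation; a median vote is used
    # here purely to keep the sketch self-contained.
    final = round(median(a.grade for a in assessments))
    feedback = [arg for a in assessments for arg in a.arguments]
    return final, feedback

def contest(grade: int, feedback: list[str], challenge: str) -> list[str]:
    # Interactive step: record the student's challenge so the feedback
    # can be queried, clarified, or revised in a follow-up round.
    return feedback + [f"student challenge noted: {challenge}"]

essay = "A claim must be supported by evidence and reasoning."
tas = [ta_agent(essay, focus) for focus in ("structure", "evidence", "clarity")]
grade, feedback = teacher_agent(tas)
feedback = contest(grade, feedback, "My grade is wrong!")
print(grade, len(feedback))
```

The key structural point the sketch preserves is the two-tier design: individual assessments are produced independently, then reconciled by a single aggregating agent whose output the student can dispute.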