AI-Infused Semantic Model to Enrich and Expand Programming Question Generation

I-Han Hsiao, Cheng-Yu Chung
DOI: 10.37965/jait.2022.0090
Journal: 人工智能技术学报 (Journal of Artificial Intelligence and Technology)
Published: 2022-03-31 (Journal Article)
Citations: 18

Abstract

Creating practice questions for programming learning is not easy. It requires the instructor to diligently organize heterogeneous learning resources, i.e., conceptual programming knowledge and procedural programming rules. Today's programming question generation (PQG) still relies largely on demanding authoring work performed by instructors without advanced technological support. In this work, we propose a semantic PQG model that aims to help the instructor generate new programming questions and expand the pool of assessment items. The PQG model is designed to transform conceptual and procedural programming knowledge from textbooks into a semantic network via a Local Knowledge Graph (LKG) and Abstract Syntax Tree (AST). For any given question, the model queries the established network to find related code examples and generates a set of questions from the associated LKG/AST semantic structures. We conducted an analysis comparing instructor-made questions from 9 undergraduate introductory programming courses with textbook questions. The results show that the instructor-made questions were considerably less complex than the textbook ones. The disparity in topic distribution prompted us to further examine the breadth and depth of question quality, and to investigate how question complexity relates to student performance. Finally, we report the results of a user study examining the quality of questions generated by the proposed AI-infused semantic PQG model.
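The pipeline the abstract describes — querying a semantic network for code examples related to a concept, then deriving new questions from their AST structure — can be illustrated with a minimal, hypothetical sketch. The `LKG` dictionary and `make_blank_question` helper below are illustrative stand-ins, not the authors' implementation: a toy concept-to-example mapping plays the role of the Local Knowledge Graph, and Python's `ast` module locates a binary operator to blank out, producing a simple fill-in-the-blank question.

```python
# Hypothetical sketch of AST-driven question generation; not the paper's model.
import ast

# Toy stand-in for the Local Knowledge Graph: concept -> related code examples.
LKG = {
    "accumulator loop": ["total = total + i"],
}

def make_blank_question(source: str):
    """Blank out the first binary operator in a (single-line) code example.

    Returns (question_text, answer), or None if no binary operator is found.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp):
            line = source.splitlines()[node.lineno - 1]
            start = node.left.end_col_offset   # just past the left operand
            end = node.right.col_offset        # start of the right operand
            answer = line[start:end].strip()   # the operator text, e.g. "+"
            question = line[:start] + " ____ " + line[end:]
            return question, answer
    return None

# Query the toy graph for a concept, then generate a question per example.
for example in LKG["accumulator loop"]:
    q, a = make_blank_question(example)
    print(f"Q: {q}\nA: {a}")
```

A full system would, as the abstract notes, also encode procedural rules and conceptual links from textbooks, so that the generated questions vary in structure and topic rather than just in the blanked token.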