HierLLM: Hierarchical Large Language Model for Question Recommendation

Yuxuan Liu, Haipeng Liu, Ting Long
{"title":"HierLLM: Hierarchical Large Language Model for Question Recommendation","authors":"Yuxuan Liu, Haipeng Liu, Ting Long","doi":"arxiv-2409.06177","DOIUrl":null,"url":null,"abstract":"Question recommendation is a task that sequentially recommends questions for\nstudents to enhance their learning efficiency. That is, given the learning\nhistory and learning target of a student, a question recommender is supposed to\nselect the question that will bring the most improvement for students. Previous\nmethods typically model the question recommendation as a sequential\ndecision-making problem, estimating students' learning state with the learning\nhistory, and feeding the learning state with the learning target to a neural\nnetwork to select the recommended question from a question set. However,\nprevious methods are faced with two challenges: (1) learning history is\nunavailable in the cold start scenario, which makes the recommender generate\ninappropriate recommendations; (2) the size of the question set is much large,\nwhich makes it difficult for the recommender to select the best question\nprecisely. To address the challenges, we propose a method called hierarchical\nlarge language model for question recommendation (HierLLM), which is a\nLLM-based hierarchical structure. The LLM-based structure enables HierLLM to\ntackle the cold start issue with the strong reasoning abilities of LLM. The\nhierarchical structure takes advantage of the fact that the number of concepts\nis significantly smaller than the number of questions, narrowing the range of\nselectable questions by first identifying the relevant concept for the\nto-recommend question, and then selecting the recommended question based on\nthat concept. This hierarchical structure reduces the difficulty of the\nrecommendation.To investigate the performance of HierLLM, we conduct extensive\nexperiments, and the results demonstrate the outstanding performance of\nHierLLM.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":"7 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06177","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Question recommendation is a task that sequentially recommends questions to students to enhance their learning efficiency. That is, given a student's learning history and learning target, a question recommender is supposed to select the question that will bring the student the most improvement. Previous methods typically model question recommendation as a sequential decision-making problem: they estimate the student's learning state from the learning history, and feed the learning state together with the learning target to a neural network that selects the recommended question from a question set. However, these methods face two challenges: (1) the learning history is unavailable in the cold-start scenario, which leads the recommender to generate inappropriate recommendations; (2) the question set is very large, which makes it difficult for the recommender to select the best question precisely. To address these challenges, we propose the hierarchical large language model for question recommendation (HierLLM), an LLM-based hierarchical structure. The LLM-based structure enables HierLLM to tackle the cold-start issue with the strong reasoning abilities of LLMs. The hierarchical structure exploits the fact that the number of concepts is significantly smaller than the number of questions: it narrows the range of selectable questions by first identifying the relevant concept for the question to be recommended, and then selecting the recommended question based on that concept. This hierarchical structure reduces the difficulty of the recommendation. To investigate the performance of HierLLM, we conduct extensive experiments, and the results demonstrate the outstanding performance of HierLLM.
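The two-stage selection described in the abstract can be made concrete with a short sketch. Everything below is an illustrative assumption, not the paper's implementation: `recommend_question`, the scorer callbacks, and the toy question bank are hypothetical stand-ins, whereas in HierLLM the scoring at both stages is produced by the LLM-based policy conditioned on the learning history and learning target.

```python
# Minimal sketch of two-stage hierarchical question selection, assuming
# user-supplied scorers. In the paper both stages are scored by an
# LLM-based model; the placeholder callables here only show the control flow.
from typing import Callable, Dict, List
import random

def recommend_question(
    history: List[str],                          # questions the student has answered (empty on cold start)
    target: List[str],                           # concepts the student aims to master
    concept_to_questions: Dict[str, List[str]],  # concept -> candidate questions
    score_concept: Callable[[List[str], List[str], str], float],
    score_question: Callable[[List[str], List[str], str], float],
) -> str:
    """Pick a concept first, then a question under that concept.

    Because the number of concepts is much smaller than the number of
    questions, each stage ranks a far smaller candidate set than a flat
    recommender that scores every question in the bank.
    """
    # Stage 1: identify the most relevant concept for the next recommendation.
    best_concept = max(
        concept_to_questions,
        key=lambda c: score_concept(history, target, c),
    )
    # Stage 2: select a question, restricted to that concept's candidates.
    candidates = concept_to_questions[best_concept]
    return max(candidates, key=lambda q: score_question(history, target, q))

if __name__ == "__main__":
    # Toy end-to-end run with hand-written scorers (hypothetical data).
    bank = {"fractions": ["q1", "q2"], "algebra": ["q3", "q4", "q5"]}
    pick = recommend_question(
        history=[],                 # cold start: no learning history
        target=["algebra"],
        concept_to_questions=bank,
        score_concept=lambda h, t, c: 1.0 if c in t else 0.0,
        score_question=lambda h, t, q: random.random(),
    )
    print(pick)  # one of q3/q4/q5
```

The design point the sketch captures is the search-space reduction: stage 1 ranks only the concepts, and stage 2 ranks only the questions under the chosen concept, so no single step has to discriminate among the full question set.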