Advancing Ontology Alignment in the Labor Market: Combining Large Language Models with Domain Knowledge

Lucas L. Snijder, Quirine T. S. Smit, Maaike H. T. de Boer
{"title":"推进劳动力市场的本体对齐:将大型语言模型与领域知识相结合","authors":"Lucas L. Snijder, Quirine T. S. Smit, Maaike H. T. de Boer","doi":"10.1609/aaaiss.v3i1.31208","DOIUrl":null,"url":null,"abstract":"One of the approaches to help the demand and supply problem in the labor market domain is to change from degree-based hiring to skill-based hiring. The link between occupations, degrees and skills is captured in domain ontologies such as ESCO in Europe and O*NET in the US. Several countries are also building or extending these ontologies. The alignment of the ontologies is important, as it should be clear how they all relate. Aligning two ontologies by creating a mapping between them is a tedious task to do manually, and with the rise of generative large language models like GPT-4, we explore how language models and domain knowledge can be combined in the matching of the instances in the ontologies and in finding the specific relation between the instances (mapping refinement). We specifically focus on the process of updating a mapping, but the methods could also be used to create a first-time mapping. We compare the performance of several state-of-the-art methods such as GPT-4 and fine-tuned BERT models on the mapping between ESCO and O*NET and ESCO and CompetentNL (the Dutch variant) for both ontology matching and mapping refinement. Our findings indicate that: 1) Match-BERT-GPT, an integration of BERT and GPT, performs best in ontology matching, while 2) TaSeR outperforms GPT-4, albeit marginally, in the task of mapping refinement. These results show that domain knowledge is still important in ontology alignment, especially in the updating of a mapping in our use cases in the labor domain.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"39 3","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Advancing Ontology Alignment in the Labor Market: Combining Large Language Models with Domain Knowledge\",\"authors\":\"Lucas L. Snijder, Quirine T. S. Smit, Maaike H. T. de Boer\",\"doi\":\"10.1609/aaaiss.v3i1.31208\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"One of the approaches to help the demand and supply problem in the labor market domain is to change from degree-based hiring to skill-based hiring. The link between occupations, degrees and skills is captured in domain ontologies such as ESCO in Europe and O*NET in the US. Several countries are also building or extending these ontologies. The alignment of the ontologies is important, as it should be clear how they all relate. Aligning two ontologies by creating a mapping between them is a tedious task to do manually, and with the rise of generative large language models like GPT-4, we explore how language models and domain knowledge can be combined in the matching of the instances in the ontologies and in finding the specific relation between the instances (mapping refinement). We specifically focus on the process of updating a mapping, but the methods could also be used to create a first-time mapping. We compare the performance of several state-of-the-art methods such as GPT-4 and fine-tuned BERT models on the mapping between ESCO and O*NET and ESCO and CompetentNL (the Dutch variant) for both ontology matching and mapping refinement. 
Our findings indicate that: 1) Match-BERT-GPT, an integration of BERT and GPT, performs best in ontology matching, while 2) TaSeR outperforms GPT-4, albeit marginally, in the task of mapping refinement. These results show that domain knowledge is still important in ontology alignment, especially in the updating of a mapping in our use cases in the labor domain.\",\"PeriodicalId\":516827,\"journal\":{\"name\":\"Proceedings of the AAAI Symposium Series\",\"volume\":\"39 3\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the AAAI Symposium Series\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1609/aaaiss.v3i1.31208\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI Symposium Series","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/aaaiss.v3i1.31208","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

One approach to addressing the demand-and-supply problem in the labor market is to shift from degree-based hiring to skill-based hiring. The link between occupations, degrees, and skills is captured in domain ontologies such as ESCO in Europe and O*NET in the US, and several countries are building or extending similar ontologies. Aligning these ontologies is important, as it should be clear how they relate to one another. Aligning two ontologies by creating a mapping between them is tedious to do manually. With the rise of generative large language models such as GPT-4, we explore how language models and domain knowledge can be combined for matching instances across ontologies (ontology matching) and for finding the specific relation between matched instances (mapping refinement). We specifically focus on the process of updating an existing mapping, but the methods could also be used to create a first-time mapping. We compare the performance of several state-of-the-art methods, such as GPT-4 and fine-tuned BERT models, on the mappings between ESCO and O*NET and between ESCO and CompetentNL (the Dutch variant), for both ontology matching and mapping refinement. Our findings indicate that: 1) Match-BERT-GPT, an integration of BERT and GPT, performs best in ontology matching, while 2) TaSeR outperforms GPT-4, albeit marginally, in the task of mapping refinement. These results show that domain knowledge remains important in ontology alignment, especially when updating a mapping in our labor-domain use cases.
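To make the two-step pipeline described in the abstract concrete, the sketch below illustrates one plausible way to combine a BERT-based encoder with a generative LLM: first match instances (e.g., occupations) across two ontologies by embedding similarity, then ask an LLM to classify the relation type for each candidate pair. This is a minimal illustration under assumptions, not the paper's Match-BERT-GPT or TaSeR implementation; the encoder name, the toy labels, the relation set, and the prompt wording are all placeholders chosen for the example.

```python
# Minimal sketch of ontology matching + mapping refinement (assumptions, not the paper's method).
from sentence_transformers import SentenceTransformer, util

# Placeholder encoder; the paper fine-tunes BERT models instead.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Toy instances standing in for ESCO and O*NET occupation labels.
esco_labels = ["data scientist", "software developer"]
onet_labels = ["Data Scientists", "Software Developers, Applications"]

esco_emb = encoder.encode(esco_labels, convert_to_tensor=True)
onet_emb = encoder.encode(onet_labels, convert_to_tensor=True)

# Step 1: ontology matching — greedy best match per ESCO instance by cosine similarity.
scores = util.cos_sim(esco_emb, onet_emb)
candidate_pairs = [
    (esco_labels[i], onet_labels[int(scores[i].argmax())], float(scores[i].max()))
    for i in range(len(esco_labels))
]

# Step 2: mapping refinement — ask a generative LLM to label the relation between a pair.
# Only the prompt construction is shown; an actual LLM call would require an API client and key.
def refinement_prompt(source: str, target: str) -> str:
    return (
        f"Occupation A (ESCO): {source}\n"
        f"Occupation B (O*NET): {target}\n"
        "Which relation holds between A and B? "
        "Answer with one of: exactMatch, narrowMatch, broadMatch, noMatch."
    )

for src, tgt, score in candidate_pairs:
    print(f"{src} -> {tgt} (cosine similarity {score:.2f})")
    print(refinement_prompt(src, tgt))
```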