Towards a benchmark dataset for large language models in the context of process automation

Impact Factor: 3.0 · JCR Q2 (Engineering, Chemical)
Tejennour Tizaoui , Ruomu Tan
Journal: Digital Chemical Engineering, Volume 13, Article 100186
DOI: 10.1016/j.dche.2024.100186
Published: 2024-09-16
Full text: https://www.sciencedirect.com/science/article/pii/S2772508124000486
Citations: 0

Abstract

The field of process automation possesses a substantial corpus of textual documentation that can be leveraged with Large Language Models (LLMs) and Natural Language Understanding (NLU) systems. Recent advancements in diverse LLMs, available in open source, present an opportunity to utilize them effectively in this area. However, LLMs are pre-trained on general textual data and lack knowledge in more specialized and niche areas such as process automation. Furthermore, the lack of datasets specifically tailored to process automation makes it difficult to assess the effectiveness of LLMs in this domain accurately. This paper aims to lay the foundation for creating a multitask benchmark for evaluating and adapting LLMs in process automation. In the paper, we introduce a novel workflow for semi-automated data generation, specifically tailored to creating extractive Question Answering (QA) datasets. The proposed methodology in this paper involves extracting passages from academic papers focusing on process automation, generating corresponding questions, and subsequently annotating and evaluating the dataset. The dataset initially created also undergoes data augmentation and is evaluated using metrics for semantic similarity. This study then benchmarked six LLMs on the newly created extractive QA dataset for process automation.
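The abstract states that the dataset is evaluated and that six LLMs are benchmarked on extractive QA, but it does not reproduce the exact scoring procedure. Token-level overlap F1 is the standard metric for extractive QA (as popularized by SQuAD); the sketch below is an illustration of such a metric, not the paper's own evaluation code, and the example strings are hypothetical.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted answer span and the annotated one,
    the standard score for extractive QA (SQuAD-style)."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        # Both empty counts as a match; one empty counts as a miss.
        return float(pred_tokens == ref_tokens)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: a model's extracted span vs. the annotated answer.
print(token_f1("a distributed control system", "distributed control system"))  # ≈ 0.857
```

Exact-span metrics like this penalize paraphrases, which is presumably why the study additionally reports semantic-similarity metrics (e.g. embedding-based scores) when evaluating the augmented dataset.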