LangTest: A comprehensive evaluation library for custom LLM and NLP models

IF 1.3 · Q3 · Computer Science, Software Engineering
Arshaan Nazir, Thadaka Kalyan Chakravarthy, David Amore Cecchini, Rakshit Khajuria, Prikshit Sharma, Ali Tarik Mirik, Veysel Kocaman, David Talby
DOI: 10.1016/j.simpa.2024.100619
Journal: Software Impacts · Published: 2024-02-10
Full text: https://www.sciencedirect.com/science/article/pii/S2665963824000071
Citations: 0

Abstract


Natural language processing (NLP) models, including the more recent large language models (LLMs), have achieved notable success in real-world applications in recent years. The performance of these systems is traditionally measured with metrics such as accuracy, precision, recall, and F1-score. Although measuring performance in those terms is important, natural language often calls for a holistic evaluation that considers other important aspects such as robustness, bias, toxicity, fairness, safety, efficiency, clinical relevance, security, representation, disinformation, political orientation, sensitivity, factuality, legal concerns, and vulnerabilities. To address this gap, we introduce LangTest, an open-source Python toolkit aimed at reshaping the evaluation of LLMs and NLP models in real-world applications. The project aims to empower data scientists, enabling them to meet high standards in the ever-evolving landscape of AI model development. Specifically, it provides a suite of more than 60 test types, supporting a more comprehensive understanding of a model's behavior and responsible AI use. In our experiment, a clinical Named Entity Recognition (NER) model showed significant improvement in its ability to identify clinical entities in text after data augmentation was applied for robustness.
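To make the workflow concrete, here is a minimal sketch of driving such an evaluation through LangTest's documented Harness entry point. The spaCy model, the CoNLL test file, and the chosen tests and pass-rate thresholds are illustrative assumptions, not values from the paper:

```python
# pip install langtest spacy && python -m spacy download en_core_web_sm
from langtest import Harness

# Build a harness for an NER task; the model and data path are placeholders.
harness = Harness(
    task="ner",
    model={"model": "en_core_web_sm", "hub": "spacy"},
    data={"data_source": "test.conll"},
)

# Select a small subset of LangTest's 60+ test types: two robustness
# perturbations, with minimum pass-rate thresholds (values are arbitrary).
harness.configure({
    "tests": {
        "defaults": {"min_pass_rate": 0.65},
        "robustness": {
            "uppercase": {"min_pass_rate": 0.66},
            "add_typo": {"min_pass_rate": 0.60},
        },
    }
})

# Generate perturbed test cases, run the model on them, and tabulate
# pass rates per test type.
harness.generate().run().report()
```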
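The robustness gain reported for the clinical NER model comes from retraining on perturbed copies of the training data. LangTest exposes this through an augmentation step on the harness; since its exact signature has varied across releases, the sketch below shows the underlying idea in plain Python instead. Every name here is hypothetical, and a real NER pipeline would also need to carry each sentence's token-level labels through the perturbations:

```python
import random

def perturb_uppercase(sentence: str) -> str:
    # All-uppercase variant: preserves token boundaries, so NER labels
    # can be reused unchanged.
    return sentence.upper()

def perturb_typo(sentence: str, rng: random.Random) -> str:
    # Swap two adjacent characters at a random position to simulate a typo.
    if len(sentence) < 2:
        return sentence
    i = rng.randrange(len(sentence) - 1)
    return sentence[:i] + sentence[i + 1] + sentence[i] + sentence[i + 2:]

def augment_for_robustness(sentences, seed=0):
    # Yield each original sentence followed by perturbed copies, to be
    # mixed into the training set before retraining the model.
    rng = random.Random(seed)
    for s in sentences:
        yield s
        yield perturb_uppercase(s)
        yield perturb_typo(s, rng)

# Hypothetical usage on a clinical-style sentence:
for variant in augment_for_robustness(["The patient received 40 mg of furosemide."]):
    print(variant)
```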

Source journal
Software Impacts
CiteScore: 2.70 · Self-citation rate: 9.50% · Review time: 16 days