Exploring the Effectiveness of LLMs in Automated Logging Statement Generation: An Empirical Study

IF 6.5 | CAS Tier 1 (Computer Science) | JCR Q1, COMPUTER SCIENCE, SOFTWARE ENGINEERING
Yichen Li;Yintong Huo;Zhihan Jiang;Renyi Zhong;Pinjia He;Yuxin Su;Lionel C. Briand;Michael R. Lyu
{"title":"探索 LLM 在自动生成日志语句中的有效性:实证研究","authors":"Yichen Li;Yintong Huo;Zhihan Jiang;Renyi Zhong;Pinjia He;Yuxin Su;Lionel C. Briand;Michael R. Lyu","doi":"10.1109/TSE.2024.3475375","DOIUrl":null,"url":null,"abstract":"Automated logging statement generation supports developers in documenting critical software runtime behavior. While substantial recent research has focused on retrieval-based and learning-based methods, results suggest they fail to provide appropriate logging statements in real-world complex software. Given the great success in natural language generation and programming language comprehension, large language models (LLMs) might help developers generate logging statements, but this has not yet been investigated. To fill the gap, this paper performs the first study on exploring LLMs for logging statement generation. We first build a logging statement generation dataset, \n<italic>LogBench</i>\n, with two parts: (1) \n<italic>LogBench-O</i>\n: \n<italic>3,870</i>\n methods with \n<italic>6,849</i>\n logging statements collected from GitHub repositories, and (2) \n<italic>LogBench-T</i>\n: the transformed unseen code from LogBench-O. Then, we leverage LogBench to evaluate the \n<italic>effectiveness</i>\n and \n<italic>generalization capabilities</i>\n (using \n<italic>LogBench-T</i>\n) of 13 top-performing LLMs, from 60M to 405B parameters. In addition, we examine the performance of these LLMs against classical retrieval-based and machine learning-based logging methods from the era preceding LLMs. Specifically, we evaluate the logging effectiveness of LLMs by studying their ability to determine logging ingredients and the impact of prompts and external program information. We further evaluate LLM's logging generalization capabilities using unseen data (LogBench-T) derived from code transformation techniques. While existing LLMs deliver decent predictions on logging levels and logging variables, our study indicates that they only achieve a maximum BLEU score of \n<italic>0.249</i>\n, thus calling for improvements. The paper also highlights the importance of prompt constructions and external factors (e.g., programming contexts and code comments) for LLMs’ logging performance. In addition, we observed that existing LLMs show a significant performance drop (\n<italic>8.2%-16.2%</i>\n decrease) when dealing with logging unseen code, revealing their unsatisfactory generalization capabilities. Based on these findings, we identify five implications and provide practical advice for future logging research. Our empirical analysis discloses the limitations of current logging approaches while showcasing the potential of LLM-based logging tools, and provides actionable guidance for building more practical models.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"50 12","pages":"3188-3207"},"PeriodicalIF":6.5000,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring the Effectiveness of LLMs in Automated Logging Statement Generation: An Empirical Study\",\"authors\":\"Yichen Li;Yintong Huo;Zhihan Jiang;Renyi Zhong;Pinjia He;Yuxin Su;Lionel C. Briand;Michael R. Lyu\",\"doi\":\"10.1109/TSE.2024.3475375\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automated logging statement generation supports developers in documenting critical software runtime behavior. 
While substantial recent research has focused on retrieval-based and learning-based methods, results suggest they fail to provide appropriate logging statements in real-world complex software. Given the great success in natural language generation and programming language comprehension, large language models (LLMs) might help developers generate logging statements, but this has not yet been investigated. To fill the gap, this paper performs the first study on exploring LLMs for logging statement generation. We first build a logging statement generation dataset, \\n<italic>LogBench</i>\\n, with two parts: (1) \\n<italic>LogBench-O</i>\\n: \\n<italic>3,870</i>\\n methods with \\n<italic>6,849</i>\\n logging statements collected from GitHub repositories, and (2) \\n<italic>LogBench-T</i>\\n: the transformed unseen code from LogBench-O. Then, we leverage LogBench to evaluate the \\n<italic>effectiveness</i>\\n and \\n<italic>generalization capabilities</i>\\n (using \\n<italic>LogBench-T</i>\\n) of 13 top-performing LLMs, from 60M to 405B parameters. In addition, we examine the performance of these LLMs against classical retrieval-based and machine learning-based logging methods from the era preceding LLMs. Specifically, we evaluate the logging effectiveness of LLMs by studying their ability to determine logging ingredients and the impact of prompts and external program information. We further evaluate LLM's logging generalization capabilities using unseen data (LogBench-T) derived from code transformation techniques. While existing LLMs deliver decent predictions on logging levels and logging variables, our study indicates that they only achieve a maximum BLEU score of \\n<italic>0.249</i>\\n, thus calling for improvements. The paper also highlights the importance of prompt constructions and external factors (e.g., programming contexts and code comments) for LLMs’ logging performance. In addition, we observed that existing LLMs show a significant performance drop (\\n<italic>8.2%-16.2%</i>\\n decrease) when dealing with logging unseen code, revealing their unsatisfactory generalization capabilities. Based on these findings, we identify five implications and provide practical advice for future logging research. 
Our empirical analysis discloses the limitations of current logging approaches while showcasing the potential of LLM-based logging tools, and provides actionable guidance for building more practical models.\",\"PeriodicalId\":13324,\"journal\":{\"name\":\"IEEE Transactions on Software Engineering\",\"volume\":\"50 12\",\"pages\":\"3188-3207\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2024-10-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Software Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10707668/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Software Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10707668/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

Automated logging statement generation supports developers in documenting critical software runtime behavior. While substantial recent research has focused on retrieval-based and learning-based methods, results suggest they fail to provide appropriate logging statements in real-world complex software. Given the great success in natural language generation and programming language comprehension, large language models (LLMs) might help developers generate logging statements, but this has not yet been investigated. To fill the gap, this paper performs the first study on exploring LLMs for logging statement generation. We first build a logging statement generation dataset, LogBench, with two parts: (1) LogBench-O: 3,870 methods with 6,849 logging statements collected from GitHub repositories, and (2) LogBench-T: the transformed unseen code from LogBench-O. Then, we leverage LogBench to evaluate the effectiveness and generalization capabilities (using LogBench-T) of 13 top-performing LLMs, ranging from 60M to 405B parameters. In addition, we examine the performance of these LLMs against classical retrieval-based and machine learning-based logging methods from the era preceding LLMs. Specifically, we evaluate the logging effectiveness of LLMs by studying their ability to determine logging ingredients and the impact of prompts and external program information. We further evaluate LLMs' logging generalization capabilities using unseen data (LogBench-T) derived from code transformation techniques. While existing LLMs deliver decent predictions on logging levels and logging variables, our study indicates that they only achieve a maximum BLEU score of 0.249, thus calling for improvements. The paper also highlights the importance of prompt constructions and external factors (e.g., programming contexts and code comments) for LLMs' logging performance. In addition, we observed that existing LLMs show a significant performance drop (an 8.2%-16.2% decrease) when dealing with logging unseen code, revealing their unsatisfactory generalization capabilities. Based on these findings, we identify five implications and provide practical advice for future logging research. Our empirical analysis discloses the limitations of current logging approaches while showcasing the potential of LLM-based logging tools, and provides actionable guidance for building more practical models.
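
To make the reported numbers concrete: the abstract's 0.249 ceiling refers to BLEU, a text-similarity metric computed between a model-generated logging statement and the developer-written reference. The sketch below is a minimal, self-contained illustration of that kind of comparison; the whitespace tokenization, the smoothing choice, and the example statements are assumptions for illustration, not the paper's LogBench implementation.

```python
# Minimal BLEU sketch for comparing a generated logging statement against a
# developer-written reference. Tokenization, smoothing, and the example
# statements below are illustrative assumptions, not the paper's setup.
from collections import Counter
import math

def sentence_bleu(candidate, reference, max_n=4):
    """BLEU with uniform n-gram weights and simple smoothing for zero counts."""
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
        total = max(sum(cand.values()), 1)
        # Smooth zero counts so one missing n-gram order does not zero the score.
        log_prec_sum += math.log(max(overlap, 0.5) / total)
    # Brevity penalty: candidates shorter than the reference are penalized.
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(log_prec_sum / max_n)

# A logging statement bundles the "ingredients" the study evaluates:
# a level (info/debug/...), variables to record, and static message text.
reference = 'log . info ( "User {} logged in from {}" , userId , ipAddress )'.split()
candidate = 'log . debug ( "User {} logged in" , userId )'.split()
print(f"BLEU: {sentence_bleu(candidate, reference):.3f}")  # well below 1.0 despite a plausible statement
```

Note that BLEU treats the logging level and the logged variables as ordinary tokens; the study also evaluates those ingredients separately, which is why LLMs can score well on levels and variables while overall BLEU stays low.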
Source Journal
IEEE Transactions on Software Engineering
Category: Engineering & Technology / Engineering: Electrical & Electronic
CiteScore: 9.70
Self-citation rate: 10.80%
Annual articles: 724
Review turnaround: 6 months
Journal Description: IEEE Transactions on Software Engineering seeks contributions comprising well-defined theoretical results and empirical studies with potential impacts on software construction, analysis, or management. The scope of this Transactions extends from fundamental mechanisms to the development of principles and their application in specific environments. Specific topic areas include:
a) Development and maintenance methods and models: Techniques and principles for specifying, designing, and implementing software systems, encompassing notations and process models.
b) Assessment methods: Software tests, validation, reliability models, test and diagnosis procedures, software redundancy, design for error control, and measurements and evaluation of process and product aspects.
c) Software project management: Productivity factors, cost models, schedule and organizational issues, and standards.
d) Tools and environments: Specific tools, integrated tool environments, associated architectures, databases, and parallel and distributed processing issues.
e) System issues: Hardware-software trade-offs.
f) State-of-the-art surveys: Syntheses and comprehensive reviews of the historical development within specific areas of interest.