Yichen Li;Yintong Huo;Zhihan Jiang;Renyi Zhong;Pinjia He;Yuxin Su;Lionel C. Briand;Michael R. Lyu
{"title":"探索 LLM 在自动生成日志语句中的有效性:实证研究","authors":"Yichen Li;Yintong Huo;Zhihan Jiang;Renyi Zhong;Pinjia He;Yuxin Su;Lionel C. Briand;Michael R. Lyu","doi":"10.1109/TSE.2024.3475375","DOIUrl":null,"url":null,"abstract":"Automated logging statement generation supports developers in documenting critical software runtime behavior. While substantial recent research has focused on retrieval-based and learning-based methods, results suggest they fail to provide appropriate logging statements in real-world complex software. Given the great success in natural language generation and programming language comprehension, large language models (LLMs) might help developers generate logging statements, but this has not yet been investigated. To fill the gap, this paper performs the first study on exploring LLMs for logging statement generation. We first build a logging statement generation dataset, \n<italic>LogBench</i>\n, with two parts: (1) \n<italic>LogBench-O</i>\n: \n<italic>3,870</i>\n methods with \n<italic>6,849</i>\n logging statements collected from GitHub repositories, and (2) \n<italic>LogBench-T</i>\n: the transformed unseen code from LogBench-O. Then, we leverage LogBench to evaluate the \n<italic>effectiveness</i>\n and \n<italic>generalization capabilities</i>\n (using \n<italic>LogBench-T</i>\n) of 13 top-performing LLMs, from 60M to 405B parameters. In addition, we examine the performance of these LLMs against classical retrieval-based and machine learning-based logging methods from the era preceding LLMs. Specifically, we evaluate the logging effectiveness of LLMs by studying their ability to determine logging ingredients and the impact of prompts and external program information. We further evaluate LLM's logging generalization capabilities using unseen data (LogBench-T) derived from code transformation techniques. While existing LLMs deliver decent predictions on logging levels and logging variables, our study indicates that they only achieve a maximum BLEU score of \n<italic>0.249</i>\n, thus calling for improvements. The paper also highlights the importance of prompt constructions and external factors (e.g., programming contexts and code comments) for LLMs’ logging performance. In addition, we observed that existing LLMs show a significant performance drop (\n<italic>8.2%-16.2%</i>\n decrease) when dealing with logging unseen code, revealing their unsatisfactory generalization capabilities. Based on these findings, we identify five implications and provide practical advice for future logging research. Our empirical analysis discloses the limitations of current logging approaches while showcasing the potential of LLM-based logging tools, and provides actionable guidance for building more practical models.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"50 12","pages":"3188-3207"},"PeriodicalIF":6.5000,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring the Effectiveness of LLMs in Automated Logging Statement Generation: An Empirical Study\",\"authors\":\"Yichen Li;Yintong Huo;Zhihan Jiang;Renyi Zhong;Pinjia He;Yuxin Su;Lionel C. Briand;Michael R. Lyu\",\"doi\":\"10.1109/TSE.2024.3475375\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Automated logging statement generation supports developers in documenting critical software runtime behavior. 
While substantial recent research has focused on retrieval-based and learning-based methods, results suggest they fail to provide appropriate logging statements in real-world complex software. Given the great success in natural language generation and programming language comprehension, large language models (LLMs) might help developers generate logging statements, but this has not yet been investigated. To fill the gap, this paper performs the first study on exploring LLMs for logging statement generation. We first build a logging statement generation dataset, \\n<italic>LogBench</i>\\n, with two parts: (1) \\n<italic>LogBench-O</i>\\n: \\n<italic>3,870</i>\\n methods with \\n<italic>6,849</i>\\n logging statements collected from GitHub repositories, and (2) \\n<italic>LogBench-T</i>\\n: the transformed unseen code from LogBench-O. Then, we leverage LogBench to evaluate the \\n<italic>effectiveness</i>\\n and \\n<italic>generalization capabilities</i>\\n (using \\n<italic>LogBench-T</i>\\n) of 13 top-performing LLMs, from 60M to 405B parameters. In addition, we examine the performance of these LLMs against classical retrieval-based and machine learning-based logging methods from the era preceding LLMs. Specifically, we evaluate the logging effectiveness of LLMs by studying their ability to determine logging ingredients and the impact of prompts and external program information. We further evaluate LLM's logging generalization capabilities using unseen data (LogBench-T) derived from code transformation techniques. While existing LLMs deliver decent predictions on logging levels and logging variables, our study indicates that they only achieve a maximum BLEU score of \\n<italic>0.249</i>\\n, thus calling for improvements. The paper also highlights the importance of prompt constructions and external factors (e.g., programming contexts and code comments) for LLMs’ logging performance. In addition, we observed that existing LLMs show a significant performance drop (\\n<italic>8.2%-16.2%</i>\\n decrease) when dealing with logging unseen code, revealing their unsatisfactory generalization capabilities. Based on these findings, we identify five implications and provide practical advice for future logging research. 
Our empirical analysis discloses the limitations of current logging approaches while showcasing the potential of LLM-based logging tools, and provides actionable guidance for building more practical models.\",\"PeriodicalId\":13324,\"journal\":{\"name\":\"IEEE Transactions on Software Engineering\",\"volume\":\"50 12\",\"pages\":\"3188-3207\"},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2024-10-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Software Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10707668/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Software Engineering","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10707668/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Exploring the Effectiveness of LLMs in Automated Logging Statement Generation: An Empirical Study
Automated logging statement generation supports developers in documenting critical software runtime behavior. While substantial recent research has focused on retrieval-based and learning-based methods, results suggest they fail to provide appropriate logging statements in real-world, complex software. Given their great success in natural language generation and programming language comprehension, large language models (LLMs) might help developers generate logging statements, but this has not yet been investigated. To fill the gap, this paper performs the first study exploring LLMs for logging statement generation. We first build a logging statement generation dataset, LogBench, with two parts: (1) LogBench-O: 3,870 methods with 6,849 logging statements collected from GitHub repositories, and (2) LogBench-T: unseen code transformed from LogBench-O. Then, we leverage LogBench to evaluate the effectiveness and generalization capabilities (using LogBench-T) of 13 top-performing LLMs, ranging from 60M to 405B parameters. In addition, we examine the performance of these LLMs against classical retrieval-based and machine-learning-based logging methods from the era preceding LLMs. Specifically, we evaluate the logging effectiveness of LLMs by studying their ability to determine logging ingredients and the impact of prompts and external program information. We further evaluate the LLMs' logging generalization capabilities using unseen data (LogBench-T) derived from code transformation techniques. While existing LLMs deliver decent predictions on logging levels and logging variables, our study indicates that they achieve a maximum BLEU score of only 0.249, thus calling for improvements. The paper also highlights the importance of prompt construction and external factors (e.g., programming contexts and code comments) for LLMs' logging performance. In addition, we observed that existing LLMs show a significant performance drop (an 8.2%-16.2% decrease) when logging unseen code, revealing their unsatisfactory generalization capabilities. Based on these findings, we identify five implications and provide practical advice for future logging research. Our empirical analysis discloses the limitations of current logging approaches while showcasing the potential of LLM-based logging tools, and provides actionable guidance for building more practical models.
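To make the task concrete, the sketch below illustrates what logging statement generation looks like in practice: given a method body as context, a model must supply the missing logging statement, i.e., choose a log level, select the variables to record, and compose the log message. This is a hypothetical illustration for this summary; the class, method, and variable names are invented and are not taken from LogBench.

    // Hypothetical Java example (identifiers invented for illustration; not from LogBench).
    // A logging-statement generator receives the method without its logging statement
    // and must predict three ingredients: (1) log level, (2) variables, (3) message text.
    import java.util.HashSet;
    import java.util.Set;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class OrderService {
        private static final Logger logger = LoggerFactory.getLogger(OrderService.class);
        private final Set<String> pendingOrders = new HashSet<>();

        public void cancelOrder(String orderId, String reason) {
            boolean removed = pendingOrders.remove(orderId);
            if (!removed) {
                // Target to generate: e.g., a warn-level statement that records the
                // relevant variables together with an explanatory message.
                logger.warn("Failed to cancel order {}: {}", orderId, reason);
                return;
            }
            // Order was pending and has now been cancelled.
        }
    }

Text-similarity metrics such as BLEU, as reported in the abstract, compare the generated statement against the developer-written one, which is where figures like the maximum score of 0.249 come from.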
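The abstract also notes that LogBench-T consists of unseen code derived from code transformation techniques. The specific transformation operators are described in the paper, not in this summary; the sketch below shows one plausible semantics-preserving transformation (identifier renaming), purely to illustrate the idea that program behavior and the expected logging ingredients stay equivalent while the surface form differs from code the model may have memorized.

    // Hypothetical illustration of a semantics-preserving transformation.
    // The actual operators used to build LogBench-T are defined in the paper;
    // this pair of methods is an invented example.
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class TransformExample {
        private static final Logger logger = LoggerFactory.getLogger(TransformExample.class);

        // Original form (as it might appear in LogBench-O):
        public long parseTimeout(String value) {
            long timeoutMs = Long.parseLong(value);
            logger.debug("Parsed timeout {} ms from '{}'", timeoutMs, value);
            return timeoutMs;
        }

        // Transformed form (as it might appear in LogBench-T): identifiers renamed,
        // behavior and expected logging ingredients unchanged.
        public long parseConfiguredTimeout(String rawInput) {
            long durationMillis = Long.parseLong(rawInput);
            logger.debug("Parsed timeout {} ms from '{}'", durationMillis, rawInput);
            return durationMillis;
        }
    }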
Journal Introduction:
IEEE Transactions on Software Engineering seeks contributions comprising well-defined theoretical results and empirical studies with potential impacts on software construction, analysis, or management. The scope of this Transactions extends from fundamental mechanisms to the development of principles and their application in specific environments. Specific topic areas include:
a) Development and maintenance methods and models: Techniques and principles for specifying, designing, and implementing software systems, encompassing notations and process models.
b) Assessment methods: Software tests, validation, reliability models, test and diagnosis procedures, software redundancy, design for error control, and measurements and evaluation of process and product aspects.
c) Software project management: Productivity factors, cost models, schedule and organizational issues, and standards.
d) Tools and environments: Specific tools, integrated tool environments, associated architectures, databases, and parallel and distributed processing issues.
e) System issues: Hardware-software trade-offs.
f) State-of-the-art surveys: Syntheses and comprehensive reviews of the historical development within specific areas of interest.