Automating Method Naming with Context-Aware Prompt-Tuning

Jie Zhu, Ling-jie Li, Li Yang, Xiaoxiao Ma, Chun Zuo
DOI: 10.1109/ICPC58990.2023.00035
Published in: 2023 IEEE/ACM 31st International Conference on Program Comprehension (ICPC), 2023-03-10
Citations: 2

Abstract

Method names are crucial to program comprehension and maintenance. Recently, many approaches have been proposed to automatically recommend method names and detect inconsistent names. Despite promising results, they remain suboptimal owing to three drawbacks: 1) the models are mostly trained from scratch, learning two different objectives simultaneously, and the misalignment between the two objectives hurts training efficiency and model performance; 2) the enclosing class context is not fully exploited, making it difficult to learn the abstract functionality of the method; 3) current method name consistency checking methods follow a generate-then-compare process, which limits accuracy because they rely heavily on the quality of the generated names and struggle to measure semantic consistency. In this paper, we propose AUMENA, an approach to AUtomate MEthod NAming tasks with context-aware prompt-tuning. Unlike existing deep-learning-based approaches, our model first learns contextualized representations of programming language and natural language (including class attributes) through a pre-trained model, then fully exploits the capacity and knowledge of the large language model with prompt-tuning to precisely detect inconsistent method names and recommend more accurate ones. To better identify semantically consistent names, we model the method name consistency checking task as a two-class classification problem, avoiding the limitations of previous generate-then-compare consistency checking approaches. Experimental results show that AUMENA scores 68.6%, 72.0%, 73.6%, and 84.7% on four method name recommendation datasets, surpassing the state-of-the-art baseline by 8.5%, 18.4%, 11.0%, and 12.0%, respectively. Our approach also scores 80.8% accuracy on method name consistency checking, a 5.5% improvement. All data and trained models are publicly available.
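To make the two task formulations concrete, the sketch below shows how a method and its enclosing class context might be wrapped into cloze-style prompts: one where a masked slot is filled with a recommended name, and one where consistency checking becomes a two-class (consistent / inconsistent) decision rather than a generate-then-compare step. The template wording and helper names here are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of prompt construction in the spirit of AUMENA.
# The templates, function names, and mask token are assumptions; the
# paper's real prompts and pre-trained model may differ.

def build_naming_prompt(class_name, attributes, method_body, mask="[MASK]"):
    """Wrap a method body plus its enclosing class context into a
    cloze-style prompt; a masked language model would fill the mask
    with a recommended method name."""
    context = f"class {class_name} with attributes {', '.join(attributes)}"
    return f"{context}. Code: {method_body} The method name is {mask}."

def build_consistency_prompt(method_name, class_name, attributes, method_body):
    """Frame consistency checking as two-class classification: the
    model labels the (name, context, body) triple consistent or not,
    instead of generating a name and comparing it with the given one."""
    context = f"class {class_name} with attributes {', '.join(attributes)}"
    return (f"{context}. Code: {method_body} "
            f"Is '{method_name}' a consistent name? [MASK]")

# Example usage with a toy method.
prompt = build_naming_prompt(
    "Stack", ["items"], "def f(self): return self.items.pop()")
print(prompt)
```

In a real prompt-tuning pipeline, these strings would be tokenized and fed to a pre-trained code/language model, with the mask position's logits read out either over the name vocabulary (recommendation) or over two verbalizer labels (consistency checking).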