Neural Model for Generating Method Names from Combined Contexts
Zane Varner, Çerağ Oğuztüzün, Feng Long
2022 IEEE 29th Annual Software Technology Conference (STC), October 2022
DOI: 10.1109/STC55697.2022.00009 (https://doi.org/10.1109/STC55697.2022.00009)
Citations: 0
Abstract
The names given to methods within a software system are critical to the success of both software development and maintenance. Meaningful and concise method names save developers time and effort when understanding and using the code. Our study focuses on learning concise and meaningful method names from word tokens found within the contexts of a method, including the method documentation, input parameters, return type, method body, and enclosing class. Combining the approaches of previous studies, we constructed both an RNN encoder-decoder model with attention and a Transformer model, each tested using different combinations of contextual information as input. Our experiments demonstrate that a model using all of the mentioned contexts achieves higher performance than a model using any subset of them. Furthermore, we demonstrate that the Transformer model outperforms the RNN model in this setting.
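The abstract describes feeding a combination of method contexts (documentation, parameters, return type, body, enclosing class) to a sequence model. As a minimal illustrative sketch of that input-assembly step, the function below splits source identifiers into subtokens and concatenates the five contexts into one token sequence. The separator tokens (`<doc>`, `<params>`, etc.) and the exact tokenization scheme are assumptions for illustration, not details from the paper.

```python
import re

def subtokenize(identifier):
    """Split a source identifier into lower-cased subtokens.

    Handles camelCase, PascalCase, and snake_case,
    e.g. "getUserName" -> ["get", "user", "name"].
    """
    subtokens = []
    for part in re.split(r"_+", identifier):
        # Match runs of uppercase (acronyms), capitalized words,
        # lowercase runs, and digit runs.
        pieces = re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|\d+", part)
        subtokens.extend(p.lower() for p in pieces)
    return subtokens

def build_model_input(doc, params, return_type, body_identifiers, class_name):
    """Concatenate the five contexts into a single token sequence.

    The separator markers used here are hypothetical placeholders,
    not the markers used in the paper.
    """
    seq = ["<doc>"] + doc.lower().split()
    seq.append("<params>")
    for name, ptype in params:
        seq += subtokenize(ptype) + subtokenize(name)
    seq += ["<ret>"] + subtokenize(return_type)
    seq.append("<body>")
    for ident in body_identifiers:
        seq += subtokenize(ident)
    seq += ["<class>"] + subtokenize(class_name)
    return seq

tokens = build_model_input(
    doc="returns the user name",
    params=[("userId", "long")],
    return_type="String",
    body_identifiers=["lookupUser", "toString"],
    class_name="UserService",
)
```

A decoder (RNN or Transformer) would then generate the method-name subtokens, e.g. `["get", "user", "name"]`, from this combined sequence; ablating a context amounts to dropping its segment from the input.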