PyScribe – Learning to describe Python code
Juncai Guo, Jin Liu, Xiao Liu, Yao Wan, Yanjie Zhao, Li Li, Kui Liu, Jacques Klein, Tegawendé F. Bissyandé
Software: Practice and Experience | DOI: 10.1002/spe.3291 | Published 2023-12-09
Abstract
Code comment generation, which aims to summarize the functionality of source code in textual descriptions, plays an important role in automatic software development research. Several structural neural networks have been proposed to preserve the syntactic structure of source code based on abstract syntax trees (ASTs). However, they cannot capture both the long-distance and local relations between nodes well while retaining the overall structural information of the AST. To mitigate this problem, we present a prototype tool named PyScribe, which extends the Transformer model to a new encoder-decoder-based framework. In particular, a triplet position is designed and integrated into the node-level and edge-level structural features of the AST to produce Python code comments automatically. To the best of our knowledge, this paper makes the first effort to model the edges of an AST as an explicit component for improved code representation. By specifying triplet positions for each node and edge, the overall structural information is well preserved during learning. Moreover, the captured node and edge features go through a two-stage decoding process to yield higher-quality comments. To evaluate the effectiveness of PyScribe, we construct a large dataset of code-comment pairs by mining Jupyter Notebooks from GitHub, which we have made publicly available to support further studies. The experimental results reveal that PyScribe is effective, outperforming the state-of-the-art with an average BLEU score (av-BLEU) of approximately 0.28.
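To illustrate the kind of structural input the abstract describes, the sketch below uses Python's standard ast module to extract AST nodes and parent-child edges from a code snippet and tags each with a simple (depth, parent id, sibling index) triplet. This particular triplet scheme and the helper ast_nodes_and_edges are illustrative assumptions; the abstract does not specify PyScribe's exact position encoding.

```python
import ast

def ast_nodes_and_edges(source: str):
    """Parse Python source and return AST nodes and parent-child edges,
    each annotated with an illustrative (depth, parent_id, sibling_index)
    triplet. The actual triplet design used by PyScribe may differ."""
    tree = ast.parse(source)
    nodes, edges = [], []

    def visit(node, depth, parent_id, sibling_index):
        node_id = len(nodes)
        nodes.append({
            "id": node_id,
            "type": type(node).__name__,
            "triplet": (depth, parent_id, sibling_index),
        })
        if parent_id is not None:
            # Explicit edge from parent to child, carrying its own position triplet.
            edges.append({
                "src": parent_id,
                "dst": node_id,
                "triplet": (depth, parent_id, sibling_index),
            })
        for i, child in enumerate(ast.iter_child_nodes(node)):
            visit(child, depth + 1, node_id, i)

    visit(tree, depth=0, parent_id=None, sibling_index=0)
    return nodes, edges


if __name__ == "__main__":
    code = "def add(a, b):\n    return a + b\n"
    nodes, edges = ast_nodes_and_edges(code)
    print(len(nodes), "nodes,", len(edges), "edges")
    print(nodes[:3])
```

In a Transformer-style encoder, such node and edge records would be embedded (type plus position triplet) and fed as two explicit input streams, which is the general idea the abstract attributes to PyScribe.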