Bag-of-Words Baselines for Semantic Code Search
Xinyu Zhang, Ji Xin, Andrew Yates, Jimmy J. Lin
In Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021). DOI: https://doi.org/10.18653/v1/2021.nlp4prog-1.10

Abstract: The task of semantic code search is to retrieve code snippets from a source code corpus in response to an information need expressed in natural language. The semantic gap between natural language and programming languages has long been regarded as one of the most significant obstacles to the effectiveness of keyword-based information retrieval (IR) methods, and it is commonly assumed that "traditional" bag-of-words IR methods are poorly suited for semantic code search. Our work empirically investigates this assumption. Specifically, we examine the effectiveness of two traditional IR methods, BM25 and RM3, on the CodeSearchNet Corpus, which pairs natural language queries with relevant code snippets. We find that the two keyword-based methods outperform several pre-BERT neural models. We also compare several code-specific data pre-processing strategies and find that specialized tokenization improves effectiveness.
Time-Efficient Code Completion Model for the R Programming Language
Artem Popov, Dmitrii Orekhov, Denis V. Litvinov, N. Korolev, Gleb Morgachev
In Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021). DOI: https://doi.org/10.18653/v1/2021.nlp4prog-1.4

Abstract: In this paper we present a deep learning code completion model for the R language. We introduce several techniques for applying a language-modeling-based architecture to the code completion task. With these techniques, the model requires few resources yet still achieves high quality. We also present an evaluation dataset for the R language completion task. Our dataset contains multiple autocompletion usage contexts, which provides robust validation results. The dataset is publicly available.
{"title":"ConTest: A Unit Test Completion Benchmark featuring Context","authors":"Johannes Villmow, Jonas Depoix, A. Ulges","doi":"10.18653/v1/2021.nlp4prog-1.2","DOIUrl":"https://doi.org/10.18653/v1/2021.nlp4prog-1.2","url":null,"abstract":"We introduce CONTEST, a benchmark for NLP-based unit test completion, the task of predicting a test’s assert statements given its setup and focal method, i.e. the method to be tested. ConTest is large-scale (with 365k datapoints). Besides the test code and tested code, it also features context code called by either. We found context to be crucial for accurately predicting assertions. We also introduce baselines based on transformer encoder-decoders, and study the effects of including syntactic information and context. Overall, our models achieve a BLEU score of 38.2, while only generating unparsable code in 1.92% of cases.","PeriodicalId":435990,"journal":{"name":"Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132340650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}