{"title":"A First Look at Duplicate and Near-duplicate Self-admitted Technical Debt Comments","authors":"Jerin Yasmin, M. Sheikhaei, Yuan Tian","doi":"10.1145/3524610.3528387","DOIUrl":"https://doi.org/10.1145/3524610.3528387","url":null,"abstract":"Self-admitted technical debt (SATD) refers to technical debt that is intentionally introduced by developers and explicitly documented in code comments or other software artifacts (e.g., issue reports) to annotate sub-optimal decisions made by developers in the software development process. In this work, we take the first look at the existence and char-acteristics of duplicate and near-duplicate SATD comments in five popular Apache OSS projects, i.e., JSPWiki, Helix, Jackrab-bit, Archiva, and SystemML. We design a method to automatically identify groups of duplicate and near-duplicate SATD comments and track their evolution in the software system by mining the com-mit history of a software project. Leveraging the proposed method, we identified 3,520 duplicate and near-duplicate SATD comments from the target projects, which belong to 1,141 groups. We man-ually analyze the content and context of a sample of 1,505 SATD comments (by sampling 100 groups for each project) and identify if they annotate the same root cause. We also investigate whether du-plicate SATD comments exist in code clones, whether they co-exist in the same file, and whether they are introduced and removed simultaneously. Our preliminary study reveals several surprising findings that would shed light on future studies aiming to improve the management of duplicate SATD comments. For instance, only 48.5% duplicate SATD comment groups with the same root cause exist in regular code clones, and only 33.9% of the duplicate SATD comment pairs are introduced in the same commit.","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114188243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Error Identification Strategies for Python Jupyter Notebooks
Authors: Derek Robinson, Neil A. Ernst, Enrique Larios Vargas, M. Storey
DOI: https://doi.org/10.1145/3524610.3529156
Venue: 2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)
Published: 2022-03-30
Abstract: Computational notebooks, such as Jupyter or Colab, combine text and data analysis code. They have become ubiquitous in the world of data science and exploratory data analysis. Since these notebooks present a different programming paradigm than conventional IDE-driven programming, it is plausible that debugging in computational notebooks might also be different. More specifically, since creating notebooks blends domain knowledge, statistical analysis, and programming, the ways in which notebook users find and fix errors in these different forms might differ as well. In this paper, we present an exploratory, observational study on how Python Jupyter notebook users find and understand potential errors in notebooks. Through a conceptual replication of a study design investigating the error identification strategies of R notebook users, we presented users with Python Jupyter notebooks pre-populated with common notebook errors: errors rooted in the statistical data analysis, in the knowledge of domain concepts, or in the programming itself. We then analyzed the strategies our study participants used to find these errors and determined how successful each strategy was at identifying them. Our findings indicate that while the notebook programming environment differs from the environments used for traditional programming, debugging strategies remain quite similar. We hope that the insights presented in this paper will help both notebook tool designers and educators improve how data scientists discover errors in the notebooks they write.

Title: Demystifying Software Release Note Issues on GitHub
Authors: Jianyu Wu, Hao He, Wenxin Xiao, Kai Gao, Minghui Zhou
DOI: https://doi.org/10.1145/3524610.3527919
Venue: 2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)
Published: 2022-03-29
Abstract: Release notes (RNs) summarize the main changes between two consecutive software versions and serve as a central source of information when users upgrade software. While producing high-quality RNs can be hard and poses a variety of challenges to developers, a comprehensive empirical understanding of these challenges is still lacking. In this paper, we bridge this knowledge gap by manually analyzing 1,731 recent GitHub issues to build a comprehensive taxonomy of RN issues with four dimensions: Content, Presentation, Accessibility, and Production. Among these issues, nearly half (48.47%) concern Production; Content, Accessibility, and Presentation account for 25.61%, 17.65%, and 8.27%, respectively. We find that: 1) RN producers are more likely to miss information than to include incorrect information, especially for breaking changes; 2) improper layout may bury important information and confuse users; 3) many users find RNs inaccessible due to link deterioration, lack of notification, and obfuscated RN locations; 4) automating and regulating RN production remains challenging despite the great needs of RN producers. Our taxonomy not only provides a roadmap for improving RN production in practice but also reveals interesting future research directions for automating RN production.
{"title":"Understanding Code Snippets in Code Reviews: A Preliminary Study of the OpenStack Community","authors":"Liming Fu, Peng Liang, Beiqi Zhang","doi":"10.1145/3524610.3527884","DOIUrl":"https://doi.org/10.1145/3524610.3527884","url":null,"abstract":"Code review is a mature practice for software quality assurance in software development with which reviewers check the code that has been committed by developers, and verify the quality of code. During the code review discussions, reviewers and developers might use code snippets to provide necessary information (e.g., suggestions or explanations). However, little is known about the intentions and impacts of code snippets in code reviews. To this end, we conducted a preliminary study to investigate the nature of code snippets and their purposes in code reviews. We manually collected and checked 10,790 review comments from the Nova and Neutron projects of the OpenStack community, and finally obtained 626 review comments that contain code snippets for further analysis. The results show that: (1) code snippets are not prevalently used in code reviews, and most of the code snippets are provided by reviewers. (2) We identified two high-level purposes of code snippets provided by reviewers (i.e., Suggestion and Citation) with six detailed purposes, among which, Improving Code Implementation is the most common purpose. (3) For the code snippets in code reviews with the aim of suggestion, around 68.1% was accepted by developers. The results highlight promising research directions on using code snippets in code reviews.","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127908643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Does Coding in Pythonic Zen Peak Performance? Preliminary Experiments of Nine Pythonic Idioms at Scale
Authors: P. Leelaprute, Bodin Chinthanet, Supatsara Wattanakriengkrai, R. Kula, Pongchai Jaisri, T. Ishio
DOI: https://doi.org/10.1145/3524610.3527879
Venue: 2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)
Published: 2022-03-28
Abstract: In the field of data science, and for academics in general, the Python programming language is a popular choice, mainly because of its libraries for storing, manipulating, and gaining insight from data. Evidence of this includes its versatile set of machine learning, data visualization, and manipulation packages, which cope with the ever-growing volume of available data. The Zen of Python is a set of guiding design principles that developers use to write acceptable and elegant Python code, most of which revolve around simplicity. However, as the need to compute large amounts of data grows, performance has become a necessity for the Python programmer. The new idea in this paper is to confirm whether writing code the Pythonic way peaks performance at scale. As a starting point, we conduct a set of preliminary experiments to evaluate nine Pythonic code examples, comparing the performance of Pythonic and non-Pythonic code snippets. Our results reveal that writing in Pythonic idioms may save memory and time. We show that incorporating the list comprehension, generator expression, zip, and itertools.zip_longest idioms can save up to 7,000 MB of memory and up to 32.25 seconds. The results open further questions about how these idioms could be utilized in a real-world setting. The replication package, including all scripts and results, is available at https://doi.org/10.5281/zenodo.5712349

Title: HELoC: Hierarchical Contrastive Learning of Source Code Representation
Authors: Xiao Wang, Qiong Wu, Hongyu Zhang, Chen Lyu, Xue Jiang, Zhuoran Zheng, Lei Lyu, Songlin Hu
DOI: https://doi.org/10.1145/3524610.3527896
Venue: 2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)
Published: 2022-03-27
Abstract: Abstract syntax trees (ASTs) play a crucial role in source code representation. However, due to the large number of nodes in an AST and the typically deep AST hierarchy, it is challenging to learn the hierarchical structure of an AST effectively. In this paper, we propose HELoC, a hierarchical contrastive learning model for source code representation. To effectively learn the AST hierarchy, we use contrastive learning to allow the network to predict the AST node level and learn the hierarchical relationships between nodes in a self-supervised manner, which makes the representation vectors of nodes with greater differences in AST level farther apart in the embedding space. By using such vectors, the structural similarities between code snippets can be measured more precisely. In the learning process, a novel GNN (called Residual Self-attention Graph Neural Network, RSGNN) is designed, which enables HELoC to focus on embedding the local structure of an AST while capturing its overall structure. HELoC is self-supervised and, after pre-training, can be applied to many source-code-related downstream tasks such as code classification, code clone detection, and code clustering. Our extensive experiments demonstrate that HELoC outperforms state-of-the-art source code representation models.
{"title":"Anchoring Code Understandability Evaluations Through Task Descriptions","authors":"Marvin Wyrich, Lasse Merz, D. Graziotin","doi":"10.1145/3524610.3527904","DOIUrl":"https://doi.org/10.1145/3524610.3527904","url":null,"abstract":"In code comprehension experiments, participants are usually told at the beginning what kind of code comprehension task to expect. Describing experiment scenarios and experimental tasks will influence participants in ways that are sometimes hard to predict and control. In particular, describing or even mentioning the difficulty of a code comprehension task might anchor participants and their perception of the task itself. In this study, we investigated in a randomized, controlled experiment with 256 participants (50 software professionals and 206 computer science students) whether a hint about the difficulty of the code to be understood in a task description anchors participants in their own code comprehensibility ratings. Subjective code evaluations are a commonly used measure for how well a developer in a code comprehension study understood code. Accordingly, it is important to understand how robust these measures are to cognitive biases such as the anchoring effect. Our results show that participants are significantly influenced by the initial scenario description in their assessment of code com-prehensibility. An initial hint of hard to understand code leads participants to assess the code as harder to understand than partic-ipants who received no hint or a hint of easy to understand code. This affects students and professionals alike. We discuss examples of design decisions and contextual factors in the conduct of code comprehension experiments that can induce an anchoring effect, and recommend the use of more robust comprehension measures in code comprehension studies to enhance the validity of results.","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":"262 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117166425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: PTM4Tag: Sharpening Tag Recommendation of Stack Overflow Posts with Pre-trained Models
Authors: Junda He, Bowen Xu, Zhou Yang, Donggyun Han, Chengran Yang, David Lo
DOI: https://doi.org/10.1145/3524610.3527897
Venue: 2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)
Published: 2022-03-21
Abstract: Stack Overflow is often viewed as one of the most influential Software Question & Answer (SQA) websites, containing millions of programming-related questions and answers. Tags play a critical role in efficiently structuring the content on Stack Overflow and are vital to supporting a range of site operations, e.g., querying relevant content. Poorly selected tags often introduce extra noise and redundancy, which raises problems such as tag synonyms and tag explosion. Thus, an automated tag recommendation technique that can accurately recommend high-quality tags is desired to alleviate these problems. Inspired by the recent success of pre-trained language models (PTMs) in natural language processing (NLP), we present PTM4Tag, a tag recommendation framework for Stack Overflow posts that utilizes PTMs with a triplet architecture, modeling the components of a post, i.e., Title, Description, and Code, with independent language models. To the best of our knowledge, this is the first work that leverages PTMs in the tag recommendation task for SQA sites. We comparatively evaluate the performance of PTM4Tag based on five popular pre-trained models: BERT, RoBERTa, ALBERT, CodeBERT, and BERTOverflow. Our results show that leveraging CodeBERT, a software engineering (SE) domain-specific PTM, in PTM4Tag achieves the best performance among the five considered PTMs and outperforms the state-of-the-art Convolutional Neural Network-based approach by a large margin in terms of average Precision@k, Recall@k, and F1-score@k. We conduct an ablation study to quantify the contribution of a post's constituent components (Title, Description, and Code Snippets) to the performance of PTM4Tag. Our results show that Title is the most important for predicting the most relevant tags, and utilizing all components achieves the best performance.

Title: Example-Based Vulnerability Detection and Repair in Java Code
Authors: Y. Zhang, Ya Xiao, Md Mahir Asef Kabir, D. Yao, Na Meng
DOI: https://doi.org/10.1145/3524610.3527895
Venue: 2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)
Published: 2022-03-17
Abstract: The Java libraries JCA and JSSE offer cryptographic APIs to facilitate secure coding. When developers misuse some of these APIs, their code becomes vulnerable to cyber-attacks. To eliminate such vulnerabilities, people have built tools to detect security-API misuses via pattern matching. However, most tools do not (1) fix misuses or (2) allow users to extend the tools' pattern sets. To overcome both limitations, we created Seader, an example-based approach to detect and repair security-API misuses. Given an exemplar ⟨insecure, secure⟩ code pair, Seader compares the snippets to infer any API-misuse template and the corresponding fixing edit. Based on the inferred information, given a program, Seader performs inter-procedural static analysis to search for security-API misuses and to propose customized fixes. For evaluation, we applied Seader to 28 ⟨insecure, secure⟩ code pairs; Seader successfully inferred 21 unique API-misuse templates and related fixes. With these ⟨vulnerability, fix⟩ patterns, we applied Seader to a program benchmark with 86 known vulnerabilities. Seader detected vulnerabilities with 95% precision, 72% recall, and an 82% F-score. We also applied Seader to 100 open-source projects and manually checked 77 suggested repairs; 76 of the repairs were correct. Seader can help developers correctly use security APIs.
{"title":"Code Smells in Elixir: Early Results from a Grey Literature Review","authors":"L. F. M. Vegi, M. T. Valente","doi":"10.1145/3524610.3527881","DOIUrl":"https://doi.org/10.1145/3524610.3527881","url":null,"abstract":"Elixir is a new functional programming language whose popularity is rising in the industry. However, there are few works in the literature focused on studying the internal quality of systems implemented in this language. Particularly, to the best of our knowledge, there is currently no catalog of code smells for Elixir. Therefore, in this paper, through a grey literature review, we investigate whether Elixir developers discuss code smells. Our preliminary results indicate that 11 of the 22 traditional code smells cataloged by Fowler and Beck are discussed by Elixir developers. We also propose a list of 18 new smells specific for Elixir systems and investigate whether these smells are currently identified by Credo, a well-known static code analysis tool for Elixir. We conclude that only two traditional code smells and one Elixir-specific code smell are automatically detected by this tool. Thus, these early results represent an opportunity for extending tools such as Credo to detect code smells and then contribute to improving the internal quality of Elixir systems.","PeriodicalId":426634,"journal":{"name":"2022 IEEE/ACM 30th International Conference on Program Comprehension (ICPC)","volume":"226 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131497688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}