Latest Publications: 2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)

How Does Code Reviewing Feedback Evolve?: A Longitudinal Study at Dell EMC
R. Wen, Maxime Lamothe, Shane McIntosh
{"title":"How Does Code Reviewing Feedback Evolve?: A Longitudinal Study at Dell EMC","authors":"R. Wen, Maxime Lamothe, Shane McIntosh","doi":"10.1145/3510457.3513039","DOIUrl":"https://doi.org/10.1145/3510457.3513039","url":null,"abstract":"Code review is an integral part of modern software development, where fellow developers critique the content, premise, and structure of code changes. Organizations like DellEMC have made considerable investment in code reviews, yet tracking the characteristics of feedback that code reviews provide (a primary product of the code reviewing process) is still a difficult process. To understand community and personal feedback trends, we perform a longitudinal study of 39,249 reviews that contain 248,695 review comments from a proprietary project that is developed by DellEMC. To investigate generalizability, we replicate our study on the OpenStackn Ova project. Through an analysis guided by topic models, we observe that more context-specific, technical feedback is introduced as the studied projects and communities age and as the reviewers within those communities accrue experience. This suggests that communities are reaping a larger return on investment in code review as they grow accustomed to the practice and as reviewers hone their skills. The code review trends uncovered by our models present opportunities for enterprises to monitor reviewing tendencies and improve knowledge transfer and reviewer skills.","PeriodicalId":119790,"journal":{"name":"2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123420251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
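The paper's longitudinal analysis is guided by topic models fitted to review comments. The authors' exact pipeline is not reproduced here; as a minimal sketch of the general technique, the snippet below fits LDA to a toy corpus of review comments with scikit-learn. The comments, topic count, and preprocessing are illustrative assumptions, not the study's data or configuration.

```python
# Minimal sketch: topic modeling over code review comments with LDA.
# The corpus, topic count, and preprocessing are illustrative only --
# they are not the paper's actual pipeline or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

review_comments = [
    "please add a unit test for the null input case",
    "this lock is held across the blocking call, possible deadlock",
    "rename this variable to match the coding style guide",
    "the retry loop never backs off, consider exponential backoff",
    "missing error handling when the config file is absent",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(review_comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Inspect the top words per topic; tracking how topic prevalence
# shifts over time is the kind of longitudinal signal the paper studies.
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {topic_idx}: {', '.join(top)}")
```
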
Using Natural Language Processing Techniques to Improve Manual Test Case Descriptions
Markos Viggiato, Dale Paas, C. Buzon, C. Bezemer
{"title":"Using Natural Language Processing Techniques to Improve Manual Test Case Descriptions","authors":"Markos Viggiato, Dale Paas, C. Buzon, C. Bezemer","doi":"10.1145/3510457.3513045","DOIUrl":"https://doi.org/10.1145/3510457.3513045","url":null,"abstract":"Despite the recent advancements in test automation, testing often remains a manual, and costly, activity in many industries. Manual test cases, often described only in natural language, consist of one or more test steps, which are instructions that must be performed to achieve the testing objective. Having different employees specifying test cases might result in redundant, unclear, or incomplete test cases. Manually reviewing and validating newly-specified test cases is time-consuming and becomes impractical in a scenario with a large test suite. Therefore, in this paper, we propose an automated framework to automatically analyze test cases that are specified in natural language and provide actionable recommendations on how to improve the test cases. Our framework consists of configurable components and modules for analysis, which are capable of recommending improvements to the following: (1) the terminology of a new test case through language modeling, (2) potentially missing test steps for a new test case through frequent itemset and association rule mining, and (3) recommendation of similar test cases that already exist in the test suite through text embedding and clustering. We thoroughly evaluated the three modules on data from our industry partner. Our framework can provide actionable recommendations, which is an important challenge given the widespread occurrence of test cases that are described only in natural language in the software industry (in particular, the game industry).","PeriodicalId":119790,"journal":{"name":"2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126052884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
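Module (2) of the framework recommends potentially missing test steps via frequent itemset and association rule mining. The sketch below illustrates that general idea, not the authors' implementation: it counts step co-occurrence in a toy test suite and suggests a step that usually accompanies the steps already present in a new test case. The step names and confidence threshold are invented for illustration.

```python
# Sketch of frequent-itemset-style step recommendation: if step B
# appears in most historical test cases containing step A, suggest B
# when a new test case has A but not B. Toy data, illustrative only.
from collections import Counter
from itertools import combinations

historical_test_cases = [
    {"launch app", "log in", "open settings", "log out"},
    {"launch app", "log in", "start match", "log out"},
    {"launch app", "log in", "open store", "log out"},
    {"launch app", "open help"},
]

step_count = Counter()
pair_count = Counter()
for steps in historical_test_cases:
    step_count.update(steps)
    pair_count.update(combinations(sorted(steps), 2))

def suggest_missing_steps(new_case, min_confidence=0.7):
    """Suggest steps whose association rule (present step -> step)
    has confidence above the threshold."""
    suggestions = {}
    for a in new_case:
        for b in step_count:
            if b in new_case:
                continue
            support_ab = pair_count[tuple(sorted((a, b)))]
            confidence = support_ab / step_count[a]
            if confidence >= min_confidence:
                suggestions[b] = max(suggestions.get(b, 0), confidence)
    return suggestions

print(suggest_missing_steps({"launch app", "log in", "open settings"}))
# 'log out' co-occurs with 'log in' in 3/3 historical cases -> suggested
```
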
Strategies for Reuse and Sharing among Data Scientists in Software Teams
Will Epperson, Yi Wang, R. Deline, S. Drucker
{"title":"Strategies for Reuse and Sharing among Data Scientists in Software Teams","authors":"Will Epperson, Yi Wang, R. Deline, S. Drucker","doi":"10.1145/3510457.3513042","DOIUrl":"https://doi.org/10.1145/3510457.3513042","url":null,"abstract":"Effective sharing and reuse practices have long been hallmarks of proficient software engineering. Yet the exploratory nature of data science presents new challenges and opportunities to support sharing and reuse of analysis code. To better understand current practices, we conducted interviews (N=17) and a survey (N=132) with data scientists at Microsoft, and extract five commonly used strategies for sharing and reuse of past work: personal analysis reuse, personal utility libraries, team shared analysis code, team shared template notebooks, and team shared libraries. We also identify factors that encourage or discourage data scientists from sharing and reusing. Our participants described obstacles to reuse and sharing including a lack of incentives to create shared code, difficulties in making data science code modular, and a lack of tool interoperability. We discuss how future tools might help meet these needs.","PeriodicalId":119790,"journal":{"name":"2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129647014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
Unreliable Test Infrastructures in Automotive Testing Setups
Claudius V. Jordan, P. Foth, A. Pretschner, Matthias Fruth
{"title":"Unreliable Test Infrastructures in Automotive Testing Setups","authors":"Claudius V. Jordan, P. Foth, A. Pretschner, Matthias Fruth","doi":"10.1145/3510457.3513069","DOIUrl":"https://doi.org/10.1145/3510457.3513069","url":null,"abstract":"During system testing of automotive electrical control units various reasons can lead to invalid test failures, e.g., non-responding components, faulty simulation models, faulty test case implementations, or hardware or software misconfigurations. To determine whether a test failure is invalid and what the underlying cause was, the test executions have to be analyzed manually, which is tedious and therefore costly. In this work, we report the magnitude of the problem of invalid test failures with four system testing projects from the automotive domain. We find that up to 91% of failed test executions are considered invalid. An oftentimes overlooked challenge are unreliable test infrastructures which deteriorate the validity of the test runs. In the studied projects already between 27% and 53% of failed test executions are linked to unreliable test infrastructures.","PeriodicalId":119790,"journal":{"name":"2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125303326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Testing Machine Learning Systems in Industry: An Empirical Study
Shuyue Li, Jiaqi Guo, Jian-Guang Lou, Ming Fan, Ting Liu, Dongmei Zhang
{"title":"Testing Machine Learning Systems in Industry: An Empirical Study","authors":"Shuyue Li†, Jiaqi Guo, Jian-Guang Lou, Ming Fan, Ting Liu, Dongmei Zhang","doi":"10.1145/3510457.3513036","DOIUrl":"https://doi.org/10.1145/3510457.3513036","url":null,"abstract":"Machine learning becomes increasingly prevalent and integrated into a wide range of software systems. These systems, named ML systems, must be adequately tested to gain confidence that they behave correctly. Although many research efforts have been devoted to testing technologies for ML systems, the industrial teams are faced with new challenges on testing the ML systems in real-world settings. To absorb inspirations from the industry on the problems in ML testing, we conducted an empirical study including a survey with 87 responses and interviews with 7 senior ML practitioners from well-known IT companies. Our study uncovers significant industrial concerns on major testing activities, i.e., test data collection, test execution, and test result analysis, and also the good practices and open challenges from the perspective of the industry. (1) Test data collection is conducted in different ways on ML model, data, and code and faced with different challenges. (2) Test execution in ML systems suffers from two major problems: entanglement among the components and the regression on model performance. (3) Test result analysis centers on quantitative methods, e.g., metric-based evaluation, and is combined with some qualitative methods based on practitioners’ experience. Based on our findings, we highlight the research opportunities and also provide some implications for practitioners.","PeriodicalId":119790,"journal":{"name":"2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)","volume":"658 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132351481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
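Among the findings, test execution suffers from regressions on model performance, and test result analysis centers on metric-based evaluation. The sketch below shows one minimal form such a quantitative regression gate could take; the metric, baseline value, and tolerance are illustrative assumptions, not practices reported by the study's respondents.

```python
# Sketch of a metric-based regression gate for an ML system: fail the
# test run if the candidate model's metric drops more than a tolerance
# below the released baseline. All values here are illustrative.

def check_model_regression(baseline_accuracy: float,
                           candidate_accuracy: float,
                           tolerance: float = 0.01) -> None:
    """Raise if the candidate regresses beyond the allowed tolerance."""
    drop = baseline_accuracy - candidate_accuracy
    if drop > tolerance:
        raise AssertionError(
            f"model regression: accuracy dropped by {drop:.3f} "
            f"(baseline={baseline_accuracy:.3f}, "
            f"candidate={candidate_accuracy:.3f})"
        )

# Example: a 0.5-point drop passes under a 1-point tolerance.
check_model_regression(baseline_accuracy=0.912, candidate_accuracy=0.907)
```
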
Issues in the Adoption of the Scaled Agile Framework
P. Ciancarini, A. Kruglov, W. Pedrycz, Dilshat Salikhov, G. Succi
{"title":"Issues in the Adoption of the Scaled Agile Framework","authors":"P. Ciancarini, A. Kruglov, W. Pedrycz, Dilshat Salikhov, G. Succi","doi":"10.1145/3510457.3513028","DOIUrl":"https://doi.org/10.1145/3510457.3513028","url":null,"abstract":"Agile methods were originally introduced for small sized, colocated teams. Their successful products immediately brought up the issue of adapting the methods also for large and distributed organizations engaged in projects to build major, complex products. Currently the most popular multi-teams agile method is the Scaled Agile Framework (SAFe) which, however, is subject to criticism: it appears to be quite demanding and expensive in terms of human resource and project management practices. Moreover, SAFe allegedly goes against some of the principles of agility. This research attempts to gather a deeper understanding of the matter first reviewing and analysing the studies published on this topic via a multivocal literature review and then with an extended empirical investigation on the matters that appear most controversial via the direct analysis of the work of 25 respondents from 17 different companies located in eight countries. Thus, the originality of this research is in the systemic assessment of the “level of flexibility” of SAFe, highlighting the challenges of adopting this framework as it relates to decision making, structure, and the technical and managerial competencies of the company. The results show that SAFe can be an effective and adequate approach if the company is ready to invest a significant effort and resources into it both in the form of providing time for SAFe to be properly absorbed and specific training for individuals.","PeriodicalId":119790,"journal":{"name":"2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132810598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
Verifying Dynamic Trait Objects in Rust
Alexa VanHattum, Daniel Schwartz-Narbonne, Nathan Chong, Adrian Sampson
{"title":"Verifying Dynamic Trait Objects in Rust","authors":"Alexa VanHattum, Daniel Schwartz-Narbonne, Nathan Chong, Adrian Sampson","doi":"10.1145/3510457.3513031","DOIUrl":"https://doi.org/10.1145/3510457.3513031","url":null,"abstract":"Rust has risen in prominence as a systems programming language in large part due to its focus on reliability. The language's advanced type system and borrow checker eliminate certain classes of memory safety violations. But for critical pieces of code, teams need assurance beyond what the type checker alone can provide. Verification tools for Rust can check other properties, from memory faults in unsafe Rust code to user-defined correctness assertions. This paper particularly focuses on the challenges in reasoning about Rust's dynamic trait objects, a feature that provides dynamic dispatch for function abstractions. While the explicit dyn keyword that denotes dynamic dispatch is used in 37% of the 500 most-downloaded Rust libraries (crates), dynamic dispatch is implicitly linked into 70%. To our knowledge, our open-source Kani Rust Verifier is the first symbolic modeling checking tool for Rust that can verify correctness while supporting the breadth of dynamic trait objects, including dynamically dispatched closures. We show how our system uses semantic trait information from Rust's Mid-level Intermediate Representation (an advantage over targeting a language-agnostic level such as LLVM) to improve verification performance by 5%–15× for examples from open-source virtualization software. Finally, we share an open-source suite of verification test cases for dynamic trait objects.","PeriodicalId":119790,"journal":{"name":"2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133331560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
Industry's Cry for Tools that Support Large-Scale Refactoring
James Ivers, R. Nord, I. Ozkaya, Chris Seifried, C. Timperley, M. Kessentini
{"title":"Industry's Cry for Tools that Support Large-Scale Refactoring","authors":"James Ivers, R. Nord, I. Ozkaya, Chris Seifried, C. Timperley, M. Kessentini","doi":"10.1145/3510457.3513074","DOIUrl":"https://doi.org/10.1145/3510457.3513074","url":null,"abstract":"Software refactoring plays an important role in software engineering. Developers often turn to refactoring when they want to restructure software to improve its quality without changing its external behavior. Compared to small-scale (floss) refactoring, many refactoring efforts are much larger, requiring entire teams and months of effort, and the role of tools in these efforts is not as well studied. This short paper introduces an industry survey that we conducted. Results from 107 developers demonstrate that projects commonly go through multiple large-scale refactorings, each of which requires considerable effort. While there is often a desire to refactor, other business concerns such as developing new features often take higher priority. Our study finds that developers use several categories of tools to support large-scale refactoring and rely more heavily on general-purpose tools like IDEs than on tools designed specifically to support refactoring. Tool support varies across the different activities (spanning communication, reasoning, and technical activities), with some particularly challenging activities seeing little use of tools in practice. Our study demonstrates a clear need for better large-scale refactoring tools.","PeriodicalId":119790,"journal":{"name":"2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)","volume":"4 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114046188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
The Impact of Flaky Tests on Historical Test Prioritization on Chrome
Emad Fallahzadeh, Peter C. Rigby
{"title":"The Impact of Flaky Tests on Historical Test Prioritization on Chrome","authors":"Emad Fallahzadeh, Peter C. Rigby","doi":"10.1145/3510457.3513038","DOIUrl":"https://doi.org/10.1145/3510457.3513038","url":null,"abstract":"Test prioritization algorithms prioritize probable failing tests to give faster feedback to developers in case a failure occurs. Test prioritization approaches that use historical failures to run tests that have failed in the past may be susceptible to flaky tests as these tests often fail and then pass without identifying a fault. Traditionally, flaky failures like other types of failures are considered blocking, i. e. a test that needs to be investigated before the code can move to the next stage. However, on Google Chrome, flaky failures are non-blocking and the code still moves to the next stage in the CI pipeline. In this work, we explain the Chrome testing pipeline and classification. Then, we re-implement two important history based test prioritization algorithms and evaluate them on over 276 million test runs from the Chrome project. We apply these algorithms in two scenarios. First, we consider flaky failures as blocking and then, we use Chrome's approach and consider flaky failures as non-blocking. Our investigation reveals that 99.58% of all failures are flaky. These types of failures are much more repetitive than non-flaky failures, and they are also well distributed over time. We conclude that the prior performance of the prioritization algorithms have been inflated by flaky failures. We release our data and scripts in our replication package [8].","PeriodicalId":119790,"journal":{"name":"2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116035539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
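The study re-implements two history-based prioritization algorithms and compares treating flaky failures as blocking versus non-blocking. The paper's specific algorithms are not reproduced here; the sketch below shows only the general shape of failure-history prioritization and how excluding flaky failures from the history changes the resulting order. The test names, history, and flakiness labels are invented.

```python
# Sketch of history-based test prioritization: order tests by how
# often they failed recently. Counting flaky failures (the blocking
# view) can inflate a test's priority; filtering them out models
# Chrome's non-blocking treatment. Data is invented for illustration.
from collections import Counter

# (test_name, failed, was_flaky) for recent runs, newest last.
history = [
    ("net_unittests", True, True),
    ("net_unittests", True, True),
    ("browser_tests", True, False),
    ("net_unittests", True, True),
    ("gpu_tests", False, False),
    ("browser_tests", True, False),
]

def prioritize(history, include_flaky):
    failures = Counter(
        name for name, failed, flaky in history
        if failed and (include_flaky or not flaky)
    )
    all_tests = {name for name, _, _ in history}
    # Most failures first; never-failed tests last; ties alphabetical.
    return sorted(all_tests, key=lambda t: (-failures[t], t))

print(prioritize(history, include_flaky=True))   # net_unittests first
print(prioritize(history, include_flaky=False))  # browser_tests first
```
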
An Asynchronous Call Graph for JavaScript
Dominik Seifert, Michael Wan, Jane Yung-jen Hsu, Benson Yeh
{"title":"An Asynchronous Call Graph for JavaScript","authors":"Dominik Seifert, Michael Wan, Jane Yung-jen Hsu, Benson Yeh","doi":"10.1145/3510457.3513059","DOIUrl":"https://doi.org/10.1145/3510457.3513059","url":null,"abstract":"Asynchronous JavaScript has become omnipresent, yet is inherently difficult to reason about. While many recent debugging tools are trying to address this issue with (semi-)automatic methods, interactive analysis tools are few and far between. To this date, developers are required to build mental models of complex concurrent control flows with little to no tool support. Thus, asynchrony is making life hard for novices and catches even seasoned developers off-guard, especially when dealing with unfamiliar code. That is why we propose the Asynchronous Call Graph. It is the first approach to capture and visualize concurrent control flow between call graph roots. It is also the first concurrency analysis tool for JavaScript that is fully interactive and integrated with an omniscient debugger in a popular IDE. First tests show that the ACG works successfully on real-world codebases. This approach has the potential to set a new standard for how developers can analyze asynchrony.","PeriodicalId":119790,"journal":{"name":"2022 IEEE/ACM 44th International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129213269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
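The ACG records control-flow edges between call graph roots that ordinary stack traces lose at asynchronous boundaries. The tool itself targets JavaScript inside an IDE debugger; to keep this page's examples in a single language, the sketch below is a rough Python asyncio analogue of the core idea — recording a scheduling edge from the current task to each task it spawns. It is an assumption-laden illustration of the concept, not the paper's implementation.

```python
# Rough analogue of an asynchronous call graph: record an edge from
# the scheduling context (the current task) to every task it spawns,
# so concurrent control flow between roots can be reconstructed.
# This mirrors the idea, not the paper's JavaScript implementation.
import asyncio

acg_edges = []  # (parent_task_name, child_task_name)

def spawn(coro, name):
    """Create a task and record which task scheduled it."""
    parent = asyncio.current_task()
    parent_name = parent.get_name() if parent else "<root>"
    task = asyncio.create_task(coro, name=name)
    acg_edges.append((parent_name, name))
    return task

async def fetch(url):
    await asyncio.sleep(0)  # stand-in for real async I/O

async def handle_request():
    # Each spawned task starts a fresh call stack; the recorded edge
    # is exactly what a plain stack trace would lose.
    await spawn(fetch("https://example.com/a"), name="fetch-a")
    await spawn(fetch("https://example.com/b"), name="fetch-b")

async def main():
    await spawn(handle_request(), name="handle-request")

asyncio.run(main())
print(acg_edges)
# [('Task-1', 'handle-request'), ('handle-request', 'fetch-a'),
#  ('handle-request', 'fetch-b')]
```
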