{"title":"A Retrospective of Proving the Correctness of Multiprocess Programs","authors":"Leslie Lamport","doi":"10.1109/tse.2024.3522038","DOIUrl":"https://doi.org/10.1109/tse.2024.3522038","url":null,"abstract":"","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"8 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142884233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reflections of a Former Editor-in-Chief of TSE","authors":"Jeff Kramer","doi":"10.1109/tse.2024.3521306","DOIUrl":"https://doi.org/10.1109/tse.2024.3521306","url":null,"abstract":"","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"60 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142879938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ArchHypo: Managing Software Architecture Uncertainty Using Hypotheses Engineering","authors":"Kelson Silva;Jorge Melegati;Fabio Silveira;Xiaofeng Wang;Mauricio Ferreira;Eduardo Guerra","doi":"10.1109/TSE.2024.3520477","DOIUrl":"10.1109/TSE.2024.3520477","url":null,"abstract":"Uncertainty is present in software architecture decisions due to a lack of knowledge about the requirements and the solutions involved. However, this uncertainty is usually not made explicit, and decisions can be made based on unproven premises or false assumptions. This paper focuses on a technique called ArchHypo that uses hypotheses engineering to manage uncertainties related to software architecture. It proposes formulating a technical plan based on each hypothesis’ assessment, incorporating measures able to mitigate its impact and reduce uncertainty. To evaluate the proposed technique, this paper reports an application of the technique in a mission-critical project that faced several technical challenges. Conclusions were based on data extracted from the project documentation and a questionnaire answered by all team members. As a result, the application of ArchHypo provided a structured approach to dividing the architectural work through iterations, which facilitated architectural decision-making. However, further research is needed to fully understand its impact across different contexts. On the other hand, the team identified the learning curve and process adjustments required for ArchHypo's adoption as significant challenges that could hinder its widespread adoption. 
In conclusion, the evidence found in this study indicates that the technique has the potential to provide a suitable way to manage the uncertainties related to software architecture, facilitating the strategic postponement of decisions while addressing their potential impact.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 2","pages":"430-448"},"PeriodicalIF":6.5,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10807272","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142858371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ChatAssert: LLM-Based Test Oracle Generation With External Tools Assistance","authors":"Ishrak Hayet;Adam Scott;Marcelo d'Amorim","doi":"10.1109/TSE.2024.3519159","DOIUrl":"10.1109/TSE.2024.3519159","url":null,"abstract":"Test oracle generation is an important and challenging problem. Neural-based solutions have been recently proposed for oracle generation but they are still inaccurate. For example, the accuracy of the state-of-the-art technique <sc>teco</small> is only 27.5% on its dataset including 3,540 test cases. We propose <sc>ChatAssert</small>, a prompt engineering framework designed for oracle generation that uses dynamic and static information to iteratively refine prompts for querying large language models (LLMs). <sc>ChatAssert</small> uses code summaries and examples to assist an LLM in generating candidate test oracles, uses a lightweight static analysis to assist the LLM in repairing generated oracles that fail to compile, and uses dynamic information obtained from test runs to help the LLM in repairing oracles that compile but do not pass. Experimental results using an independent publicly-available dataset show that <sc>ChatAssert</small> improves the state-of-the-art technique, <sc>teco</small>, on key evaluation metrics. For example, it improves <italic>Acc@1</i> by 15%. 
Overall, results provide initial yet strong evidence that using external tools in the formulation of prompts is an important aid in LLM-based oracle generation.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 1","pages":"305-319"},"PeriodicalIF":6.5,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142832239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced Crowdsourced Test Report Prioritization via Image-and-Text Semantic Understanding and Feature Integration","authors":"Chunrong Fang;Shengcheng Yu;Quanjun Zhang;Xin Li;Yulei Liu;Zhenyu Chen","doi":"10.1109/TSE.2024.3516372","DOIUrl":"10.1109/TSE.2024.3516372","url":null,"abstract":"Crowdsourced testing has gained prominence in the field of software testing due to its ability to effectively address the challenges posed by the fragmentation problem in mobile app testing. The inherent openness of crowdsourced testing brings diversity to the testing outcome. However, it also presents challenges for app developers in inspecting a substantial quantity of test reports. To help app developers inspect the bugs in crowdsourced test reports as early as possible, crowdsourced test report prioritization has emerged as an effective technology by establishing a systematic optimal report inspecting sequence. Nevertheless, crowdsourced test reports consist of app screenshots and textual descriptions, but current prioritization approaches mostly rely on textual descriptions, and some may add vectorized image features at the image-as-a-whole level or widget level. They still lack precision in accurately characterizing the distinctive features of crowdsourced test reports. In terms of prioritization strategy, prevailing approaches adopt simple prioritization based on features combined merely using weighted coefficients, without adequately considering the semantics, which may result in biased and ineffective outcomes. In this paper, we propose \u0000<sc>EncrePrior</small>\u0000, an enhanced crowdsourced test report prioritization approach via image-and-text semantic understanding and feature integration. \u0000<sc>EncrePrior</small>\u0000 extracts distinctive features from crowdsourced test reports. 
For app screenshots, \u0000<sc>EncrePrior</small>\u0000 considers the structure (i.e., GUI layout) and the contents (i.e., GUI widgets), viewing the app screenshot from the macroscopic and microscopic perspectives, respectively. For textual descriptions, \u0000<sc>EncrePrior</small>\u0000 considers the Bug Description and Reproduction Step as the bug context. During the prioritization, we do not directly merge the features with weights to guide the prioritization. Instead, in order to comprehensively consider the semantics, we adopt a prioritize-reprioritize strategy. This practice combines different features together by considering their individual ranks. The reports are first prioritized on four features separately. Then, the ranks on four sequences are used to lexicographically reprioritize the test reports with an integration of features from app screenshots and textual descriptions. Results of an empirical study show that \u0000<sc>EncrePrior</small>\u0000 outperforms the representative baseline approach \u0000<sc>DeepPrior</small>\u0000 by 15.61% on average, ranging from 2.99% to 63.64% on different apps, and the novelly proposed features and prioritization strategy all contribute to the excellent performance of \u0000<sc>EncrePrior</small>\u0000.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 1","pages":"283-304"},"PeriodicalIF":6.5,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142815562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting Compiler Error Recovery Defects via Program Mutation Exploration","authors":"Yixuan Tang;Jingxuan Zhang;Xiaochen Li;Zhiqiu Huang;He Jiang","doi":"10.1109/TSE.2024.3510912","DOIUrl":"10.1109/TSE.2024.3510912","url":null,"abstract":"Compiler error recovery diagnostics facilitates software development as it provides the possible causes and suggestions on potential programming errors. However, due to compiler bugs, error recovery diagnostics could be erroneous, spurious, missing, or even crashing for mature production compilers like GCC and Clang. Compiler testing is one of the most widely used ways of ensuring its quality. However, existing compiler diagnostics testing approaches (e.g., DIPROM) only consider the typically syntactically valid test programs as inputs, which are unlikely to trigger compiler error recovery defects. Therefore, in this paper, we propose the first mutation based approach for Compiler Error Recovery diagnostics Testing, called CERTest. Specifically, CERTest first explores the mutation space for a given seed program, and leverages a series of <i>mutation configurations</i> (which are referred as a series of mutators applying for a seed) to iteratively mutate the structures of the seed, so as to generate error-sensitive program variants for triggering compiler error recovery mechanisms. To effectively construct error-sensitive structures, CERTest then applies a novel furthest-first based selection approach to select a set of representative mutation configurations to generate program variants in each iteration. With the generated program variants, CERTest finally leverages differential testing to detect error recovery defects in different compilers. 
The experiments on GCC and Clang demonstrate that CERTest outperforms five state-of-the-art approaches (i.e., DIPROM, <small>Ccoft</small>, <small>Clang-fuzzer</small>, AFL++, and HiCOND) by up to 13.10%<inline-formula><tex-math>$sim$</tex-math></inline-formula>221.61% on average in the term of bug-finding capability, and CERTest detects 9 new error recovery defects, 5 of which have been confirmed or fixed by developers.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 2","pages":"389-412"},"PeriodicalIF":6.5,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142809247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Influence of Data Resampling for Deep Learning-Based Log Anomaly Detection: Insights and Recommendations","authors":"Xiaoxue Ma;Huiqi Zou;Pinjia He;Jacky Keung;Yishu Li;Xiao Yu;Federica Sarro","doi":"10.1109/TSE.2024.3513413","DOIUrl":"10.1109/TSE.2024.3513413","url":null,"abstract":"Numerous Deep Learning (DL)-based approaches have gained attention in software Log Anomaly Detection (LAD), yet class imbalance in training data remains a challenge, with anomalies often comprising less than 1% of datasets like Thunderbird. Existing DLLAD methods may underperform in severely imbalanced datasets. Although data resampling has proven effective in other software engineering tasks, it has not been explored in LAD. This study aims to fill this gap by providing an in-depth analysis of the impact of diverse data resampling methods on existing DLLAD approaches from two distinct perspectives. Firstly, we assess the performance of these DLLAD approaches across four datasets with different levels of class imbalance, and we explore the impact of resampling ratios of normal to abnormal data on DLLAD approaches. Secondly, we evaluate the effectiveness of the data resampling methods when utilizing optimal resampling ratios of normal to abnormal data. Our findings indicate that oversampling methods generally outperform undersampling and hybrid sampling methods. Data resampling on raw data yields superior results compared to data resampling in the feature space. These improvements are attributed to the increased attention given to important tokens. By exploring the resampling ratio of normal to abnormal data, we suggest generating more data for minority classes through oversampling while removing less data from majority classes through undersampling. In conclusion, our study provides valuable insights into the intricate relationship between data resampling methods and DLLAD. 
By addressing the challenge of class imbalance, researchers and practitioners can enhance DLLAD performance.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 1","pages":"243-261"},"PeriodicalIF":6.5,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142796779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FM-PRO: A Feature Modeling Process","authors":"Johan Martinson;Wardah Mahmood;Jude Gyimah;Thorsten Berger","doi":"10.1109/TSE.2024.3513635","DOIUrl":"10.1109/TSE.2024.3513635","url":null,"abstract":"Almost any software system needs to exist in multiple variants. While branching or forking—a.k.a. clone & own—are simple and inexpensive strategies, they do not scale well with the number of variants created. Software platforms—a.k.a. software product lines—scale and allow to derive variants by selecting the desired features in an automated, tool-supported process. However, product lines are difficult to adopt and to evolve, requiring mechanisms to manage features and their implementations in complex codebases. Such systems can easily have thousands of features with intricate dependencies. Feature models have arguably become the most popular notation to model and manage features, mainly due to their intuitive, tree-like representation. Introduced more than 30 years ago, thousands of techniques relying on feature models have been presented, including model configuration, synthesis, analysis, and evolution techniques. However, despite many success stories, organizations still struggle with adopting software product lines, limiting the usefulness of such techniques. Surprisingly, no modeling process exists to systematically create feature models, despite them being the main artifact of a product line. This challenges organizations, even hindering the adoption of product lines altogether. We present FM-PRO, a process to engineer feature models. It can be used with different adoption strategies for product lines, including creating one from scratch (\u0000<italic>pro-active adoption</i>\u0000) and re-engineering one from existing cloned variants (\u0000<italic>extractive adoption</i>\u0000). The resulting feature models can be used for configuration, planning, evolution, reasoning about variants, or keeping an overview understanding of complex software platforms. 
We systematically engineered the process based on empirically elicited modeling principles. We evaluated and refined it in a real-world industrial case study, two surveys with industrial and academic feature-modeling experts, as well as an open-source case study. We hope that FM-PRO helps to adopt feature models and that it facilitates higher-level, feature-oriented engineering practices, establishing features as a better and more abstract way to manage increasingly complex codebases.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 1","pages":"262-282"},"PeriodicalIF":6.5,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142796778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MoCo: Fuzzing Deep Learning Libraries via Assembling Code","authors":"Pin Ji;Yang Feng;Duo Wu;Lingyue Yan;Penglin Chen;Jia Liu;Zhihong Zhao","doi":"10.1109/TSE.2024.3509975","DOIUrl":"10.1109/TSE.2024.3509975","url":null,"abstract":"The rapidly developing Deep Learning (DL) techniques have been applied in software systems of various types. However, they can also pose new safety threats with potentially serious consequences, especially in safety-critical domains. DL libraries serve as the underlying foundation for DL systems, and bugs in them can have unpredictable impacts that directly affect the behaviors of DL systems. Previous research on fuzzing DL libraries still has limitations in generating tests corresponding to crucial testing scenarios and constructing test oracles. In this paper, we propose <monospace>MoCo</monospace>, a novel fuzzing testing method for DL libraries via assembling code. The seed tests used by <monospace>MoCo</monospace> are code files that implement DL models, covering both model construction and training in the most common real-world application scenarios for DL libraries. <monospace>MoCo</monospace> first disassembles the seed code files to extract templates and code blocks, then applies code block mutation operators (e.g., API replacement, random generation, and boundary checking) to generate new code blocks that fit the template. To ensure the correctness of the code block mutation, we employ the Large Language Model to parse the official documents of DL libraries for information about the parameters and the constraints between them. By inserting context-appropriate code blocks into the template, <monospace>MoCo</monospace> can generate a tree of code files with intergenerational relations. According to the derivation relations in this tree, we construct the test oracle based on the execution state consistency and the calculation result consistency. 
Since the granularity of code assembly is controlled rather than randomly divergent, we can quickly pinpoint the lines of code where the bugs are located and the corresponding triggering conditions. We conduct a comprehensive experiment to evaluate the efficiency and effectiveness of <monospace>MoCo</monospace> using three widely-used DL libraries (i.e., TensorFlow, PyTorch, and Jittor). During the experiments, <monospace>MoCo</monospace> detects 77 new bugs of four types in three DL libraries, where 55 bugs have been confirmed, and 39 bugs have been fixed by developers. The experimental results demonstrate that <monospace>MoCo</monospace> can generate high-quality tests that cover crucial testing scenarios and detect different types of bugs, which helps developers improve the reliability of DL libraries.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 2","pages":"371-388"},"PeriodicalIF":6.5,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142759889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}