A model-driven approach for engineering Mobility Digital Twins: The Bologna case study
Alessandra Somma, Domenico Amalfitano, Alessio Bucaioni, Alessandra De Benedictis
DOI: 10.1016/j.infsof.2025.107863
Information and Software Technology, vol. 188, Article 107863, 2 September 2025.

Context: As cities grapple with increasing congestion, sustainability concerns, and the need for efficient mobility systems, Mobility Digital Twins (MoDTs) have emerged as a promising technology for improving urban transportation. However, the development of MoDTs remains hindered by challenges such as structural complexity, data heterogeneity, lack of interoperability, and limited support for scalability, maintainability, and adaptability.

Objective: This work addresses these barriers by introducing a structured and systematic engineering framework that supports the design and development of MoDTs, reducing technical debt, development costs, and human errors while promoting long-term evolution.

Methods: We propose a Model-Driven Engineering (MDE) approach that organizes the development of MoDTs through models at different levels of abstraction and adopts automated transformations from high-level specifications to executable code artifacts, supporting the MoDT life-cycle.

Results: The proposed approach is validated through its application in developing a MoDT for the city of Bologna, Italy. To support this, we introduce the M2DT tool, which automates the workflow from high-level models to software code artifacts. The resulting BoMoDT platform is built using open-source technologies and real mobility data. This case study demonstrates the feasibility and effectiveness of our approach, which, to our knowledge, is the first to apply a model-driven strategy to the entire MoDT development. A qualitative evaluation confirms that our framework addresses key challenges in MoDT development, and quantitative experiments further validate BoMoDT's ability to accurately reproduce and monitor real urban mobility conditions.

Conclusion: The proposed approach offers a solid foundation for addressing MoDT development challenges. By combining automation with structured abstraction, it improves adaptability and maintainability while enabling scalable integration, helping make MoDTs more accessible for future urban system design.
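The automated model-to-text transformation at the heart of such an MDE pipeline can be sketched in miniature. Everything here (the `Service` metamodel element, `render_service`, the topic name) is an illustrative assumption, not the actual M2DT metamodel or templates:

```python
# Sketch of the model-to-code idea behind an MDE pipeline: a high-level
# platform model element is transformed into a code artifact.
# Names and structure are hypothetical, not the real M2DT tool.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    source_topic: str   # e.g. a broker topic carrying live mobility data

def render_service(svc: Service) -> str:
    """Model-to-text transformation: emit a data-consumer skeleton."""
    return (
        f"class {svc.name}Consumer:\n"
        f"    TOPIC = '{svc.source_topic}'\n"
        f"    def on_message(self, payload):\n"
        f"        ...  # update the twin's state from live data\n"
    )

artifact = render_service(Service("TrafficFlow", "bologna/traffic/flow"))
print(artifact)
```

Generating such skeletons from models, rather than writing them by hand, is what reduces the repetitive, error-prone coding the abstract refers to.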
Facilitating usage of quantum–classical optimization software with QHyper library
Tomasz Lamża, Justyna Zawalska, Kacper Jurek, Mariusz Sterzel, Katarzyna Rycerz
DOI: 10.1016/j.infsof.2025.107861
Information and Software Technology, vol. 188, Article 107861, 2 September 2025.

Context: Quantum computing offers promising approaches to solving combinatorial optimization problems. While there are many software frameworks for these approaches, they are developed by different providers, requiring users to reimplement their problems separately in each environment to facilitate comparisons. Additionally, due to the current limitations of quantum hardware, classical solvers are often used as reference methods, further increasing implementation overhead. This lack of integration highlights the need for a unified API that seamlessly connects all approaches, enabling efficient problem formulation and comparison of different methods.

Objective: This work aims to provide a research tool with a unified interface for specifying combinatorial optimization problems, selecting different solvers, managing problem hyperparameters, and standardizing the output for effortless analysis and comparison.

Methods: We have developed an open-source research tool with a modular architecture that fulfills the aforementioned objective. QHyper's design provides an independent representation of the problem along with a converter that supports various formulations compatible with available solvers. Among them are the Quantum Approximate Optimization Algorithm implemented in PennyLane, the D-Wave Advantage solver, and the Gurobi optimizer. For optimization, QHyper offers various methods, from local techniques based on gradient descent to custom global Monte Carlo methods for hyperparameter optimization.

Results: QHyper's overhead is within the statistical margin of error compared to the stand-alone use of solvers. The included illustrative example shows how to create a custom problem and easily switch between the chosen solvers. The paper also contains references to real-life scientific use cases for the presented tool.

Conclusion: QHyper's design ensures easy extensibility to new problems, solvers, and optimizers, and has proven its usefulness for several scientific use cases mentioned in the paper. With simple configuration options and easy management, such as through Jupyter Notebooks, this library can be useful for practitioners, engineers, and academics working on combinatorial optimization research.
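The unified-interface pattern the abstract describes, where one problem definition is handed to interchangeable solvers with standardized output, can be sketched generically. This is a sketch of the pattern only, not QHyper's actual API; the problem class and both toy solvers are invented for illustration:

```python
# One problem definition, interchangeable solvers, comparable output.
# Generic sketch of the unified-interface idea; not QHyper's real API.
import itertools
import random

class KnapsackProblem:
    """Problem stated once, independently of any solver."""
    def __init__(self, values, weights, capacity):
        self.values, self.weights, self.capacity = values, weights, capacity

    def score(self, x):
        """Objective value of a 0/1 selection; infeasible -> -inf."""
        w = sum(wi for wi, xi in zip(self.weights, x) if xi)
        v = sum(vi for vi, xi in zip(self.values, x) if xi)
        return v if w <= self.capacity else float("-inf")

def brute_force_solver(problem, n):
    """Exact reference solver (stands in for a classical optimizer)."""
    return max(itertools.product([0, 1], repeat=n), key=problem.score)

def random_solver(problem, n, trials=200, seed=0):
    """Stochastic solver (stands in for a heuristic/quantum method)."""
    rng = random.Random(seed)
    cands = [tuple(rng.randint(0, 1) for _ in range(n)) for _ in range(trials)]
    return max(cands, key=problem.score)

p = KnapsackProblem(values=[10, 7, 3], weights=[4, 3, 2], capacity=5)
for solve in (brute_force_solver, random_solver):
    x = solve(p, 3)
    print(solve.__name__, x, p.score(x))
```

Because both solvers consume the same problem object and return selections scored the same way, swapping methods for comparison requires no reimplementation, which is the integration gap the library targets.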
Equitable cybersecurity: Towards generating requirements through the lens of security literacy
Bilal Naqvi, Annika Wolff, Domenico Racanelli
DOI: 10.1016/j.infsof.2025.107891
Information and Software Technology, vol. 188, Article 107891, 30 August 2025.

Context: Users of modern-day systems must understand how these systems operate and their roles in protecting these systems. This requires a degree of security literacy, but, as with all literacies, this varies across the general population. Improving literacy requires time for learning and gaining practical experience, and that does not happen overnight. Therefore, a two-pronged approach is necessary, whereby we ensure that everyone who uses these systems possesses an appropriate level of security literacy and design systems that are intuitive and usable by all users, regardless of their level of security literacy.

Objectives: This paper aims to demonstrate that traditional requirements-gathering approaches often overlook important requirements related to security literacy. The paper does so by considering a case study featuring the development of a novel biometric e-ID across six cases in five European countries.

Methods: To address this objective, the paper first synthesized elements from academic and gray literature to conceptualize security literacy. A co-design approach was then used to draft scenarios based on the six cases and to identify security literacy-specific requirements.

Results: The paper presents a conceptual model of security literacy structured into pillars of core and specialized knowledge areas and abilities. Using this model as an analytical lens, the paper presents six co-created scenarios and 11 security literacy-specific requirements that were not captured using standard requirement-gathering approaches.

Conclusion: The paper demonstrates that traditional requirement-gathering approaches can overlook important, nuanced requirements, particularly those relevant to user groups with lower security literacy. The model presented in this paper helps identify requirements from a security literacy perspective, thereby enhancing user security engagement and interactions.
Representation-based fairness evaluation and bias correction robustness assessment in neural networks
Qiaolin Qin, Benjamin Djian, Ettore Merlo, Heng Li, Sébastien Gambs
DOI: 10.1016/j.infsof.2025.107876
Information and Software Technology, vol. 188, Article 107876, 30 August 2025.

Context: While machine learning has achieved high predictive performance in many domains, decisions may still be biased and unfair regarding specific demographic groups characterized by sensitive attributes such as gender, age, or race.

Objectives: In this paper, we introduce a novel approach to assess model fairness and bias correction robustness based on Computational Profile Distance (CPD) analysis with respect to sensitive attributes.

Methods: To study model fairness, we quantify the model's representation difference using the computational profile learned from different subgroups (e.g., male and female) at the individual and group level. To analyze the robustness of bias correction outcomes, we compare the correction suggestions provided based on confidence (i.e., softmax score) and likelihood (i.e., CPD).

Results: To demonstrate the potential of the proposed approach, experiments were performed using 24 models targeting three datasets used in previous fairness studies. Our experiments showed that computational profile distributions can effectively address model fairness from a representation perspective. Further, the experiments indicated that confidence-based bias correction decisions can differ substantially from likelihood-based ones, and both suggestions should be taken into account to obtain robust outcomes.

Conclusion: As demonstrated by our experiments, CPD-based approaches can help users build trust in fairness assessment and bias mitigation of AI decisions in ethically sensitive domains such as human resources, finance, and health.
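The group-level "representation difference" idea can be illustrated with a simplified proxy: compare a model's internal activation profiles across two subgroups. The paper's CPD analysis is richer than this; the data and the plain Euclidean distance below are assumptions made only to show the notion of a distance between subgroup profiles:

```python
# Simplified group-level proxy for representation-difference analysis:
# average a model's hidden-layer activations per subgroup, then measure
# the distance between the subgroup profiles. Toy data; not the paper's
# exact CPD definition.
import math

def mean_profile(activations):
    """Average activation vector over one subgroup."""
    n, dim = len(activations), len(activations[0])
    return [sum(a[j] for a in activations) / n for j in range(dim)]

def profile_distance(group_a, group_b):
    """Euclidean distance between the two subgroup mean profiles."""
    pa, pb = mean_profile(group_a), mean_profile(group_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(pa, pb)))

# Toy hidden-layer activations for two subgroups of inputs.
group_m = [[0.9, 0.1], [0.8, 0.2]]
group_f = [[0.2, 0.9], [0.1, 0.8]]
d = profile_distance(group_m, group_f)
print(f"group profile distance: {d:.3f}")
```

A large distance signals that the network internally represents the two subgroups very differently, which is the representation-level warning sign the approach surfaces even when aggregate accuracy looks fine.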
AntiPhishX: An AI-driven service-oriented ensemble framework for detecting phishing and AI-powered phishing attacks
Abdul Malik, Bilal Khan, Saeed Mian Qaisar, Moez Krichen
DOI: 10.1016/j.infsof.2025.107877
Information and Software Technology, vol. 188, Article 107877, 27 August 2025.

Context: The internet has become an essential societal utility, providing opportunities for both legitimate and illegitimate users. Cyberattacks, including phishing Uniform Resource Locator (URL) attacks, have emerged as a significant cybersecurity concern, especially with the increasing adoption of Artificial Intelligence (AI). The exponential growth of AI-driven phishing URL attacks presents new challenges for cyberspace security.

Objective: This study aims to develop a novel approach, named AntiPhishX, to detect phishing and AI-phishing URL attacks effectively. The model leverages advancements in AI and service-oriented computing to enhance detection accuracy and overcome the limitations of existing methods.

Methods: The proposed AntiPhishX approach integrates Natural Language Processing (NLP) techniques to extract relevant features and analyze text dependencies within URLs. A cohesive model is designed by applying machine learning (ML) algorithms to the processed feature sets. A voting-based ensemble of the best-performing ML models is constructed to classify URLs as phishing, AI-phishing, or benign in real time. The model is implemented and evaluated in Python using a dataset of 90,000 URLs collected from the PhishTank platform.

Results: The AntiPhishX model outperformed benchmark models, achieving a precision of 98.32%, a recall of 97.63%, an F-score of 98.31%, and a detection rate of 98.12%.

Conclusion: The findings demonstrate the potential of AI-driven and service-oriented computing approaches, such as AntiPhishX, in strengthening cyberspace defenses against evolving phishing threats. This study highlights the effectiveness of integrating NLP and ML techniques in phishing URL detection systems.
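The two ingredients named above, lexical URL features plus a voting ensemble, can be sketched as follows. The feature set and the "models" (simple rules standing in for trained classifiers) are illustrative assumptions, not AntiPhishX's actual pipeline or thresholds:

```python
# Hedged sketch: lexical URL features + hard majority voting.
# The rules below stand in for trained ML models; they are not the
# real AntiPhishX features, models, or thresholds.
from urllib.parse import urlparse

def url_features(url):
    host = urlparse(url).hostname or ""
    return {
        "length": len(url),
        "num_digits": sum(c.isdigit() for c in url),
        "has_at": "@" in url,
        "dots_in_host": host.count("."),
    }

# Each "model" votes 1 (phishing) or 0 (benign).
models = [
    lambda f: int(f["length"] > 75),
    lambda f: int(f["has_at"] or f["num_digits"] > 8),
    lambda f: int(f["dots_in_host"] >= 3),
]

def ensemble_predict(url):
    f = url_features(url)
    votes = [m(f) for m in models]
    return int(sum(votes) > len(votes) / 2)   # hard majority vote

print(ensemble_predict(
    "http://login.secure.paypa1.example.accounts.io/verify?id=9481726354"))
print(ensemble_predict("https://example.com/about"))
```

A real system would replace the hand-written rules with fitted classifiers, but the voting logic (accept the class the majority of models agree on) is the same.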
Fairness on a budget, across the board: A cost-effective evaluation of fairness-aware practices across contexts, tasks, and sensitive attributes
Alessandra Parziale, Gianmario Voria, Giammaria Giordano, Gemma Catolino, Gregorio Robles, Fabio Palomba
DOI: 10.1016/j.infsof.2025.107858
Information and Software Technology, vol. 188, Article 107858, 26 August 2025.

Context: Machine Learning (ML) is widely used in critical domains like finance, healthcare, and criminal justice, where unfair predictions can lead to harmful outcomes. Although bias mitigation techniques have been developed by the Software Engineering (SE) community, their practical adoption is limited due to complexity and integration issues. As a simpler alternative, fairness-aware practices have recently been proposed: conventional ML engineering techniques adapted to promote fairness, e.g., MinMax Scaling, which normalizes feature values so that attributes linked to sensitive groups do not disproportionately influence predictions. Their actual impact, however, is still unexplored.

Objective: Building on our prior work that explored fairness-aware practices in different contexts, this paper extends the investigation through a large-scale empirical study assessing their effectiveness across diverse ML tasks, sensitive attributes, and datasets belonging to specific application domains.

Methods: We conduct 5940 experiments, evaluating fairness-aware practices from two perspectives: contextual bias mitigation and cost-effectiveness. The contextual evaluation examines fairness improvements across different ML models, sensitive attributes, and datasets. The cost-effectiveness analysis considers the trade-off between fairness gains and performance costs.

Results: Findings reveal that the effectiveness of fairness-aware practices depends on the specific dataset and configuration, while the cost-effectiveness analysis highlights the practices that best balance ethical gains and efficiency.

Conclusion: These insights guide practitioners in choosing fairness-enhancing practices with minimal performance impact, supporting ethical ML development.
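The MinMax Scaling practice named above is easy to show in miniature: rescale every feature to [0, 1] so large-magnitude attributes, which may correlate with sensitive groups, cannot dominate distance- or weight-based models. The toy data is invented; real pipelines would fit the scaler on training data only:

```python
# MinMax Scaling as a fairness-aware practice: bring all features onto
# a common [0, 1] scale. Toy data for illustration only.

def minmax_fit(rows):
    """Per-column (min, max) bounds learned from the data."""
    cols = list(zip(*rows))
    return [(min(c), max(c)) for c in cols]

def minmax_transform(rows, bounds):
    """Rescale each value to [0, 1] using the fitted bounds."""
    return [
        [(v - lo) / (hi - lo) if hi > lo else 0.0
         for v, (lo, hi) in zip(row, bounds)]
        for row in rows
    ]

# income (large scale) vs. years of experience (small scale)
X = [[20000, 1], [80000, 4], [50000, 10]]
X_scaled = minmax_transform(X, minmax_fit(X))
print(X_scaled)
```

Before scaling, raw income differences swamp experience differences in any Euclidean comparison; afterwards both columns contribute on equal footing, which is the bias-dampening effect the practice aims for.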
Microservices testing: A systematic literature review
Francisco Ponce, Roberto Verdecchia, Breno Miranda, Jacopo Soldani
DOI: 10.1016/j.infsof.2025.107870
Information and Software Technology, vol. 188, Article 107870, 23 August 2025.

Context: Microservices offer scalability and resilience for modern cloud-native applications but present significant challenges in software testing due to their distributed and heterogeneous nature.

Objective: This study aims to consolidate and classify the current body of knowledge on microservice testing through a systematic literature review, providing actionable insights for both researchers and practitioners.

Methods: Following established guidelines for systematic literature reviews in software engineering, we identified 74 primary studies relevant to microservices testing. These studies were systematically categorized using the SWEBOK (Software Engineering Body of Knowledge) taxonomy for software testing. Specifically, we classified the identified techniques according to their testing objectives, levels, strategies, and types. We also evaluated the study types to gauge the maturity and readiness of the current state-of-the-art in microservice testing.

Results: System testing emerged as the most frequently investigated testing level, followed by integration, unit, and acceptance testing. Conformance, regression, and API testing were the most common functional testing objectives, while performance efficiency and reliability were predominant in the case of non-functional testing. Specification-based testing strategies were the most adopted, followed by usage-based and fault-based ones. Additionally, most studies employed laboratory experiments and had low-to-medium technology readiness levels, indicating early-stage maturity. The systems under test varied in size and domain, with TrainTicket being the most widely used reference benchmark for large systems.

Conclusion: While significant progress has been made in microservice testing, the field remains fragmented, with notable gaps in areas such as flexibility and security testing. The dominance of early-stage proposals highlights the need for more empirical validation and industry-grade benchmarks to facilitate broader adoption. This review offers a structured roadmap for future research and practical adoption in microservices testing.
Coverage metrics for actor model programs: A model-based approach
Sajad Jalali, Seyed-Hassan Mirian-Hosseinabadi
DOI: 10.1016/j.infsof.2025.107860
Information and Software Technology, vol. 188, Article 107860, 22 August 2025.

Context: Software testing plays an important role in developing high-quality software. Developers need to understand software test effectiveness; therefore, using coverage criteria for test assessments is common. Selecting appropriate criteria for each software context is crucial, as it can lead to better fault detection. Each context requires a specific test suite to reveal faults, and developers should be aware of the context to select suitable criteria. In actor model programs, developers encounter a specific paradigm and need novel criteria.

Objective: In this study, we utilize the Model-Based Testing approach and demonstrate its applications in actor model programs. Our goal is to identify coverage criteria using this method and evaluate its effectiveness.

Method: We utilize a model-based testing approach to discover new coverage metrics in actor model programs. Initially, we identify the key properties of actor model programs. Based on these properties, we propose specific coverage criteria that can effectively expose faults. Lastly, we evaluate the effectiveness of these new coverage criteria through a series of experiments.

Results: This paper shows that our suggested coverage criteria based on model-based testing can be more efficient than traditional coverage methods. Our experiments indicate that we can detect faults more effectively with these criteria. In particular, with the Pair of Event Type Receive Coverage, we can guide tests to find faults at least 20% faster than conventional tests.

Conclusion: This paper uses the model-based testing approach for ensuring the quality of actor model programs. We define three new coverage metrics and evaluate them through experiments. These experiments demonstrate that the new coverage metrics can reveal faults more effectively than traditional coverage methods in certain situations.
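A metric in the spirit of Pair of Event Type Receive Coverage could be computed from a message trace: which ordered pairs of consecutively received message types each actor has exercised. This is our reading of the criterion's name for illustration purposes, not the paper's exact definition:

```python
# Sketch: pair-of-received-event-type coverage from an actor trace.
# Interpretation of the criterion is assumed, not taken from the paper.

def received_pairs(trace):
    """trace: list of (actor, message_type) receive events, in order.
    Returns the set of (actor, previous_type, current_type) pairs seen."""
    last, pairs = {}, set()
    for actor, msg in trace:
        if actor in last:
            pairs.add((actor, last[actor], msg))
        last[actor] = msg
    return pairs

def pair_coverage(trace, actors, msg_types):
    """Covered pairs divided by all ordered type pairs per actor."""
    covered = received_pairs(trace)
    total = len(actors) * len(msg_types) ** 2
    return len(covered) / total

trace = [("worker", "Start"), ("worker", "Job"),
         ("worker", "Job"), ("worker", "Stop")]
cov = pair_coverage(trace, ["worker"], ["Start", "Job", "Stop"])
print(f"pair coverage: {cov:.2f}")
```

The appeal of such a criterion is that it targets message-ordering behavior, exactly the dimension where actor programs hide concurrency faults that statement coverage never exercises.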
Assessing the impact of tuning parameter in instance selection based bug resolution classification
Chaymae Miloudi, Laila Cheikhi, Ali Idri, Alain Abran
DOI: 10.1016/j.infsof.2025.107874
Information and Software Technology, vol. 188, Article 107874, 21 August 2025.

Context: Software maintenance is time-consuming and requires significant effort for bug resolution and various types of software enhancement. Estimating software maintenance effort is challenging for open source software (OSS) without historical data about direct effort expressed in terms of man-days, compared to proprietary software for which this data is available. Therefore, maintenance effort in the OSS context can only be estimated indirectly through other features, such as OSS bug reports, and other approaches, such as bug resolution prediction models built with machine learning (ML) techniques. Although these bug reports are at times large in size, they need to be preprocessed before they can be used. In this context, instance selection (IS) has been presented in the literature as a way of reducing the size of datasets by selecting a subset of instances. Additionally, ML techniques often require fine-tuning of numerous parameters to achieve optimal predictions, typically done using tuning parameter (TP) methods.

Objective: The empirical study reported here investigated the impact of TP methods together with instance selection algorithms (ISAs) on the performance of bug resolution prediction ML classifiers on five datasets: Eclipse JDT, Eclipse Platform, KDE, LibreOffice, and Apache.

Method: To this end, a set of 480 ML classifiers was built from 60 datasets: the five original ones, 15 datasets reduced using the single ISAs Edited Nearest Neighbor (ENN), Repeated Edited Nearest Neighbor (RENN), and All-k Nearest Neighbor (AllkNN), and 40 datasets reduced using the Bagging, Random Feature Subsets, and Voting ensemble ISAs. Four ML techniques were applied (k Nearest Neighbor (kNN), Support Vector Machine (SVM), Voted Perceptron (VP), and Random Tree (RT)), each under Grid Search (GS) and Default Parameter (DP) configurations. The classifiers were evaluated using the Accuracy, Precision, and Recall performance criteria with ten-fold cross-validation, and then compared to determine how parameter tuning and IS can enhance bug resolution prediction performance.

Conclusion: The findings revealed that (1) using GS with single ISAs enhanced the performance of the built ML classifiers, (2) using GS with homogeneous and heterogeneous ensemble ISAs enhanced the performance of the built ML classifiers, and (3) associating GS and SVM with RENN (either used as a single ISA or implemented as a base algorithm for ensemble ISAs) gave the best performance.
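The Edited Nearest Neighbor (ENN) idea behind the single ISAs above can be shown in a few lines: drop every instance whose class disagrees with the majority class of its k nearest neighbors. The 1-D toy data and labels are invented for illustration; the study itself used established implementations on real bug-report datasets:

```python
# Minimal sketch of Edited Nearest Neighbor (ENN) instance selection.
# Toy 1-D data; real ISAs work on multi-dimensional bug-report features.
from collections import Counter

def enn_filter(X, y, k=3):
    """Return indices of instances whose label matches the majority
    label of their k nearest neighbors (the rest are edited out)."""
    keep = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        neighbors = sorted(
            (j for j in range(len(X)) if j != i),
            key=lambda j: abs(X[j] - xi),
        )[:k]
        majority = Counter(y[j] for j in neighbors).most_common(1)[0][0]
        if majority == yi:
            keep.append(i)
    return keep

X = [1.0, 1.1, 1.2, 5.0, 5.1, 1.15]
y = ["fixed", "fixed", "fixed", "wontfix", "wontfix", "wontfix"]
kept = enn_filter(X, y, k=3)
print(kept)  # the last instance, a "wontfix" inside the "fixed" cluster, is dropped
```

RENN simply repeats this filter until no instance is removed, and the ensemble ISAs in the study apply such base filters over bags or feature subsets and combine the votes.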
Metamorphic testing for optimisation: A case study on PID controller tuning
Alejandra Duque-Torres, Claus Klammer, Stefan Fischer, Rudolf Ramler, Dietmar Pfahl
DOI: 10.1016/j.infsof.2025.107872
Information and Software Technology, vol. 188, Article 107872, 20 August 2025.

Context: Tuning PID controller parameters is essential for achieving stable and efficient system behaviour. However, traditional methods often depend on expert heuristics or exhaustive simulations, which can be time-consuming and difficult to generalise across configurations.

Objective: This paper explores Metamorphic Testing (MT) as a validation-driven approach for identifying optimal PID parameter sets based on expected behavioural trends.

Method: We define a set of domain-informed Metamorphic Relations (MRs) that capture control-theoretic expectations. These MRs are used to evaluate and compare simulated parameter sets, based on how many behavioural constraints they satisfy.

Results: Our findings show that high MR satisfaction is a strong indicator of potentially optimal parameter sets, often yielding smooth, stable control responses. Although trade-offs between metrics, such as energy usage and error correction, occasionally led to partial MR violations, expert-preferred parameter sets still closely matched those with the highest MR scores. These results support the use of MR-based evaluation as a practical approach to identifying high-quality tuning parameters.

Conclusion: Metamorphic Testing offers an interpretable and structured way to guide controller tuning by formalising expected system behaviour. This approach helps identify near-optimal configurations and complements traditional tuning practices through systematic validation.
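One concrete MR of the kind described can be sketched: for a simple stable plant under proportional-only control, increasing the gain should not increase the steady-state error. The first-order plant and Euler simulation below are a toy stand-in, assumed for illustration, not the paper's simulation environment or its actual MRs:

```python
# A domain-informed metamorphic relation for controller tuning:
# larger proportional gain Kp -> steady-state error must not grow.
# Toy first-order plant x' = -x + u under P-only control.

def steady_state_error(kp, setpoint=1.0, dt=0.01, steps=5000):
    """Euler-simulate the closed loop and return the final tracking error."""
    x = 0.0
    for _ in range(steps):
        u = kp * (setpoint - x)   # proportional control law
        x += dt * (-x + u)        # plant dynamics x' = -x + u
    return abs(setpoint - x)

def mr_gain_monotonicity(kp_low, kp_high):
    """MR: increasing the gain must not yield a larger steady-state error."""
    return steady_state_error(kp_high) <= steady_state_error(kp_low)

for kp_low, kp_high in [(0.5, 1.0), (1.0, 4.0), (4.0, 10.0)]:
    print(kp_low, kp_high, mr_gain_monotonicity(kp_low, kp_high))
```

Checking many such relations across simulated parameter sets, and ranking the sets by how many they satisfy, is the validation-driven selection idea the paper develops.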