{"title":"Privacy Impact Tree Analysis (PITA): A Tree-Based Privacy Threat Modeling Approach","authors":"Dimitri Van Landuyt","doi":"10.1109/TSE.2025.3573380","DOIUrl":"10.1109/TSE.2025.3573380","url":null,"abstract":"Threat modeling involves the early identification, prioritization and mitigation of relevant threats and risks, during the design and conceptualization stages of the software development life-cycle. Tree-based analysis is a structured risk analysis technique that starts from the articulation of possible negative outcomes and then systematically refines these into sub-goals, events or intermediate steps that contribute to this outcome becoming reality. While tree-based analysis techniques are widely adopted in the area of safety (fault tree analysis) or in cybersecurity (attack trees), this type of risk analysis approach is lacking in the area of privacy. To alleviate this, we present privacy impact tree analysis (PITA), a novel tree-based approach for privacy threat modeling. Instead of starting from safety hazards or attacker goals, PITA starts from listing the potential privacy impacts of the system under design, i.e., concrete scenarios in which the system creates or contributes to specific privacy harms. To accommodate this, PITA provides a taxonomy, distinguishing between privacy impact types that pertain to (i) data subject identity, (ii) data subject treatment, (iii) data subject control and (iv) treatment of personal data. In addition, a pragmatic methodology is presented that leverages both the hierarchical nature of the tree structures and the early ranking of impacts to focus the privacy engineering efforts. Finally, building upon the privacy impact notion as captured in the privacy impact trees, we provide a refinement of the foundational concept of the overall or aggregated ‘privacy footprint’ of a system. 
The approach is demonstrated and validated in three complex and contemporary real-world applications, through which we highlight the added value of this tree-based privacy threat analysis approach that refocuses on privacy harms and impacts.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 7","pages":"2102-2124"},"PeriodicalIF":6.5,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144145987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Taxonomy of Contextual Factors in Continuous Integration Processes","authors":"Shujun Huang;Sebastian Proksch","doi":"10.1109/TSE.2025.3572382","DOIUrl":"10.1109/TSE.2025.3572382","url":null,"abstract":"Numerous studies have shown that <italic>Continuous Integration</i> (CI) significantly improves software development productivity. Research has already shown in other fields of software engineering that findings do not always generalize and are often limited to a specific context. So far, research on CI has not differentiated between varying contexts of the studied projects, which includes, for example, varying domains, personnel, technical environments, or cultures. We need to extend the theory of CI by considering the relevant context that will impact how projects approach CI. Although existing studies implicitly touch on context, they often lack a consistent terminology or rely on experience rather than a standardized approach. In this paper, we bridge this gap by developing a taxonomy of relevant contextual factors within the domain of CI. Using grounded theory, we analyze peer-reviewed studies and develop a comprehensive taxonomy of contextual factors of CI that we validate through a practitioner survey. The resulting taxonomy contains multiple levels of detail, the main dimensions being Product, Team, Process, Quality, and Scale. The taxonomy offers a structured framework to address the gap in CI research regarding contextual theory. Researchers can use it to describe the scope of findings and to reason about the generalizability of theories. 
Developers can select and reuse practices more effectively by comparing to other similar projects.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 7","pages":"2067-2087"},"PeriodicalIF":6.5,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144123089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-Level Requirements Tracing Based on Large Language Models","authors":"Chuyan Ge;Tiantian Wang;Xiaotian Yang;Christoph Treude","doi":"10.1109/TSE.2025.3572094","DOIUrl":"10.1109/TSE.2025.3572094","url":null,"abstract":"Cross-level requirements traceability, linking <bold>high-level requirements (HLRs)</b> and <bold>low-level requirements (LLRs)</b>, is essential for maintaining relationships and consistency in software development. However, the manual creation of requirements links necessitates a profound understanding of the project and entails a complex and laborious process. Existing machine learning and deep learning methods often fail to fully understand semantic information, leading to low accuracy and unstable performance. This paper presents the first approach for cross-level requirements tracing based on large language models (LLMs) and introduces a data augmentation strategy (such as synonym replacement, machine translation, and noise introduction) to enhance model robustness. We compare three fine-tuning strategies—LoRA, P-Tuning, and Prompt-Tuning—on different scales of LLaMA models (1.1B, 7B, and 13B). The fine-tuned LLMs exhibit superior performance across various datasets, including six single-project datasets, three cross-project datasets within the same domain, and one cross-domain dataset. Experimental results show that fine-tuned LLMs outperform traditional information retrieval, machine learning, and deep learning methods on various datasets. Furthermore, we compare the performance of GPT and DeepSeek LLMs under different prompt templates, revealing their high sensitivity to prompt design and relatively poor result stability. Our approach achieves superior performance, outperforming GPT-4o and DeepSeek-r1 by 16.27% and 16.8% in F-measure on cross-domain datasets. 
Compared to the baseline method that relies on prompt engineering, it achieves a maximum improvement of 13.8%.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 7","pages":"2044-2066"},"PeriodicalIF":6.5,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144113869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Do OSS Developers Reuse Architectural Solutions From Q&A Sites: An Empirical Study","authors":"Musengamana Jean de Dieu;Peng Liang;Mojtaba Shahin","doi":"10.1109/TSE.2025.3572027","DOIUrl":"10.1109/TSE.2025.3572027","url":null,"abstract":"Developers reuse programming-related knowledge (e.g., code snippets) on Q&A sites (e.g., Stack Overflow) that functionally matches the programming problems they encounter in their development. Despite extensive research on Q&A sites, being a high-level and important type of development-related knowledge, architectural solutions (e.g., architecture tactics) and their reuse are rarely explored. To fill this gap, we conducted a mixed-methods study that includes a mining study and a survey study. For the mining study, we mined 984 commits and issues (i.e., 821 commits and 163 issues) from 893 Open-Source Software (OSS) projects on GitHub that explicitly referenced architectural solutions from Stack Overflow (SO) and Software Engineering Stack Exchange (SWESE). For the survey study, we identified practitioners involved in the reuse of these architectural solutions and surveyed 227 of them to further understand how practitioners reuse architectural solutions from Q&A sites in their OSS development. 
Our main findings are that: (1) OSS practitioners reuse architectural solutions from Q&A sites to solve a large variety (15 categories) of architectural problems, wherein <italic>Component design issue</i>, <italic>Architectural anti-pattern</i>, and <italic>Security issue</i> are dominant; (2) Seven categories of architectural solutions from Q&A sites have been reused to solve those problems, among which <italic>Architectural refactoring</i>, <italic>Use of frameworks</i>, and <italic>Architectural tactic</i> are the three most reused architectural solutions; (3) OSS developers often rely on ad hoc ways (e.g., informal, improvised, or unstructured approaches) to reuse architectural solutions from SO, drawing on personal experience and intuition rather than standardized or systematic practices; (4) Reusing architectural solutions from SO comes with a variety of challenges, e.g., OSS practitioners complain that they need to spend significant time to adapt such architectural solutions to address design concerns raised in their OSS development, and it is challenging to reuse architectural solutions that are not tailored to the design context of their OSS projects. Our findings pave the way for future research directions, including the design and development of approaches and tools (such as IDE plugin tools) to facilitate the reuse of architectural solutions from Q&A sites, and could also be used to offer guidelines to practitioners when they contribute architectural solutions to Q&A sites. 
Our dataset is publicly available at <uri>https://doi.org/10.5281/zenodo.10936098</uri>.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 7","pages":"2015-2043"},"PeriodicalIF":6.5,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144113871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallelization in System-Level Testing: Novel Approaches to Manage Test Suite Dependencies","authors":"Pasquale Polverino;Fabio Di Lauro;Matteo Biagiola;Paolo Tonella;Antonio Carzaniga","doi":"10.1109/TSE.2025.3572388","DOIUrl":"10.1109/TSE.2025.3572388","url":null,"abstract":"System-level testing is fundamental to ensure the reliability of software systems. However, the execution time for system tests can be quite long, sometimes prohibitively long, especially in a regime of continuous integration and deployment. One way to speed things up is to run the tests in parallel, provided that the execution schedule respects any dependency between tests. We present two novel approaches to detect dependencies in system-level tests, namely <sc>Pfast</small> and <sc>Mem-Fast</small>, which are highly parallelizable and optimistically run test schedules to exclude many dependencies when there are no failures. We evaluated our approaches both asymptotically and practically, on six Web applications and their system-level test suites, as well as on MySQL system-level tests. Our results show that, in general, <sc>Pfast</small> is significantly faster than the state-of-the-art <sc>PraDet</small> dependency detection algorithm, while producing parallelizable schedules that achieve a significant reduction in the overall test suite execution time.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 7","pages":"2088-2101"},"PeriodicalIF":6.5,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144113870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RECOVER: Toward Requirements Generation From Stakeholders’ Conversations","authors":"Gianmario Voria;Francesco Casillo;Carmine Gravino;Gemma Catolino;Fabio Palomba","doi":"10.1109/TSE.2025.3572056","DOIUrl":"10.1109/TSE.2025.3572056","url":null,"abstract":"Stakeholders’ conversations during requirements elicitation meetings hold valuable insights into system and client needs. However, manually extracting requirements is time-consuming, labor-intensive, and prone to errors and biases. While current state-of-the-art methods assist in summarizing stakeholder conversations and classifying requirements based on their nature, there is a noticeable lack of approaches capable of both identifying requirements within these conversations and generating corresponding system requirements. These approaches would assist requirement identification, reducing engineers’ workload, time, and effort. They would also enhance accuracy and consistency in documentation, providing a reliable foundation for further analysis. To address this gap, this paper introduces <sc>RECOVER</small> (Requirements EliCitation frOm conVERsations), a novel conversational requirements engineering approach that leverages natural language processing and large language models (LLMs) to support practitioners in automatically extracting system requirements from stakeholder interactions by analyzing individual conversation turns. The approach is evaluated using a mixed-method research design that combines statistical performance analysis with a user study involving requirements engineers, targeting two levels of granularity. First, at the conversation turn level, the evaluation measures <sc>RECOVER</small>’s accuracy in identifying requirements-relevant dialogue and the quality of generated requirements in terms of correctness, completeness, and actionability. 
Second, at the entire conversation level, the evaluation assesses the overall usefulness and effectiveness of <sc>RECOVER</small> in synthesizing comprehensive system requirements from full stakeholder discussions. Empirical evaluation of <sc>RECOVER</small> shows promising performance, with generated requirements demonstrating satisfactory correctness, completeness, and actionability. The results also highlight the potential of automating requirements elicitation from conversations as an aid that enhances efficiency while maintaining human oversight.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 6","pages":"1912-1933"},"PeriodicalIF":6.5,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144113868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do Experts Agree About Smelly Infrastructure?","authors":"Sogol Masoumzadeh;Nuno Saavedra;Rungroj Maipradit;Lili Wei;João F. Ferreira;Dániel Varró;Shane McIntosh","doi":"10.1109/TSE.2025.3553383","DOIUrl":"10.1109/TSE.2025.3553383","url":null,"abstract":"Code smells are anti-patterns that violate code understandability, re-usability, changeability, and maintainability. It is important to identify code smells and locate them in the code. For this purpose, automated detection of code smells is a sought-after feature for development tools; however, the design and evaluation of such tools depend on the quality of oracle datasets. The typical approach for creating an oracle dataset involves multiple developers independently inspecting and annotating code examples for their existing code smells. Since multiple inspectors cast votes about each code example, it is possible for the inspectors to disagree about the presence of smells. Such disagreements introduce ambiguity into how smells should be interpreted. Prior work has studied developer perceptions of code smells in traditional source code; however, smells in Infrastructure-as-Code (IaC) have not been investigated. To understand the real-world impact of disagreements among developers and their perceptions of IaC code smells, we conduct an empirical study on the oracle dataset of GLITCH—a state-of-the-art detection tool for security code smells in IaC. We analyze GLITCH's oracle dataset for code smell issues, their types, and individual annotations of the inspectors. Furthermore, we investigate possible confounding factors associated with the incidences of developer misaligned perceptions of IaC code smells. Finally, we triangulate developer perceptions of code smells in traditional source code with our results on IaC. 
Our study reveals that unlike developer perceptions of smells in traditional source code, their perceptions of smells in IaC are more substantially impacted by subjective interpretation of smell types and their co-occurrence relationships. For instance, the interpretation of admins by default, empty passwords, and hard-coded secrets varies considerably among raters and is more susceptible to misidentification than that of other IaC code smells. Consequently, the manual identification of IaC code smells involves annotation disagreements among developers—46.3% of studied IaC code smell incidences have at least one dissenting vote among three inspectors. Meanwhile, only 1.6% of code smell incidences in traditional source code are affected by inspector bias stemming from these disagreements. Hence, relying solely on majority voting would not fully represent the breadth of interpretation of the IaC under scrutiny.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 5","pages":"1472-1486"},"PeriodicalIF":6.5,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143672295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What You See Is What You Get: Prototype Generation for IoT End-User Programming","authors":"Xiaohong Chen;Shi Chen;Zhi Jin;Zihan Chen;Mingsong Chen","doi":"10.1109/TSE.2025.3571585","DOIUrl":"10.1109/TSE.2025.3571585","url":null,"abstract":"With the rapid development of IoT technology, IoT-enabled systems, represented by smart homes, are becoming ubiquitous. In order to support personalized user requirements, such systems appeal to the end-user programming paradigm. This paradigm allows end-users to describe their requirements using TAP (Trigger-Action Programming) rules, which can be deployed on demand. However, writing TAP rules is error-prone and end-users are often unaware of the actual effects of the rules they write, given the context-sensitive nature of these effects. It is highly desirable that TAP rules can be validated before deployment. Unfortunately, requirements validation for IoT end-user programming has not received much attention so far. Therefore, this paper proposes to generate experience prototypes for IoT end-user programming using TAP rules. The difficulty lies in how to orchestrate user experience delivery service scenarios according to TAP rules and context changes, and effectively demonstrate these scenarios. We present a dynamic assembly approach for simulation model systems used for service scenario orchestration. By simulation, we synthesize desired system behaviors, system device behaviors, and context changes. Leveraging the simulation traces of each component, we employ animation techniques specifically designed to highlight user-aware changes. These experience prototypes allow end-users to directly understand the effects of the IoT-enabled systems, thereby determining whether their intentions are satisfied. 
Experimental results show that our approach is usable and effective for end-users and the generated experience prototypes are context-aware, capable of representing real-world service scenarios, effective, and efficient in requirements validation.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 7","pages":"1996-2014"},"PeriodicalIF":6.5,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144104697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding and Identifying Technical Debt in the Co-Evolution of Production and Test Code","authors":"Yimeng Guo;Zhifei Chen;Lu Xiao;Lin Chen;Yanhui Li;Yuming Zhou","doi":"10.1109/TSE.2025.3553112","DOIUrl":"10.1109/TSE.2025.3553112","url":null,"abstract":"The co-evolution of production and test code (PT co-evolution) has received increasing attention in recent years. However, we found that existing work did not comprehensively study various PT co-evolution scenarios, such as the qualification and persistence of their effects on software. Inspired by technical debt (TD), we refer to TD generated during the co-evolution between production and test code as PT co-evolution technical debt (PTCoTD). To better understand PT co-evolution, we first conducted an exploratory study of its characteristics on 15 open-source projects, finding that unbalanced PT co-evolution is prevalent and summarizing five potential PT flaws. Then we proposed an approach to identify and quantify PTCoTDs of these flaw patterns, considering evolutionary and structural relationships. We also built prediction models to describe cost trajectories and rank all PTCoTDs to prioritize expensive ones. The evaluation on the 15 projects shows that our approach can identify PTCoTDs that deserve attention. The identified PTCoTDs account for about half of the project's total maintenance costs, and the cost proportion of the expensive Top-5 is 1.8x more than the file proportion they contain. Almost all covered maintenance costs persist as PTCoTD in the future, with an average increase of 6.8% between the last two releases. Our approach also accurately predicts the costs of PTCoTD with an average prediction deviation of only 8.3%. 
Our study provides valuable insights into PT co-evolution scenarios and their effects, which can guide practices and inspire future work on software testing and maintenance.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 5","pages":"1415-1436"},"PeriodicalIF":6.5,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143661525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FlexFL: Flexible and Effective Fault Localization With Open-Source Large Language Models","authors":"Chuyang Xu;Zhongxin Liu;Xiaoxue Ren;Gehao Zhang;Ming Liang;David Lo","doi":"10.1109/TSE.2025.3553363","DOIUrl":"10.1109/TSE.2025.3553363","url":null,"abstract":"Fault localization (FL) targets identifying bug locations within a software system, which can enhance debugging efficiency and improve software quality. Due to the impressive code comprehension ability of Large Language Models (LLMs), a few studies have proposed to leverage LLMs to locate bugs, i.e., LLM-based FL, and demonstrated promising performance. However, first, these methods are limited in flexibility. They rely on bug-triggering test cases to perform FL and cannot make use of other available bug-related information, e.g., bug reports. Second, they are built upon proprietary LLMs, which are, although powerful, confronted with risks in data privacy. To address these limitations, we propose a novel LLM-based FL framework named FlexFL, which can flexibly leverage different types of bug-related information and effectively work with open-source LLMs. FlexFL is composed of two stages. In the first stage, FlexFL reduces the search space of buggy code using state-of-the-art FL techniques of different families and provides a candidate list of bug-related methods. In the second stage, FlexFL leverages LLMs to delve deeper to double-check the code snippets of methods suggested by the first stage and refine fault localization results. In each stage, FlexFL constructs agents based on open-source LLMs, which share the same pipeline that does not presuppose any particular type of bug-related information and can interact via function calls even with LLMs that lack out-of-the-box function-calling capability. Extensive experimental results on Defects4J demonstrate that FlexFL outperforms the baselines and can work with different open-source LLMs. 
Specifically, FlexFL with a lightweight open-source LLM Llama3-8B can locate 42 and 63 more bugs than two state-of-the-art LLM-based FL approaches AutoFL and AgentFL that both use GPT-3.5. In addition, FlexFL can localize 93 bugs that cannot be localized by non-LLM-based FL techniques at the top 1. Furthermore, to mitigate potential data contamination, we conduct experiments on a dataset which Llama3-8B has not seen before, and the evaluation results show that FlexFL can also achieve good performance.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 5","pages":"1455-1471"},"PeriodicalIF":6.5,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143661523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}