Robert Wallace;Aakash Bansal;Zachary Karas;Ningzhi Tang;Yu Huang;Toby Jia-Jun Li;Collin McMillan
{"title":"Programmer Visual Attention During Context-Aware Code Summarization","authors":"Robert Wallace;Aakash Bansal;Zachary Karas;Ningzhi Tang;Yu Huang;Toby Jia-Jun Li;Collin McMillan","doi":"10.1109/TSE.2025.3554990","DOIUrl":"10.1109/TSE.2025.3554990","url":null,"abstract":"Programmer attention represents the visual focus of programmers on parts of the source code in pursuit of programming tasks. The focus of current research in modeling this programmer attention has been on using mouse cursors, keystrokes, or eye tracking equipment to map areas in a snippet of code. These approaches have traditionally only mapped attention for a single method. However, there is a knowledge gap in the literature because programming tasks such as source code summarization require programmers to use contextual knowledge that can only be found in other parts of the project, not only in a single method. To address this knowledge gap, we conducted an in-depth human study with 10 Java programmers, where each programmer generated summaries for 40 methods from five large Java projects over five one-hour sessions. We used eye tracking equipment to map the visual attention of programmers while they wrote the summaries. We also rated the quality of each summary. We found eye-gaze patterns and metrics that characterize common programmer attention behaviors during context-aware code summarization. Specifically, we found that programmers need to read up to 35% fewer words (p < 0.01) over the whole session, and revisit 13% fewer words (p < 0.03) as they summarize each method during a session, while maintaining the quality of summaries. 
We also found that the amount of source code a participant looks at correlates with a higher quality summary, but this trend follows a bell-shaped curve, such that, after a threshold, reading more source code leads to a significant decrease (p < 0.01) in the quality of summaries. We also gathered insight into the type of methods in the project that provide the most contextual information for code summarization based on programmer attention. Specifically, we observed that programmers spent a majority of their time looking at methods inside the same class as the target method to be summarized. Surprisingly, we found that programmers spent significantly less time looking at methods in the call graph of the target method. We discuss how our empirical observations may aid future studies towards modeling programmer attention and improving context-aware automatic source code summarization.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 5","pages":"1524-1537"},"PeriodicalIF":6.5,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143712754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
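The session-level reading metrics in this abstract (words read, words revisited) can be illustrated with a small sketch. This is a deliberate simplification: real eye-tracking revisit metrics distinguish returns after the gaze leaves a region, while this token-count version merely counts repeated fixations, and all identifiers are invented.

```python
# Simplified sketch of gaze-based reading metrics, not the paper's actual pipeline.
from collections import Counter

def gaze_metrics(fixated_words):
    """fixated_words: ordered list of word identifiers a participant fixated on."""
    counts = Counter(fixated_words)
    words_read = len(counts)                                   # distinct words fixated at least once
    words_revisited = sum(1 for c in counts.values() if c > 1)  # words fixated more than once
    return words_read, words_revisited

read, revisited = gaze_metrics(["int", "sum", "=", "0", "sum", "return", "sum"])
# read == 5 distinct tokens; revisited == 1 ("sum" was fixated three times)
```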
Gabriele De Vito;Sergio Di Martino;Filomena Ferrucci;Carmine Gravino;Fabio Palomba
{"title":"LLM-Based Automation of COSMIC Functional Size Measurement From Use Cases","authors":"Gabriele De Vito;Sergio Di Martino;Filomena Ferrucci;Carmine Gravino;Fabio Palomba","doi":"10.1109/TSE.2025.3554562","DOIUrl":"10.1109/TSE.2025.3554562","url":null,"abstract":"COmmon Software Measurement International Consortium (COSMIC) Functional Size Measurement is a method widely used in the software industry to quantify user functionality and measure software size, which is crucial for estimating development effort, cost, and resource allocation. COSMIC measurement is a manual task that requires qualified professionals and considerable effort. To support professionals in COSMIC measurement, we propose an automatic approach, CosMet, that leverages Large Language Models to measure software size starting from use cases specified in natural language. To evaluate the proposed approach, we developed a web tool that implements CosMet using GPT-4 and conducted two studies to assess the approach quantitatively and qualitatively. Initially, we experimented with CosMet on seven software systems, encompassing 123 use cases, and compared the generated results with the ground truth created by two certified professionals. Then, seven professional measurers evaluated the analyses produced by CosMet and the extent to which the approach reduces measurement time. The first study's results revealed that CosMet is highly effective in analyzing and measuring use cases. The second study highlighted that CosMet offers a transparent and interpretable analysis, allowing practitioners to understand how the measurement is derived and make necessary adjustments. 
Additionally, it reduces the manual measurement time by 60-80%.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 5","pages":"1500-1523"},"PeriodicalIF":6.5,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143712755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
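As background for the measurement CosMet automates: the COSMIC method sizes a functional process by counting its data movements across four types (Entry, Exit, Read, Write), one COSMIC Function Point (CFP) per distinct movement. A minimal sketch of that counting rule follows; this is not CosMet itself, and the use-case data are invented.

```python
# Illustrative COSMIC counting rule, not the CosMet tool.
VALID_TYPES = {"Entry", "Exit", "Read", "Write"}

def cosmic_size(movements):
    """movements: (movement_type, data_group) pairs for one functional process.
    COSMIC counts one CFP per distinct (type, data group) pair in the process."""
    assert all(t in VALID_TYPES for t, _ in movements)
    return len(set(movements))

# A hypothetical "Log in" use case: credentials enter the system,
# the stored account is read, and the outcome is shown to the user.
size = cosmic_size([("Entry", "credentials"), ("Read", "account"), ("Exit", "result")])
# size == 3 CFP
```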
{"title":"Privacy Impact Tree Analysis (PITA): A Tree-Based Privacy Threat Modeling Approach","authors":"Dimitri Van Landuyt","doi":"10.1109/TSE.2025.3573380","DOIUrl":"10.1109/TSE.2025.3573380","url":null,"abstract":"Threat modeling involves the early identification, prioritization and mitigation of relevant threats and risks, during the design and conceptualization stages of the software development life-cycle. Tree-based analysis is a structured risk analysis technique that starts from the articulation of possible negative outcomes and then systematically refines these into sub-goals, events or intermediate steps that contribute to this outcome becoming reality. While tree-based analysis techniques are widely adopted in the area of safety (fault tree analysis) or in cybersecurity (attack trees), this type of risk analysis approach is lacking in the area of privacy. To alleviate this, we present privacy impact tree analysis (PITA), a novel tree-based approach for privacy threat modeling. Instead of starting from safety hazards or attacker goals, PITA starts from listing the potential privacy impacts of the system under design, i.e., specific scenarios in which the system creates or contributes to specific privacy harms. To accommodate this, PITA provides a taxonomy, distinguishing between privacy impact types that pertain to (i) data subject identity, (ii) data subject treatment, (iii) data subject control, and (iv) treatment of personal data. In addition, a pragmatic methodology is presented that leverages both the hierarchical nature of the tree structures and the early ranking of impacts to focus the privacy engineering efforts. Finally, building upon the privacy impact notion as captured in the privacy impact trees, we provide a refinement of the foundational concept of the overall or aggregated ‘privacy footprint’ of a system. 
The approach is demonstrated and validated in three complex and contemporary real-world applications, through which we highlight the added value of this tree-based privacy threat analysis approach that refocuses on privacy harms and impacts.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 7","pages":"2102-2124"},"PeriodicalIF":6.5,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144145987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
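The refinement idea behind such trees can be sketched minimally: a root privacy impact is refined into contributing sub-events, and the leaves are the concrete scenarios engineering effort should target. This is an illustrative structure only; the node labels and the `rank` field are invented, not taken from the paper.

```python
# Minimal sketch of a tree-based analysis structure in the spirit of PITA.
from dataclasses import dataclass, field

@dataclass
class ImpactNode:
    label: str
    rank: int = 0                      # higher = more critical; supports early focus
    children: list = field(default_factory=list)

def leaves(node):
    """Return the leaf events that ultimately realize the root impact."""
    if not node.children:
        return [node.label]
    return [leaf for child in node.children for leaf in leaves(child)]

root = ImpactNode("re-identification of data subject", rank=3, children=[
    ImpactNode("linkage across datasets", children=[ImpactNode("shared quasi-identifiers")]),
    ImpactNode("unprotected export of raw records"),
])
# leaves(root) lists the concrete scenarios to mitigate
```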
{"title":"A Taxonomy of Contextual Factors in Continuous Integration Processes","authors":"Shujun Huang;Sebastian Proksch","doi":"10.1109/TSE.2025.3572382","DOIUrl":"10.1109/TSE.2025.3572382","url":null,"abstract":"Numerous studies have shown that <italic>Continuous Integration</italic> (CI) significantly improves software development productivity. Research has already shown in other fields of software engineering that findings do not always generalize and are often limited to a specific context. So far, research on CI has not differentiated between varying contexts of the studied projects, which includes, for example, varying domains, personnel, technical environments, or cultures. We need to extend the theory of CI by considering the relevant context that will impact how projects approach CI. Although existing studies implicitly touch on context, they often lack a consistent terminology or rely on experience rather than a standardized approach. In this paper, we bridge this gap by developing a taxonomy of relevant contextual factors within the domain of CI. Using grounded theory, we analyze peer-reviewed studies and develop a comprehensive taxonomy of contextual factors of CI that we validate through a practitioner survey. The resulting taxonomy contains multiple levels of detail, the main dimensions being Product, Team, Process, Quality, and Scale. The taxonomy offers a structured framework to address the gap in CI research regarding contextual theory. Researchers can use it to describe the scope of findings and to reason about the generalizability of theories. 
Developers can select and reuse practices more effectively by comparing with similar projects.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 7","pages":"2067-2087"},"PeriodicalIF":6.5,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144123089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-Level Requirements Tracing Based on Large Language Models","authors":"Chuyan Ge;Tiantian Wang;Xiaotian Yang;Christoph Treude","doi":"10.1109/TSE.2025.3572094","DOIUrl":"10.1109/TSE.2025.3572094","url":null,"abstract":"Cross-level requirements traceability, linking <bold>high-level requirements (HLRs)</bold> and <bold>low-level requirements (LLRs)</bold>, is essential for maintaining relationships and consistency in software development. However, the manual creation of requirements links necessitates a profound understanding of the project and entails a complex and laborious process. Existing machine learning and deep learning methods often fail to fully understand semantic information, leading to low accuracy and unstable performance. This paper presents the first approach for cross-level requirements tracing based on large language models (LLMs) and introduces data augmentation strategies (synonym replacement, machine translation, and noise introduction) to enhance model robustness. We compare three fine-tuning strategies—LoRA, P-Tuning, and Prompt-Tuning—on different scales of LLaMA models (1.1B, 7B, and 13B). The fine-tuned LLMs exhibit superior performance across various datasets, including six single-project datasets, three cross-project datasets within the same domain, and one cross-domain dataset. Experimental results show that fine-tuned LLMs outperform traditional information retrieval, machine learning, and deep learning methods on various datasets. Furthermore, we compare the performance of GPT and DeepSeek LLMs under different prompt templates, revealing their high sensitivity to prompt design and relatively poor result stability. Our approach achieves superior performance, outperforming GPT-4o and DeepSeek-r1 by 16.27% and 16.8% in F-measure on cross-domain datasets. 
Compared to the baseline method that relies on prompt engineering, it achieves a maximum improvement of 13.8%.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 7","pages":"2044-2066"},"PeriodicalIF":6.5,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144113869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
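For contrast with the fine-tuned LLMs, a toy version of the traditional information-retrieval baselines mentioned in the abstract ranks candidate LLRs for each HLR by lexical overlap. The requirement texts and the 0.2 threshold below are invented for illustration.

```python
# Toy lexical-overlap trace-link baseline (Jaccard over word sets),
# a stand-in for classic IR approaches to requirements tracing.
def jaccard(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def trace_candidates(hlr, llrs, threshold=0.2):
    """Return LLRs whose similarity to the HLR exceeds the threshold, best first."""
    scored = sorted(((jaccard(hlr, llr), llr) for llr in llrs), reverse=True)
    return [llr for score, llr in scored if score >= threshold]

hlr = "the system shall encrypt stored user data"
llrs = ["encrypt user data at rest with AES", "display the login page"]
links = trace_candidates(hlr, llrs)
# only the first LLR shares enough vocabulary with the HLR to be linked
```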
Musengamana Jean de Dieu;Peng Liang;Mojtaba Shahin
{"title":"How Do OSS Developers Reuse Architectural Solutions From Q&A Sites: An Empirical Study","authors":"Musengamana Jean de Dieu;Peng Liang;Mojtaba Shahin","doi":"10.1109/TSE.2025.3572027","DOIUrl":"10.1109/TSE.2025.3572027","url":null,"abstract":"Developers reuse programming-related knowledge (e.g., code snippets) on Q&A sites (e.g., Stack Overflow) that functionally matches the programming problems they encounter in their development. Despite extensive research on Q&A sites, being a high-level and important type of development-related knowledge, architectural solutions (e.g., architecture tactics) and their reuse are rarely explored. To fill this gap, we conducted a mixed-methods study that includes a mining study and a survey study. For the mining study, we mined 984 commits and issues (i.e., 821 commits and 163 issues) from 893 Open-Source Software (OSS) projects on GitHub that explicitly referenced architectural solutions from Stack Overflow (SO) and Software Engineering Stack Exchange (SWESE). For the survey study, we identified practitioners involved in the reuse of these architectural solutions and surveyed 227 of them to further understand how practitioners reuse architectural solutions from Q&A sites in their OSS development. 
Our main findings are that: (1) OSS practitioners reuse architectural solutions from Q&A sites to solve a large variety (15 categories) of architectural problems, wherein <italic>Component design issue</italic>, <italic>Architectural anti-pattern</italic>, and <italic>Security issue</italic> are dominant; (2) Seven categories of architectural solutions from Q&A sites have been reused to solve those problems, among which <italic>Architectural refactoring</italic>, <italic>Use of frameworks</italic>, and <italic>Architectural tactic</italic> are the three most reused architectural solutions; (3) OSS developers often rely on ad hoc ways (e.g., informal, improvised, or unstructured approaches) to reuse architectural solutions from SO, drawing on personal experience and intuition rather than standardized or systematic practices; (4) Reusing architectural solutions from SO comes with a variety of challenges, e.g., OSS practitioners complain that they need to spend significant time to adapt such architectural solutions to address design concerns raised in their OSS development, and it is challenging to reuse architectural solutions that are not tailored to the design context of their OSS projects. Our findings pave the way for future research directions, including the design and development of approaches and tools (such as IDE plugin tools) to facilitate the reuse of architectural solutions from Q&A sites, and could also be used to offer guidelines to practitioners when they contribute architectural solutions to Q&A sites. 
Our dataset is publicly available at <uri>https://doi.org/10.5281/zenodo.10936098</uri>.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 7","pages":"2015-2043"},"PeriodicalIF":6.5,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144113871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RECOVER: Toward Requirements Generation From Stakeholders’ Conversations","authors":"Gianmario Voria;Francesco Casillo;Carmine Gravino;Gemma Catolino;Fabio Palomba","doi":"10.1109/TSE.2025.3572056","DOIUrl":"10.1109/TSE.2025.3572056","url":null,"abstract":"Stakeholders’ conversations during requirements elicitation meetings hold valuable insights into system and client needs. However, manually extracting requirements is time-consuming, labor-intensive, and prone to errors and biases. While current state-of-the-art methods assist in summarizing stakeholder conversations and classifying requirements based on their nature, there is a noticeable lack of approaches capable of both identifying requirements within these conversations and generating corresponding system requirements. These approaches would assist requirement identification, reducing engineers’ workload, time, and effort. They would also enhance accuracy and consistency in documentation, providing a reliable foundation for further analysis. To address this gap, this paper introduces <sc>RECOVER</sc> (Requirements EliCitation frOm conVERsations), a novel conversational requirements engineering approach that leverages natural language processing and large language models (LLMs) to support practitioners in automatically extracting system requirements from stakeholder interactions by analyzing individual conversation turns. The approach is evaluated using a mixed-method research design that combines statistical performance analysis with a user study involving requirements engineers, targeting two levels of granularity. First, at the conversation turn level, the evaluation measures <sc>RECOVER</sc>’s accuracy in identifying requirements-relevant dialogue and the quality of generated requirements in terms of correctness, completeness, and actionability. 
Second, at the entire conversation level, the evaluation assesses the overall usefulness and effectiveness of <sc>RECOVER</sc> in synthesizing comprehensive system requirements from full stakeholder discussions. Empirical evaluation of <sc>RECOVER</sc> shows promising performance, with generated requirements demonstrating satisfactory correctness, completeness, and actionability. The results also highlight the potential of automating requirements elicitation from conversations as an aid that enhances efficiency while maintaining human oversight.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 6","pages":"1912-1933"},"PeriodicalIF":6.5,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144113868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pasquale Polverino;Fabio Di Lauro;Matteo Biagiola;Paolo Tonella;Antonio Carzaniga
{"title":"Parallelization in System-Level Testing: Novel Approaches to Manage Test Suite Dependencies","authors":"Pasquale Polverino;Fabio Di Lauro;Matteo Biagiola;Paolo Tonella;Antonio Carzaniga","doi":"10.1109/TSE.2025.3572388","DOIUrl":"10.1109/TSE.2025.3572388","url":null,"abstract":"System-level testing is fundamental to ensure the reliability of software systems. However, the execution time for system tests can be quite long, sometimes prohibitively long, especially in a regimen of continuous integration and deployment. One way to speed things up is to run the tests in parallel, provided that the execution schedule respects any dependency between tests. We present two novel approaches to detect dependencies in system-level tests, namely <sc>Pfast</sc> and <sc>Mem-Fast</sc>, which are highly parallelizable and optimistically run test schedules to exclude many dependencies when there are no failures. We evaluated our approaches both asymptotically and practically, on six Web applications and their system-level test suites, as well as on MySQL system-level tests. Our results show that, in general, <sc>Pfast</sc> is significantly faster than the state-of-the-art <sc>PraDet</sc> dependency detection algorithm, while producing parallelizable schedules that achieve a significant reduction in the overall test suite execution time.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 7","pages":"2088-2101"},"PeriodicalIF":6.5,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144113870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
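The optimistic idea described in the abstract (run a schedule once; if nothing fails, every dependency that schedule could have manifested is ruled out, and only failing schedules trigger finer-grained re-checks) can be sketched as follows. This is a simplification, not the actual Pfast or Mem-Fast algorithm, and the toy read/write tests are invented.

```python
# Simplified sketch of optimistic test-dependency checking.
def optimistic_check(schedule, run_test, fresh_state):
    """Run the tests in order against shared state; return the failing tests.
    An empty result means the schedule, as ordered, exposed no dependency."""
    state = fresh_state()
    return [test for test in schedule if not run_test(test, state)]

# Toy pair of dependent tests: "t_read" only passes if "t_write" ran first.
def fresh_state():
    return {}

def run_test(test, state):
    if test == "t_write":
        state["row"] = True
        return True
    if test == "t_read":
        return state.get("row", False)
    return True

failures_bad = optimistic_check(["t_read", "t_write"], run_test, fresh_state)  # exposes the dependency
failures_ok = optimistic_check(["t_write", "t_read"], run_test, fresh_state)   # no failure, order is safe
```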
Sogol Masoumzadeh;Nuno Saavedra;Rungroj Maipradit;Lili Wei;João F. Ferreira;Dániel Varró;Shane McIntosh
{"title":"Do Experts Agree About Smelly Infrastructure?","authors":"Sogol Masoumzadeh;Nuno Saavedra;Rungroj Maipradit;Lili Wei;João F. Ferreira;Dániel Varró;Shane McIntosh","doi":"10.1109/TSE.2025.3553383","DOIUrl":"10.1109/TSE.2025.3553383","url":null,"abstract":"Code smells are anti-patterns that violate code understandability, re-usability, changeability, and maintainability. It is important to identify code smells and locate them in the code. For this purpose, automated detection of code smells is a sought-after feature for development tools; however, the design and evaluation of such tools depend on the quality of oracle datasets. The typical approach for creating an oracle dataset involves multiple developers independently inspecting and annotating code examples for their existing code smells. Since multiple inspectors cast votes about each code example, it is possible for the inspectors to disagree about the presence of smells. Such disagreements introduce ambiguity into how smells should be interpreted. Prior work has studied developer perceptions of code smells in traditional source code; however, smells in Infrastructure-as-Code (IaC) have not been investigated. To understand the real-world impact of disagreements among developers and their perceptions of IaC code smells, we conduct an empirical study on the oracle dataset of GLITCH—a state-of-the-art detection tool for security code smells in IaC. We analyze GLITCH's oracle dataset for code smell issues, their types, and individual annotations of the inspectors. Furthermore, we investigate possible confounding factors associated with the incidences of developer misaligned perceptions of IaC code smells. Finally, we triangulate developer perceptions of code smells in traditional source code with our results on IaC. 
Our study reveals that unlike developer perceptions of smells in traditional source code, their perceptions of smells in IaC are more substantially impacted by subjective interpretation of smell types and their co-occurrence relationships. For instance, the interpretation of admins by default, empty passwords, and hard-coded secrets varies considerably among raters and is more susceptible to misidentification than other IaC code smells. Consequently, the manual identification of IaC code smells involves annotation disagreements among developers—46.3% of studied IaC code smell incidences have at least one dissenting vote among three inspectors. Meanwhile, only 1.6% of code smell incidences in traditional source code are affected by inspector bias stemming from these disagreements. Hence, relying solely on majority voting would not fully represent the breadth of interpretation of the IaC under scrutiny.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 5","pages":"1472-1486"},"PeriodicalIF":6.5,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143672295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
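The headline disagreement figure (46.3% of incidences with at least one dissenting vote among three inspectors) corresponds to a simple metric, sketched here with invented votes:

```python
# Share of smell incidences with at least one dissenting vote among inspectors.
def dissent_rate(votes):
    """votes: one tuple of boolean smell/not-smell judgments per incidence."""
    dissenting = sum(1 for v in votes if len(set(v)) > 1)  # mixed votes = disagreement
    return dissenting / len(votes)

rate = dissent_rate([
    (True, True, True),     # unanimous smell
    (True, False, True),    # one dissenter
    (False, False, False),  # unanimous non-smell
    (True, True, False),    # one dissenter
])
# 2 of 4 incidences carry a dissenting vote -> 0.5
```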
{"title":"What You See Is What You Get: Prototype Generation for IoT End-User Programming","authors":"Xiaohong Chen;Shi Chen;Zhi Jin;Zihan Chen;Mingsong Chen","doi":"10.1109/TSE.2025.3571585","DOIUrl":"10.1109/TSE.2025.3571585","url":null,"abstract":"With the rapid development of IoT technology, IoT-enabled systems, represented by smart homes, are becoming ubiquitous. In order to support personalized user requirements, such systems adopt the end-user programming paradigm. This paradigm allows end-users to describe their requirements using TAP (Trigger-Action Programming) rules, which can be deployed on demand. However, writing TAP rules is error-prone and end-users are often unaware of the actual effects of the rules they write, given the context-sensitive nature of these effects. It is highly desirable that TAP rules can be validated before deployment. Unfortunately, requirements validation for IoT end-user programming has not received much attention so far. Therefore, this paper proposes to generate experience prototypes for IoT end-user programming using TAP rules. The difficulty lies in how to orchestrate user experience delivery service scenarios according to TAP rules and context changes, and effectively demonstrate these scenarios. We present a dynamic assembly approach for simulation model systems used for service scenario orchestration. By simulation, we synthesize desired system behaviors, system device behaviors, and context changes. Leveraging the simulation traces of each component, we employ animation techniques specifically designed to highlight user-aware changes. These experience prototypes allow end-users to directly understand the effects of the IoT-enabled systems, thereby determining whether their intentions are satisfied. 
Experimental results show that our approach is usable and effective for end-users, and that the generated experience prototypes are context-aware, representative of real-world service scenarios, and both effective and efficient in requirements validation.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 7","pages":"1996-2014"},"PeriodicalIF":6.5,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144104697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
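For readers unfamiliar with TAP, a rule pairs a trigger (and optionally a condition) with an action. A minimal hypothetical sketch follows; the device names, attributes, and thresholds are invented, and real TAP platforms evaluate rules against live device events rather than a plain dictionary.

```python
# Hypothetical TAP (Trigger-Action Programming) rule as a closure.
def make_rule(trigger, condition, action):
    def rule(context):
        # Fire the action only when both trigger and condition hold in this context.
        if trigger(context) and condition(context):
            return action(context)
        return None
    return rule

# "IF motion is detected AND it is dark, THEN turn on the hallway light."
rule = make_rule(
    trigger=lambda ctx: ctx["motion"],
    condition=lambda ctx: ctx["lux"] < 10,
    action=lambda ctx: "hallway_light_on",
)
effect_dark = rule({"motion": True, "lux": 3})    # action fires
effect_bright = rule({"motion": True, "lux": 50})  # condition blocks the action
```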