{"title":"Test Co-Evolution in Software Projects: A Large-Scale Empirical Study","authors":"Charles Miranda, Guilherme Avelino, Pedro Santos Neto","doi":"10.1002/smr.70035","DOIUrl":"https://doi.org/10.1002/smr.70035","url":null,"abstract":"<p>The asynchronous evolution of tests and code can compromise software quality and project longevity. To investigate the impact of test and production code co-evolution, this study analyzes a large-scale dataset of 526 GitHub repositories written in six programming languages: JavaScript, TypeScript, Java, Python, PHP, and C#. We focus on understanding how tests evolve throughout the software lifecycle and the frequency with which production and test code evolve in sync. By applying clustering algorithms and Pearson's correlation coefficient, we identify different patterns of test co-evolution between projects. We found a significant correlation between high test co-evolution and smaller development teams but no significant relationship with the frequency of different maintenance activities (corrective, adaptive, perfective, or multi). Despite this, we identified five distinct test evolution patterns, highlighting diverse approaches to integrating testing practices. 
This work provides valuable insights into the dynamics of test co-evolution and its correlation with software maintainability.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"37 7","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/smr.70035","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144493020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
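The co-evolution analysis described in the abstract above can be illustrated with a small sketch: it computes Pearson's correlation coefficient between per-period production-commit and test-commit counts. The data and variable names are hypothetical, not drawn from the paper's dataset.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly commit counts touching production vs. test code.
prod_commits = [12, 8, 15, 4, 9, 11]
test_commits = [10, 7, 13, 3, 8, 10]

r = pearson(prod_commits, test_commits)
print(f"test/production co-evolution r = {r:.3f}")  # a value near 1.0 means the two evolve in sync
```

A project whose tests lag its production changes would show a much lower `r`; clustering projects by such coefficients is one way to surface the distinct co-evolution patterns the study reports.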
{"title":"Evaluating the Test Adequacy of Benchmarks for LLMs on Code Generation","authors":"Xiangyue Liu, Xiaobing Sun, Lili Bo, Yufei Hu, Xinwei Liu, Zhenlei Ye","doi":"10.1002/smr.70034","DOIUrl":"https://doi.org/10.1002/smr.70034","url":null,"abstract":"<div><p>Code generation from users' intent has become increasingly prevalent with the rise of large language models (LLMs). To automatically evaluate the effectiveness of these models, multiple execution-based benchmarks have been proposed, each comprising specially crafted tasks accompanied by test cases and a ground-truth solution. LLMs are regarded as performing well on code generation if they can pass the test cases corresponding to most tasks in these benchmarks. However, it is unknown whether the test cases have sufficient test adequacy and whether test adequacy affects the evaluation. In this paper, we conducted an empirical study to evaluate the test adequacy of execution-based benchmarks and to explore its effects on the evaluation of LLMs. Based on the evaluation of the widely used benchmarks HumanEval and MBPP and the two enhanced benchmarks HumanEval+ and MBPP+, we obtained the following results: (1) All the evaluated benchmarks have high statement coverage (above 99.16%) but lower branch coverage (74.39%) and mutation score (87.69%). In particular, tasks with higher cyclomatic complexity in HumanEval and MBPP have lower mutation scores. (2) No significant correlation exists between the test adequacy (statement coverage, branch coverage, and mutation score) of benchmarks and the evaluation results on LLMs at the individual task level. 
(3) There is a significant positive correlation between mutation score-based evaluation and another execution-based evaluation metric (<i>AvgPassRatio</i>) on LLMs at the individual task level. (4) Existing test-case augmentation techniques yield limited improvements in the coverage of the benchmarks' test cases but significantly improve the mutation score, by approximately 34.60%, and can bring a more rigorous evaluation of LLMs on code generation. (5) The LLM-based test-case generation technique (EvalPlus) performs better than the traditional search-based technique (Pynguin) in improving the benchmarks' test quality and their ability to evaluate code generation.</p></div>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"37 7","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144482208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
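The two metrics central to the study above, mutation score (a test-adequacy measure) and AvgPassRatio (an execution-based evaluation measure), both reduce to simple ratios. A minimal sketch with illustrative numbers, not taken from the paper:

```python
def mutation_score(killed: int, total_mutants: int) -> float:
    """Fraction of seeded faulty variants (mutants) that at least one test detects."""
    return killed / total_mutants

def avg_pass_ratio(pass_counts, test_totals):
    """Mean, over generated solutions, of the fraction of a task's tests each one passes."""
    ratios = [p / t for p, t in zip(pass_counts, test_totals)]
    return sum(ratios) / len(ratios)

# A task whose suite kills 35 of 40 mutants:
print(mutation_score(35, 40))                 # 0.875
# Three candidate solutions passing 5/5, 3/5, and 4/5 of a task's tests:
print(avg_pass_ratio([5, 3, 4], [5, 5, 5]))   # 0.8
```

Because both metrics grade a solution against the same test suite, a weak suite (low mutation score) inflates AvgPassRatio too, which is consistent with the positive correlation finding (3) reports.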
{"title":"Exploring the Effectiveness of Open-Source Donation Platform: An Empirical Study on Opencollective","authors":"Shuoxiao Zhang, Enyi Tang, Xinyu Gao, Zhekai Zhang, Yixiao Shan, Haofeng Zhang, Ziyang He, Jianhua Zhao, Xuandong Li","doi":"10.1002/smr.70033","DOIUrl":"https://doi.org/10.1002/smr.70033","url":null,"abstract":"<div><p>In recent years, with the development of the open-source community, various open-source donation platforms have emerged. These platforms effectively alleviate the financial pressures faced by open-source projects through diversified funding sources and flexible donation methods. As one of the most representative open-source donation platforms, Opencollective has garnered widespread attention from both the open-source community and academia. Although Opencollective claims to provide more funding opportunities for open-source projects, the extent to which it effectively addresses the financial challenges faced by these projects remains unclear. While there have been studies on the effectiveness of traditional donation models, research on the effectiveness of emerging donation platforms such as Opencollective is still limited. Given that a large number of open-source projects are urgently seeking donations, understanding the effectiveness of donations through Opencollective is crucial for these projects. To address this gap, we take an early step in this direction. This paper conducts a comprehensive study of the effectiveness of donations through Opencollective, employing a combination of quantitative and qualitative analysis, and identifies the following key findings: (1) Opencollective attracts a diverse group of participants, including individual donors, sponsors, contributors, and project managers, with individual donors constituting the largest group. Most donations are concentrated in the range of $5 to $10, indicating that the platform largely relies on small but frequent donations from individuals. 
(2) Only about 26.61% of open-source projects receive donations through Opencollective, with approximately 64.38% of these projects receiving a total donation amount of less than $50,000. The likelihood of receiving donations increases with project scale, maturity, and the number of stars. Among projects that have received donations, larger projects with stronger social media promotion, greater attention, and more issues are more likely to receive additional donations. (3) The positive impact of donations on project development and spending activities is significant only in the short term, with no notable long-term effects. In contrast, donations do not have a significant short-term impact on community engagement. Although the long-term effect is slightly positive, it is not statistically significant. (4) The main shortcomings of Opencollective include insufficient project management and collaboration features, inadequate user experience and interface design, high transaction fees, and a lack of transparency in fund allocation and usage. Our findings provide significant theoretical support and practical recommendations for the effectiveness of emerging donation platforms and the sustainable development of open-source projects.</p></div>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"37 7","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144482206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating Security Controls in DevSecOps: Challenges, Solutions, and Future Research Directions","authors":"Maysa Sinan, Mojtaba Shahin, Iqbal Gondal","doi":"10.1002/smr.70029","DOIUrl":"https://doi.org/10.1002/smr.70029","url":null,"abstract":"<p>Cybersecurity has become a top priority for most organizations seeking to protect their applications. The rapid increase in cyberattacks has necessitated a comprehensive repositioning of how security is implemented within the software development lifecycle (SDLC). Development, Security, Operations (DevSecOps) is one of the fastest-growing development methodologies, promoting shared responsibility for security and automating security practices at every step of the SDLC. DevSecOps is a cultural shift that integrates security controls into DevOps pipelines with the aim of improving overall security. Accordingly, many organizations have started to incorporate security controls into their DevSecOps deployments through continuous practices, for example, automated security testing, infrastructure as code (IaC), compliance as code, and continuous monitoring. This study aims to organize the knowledge and shed light on challenges concerning security controls during the adoption of DevSecOps, along with associated solutions and remediation workarounds reported in the literature. Further, the study aims to provide clear insights into the areas that require further investigation and research in the future. A systematic literature review (SLR) of 45 primary studies was carried out to extract data, and the extracted data were subsequently analyzed using thematic analysis. This paper identifies 19 challenges related to security controls that security practitioners may experience while implementing a DevSecOps model, along with 18 solutions and remediation actions suggested in the literature to address and overcome some of these challenges. 
In addition, some gap areas are identified as opportunities for future research in this domain, with the aim of improving the integration of security controls in a DevSecOps environment. Based on these findings, this paper highlights the importance of automation in software engineering practices, for example, continuous automation, continuous delivery, and continuous feedback, to embed security controls at the early stages of the development process.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"37 6","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/smr.70029","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144244310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
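The "compliance as code" practice mentioned in the abstract above can be made concrete with a toy pipeline gate: required security controls are expressed as an executable check that fails the build instead of waiting for a manual audit. The control names and the pipeline dictionary are illustrative, not from the paper.

```python
# Compliance as code: security controls expressed as an executable check that
# runs inside the pipeline. Control names here are hypothetical examples.
REQUIRED_CONTROLS = {"sast_scan", "dependency_audit", "secrets_scan"}

def check_pipeline(stages: dict) -> list:
    """Return the names of required security controls missing from a pipeline.

    `stages` maps a stage name to whether it is enabled; a non-empty result
    would fail the build in a real CI gate.
    """
    enabled = {name for name, on in stages.items() if on}
    return sorted(REQUIRED_CONTROLS - enabled)

pipeline = {"build": True, "sast_scan": True, "dependency_audit": False}
missing = check_pipeline(pipeline)
print("missing controls:", missing)  # ['dependency_audit', 'secrets_scan']
```

Encoding the policy as data plus a check, rather than prose, is what lets the control be applied automatically at every SDLC step, which is the shift the DevSecOps literature in this review describes.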
{"title":"Improvement of Software Testing Processes With Test Maturity Model Integration","authors":"Gökhan Şit, Süleyman Ersöz, Mehmet Burak Bilgin","doi":"10.1002/smr.70031","DOIUrl":"https://doi.org/10.1002/smr.70031","url":null,"abstract":"<div><p>This study presents a maturity-level determination and assessment method that enables companies in the software industry to perform TMMi Level 2 and 3 assessments in-house, with the goal of improving their testing processes. The method aims to help companies conduct their own self-assessments and improve their testing processes before participating in high-budget audits. Its validity was tested in practice for TMMi Level 2 and 3 assessments. Companies can prepare for a formal TMMi audit by using the test maturity-level determination methodology developed in this study, or they can simply improve their processes to produce higher-quality products. Additionally, if they already hold TMMi certification, they can self-audit regularly to ensure continuity and compliance. Demonstrating how TMMi can be applied in practice and providing a guide to determining test process maturity are the key contributions of this work.</p></div>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"37 5","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144117832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
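TMMi is a staged model: a maturity level counts as reached only when every process area at that level (and every lower level) is sufficiently achieved. A rough sketch of that staged logic follows; the 0.85 threshold and the scoring scheme are assumptions for illustration, not the official TMMi rating rules, though the process-area names are taken from the TMMi model.

```python
# Staged maturity check (illustrative, not the official TMMi rating rules):
# a level is reached only if every process area at that level meets the
# threshold and all lower assessed levels are reached too.
THRESHOLD = 0.85  # assumed "fully achieved" cutoff

def maturity_level(scores: dict) -> int:
    """scores: maturity level -> {process area: achievement ratio in [0, 1]}."""
    level = 1  # TMMi Level 1 ("Initial") has no process areas to satisfy
    for lvl in sorted(scores):
        if all(s >= THRESHOLD for s in scores[lvl].values()):
            level = lvl
        else:
            break
    return level

assessment = {
    2: {"Test Policy and Strategy": 0.92, "Test Planning": 0.88,
        "Test Monitoring and Control": 0.90},
    3: {"Test Organization": 0.70, "Test Training Program": 0.95},
}
print(maturity_level(assessment))  # 2: Level 3 blocked by Test Organization
```

An in-house self-assessment of the kind the paper proposes would feed questionnaire scores into a rule like this to locate the gap before a formal, high-budget audit.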
{"title":"Functional Size Measurement With Conceptual Models: A Systematic Literature Review","authors":"Ala Arman, Emiliano Di Reto, Massimo Mecella, Giuseppe Santucci","doi":"10.1002/smr.70030","DOIUrl":"https://doi.org/10.1002/smr.70030","url":null,"abstract":"<p>The demand for efficient functional size measurement (FSM) methods in today's competitive software market is undeniable. However, incomplete and imprecise system specifications pose significant challenges, particularly in scenarios that require fast, flexible, and accurate software size estimation, such as public tenders. Although the integration of conceptual models within FSM offers a promising solution to these issues, such methods have not yet been systematically explored. This work evaluates FSM methods that integrate conceptual models by analyzing studies from the past 20 years. It highlights key contributions and advances in proposed conceptual model-based FSM methods. In addition, the study examines their limitations and challenges, offering insights for future improvements. A systematic literature review (SLR) was conducted to guide the research process. The review was organized around three research questions, each targeting one of the study's key objectives: (1) to explore FSM methods utilizing conceptual models, (2) to summarize proposals for their improvement, and (3) to identify the limitations of the proposed enhancements. Primary studies span two decades (2004–2024), with peaks in 2008 and 2015, averaging one to two studies annually. Of the 1371 initial studies, 13 were selected using strict criteria. These studies are categorized into <i>Measurement Techniques</i> (30.77%), <i>Automation</i> (38.46%), and <i>Application-Specific</i> topics (30.77%). The contributions of the primary studies are analyzed in terms of their approaches' <i>Repeatability</i> and <i>Validation</i>. 
<i>Repeatability</i> is assessed by examining whether the primary studies proposed a formal model when using real datasets. In contrast, <i>Validation</i> focuses on whether the studies were tested in real-world projects. A total of 46.15% of the primary studies utilize formal models, whereas 53.85% rely on nonformal models, although dataset size is often unspecified. Most studies validate their methods using 1 to 30 projects. The Common Software Measurement International Consortium (COSMIC) method is the most widely used FSM method (69.23%), followed by Function Point Analysis (FPA) (15.38%) and custom methods (15.38%), with conceptual UML models appearing in 84.61% of the studies. Key limitations, including <i>Scalability and Generalizability</i>, <i>Complexity Robustness</i>, and <i>Flexibility</i>, persist across all categories. Notably, <i>Scalability and Generalizability</i> was identified as a limitation in 75% of <i>Measurement Techniques</i> studies, 80% of <i>Automation</i> studies, and 75% of <i>Application-Specific</i> studies, while <i>Flexibility</i> challenges were most pronounced, affecting 100% of <i>Application-Specific</i> studies. The limited number of primary studies underscores a substantial research gap in conceptual model-based FSM methods. 
Future research should focus on developing formal models to enhance theoretical rigor, lever","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"37 5","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/smr.70030","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144108838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
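COSMIC, the most widely used FSM method in the studies above, sizes a functional process by counting its data movements, one COSMIC Function Point (CFP) each, across four types: Entry, Exit, Read, and Write. A minimal counting sketch (the example process is hypothetical, not from any primary study):

```python
from collections import Counter

# COSMIC assigns one COSMIC Function Point (CFP) per identified data movement,
# of four types: Entry, Exit, Read, Write.
VALID_MOVEMENTS = {"Entry", "Exit", "Read", "Write"}

def cosmic_size(movements: list) -> int:
    """Functional size in CFP; `movements` holds one label per data movement."""
    counts = Counter(movements)
    unknown = set(counts) - VALID_MOVEMENTS
    assert not unknown, f"unknown movement types: {unknown}"
    return sum(counts.values())

# A hypothetical "place order" process derived from a conceptual model:
# order arrives (Entry), two lookups (Read), persist (Write), confirm (Exit).
print(cosmic_size(["Entry", "Read", "Read", "Write", "Exit"]))  # 5 CFP
```

The appeal of pairing this with conceptual models, as the reviewed studies do, is that data movements can often be read off UML interactions mechanically, which is what the <i>Automation</i> category targets.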
{"title":"Overcoming Data Shortage in Critical Domains With Data Augmentation for Natural Language Software Requirements","authors":"Robin Korfmann, Patrick Beyersdorffer, Rainer Gerlich, Jürgen Münch, Marco Kuhrmann","doi":"10.1002/smr.70027","DOIUrl":"https://doi.org/10.1002/smr.70027","url":null,"abstract":"<p>Natural language processing (NLP) offers the potential to automate quality assurance of software requirement specifications. In particular, large-scale projects involving numerous suppliers can benefit from this improvement. However, due to privacy restrictions, especially in highly restrictive industries, the availability of software requirements specification documents for training NLP tools is severely limited. Moreover, domain- and project-specific vocabulary, such as in the aerospace domain, requires specialized models for effective processing. To provide a sufficient amount of data to train such models, we studied algorithms for the augmentation of textual data. Four algorithms were investigated by expanding a given set of requirements from European Space projects, generating both correct and incorrect requirements. The initial study yielded data of poor quality due to the particularities of the domain-specific vocabulary, yet it laid the foundation for improving the algorithms, which eventually resulted in an expanded set of requirements 20 times the size of the seed set. A complementing experiment demonstrated the usability of the augmented requirements for supporting AI-based quality assurance of software requirements. 
Furthermore, a targeted improvement of the augmentation algorithms yielded notable quality gains, doubling the number of correctly augmented requirements.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"37 5","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/smr.70027","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143939432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
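The kind of augmentation studied above, expanding a seed requirement into labeled correct and incorrect variants, can be illustrated with one toy rule: swapping the modal verb produces a correct paraphrase, while deleting it produces an incorrect (unverifiable) requirement. This is purely illustrative; the paper's four algorithms are not specified here, and the seed sentence is invented.

```python
import re

def augment(requirement: str):
    """Yield (variant, label) pairs from one seed requirement.

    Toy rule: 'correct' keeps a verifiable modal verb via a synonym swap;
    'incorrect' drops the modal, leaving a statement no test can be written for.
    """
    correct = requirement.replace("shall", "must")        # synonym swap
    incorrect = re.sub(r"\bshall\b\s*", "", requirement)  # modal removed
    return [(correct, "correct"), (incorrect, "incorrect")]

seed = "The onboard software shall log every telemetry packet."
for text, label in augment(seed):
    print(f"[{label}] {text}")
```

Labeled pairs like these are what make the expanded set usable as supervised training data for an NLP quality-assurance classifier.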
{"title":"An Approach Based on Metadata to Implement Convention Over Configuration Decoupled From Framework Logic","authors":"Everaldo Gomes, Eduardo Guerra, Phyllipe Lima, Paulo Meirelles","doi":"10.1002/smr.70028","DOIUrl":"https://doi.org/10.1002/smr.70028","url":null,"abstract":"<p>Frameworks are essential for software development, shaping code design and facilitating reuse for their users. Well-known Java frameworks and APIs rely on metadata configuration through code annotations, using the Reflection API to consume and process them. Code elements that share the same annotations often exhibit similarities, creating the opportunity to use conventions as a metadata source. This paper proposes a model for defining Convention over Configuration (CoC) for annotation usage, decoupled from the metadata-reading logic. With this model, if a convention is present, the framework automatically considers that element to be annotated. We implemented this model in the Esfinge Metadata API and evaluated it in an experiment where participants implemented the CoC pattern using two approaches: our proposed one and the Java Reflection API. As a result, 75% of participants implemented our approach faster than with just the Reflection API, and we observed a higher failure rate with the Reflection API than with the Esfinge API. Moreover, our approach also produced fewer lines of code. 
Based on these results, we confirmed that the proposed approach fulfilled its goal of supporting the definition of conventions decoupled from the framework logic, thereby improving code readability and maintainability.</p>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"37 5","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/smr.70028","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143930498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
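The core idea above, an element counts as "annotated" either through an explicit marker or because it matches a pluggable convention, while the metadata-reading logic stays unaware of which, can be sketched in a language-neutral way. The paper's model targets Java annotations and the Esfinge Metadata API; this Python sketch only mirrors the structure, and all names in it are illustrative.

```python
# Convention over Configuration, decoupled: the reader asks one predicate;
# conventions are plugged in separately from the reading logic.

def entity(cls):
    """Explicit marker, playing the role of a Java annotation."""
    cls._is_entity = True
    return cls

# Conventions live in their own list, so the reader never hard-codes them.
CONVENTIONS = [lambda cls: cls.__name__.endswith("Entity")]

def is_entity(cls) -> bool:
    """Metadata-reading logic: explicit marker first, then any convention."""
    return getattr(cls, "_is_entity", False) or any(c(cls) for c in CONVENTIONS)

@entity
class User:            # explicitly marked
    pass

class OrderEntity:     # picked up by the naming convention, no marker needed
    pass

class Helper:          # neither marked nor matching a convention
    pass

print(is_entity(User), is_entity(OrderEntity), is_entity(Helper))  # True True False
```

Because `is_entity` consults a convention list rather than embedding naming rules, new conventions can be added without touching the reader, which is the decoupling the paper's model formalizes for annotations.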
{"title":"A Novel Vulnerability-Detection Method Based on the Semantic Features of Source Code and the LLVM Intermediate Representation","authors":"Jinfu Chen, Jiapeng Zhou, Wei Lin, Dave Towey, Saihua Cai, Haibo Chen, Jingyi Chen, Yemin Yin","doi":"10.1002/smr.70026","DOIUrl":"https://doi.org/10.1002/smr.70026","url":null,"abstract":"<div><p>With attacks on software systems becoming increasingly frequent, software security is an issue that must be addressed. Within software security, the automated detection of software vulnerabilities is an important subject. Most existing vulnerability detectors rely on the features of a single code type (e.g., source code or intermediate representation [IR]), which may mean that neither the global features of code slices nor memory-operation information are captured. In particular, vulnerability detection based on source-code features usually cannot capture macro or type-definition content. In this paper, we propose a vulnerability-detection method that combines the semantic features of source code and the low-level virtual machine (LLVM) IR. Our proposed approach starts by slicing C/C++ source files using improved slicing techniques to cover more comprehensive code information. It then extracts semantic information from the LLVM IR generated from the executable source code. This enriches the features fed to the artificial neural network (ANN) model for learning. We conducted an experimental evaluation using a publicly available dataset of 11,381 C/C++ programs. The experimental results show that the vulnerability-detection accuracy of our proposed method reaches over 96% for code slices generated according to four different slicing criteria. 
This outperforms most of the compared detection methods.</p></div>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"37 5","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143888815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ECP: Coprocessor Architecture to Protect Program Logic Consistency","authors":"Yang Gao, Siqi Lu, Yongjuan Wang, Haopeng Fan, Qingdi Han, Jingsheng Li","doi":"10.1002/smr.70023","DOIUrl":"https://doi.org/10.1002/smr.70023","url":null,"abstract":"<div><p>Contemporary program protection methods focus on safeguarding either program generation, storage, or execution; however, no unified protection strategy exists for ensuring the security of the full program lifecycle. In this study, we combine the static security of program generation with the dynamic security of process execution and propose a novel program logic consistency security property. An encryption core processing (ECP) architecture is presented that provides coprocessor solutions to protect program logic consistency at the granularity of instructions and data flows. The new authenticated encryption mode in the architecture uses the offset of the program's instructions and data relative to the segment-based address as its encryption parameters. Lightweight cryptographic primitives are adopted to ensure that the hardware burden added by the ECP is limited, especially under ×64 architectures. We prove that the proposed scheme in the ECP architecture satisfies indistinguishability under chosen-plaintext attack and demonstrate the effectiveness of the architecture against various attacks. 
Additionally, a theoretical performance analysis is provided for estimating the overhead introduced by the ECP architecture.</p></div>","PeriodicalId":48898,"journal":{"name":"Journal of Software-Evolution and Process","volume":"37 4","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143865784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
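The key property in the abstract above, binding the segment offset into the authenticated-encryption parameters so that a ciphertext block moved to a different address fails authentication, can be demonstrated with a toy encrypt-then-MAC construction. This is not the ECP scheme: the SHA-256 keystream "cipher" and all names are stand-ins chosen only so the sketch runs on the standard library.

```python
import hashlib
import hmac

def _keystream(key: bytes, offset: int, n: int) -> bytes:
    """Toy keystream bound to the block's segment offset (NOT a real cipher)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(
            key + offset.to_bytes(8, "big") + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, offset: int, block: bytes):
    """Encrypt-then-MAC with the offset as an encryption/authentication parameter."""
    ct = bytes(a ^ b for a, b in zip(block, _keystream(key, offset, len(block))))
    tag = hmac.new(key, offset.to_bytes(8, "big") + ct, hashlib.sha256).digest()
    return ct, tag

def unseal(key: bytes, offset: int, ct: bytes, tag: bytes) -> bytes:
    expect = hmac.new(key, offset.to_bytes(8, "big") + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("block moved or tampered with")  # logic consistency violated
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, offset, len(ct))))

key = b"k" * 32
ct, tag = seal(key, 0x1000, b"mov eax, 1")
print(unseal(key, 0x1000, ct, tag))  # b'mov eax, 1'
```

Attempting `unseal(key, 0x2000, ct, tag)` raises, illustrating why offset-bound parameters stop an attacker from splicing valid encrypted instructions into a different location in the program.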