{"title":"Design smells in multi-language systems and bug-proneness: a survival analysis","authors":"Mouna Abidi, Md Saidur Rahman, Moses Openja, Foutse Khomh","doi":"10.1007/s10664-024-10476-2","DOIUrl":"https://doi.org/10.1007/s10664-024-10476-2","url":null,"abstract":"<p>Modern applications are often developed using a combination of programming languages and technologies. Multi-language systems offer opportunities for code reuse and the possibility to leverage the strengths of multiple programming languages. However, multi-language development may also impede code comprehension and increase maintenance overhead. As a result of this, developers may introduce design smells by making poor design and implementation choices. Studies on mono-language systems suggest that design smells may increase the risk of bugs and negatively impact software quality. However, the impacts of multi-language smells on software quality are still under-investigated. In this paper, we aim to examine the impacts of multi-language smells on software quality, bug-proneness in particular. We performed survival analysis comparing the time until a bug occurrence in files with and without multi-language design smells. To have qualitative insights into the impacts of multi-language design smells on software bug-proneness, we performed topic modeling and manual investigations, to capture the categories and characteristics of bugs that frequently occur in files with multi-language smells. Our investigation shows that (1) files with multi-language smells experience bugs faster than files without those smells, and non-smelly files have hazard rates 87.5% lower than files with smells, (2) files with some specific types of smells experience bugs faster than the other smells, and (3) bugs related to “programming errors”, “external libraries and features support issues”, and “memory issues” are the most dominant types of bugs that occur in files with multi-language smells. Through this study, we aim to raise the awareness of developers about the impacts of multi-language design smells, and help them prioritize maintenance activities.</p>","PeriodicalId":11525,"journal":{"name":"Empirical Software Engineering","volume":"17 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What causes exceptions in machine learning applications? Mining machine learning-related stack traces on Stack Overflow","authors":"Amin Ghadesi, Maxime Lamothe, Heng Li","doi":"10.1007/s10664-024-10499-9","DOIUrl":"https://doi.org/10.1007/s10664-024-10499-9","url":null,"abstract":"<p>Machine learning (ML), including deep learning, has recently gained tremendous popularity in a wide range of applications. However, like traditional software, ML applications are not immune to the bugs that result from programming errors. Explicit programming errors usually manifest through error messages and stack traces. These stack traces describe the chain of function calls that lead to an anomalous situation, or exception. Indeed, these exceptions may cross the entire software stack (including applications and libraries). Thus, studying the ML-related patterns in stack traces can help practitioners and researchers understand the causes of exceptions in ML applications and the challenges faced by ML developers. To that end, we mine Stack Overflow (SO) and study 18, 538 ML-related stack traces related to seven popular Python ML libraries. First, we observe that ML questions that contain stack traces are less likely to get accepted answers than questions that don’t, even though they gain more attention (i.e., more views and comments). Second, we observe that recurrent patterns exist in ML stack traces, even across different ML libraries, with a small portion of patterns covering many stack traces. Third, we derive five high-level categories and 26 low-level types from the stack trace patterns: most patterns are related to <i>model training</i>, <i>python basic syntax</i>, <i>parallelization</i>, <i>subprocess invocation</i>, and <i>external module execution</i>. Furthermore, the patterns related to external dependencies (e.g., file operations) or manipulations of artifacts (e.g., model conversion) are among the least likely to get accepted answers on SO. Our findings provide insights for researchers, ML library developers, and technical forum moderators to better support ML developers in writing error-free ML code. For example, future research can leverage the common patterns of stack traces to help ML developers locate solutions to problems similar to theirs or to identify experts who have experience solving similar patterns of problems. Researchers and ML library developers could prioritize efforts to help ML developers identify misuses of ML APIs, mismatches in data formats, and potential data/resource contentions so that ML developers can better avoid/fix model-related exception patterns, data-related exception patterns, and multi-process-related exception patterns, respectively.</p>","PeriodicalId":11525,"journal":{"name":"Empirical Software Engineering","volume":"1 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141550953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Systematic Evaluation of Deep Learning Models for Log-based Failure Prediction","authors":"Fatemeh Hadadi, Joshua H. Dawes, Donghwan Shin, Domenico Bianculli, Lionel Briand","doi":"10.1007/s10664-024-10501-4","DOIUrl":"https://doi.org/10.1007/s10664-024-10501-4","url":null,"abstract":"<p>With the increasing complexity and scope of software systems, their dependability is crucial. The analysis of log data recorded during system execution can enable engineers to automatically predict failures at run time. Several Machine Learning (ML) techniques, including traditional ML and Deep Learning (DL), have been proposed to automate such tasks. However, current empirical studies are limited in terms of covering all main DL types—Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), and transformer—as well as examining them on a wide range of diverse datasets. In this paper, we aim to address these issues by systematically investigating the combination of log data embedding strategies and DL types for failure prediction. To that end, we propose a modular architecture to accommodate various configurations of embedding strategies and DL-based encoders. To further investigate how dataset characteristics such as dataset size and failure percentage affect model accuracy, we synthesised 360 datasets, with varying characteristics, for three distinct system behavioural models, based on a systematic and automated generation approach. Using the F1 score metric, our results show that the best overall performing configuration is a CNN-based encoder with Logkey2vec. Additionally, we provide specific dataset conditions, namely a dataset size <span>(>350)</span> or a failure percentage <span>(>7.5%)</span>, under which this configuration demonstrates high accuracy for failure prediction.</p>","PeriodicalId":11525,"journal":{"name":"Empirical Software Engineering","volume":"16 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141506565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comprehensive analysis of challenges and strategies for software release notes on GitHub","authors":"Jianyu Wu, Hao He, Kai Gao, Wenxin Xiao, Jingyue Li, Minghui Zhou","doi":"10.1007/s10664-024-10486-0","DOIUrl":"https://doi.org/10.1007/s10664-024-10486-0","url":null,"abstract":"<p>Release notes (RNs) refer to the technical documentation that offers users, developers, and other stakeholders comprehensive information about the changes and updates of a new software version. Producing high-quality RNs can be challenging, and it remains unknown what issues developers commonly encounter and what effective strategies can be adopted to mitigate them. To bridge this knowledge gap, we conduct a manual analysis of 1,529 latest RN-related issues in the GitHub issue tracker by using multiple rounds of open coding and construct 1) a comprehensive taxonomy of RN-related issues with four dimensions validated through three semi-structured interviews; 2) an effective framework with eight categories of strategies to overcome these challenges. The four dimensions of RN-related issues revealed by the taxonomy and the corresponding strategies from the framework include: 1) <i>Content</i> (419, 25.47%): RN producers tend to overlook information rather than include inaccurate details, especially for breaking changes. To address this, effective completeness validations are recommended, such as managing Pull Requests, issues, and commits related to RNs; 2) <i>Presentation</i> (150, 9.12%): inadequate layout may bury important information and lead to end users’ confusion, which can be mitigated by employing a hierarchical structure, standardized format, rendering RNs, and folding techniques; 3) <i>Accessibility</i> (303, 18.42%): many users find RNs inaccessible due to link deterioration, insufficient notification, and obfuscated RN locations. This can be alleviated by adopting appropriate locations and channels (such as project websites) and standardizing link management.; 4) <i>Production</i> (773, 46.99%): despite the high demand from RN producers, automating and standardizing the RN production process remains challenging. Developers resolve this problem by using some mature tools on GitHub (like Release Drafter). Additionally, offering guidance, clarifying responsibilities, and distributing workloads are effective in improving collaboration within the team. Mechanisms for distributing and verifying RNs are also selected to enhance synchronization management. Our taxonomy provides a comprehensive blueprint to improve RN production in practice and also reveals interesting future research directions.</p>","PeriodicalId":11525,"journal":{"name":"Empirical Software Engineering","volume":"60 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141529534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A literature review and existing challenges on software logging practices","authors":"Mohamed Amine Batoun, Mohammed Sayagh, Roozbeh Aghili, Ali Ouni, Heng Li","doi":"10.1007/s10664-024-10452-w","DOIUrl":"https://doi.org/10.1007/s10664-024-10452-w","url":null,"abstract":"<p>Software logging is the practice of recording different events and activities that occur within a software system, which are useful for different activities such as failure prediction and anomaly detection. While previous research focused on improving different aspects of logging practices, the goal of this paper is to conduct a systematic literature review and the existing challenges of practitioners in software logging practices. In this paper, we focus on the logging practices that cover the steps from the instrumentation of a software system, the storage of logs, up to the preprocessing steps that prepare log data for further follow-up analysis. Our systematic literature review (SLR) covers 204 research papers and a quantitative and qualitative analysis of 20,766 and 149 questions on StackOverflow (SO). We observe that 53% of the studies focus on improving the techniques that preprocess logs for analysis (e.g., the practices of log parsing, log clustering and log mining), 37% focus on how to create new log statements, and 10% focus on how to improve log file storage. Our analysis of SO topics reveals that five out of seven identified high-level topics are not covered by the literature and are related to dependency configuration of logging libraries, infrastructure related configuration, scattered logging, context-dependant usage of logs and handling log files.</p>","PeriodicalId":11525,"journal":{"name":"Empirical Software Engineering","volume":"158 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141506568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Engineering recommender systems for modelling languages: concept, tool and evaluation","authors":"Lissette Almonte, Esther Guerra, Iván Cantador, Juan de Lara","doi":"10.1007/s10664-024-10483-3","DOIUrl":"https://doi.org/10.1007/s10664-024-10483-3","url":null,"abstract":"<p>Recommender systems (RSs) are ubiquitous in all sorts of online applications, in areas like shopping, media broadcasting, travel and tourism, among many others. They are also common to help in software engineering tasks, including software modelling, where we are recently witnessing proposals to enrich modelling languages and environments with RSs. Modelling recommenders assist users in building models by suggesting items based on previous solutions to similar problems in the same domain. However, building a RS for a modelling language requires considerable effort and specialised knowledge. To alleviate this problem, we propose an automated, model-driven approach to create RSs for modelling languages. The approach provides a domain-specific language called <span>Droid</span> to configure every aspect of the RS: the type of the recommended modelling elements, the gathering and preprocessing of training data, the recommendation method, and the metrics used to evaluate the created RS. The RS so configured can be deployed as a service, and we offer out-of-the-box integration with Eclipse modelling editors. Moreover, the language is extensible with new data sources and recommendation methods. To assess the usefulness of our proposal, we report on two evaluations. The first one is an offline experiment measuring the precision, completeness and diversity of recommendations generated by several methods. The second is a user study – with 40 participants – to assess the perceived quality of the recommendations. The study also contributes with a novel evaluation methodology and metrics for RSs in model-driven engineering.</p>","PeriodicalId":11525,"journal":{"name":"Empirical Software Engineering","volume":"345 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141506567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OpenSCV: an open hierarchical taxonomy for smart contract vulnerabilities","authors":"Fernando Richter Vidal, Naghmeh Ivaki, Nuno Laranjeiro","doi":"10.1007/s10664-024-10446-8","DOIUrl":"https://doi.org/10.1007/s10664-024-10446-8","url":null,"abstract":"<p>Smart contracts are nowadays at the core of most blockchain systems. Like all computer programs, smart contracts are subject to the presence of residual faults, including severe security vulnerabilities. However, the key distinction lies in how these vulnerabilities are addressed. In smart contracts, when a vulnerability is identified, the affected contract must be terminated within the blockchain, as due to the immutable nature of blockchains, it is impossible to patch a contract once deployed. In this context, research efforts have been focused on proactively preventing the deployment of smart contracts containing vulnerabilities, mainly through the development of vulnerability detection tools. Along with these efforts, several heterogeneous vulnerability classification schemes appeared (e.g., most notably DASP and SWC). At the time of writing, these are mostly outdated initiatives, even though new smart contract vulnerabilities are consistently uncovered. In this paper, we propose OpenSCV, a new and Open hierarchical taxonomy for Smart Contract vulnerabilities, which is open to community contributions and matches the current state of the practice while being prepared to handle future modifications and evolution. The taxonomy was built based on the analysis of the existing research on vulnerability classification, community-maintained classification schemes, and research on smart contract vulnerability detection. We show how OpenSCV covers the announced detection ability of the current vulnerability detection tools and highlight its usefulness in smart contract vulnerability research. To validate OpenSCV, we performed an expert-based analysis wherein we invited multiple experts engaged in smart contract security research to participate in a questionnaire. The feedback from these experts indicated that the categories in OpenSCV are representative, clear, easily understandable, comprehensive, and highly useful. Regarding the vulnerabilities, the experts confirmed that they are easily understandable.</p>","PeriodicalId":11525,"journal":{"name":"Empirical Software Engineering","volume":"174 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141506566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An empirical study of challenges in machine learning asset management","authors":"Zhimin Zhao, Yihao Chen, Abdul Ali Bangash, Bram Adams, Ahmed E. Hassan","doi":"10.1007/s10664-024-10474-4","DOIUrl":"https://doi.org/10.1007/s10664-024-10474-4","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">\u0000<i>Context:</i>\u0000</h3><p>In machine learning (ML) applications, assets include not only the ML models themselves, but also the datasets, algorithms, and deployment tools that are essential in the development, training, and implementation of these models. Efficient management of ML assets is critical to ensure optimal resource utilization, consistent model performance, and a streamlined ML development lifecycle. This practice contributes to faster iterations, adaptability, reduced time from model development to deployment, and the delivery of reliable and timely outputs.</p><h3 data-test=\"abstract-sub-heading\">\u0000<i>Objective:</i>\u0000</h3><p>Despite research on ML asset management, there is still a significant knowledge gap on operational challenges, such as model versioning, data traceability, and collaboration issues, faced by asset management tool users. These challenges are crucial because they could directly impact the efficiency, reproducibility, and overall success of machine learning projects. Our study aims to bridge this empirical gap by analyzing user experience, feedback, and needs from Q &A posts, shedding light on the real-world challenges they face and the solutions they have found.</p><h3 data-test=\"abstract-sub-heading\">\u0000<i>Method:</i>\u0000</h3><p>We examine 15, 065 Q &A posts from multiple developer discussion platforms, including Stack Overflow, tool-specific forums, and GitHub/GitLab. Using a mixed-method approach, we classify the posts into knowledge inquiries and problem inquiries. We then apply BERTopic to extract challenge topics and compare their prevalence. Finally, we use the open card sorting approach to summarize solutions from solved inquiries, then cluster them with BERTopic, and analyze the relationship between challenges and solutions.</p><h3 data-test=\"abstract-sub-heading\">\u0000<i>Results:</i>\u0000</h3><p>We identify 133 distinct topics in ML asset management-related inquiries, grouped into 16 macro-topics, with software environment and dependency, model deployment and service, and model creation and training emerging as the most discussed. 
Additionally, we identify 79 distinct solution topics, classified under 18 macro-topics, with software environment and dependency, feature and component development, and file and directory management as the most proposed.</p><h3 data-test=\"abstract-sub-heading\">\u0000<i>Conclusions:</i>\u0000</h3><p>This study highlights critical areas within ML asset management that need further exploration, particularly around prevalent macro-topics identified as pain points for ML practitioners, emphasizing the need for collaborative efforts between academia, industry, and the broader research community.</p>","PeriodicalId":11525,"journal":{"name":"Empirical Software Engineering","volume":"13 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141506576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
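The topic-extraction step relies on BERTopic's standard fit_transform workflow; a minimal sketch follows, with placeholder documents standing in for the 15,065 mined inquiries. Note that BERTopic needs a corpus of at least a few hundred documents to produce stable topics.

```python
# Sketch: extracting challenge topics from Q&A posts with BERTopic,
# using the library's documented fit_transform API; the documents here
# are placeholders for the mined inquiry posts.
# Requires: pip install bertopic
from bertopic import BERTopic

docs = [
    "MLflow model registry fails to load model version",
    "DVC pull cannot resolve remote storage credentials",
    "Conda environment conflict when deploying a model",
    # ... thousands more mined inquiry posts
]

topic_model = BERTopic(min_topic_size=10, verbose=True)
topics, probs = topic_model.fit_transform(docs)

# Inspect the most prevalent topics and their keyword representations.
print(topic_model.get_topic_info().head())
```
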
{"title":"Can instability variations warn developers when open-source projects boost?","authors":"Rafael Capilla, Victor Salamanca, Alejandro Valdezate, Gregorio Robles","doi":"10.1007/s10664-024-10482-4","DOIUrl":"https://doi.org/10.1007/s10664-024-10482-4","url":null,"abstract":"<p>Although architecture instability has been studied and measured using a variety of metrics, a deeper analysis of which project parts are less stable and how such instability varies over time is still needed. While having more information on architecture instability is, in general, useful for any software development project, it is especially important in Open Source Software (OSS) projects where the supervision of the development process is more difficult to achieve. In particular, we are interested when OSS projects grow from a small controlled environment (i.e., the <i>cathedral</i> phase) to a community-driven project (i.e., the <i>bazaar</i> phase). In such a transition, the project often explodes in terms of software size and number of contributing developers. Hence, the complexity of the newly added features, and the frequency of the commits and files modified may cause significant variations of the instability of the structure of the classes and packages. Consequently, in this article we analyze the instability in OSS projects, especially during that sensitive phase where they become community-driven. Our results show that instability metrics can be easily obtained in such type of transitions. We also observed from our case studies that instability metrics can help finding out the balance between adding new functionality and performing refactoring. As a conclusions we state that instability metrics offer relevant information in the transition phase from the cathedral to the bazaar.</p>","PeriodicalId":11525,"journal":{"name":"Empirical Software Engineering","volume":"31 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141521545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Multi-solution Study on GDPR AI-enabled Completeness Checking of DPAs","authors":"Muhammad Ilyas Azeem, Sallam Abualhaija","doi":"10.1007/s10664-024-10491-3","DOIUrl":"https://doi.org/10.1007/s10664-024-10491-3","url":null,"abstract":"<p>Specifying legal requirements for software systems to ensure their compliance with the applicable regulations is a major concern of requirements engineering. Personal data which is collected by an organization is often shared with other organizations to perform certain processing activities. In such cases, the General Data Protection Regulation (GDPR) requires issuing a data processing agreement (DPA) which regulates the processing and further ensures that personal data remains protected. Violating GDPR can lead to huge fines reaching to billions of Euros. Software systems involving personal data processing must adhere to the legal obligations stipulated both at a general level in GDPR as well as the obligations outlined in DPAs highlighting specific business. In other words, a DPA is yet another source from which requirements engineers can elicit legal requirements. However, the DPA must be complete according to GDPR to ensure that the elicited requirements cover the complete set of obligations. Therefore, checking the completeness of DPAs is a prerequisite step towards developing a compliant system. Analyzing DPAs with respect to GDPR entirely manually is time consuming and requires adequate legal expertise. In this paper, we propose an automation strategy that addresses the completeness checking of DPAs against GDPR provisions as a text classification problem. Specifically, we pursue ten alternative solutions which are enabled by different technologies, namely traditional machine learning, deep learning, language modeling, and few-shot learning. The goal of our work is to empirically examine how these different technologies fare in the legal domain. We computed F<span>(_2)</span> score on a set of 30 real DPAs. Our evaluation shows that best-performing solutions yield F<span>(_2)</span> score of 86.7% and 89.7% are based on pre-trained BERT and RoBERTa language models. Our analysis further shows that other alternative solutions based on deep learning (e.g., BiLSTM) and few-shot learning (e.g., SetFit) can achieve comparable accuracy, yet are more efficient to develop.</p>","PeriodicalId":11525,"journal":{"name":"Empirical Software Engineering","volume":"1 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141521636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}