{"title":"Structural contrastive learning based automatic bug triaging","authors":"Yi Tao, Jie Dai, Lingna Ma, Zhenhui Ren, Fei Wang","doi":"10.1007/s10515-025-00517-5","DOIUrl":"10.1007/s10515-025-00517-5","url":null,"abstract":"<div><p>Bug triaging is crucial for software maintenance, as it matches developers with bug reports they are most qualified to handle. This task has gained importance with the growth of the open-source community. Traditionally, methods have emphasized semantic classification of bug reports, but recent approaches focus on the associations between bugs and developers. Leveraging latent patterns from bug-fixing records can enhance triaging predictions; however, the limited availability of these records presents a significant challenge. This scarcity highlights a broader issue in supervised learning: the inadequacy of labeled data and the underutilization of unlabeled data. To address these limitations, we propose a novel framework named SCL-BT (Structural Contrastive Learning-based Bug Triaging). This framework improves the utilization of labeled heterogeneous associations through edge perturbation and leverages unlabeled homogeneous associations via hypergraph sampling. These processes are integrated with a graph convolutional network backbone to enhance the prediction of associations and, consequently, bug triaging accuracy. Experimental results demonstrate that SCL-BT significantly outperforms existing models on public datasets. Specifically, on the Google Chromium dataset, SCL-BT surpasses the GRCNN method by 18.64<span>(%)</span> in terms of the Top-9 Hit Ratio metric. 
The innovative approach of SCL-BT offers valuable insights for research on automatic bug triaging.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"32 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144073610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
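The abstract above describes SCL-BT's edge perturbation only at a high level. As a rough illustration of the general idea (not the paper's implementation; the function name and graph encoding are hypothetical), contrastive views of the labeled developer-bug association graph can be created by randomly dropping edges, and the two views are then encoded by the GCN backbone for a contrastive loss:

```python
import random

def perturb_edges(edges, drop_rate=0.2, seed=None):
    """Create one augmented 'view' of a developer-bug association graph
    by randomly dropping a fraction of its labeled edges.
    Illustrative sketch only; SCL-BT's actual augmentation may differ."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= drop_rate]

# Two independently perturbed views of the same graph would each be fed
# through the GCN encoder; a contrastive loss then pulls the two
# embeddings of the same node together and pushes other nodes apart.
edges = [("dev1", "bug1"), ("dev1", "bug2"), ("dev2", "bug3")]
view_a = perturb_edges(edges, drop_rate=0.2, seed=1)
view_b = perturb_edges(edges, drop_rate=0.2, seed=2)
```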
{"title":"An empirical study of test case prioritization on the Linux Kernel","authors":"Haichi Wang, Ruiguo Yu, Dong Wang, Yiheng Du, Yingquan Zhao, Junjie Chen, Zan Wang","doi":"10.1007/s10515-025-00522-8","DOIUrl":"10.1007/s10515-025-00522-8","url":null,"abstract":"<div><p>The Linux kernel is a complex and constantly evolving system, where each code change can impact different components of the system. Regression testing ensures that new changes do not affect existing functionality or introduce new defects. However, due to the complexity of the Linux kernel, maintenance remains challenging. While practices like Continuous Integration (CI) facilitate rapid commits through automated regression testing, each CI process still incurs substantial costs due to the extensive number of test cases. Traditional software testing employs test case prioritization (TCP) techniques to prioritize test cases, thus enabling the early detection of defects. Due to the unique characteristics of the Linux kernel, it remains unclear whether the existing TCP techniques are suitable for its regression testing. In this paper, we present the first empirical study by comparing various TCP techniques in Linux kernel context. Specifically, we examined a total of 17 TCP techniques, including similarity-based, information-retrieval-based, and coverage-based techniques. The experimental results demonstrate that: (1) Similarity-based TCP techniques perform best on the Linux kernel, achieving a mean APFD (Average Percentage of Faults Detected) value of 0.7583 and requiring significantly less time; (2) The majority of TCP techniques show relatively stable performance across multiple commits, where similarity-based TCP techniques are more stable with a maximum decrease of 3.03% and 3.92% in terms of mean and median APFD values, respectively; (3) More than half of the studied techniques are significantly affected by flaky tests, with both mean and median APFD values ranging from -29.9% to -63.5%. 
This work takes the first look at the adoption of TCP techniques in the Linux kernel, confirming their potential for effective and efficient prioritization.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"32 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143938562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
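APFD, the metric reported throughout the study above, has a standard closed form: APFD = 1 − (ΣTFᵢ)/(n·m) + 1/(2n), where n is the number of tests, m the number of faults, and TFᵢ the rank of the first test revealing fault i. A minimal sketch (test and fault names are illustrative):

```python
def apfd(order, faults):
    """Average Percentage of Faults Detected for a prioritized test order.

    order:  list of test ids in prioritized execution order.
    faults: mapping fault id -> set of test ids that detect it.
    APFD = 1 - (sum of first-detection ranks) / (n * m) + 1 / (2n)
    """
    position = {t: i + 1 for i, t in enumerate(order)}  # 1-indexed ranks
    n, m = len(order), len(faults)
    first_detect = sum(min(position[t] for t in tests) for tests in faults.values())
    return 1.0 - first_detect / (n * m) + 1.0 / (2 * n)

# A prioritization that runs fault-revealing tests early scores higher:
good = apfd(["t1", "t3", "t2", "t4", "t5"], {"fA": {"t1"}, "fB": {"t3"}})
bad = apfd(["t5", "t4", "t2", "t3", "t1"], {"fA": {"t1"}, "fB": {"t3"}})
```

Here `good` evaluates to 0.8 and `bad` to 0.2, matching the intuition that earlier fault detection yields a higher APFD.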
{"title":"iALBMAD: an improved agile-based layered approach for mobile app development","authors":"Anil Patidar, Ugrasen Suman","doi":"10.1007/s10515-025-00520-w","DOIUrl":"10.1007/s10515-025-00520-w","url":null,"abstract":"<div><p>The demand to acquire improved efficiency, agility, and adaptability led to rapid evolution in mobile app development (MAD). Agile approaches are recognized for being cooperative and iterative, but there are still issues in handling a range of MAD necessities. The objective here is to blend the best practices of several prominent agile approaches and non-agile approaches to form an innovative and improved MAD approach, which we refer to as the improved Agile and Lean-based MAD Approach (iALBMAD), and this approach was the improved upon our previous work, ALBMAD. Here, three aspects of improvement concerning the discovery of suitable app attributes and best practices at various MAD activities and strengthening requirement gathering activities are exploited. For this to be accomplished, first we determined different app attributes that affect the MAD, agile and non-agile best practices, and machine learning (ML) functioning in MAD from the accessible literature. Now, we have equipped ALBMAD with all these gained aspects as per their applicability and offered it to 18 MAD experts to obtain suggestions for its improvement. Considering the experts’ opinions, a three-layered approach, namely, iALBMAD, was developed. In iALBMAD, automation and an iterative cycle are established to meet finished needs; these revisions may boost the quality of requirements and minimize time. Specific and experts validated best practices and app attributes suitable for each activity of iALBMAD are offered, which will assist less-skilled developers. Thirteen users verified the usability of six teams’ apps created using three different approaches, and the results show that the iALBMAD performs better than other approaches. 
The suggested approach and these findings will provide valuable insights for individuals and MAD firms aiming to improve their MAD practices.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"32 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143930134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Knowledge-guided large language models are trustworthy API recommenders","authors":"Hongwei Wei, Xiaohong Su, Weining Zheng, Wenxing Tao, Hailong Yu, Yuqian Kuang","doi":"10.1007/s10515-025-00518-4","DOIUrl":"10.1007/s10515-025-00518-4","url":null,"abstract":"<div><p><b>A</b>pplication <b>P</b>rogramming <b>I</b>nterface (API) recommendation aims to recommend APIs for developers that meet their functional requirements, which can compensate for developers’ lack of API knowledge. In team-based software development, developers often need to implement functionality based on specific interface parameter types predefined by the software architect. Therefore, we propose <b>API</b> <b>R</b>ecommendation under specific <b>I</b>nterface <b>P</b>arameter Types (APIRIP), a special variant of the API recommendation task that requires the recommended APIs to conform to the interface parameter types. To realize APIRIP, we enlist the support of <b>L</b>arge <b>L</b>anguage <b>M</b>odels (LLMs). However, LLMs are susceptible to the phenomenon known as hallucination, wherein they may recommend untrustworthy API sequences. Instances of this include recommending fictitious APIs, APIs whose calling conditions cannot be satisfied, or API sequences that fail to conform to the interface parameter types. To mitigate these issues, we propose a <b>K</b>nowledge-<b>g</b>uided framework <b>for</b> <b>LLM</b>-based API Recommendation (KG4LLM), which incorporates knowledge-guided data augmentation and beam search. The core idea of KG4LLM is to leverage API knowledge derived from the <b>J</b>ava <b>D</b>evelopment <b>K</b>it (JDK) documentation to enhance the trustworthiness of LLM-generated recommendations. 
Experimental results demonstrate that KG4LLM can improve the trustworthiness of recommendations provided by LLMs and outperform advanced LLMs in the APIRIP task.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"32 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143913818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
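The KG4LLM abstract above combines beam search with knowledge-derived validity checks to prune untrustworthy API sequences. A generic sketch of that combination (the scoring and validity functions here are illustrative stand-ins; the paper's actual decoding operates over LLM outputs and JDK-derived knowledge):

```python
def beam_search(expand, score, is_valid, beam_width=3, max_len=4):
    """Generic beam search over API sequences.

    expand(seq)   -> candidate next APIs for a partial sequence
    score(seq, a) -> score contribution of appending api a
    is_valid(seq) -> knowledge check: reject sequences with fictitious
                     APIs, unmet calling conditions, or type mismatches
    """
    beams = [([], 0.0)]
    for _ in range(max_len):
        expanded = [
            (seq + [api], total + score(seq, api))
            for seq, total in beams
            for api in expand(seq)
            if is_valid(seq + [api])  # prune untrustworthy expansions
        ]
        if not expanded:
            break  # every beam is finished or pruned
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams

# Toy run: two candidate APIs per step, sequences capped at length 2.
expand = lambda seq: ["List.of", "Map.of"] if len(seq) < 2 else []
score = lambda seq, api: 1.0 if api == "List.of" else 0.5
beams = beam_search(expand, score, lambda seq: True, beam_width=2)
```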
{"title":"A comparative study between android phone and TV apps","authors":"Yonghui Liu, Xiao Chen, Yue Liu, Pingfan Kong, Tegawendé F. Bissyandé, Jacques Klein, Xiaoyu Sun, Li Li, Chunyang Chen, John Grundy","doi":"10.1007/s10515-025-00514-8","DOIUrl":"10.1007/s10515-025-00514-8","url":null,"abstract":"<div><p>Smart TVs have surged in popularity, leading developers to create TV versions of mobile apps. Understanding the relationship between TV and mobile apps is key to building consistent, secure, and optimized cross-platform experiences while addressing TV-specific SDK challenges. Despite extensive research on mobile apps, TV apps have been given little attention, leaving the relationship between phone and TV apps unexplored. Our study addresses this gap by compiling an extensive collection of 3445 Android phone/TV app pairs from the Google Play Store, launching the first comparative analysis of its kind. We examined these pairs across multiple dimensions, including non-code elements, code structure, security, and privacy aspects. Our findings reveal that while these app pairs could get identified with the same package names, they deploy different artifacts with varying functionality across platforms. TV apps generally exhibit less complexity in terms of hardware-dependent features and code volume but maintain significant shared resource files and components with their phone versions. Interestingly, some categories of TV apps show similar or even severe security and privacy concerns compared to their mobile counterparts. 
This research aims to assist developers and researchers in understanding phone-TV app relationships, highlight domain-specific concerns necessitating TV-specific tools, and provide insights for migrating apps from mobile to TV platforms.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"32 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143904672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving prompt tuning-based software vulnerability assessment by fusing source code and vulnerability description","authors":"Jiyu Wang, Xiang Chen, Wenlong Pei, Shaoyu Yang","doi":"10.1007/s10515-025-00525-5","DOIUrl":"10.1007/s10515-025-00525-5","url":null,"abstract":"<div><p>To effectively allocate resources for vulnerability remediation, it is crucial to prioritize vulnerability fixes based on vulnerability severity. With the increasingnumber of vulnerabilities in recent years, there is an urgent need for automated methods for software vulnerability assessment (SVA). Most of the previous SVA studies mainly rely on traditional machine learning methods. Recently, fine-tuning pre-trained language models has emerged as an intuitive method for improving performance. However, there is a gap between pre-training and fine-tuning, and their performance heavily depends on the dataset’s quality of the downstream task. Therefore, we propose a prompt tuning-based method PT-SVA. Different from the fine-tuning paradigm, the prompt-tuning paradigm involves adding prompts to make the training process similar to pre-training, thereby better adapting to downstream tasks. Moreover, previous research aimed to automatically predict severity by only analyzing either the vulnerability descriptions or the source code of the vulnerability. Therefore, we further consider both types of vulnerability information for designing hybrid prompts (i.e., a combination of hard and soft prompts). To evaluate PT-SVA, we construct the SVA dataset based on the CVSS V3 standard, while previous SVA studies only consider the CVSS V2 standard. Experimental results show that PT-SVA outperforms ten state-of-the-art SVA baselines, such as by 13.7% to 42.1% in terms of MCC. Finally, our ablation experiments confirm the effectiveness of PT-SVA’s design, specifically in replacing fine-tuning with prompt tuning, incorporating both types of vulnerability information, and adopting hybrid prompts. 
Our results indicate that prompt tuning-based SVA is a promising direction that merits more follow-up studies.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"32 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143900652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
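MCC, the headline metric in the PT-SVA abstract above, is the Matthews correlation coefficient. The paper's task is multi-class severity prediction, where a generalized MCC applies; the binary form shown here conveys the idea:

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient (binary case): ranges from -1 to 1,
    is robust to class imbalance, and is 0 for a random predictor."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

A perfect classifier (no false positives or negatives) scores 1.0; one that inverts every label scores -1.0, which is why MCC is a stricter summary than accuracy on imbalanced severity classes.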
{"title":"A systematic mapping study on automated negotiation for autonomous intelligent systems","authors":"Mashal Afzal Memon, Gian Luca Scoccia, Marco Autili","doi":"10.1007/s10515-025-00515-7","DOIUrl":"10.1007/s10515-025-00515-7","url":null,"abstract":"<div><p>Autonomous intelligent systems are known as artificial intelligence software entities that can act on their own and can take decisions without any human intervention. The communication between such systems to reach an agreement for problem-solving is known as automated negotiation. This study aims to systematically identify and analyze the literature on automated negotiation from four distinct viewpoints: (1) the existing literature on negotiation with focus on automation, (2) the specific purpose and application domain of the studies published in the domain of automated negotiation, (3) the input, and techniques used to model the negotiation process, and (4) the limitations of the state of the art and future research directions. For this purpose, we performed a systematic mapping study (SMS) starting from 73,760 potentially relevant studies belonging to 24 conference proceedings and 22 journal issues. Through a precise selection procedure, we identified 50 primary studies, published from the year 2000 onward, which were analyzed by applying a classification framework. As a result, we provide: (a) a classification framework to analyze the automated negotiation literature according to several parameters (e.g., focus of the paper, inputs required to carry on the negotiation process, techniques applied, and type of agents involved in the negotiation), (b) an up-to-date map of the literature specifying the purpose and application domain of each study, (c) a list of techniques used to automate the negotiation process and the list of input to carry out the negotiation, and (d) a discussion about promising challenges and their consequences for future research. 
We also provide a replication package to help researchers replicate and verify our systematic mapping study. The results and findings will benefit researchers and practitioners in identifying the research gap and conducting further research to bring dedicated solutions for automated negotiation.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"32 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10515-025-00515-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143896678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ExtRep: a GUI test repair method for mobile applications based on test-extension","authors":"Yonghao Long, Yuanyuan Chen, Chu Zeng, Xiangping Chen, Xing Chen, Xiaocong Zhou, Jingru Yang, Gang Huang, Zibin Zheng","doi":"10.1007/s10515-025-00513-9","DOIUrl":"10.1007/s10515-025-00513-9","url":null,"abstract":"<div><p>GUI testing ensures the software quality and user experience in the ever-changing mobile application development. Using test scripts is one of the main GUI testing manner, but it might be obsolete when the GUI changes with the app’s evolution. Current studies often rely on textual or visual similarity to perform test repair, but may be less effective when the interacted event sequence changes dramatically. In the interaction design, practitioners often provide multiple entry points to access the same function to gain higher openness and flexibility, which indicates that there may be multiple routes for reference in test repair. To evaluate the feasibility, we first conducted an exploratory study on 37 tests from 18 apps. The result showed that over 81% tests could be represented with alternative event paths, and using the extended paths could help enhance the test replay rate. Based on this finding, we propose a test-<b>ext</b>ension-based test <b>rep</b>air algorithm named <i>ExtRep</i>. The method first uses test-extension to find alternative paths with similar test objectives based on feature coverage, and then finds repaired result with the help of sequence transduction probability proposed in NLP area. Experiments conducted on 40 popular applications demonstrate that <i>ExtRep</i> can achieve a success rate of 73.68% in repairing 97 tests, which significantly outperforms current approaches <span>Water</span>, <span>Meter</span>, and <span>Guider</span>. Moreover, the test-extension approach displays immense potential for optimizing test repairs. 
A tool implementing <i>ExtRep</i> is available for practical use and future research.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"32 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143875443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Context-aware prompting for LLM-based program repair","authors":"Yingling Li, Muxin Cai, Junjie Chen, Yang Xu, Lei Huang, Jianping Li","doi":"10.1007/s10515-025-00512-w","DOIUrl":"10.1007/s10515-025-00512-w","url":null,"abstract":"<div><p>Automated program repair (APR) plays a crucial role in ensuring the quality of software code, as manual bug-fixing is extremely time-consuming and labor-intensive. Traditional APR tools (e.g., template-based approaches) face the challenge of generalizing to different bug patterns, while deep learning (DL)-based methods heavily rely on training datasets and struggle to fix unseen bugs. Recently, large language models (LLMs) have shown great potential in APR due to their ability to generate patches, having achieved promising results. However, their effectiveness is still constrained by the casually-determined context (e.g., being unable to adaptively select the specific context according to the situation of each defect). Therefore, a more effective APR approach is highly needed, which provides more precise and comprehensive context for the given defect to enhance the robustness of LLM-based APRs. In this paper, we propose a context-aware APR approach named <b>CodeCorrector</b>, which designs a Chain-of-Thought (CoT) approach to follow developers’ program repair behaviors. Given a failing test and its buggy file, CodeCorrector first analyzes why the test fails based on the failure message to infer repair direction; then selects the relevant context information to this repair direction; finally builds the context-aware repair prompt to guide LLMs for patch generation. Our motivation is to offer a novel perspective for enhancing LLM-based program repair through context-aware prompting, which adaptively selects specific context for a given defect. 
The evaluation on the widely-used Defects4J (i.e., v1.2 and v2.0) benchmark shows that overall, by executing a small number of repairs (i.e., as few as ten rounds), CodeCorrector outperforms all the state-of-the-art baselines on the more complex defects in Defects4J v2.0 and the defects without fine-grained defect localization information in Defects4J v1.2. Specifically, a total of 38 defects are fixed by only CodeCorrector. We further analyze the contributions of two core components (i.e., repair directions, global context selection) to the performance of CodeCorrector, especially repair directions, which improve CodeCorrector by 112% in correct patches and 78% in plausible patches on Defects4J v1.2. Moreover, CodeCorrector generates more valid and correct patches, achieving a 377% improvement over the base LLM GPT-3.5 and a 268% improvement over GPT-4.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"32 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143848996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
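The CodeCorrector abstract above outlines a pipeline of failure analysis, repair-direction inference, context selection, and prompt construction. A hypothetical sketch of only the final prompt-assembly step (the function name, prompt wording, and field layout are illustrative; the paper does not publish its exact template):

```python
def build_repair_prompt(failure_message, repair_direction, context_snippets, buggy_method):
    """Assemble a context-aware repair prompt for an LLM.
    Illustrative structure only, not CodeCorrector's actual template."""
    context = "\n\n".join(context_snippets)  # context chosen per the repair direction
    return (
        "A test failed with the following message:\n"
        f"{failure_message}\n\n"
        f"Inferred repair direction: {repair_direction}\n\n"
        "Context selected as relevant to this direction:\n"
        f"{context}\n\n"
        "Buggy method:\n"
        f"{buggy_method}\n\n"
        "Generate a patched version of the buggy method."
    )

prompt = build_repair_prompt(
    "NullPointerException at Foo.bar(Foo.java:12)",
    "guard against a null argument",
    ["class Foo { int bar(Obj o) { ... } }"],
    "int bar(Obj o) { return o.x; }",
)
```

The key design point echoed here is that the context block is selected per defect (guided by the inferred repair direction) rather than fixed for every bug.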
{"title":"UI2HTML: utilizing LLM agents with chain of thought to convert UI into HTML code","authors":"Dawei Yuan, Guocang Yang, Tao Zhang","doi":"10.1007/s10515-025-00509-5","DOIUrl":"10.1007/s10515-025-00509-5","url":null,"abstract":"<div><p>The exponential growth of the internet has led to the creation of over 1.11 billion active websites, with approximately 252,000 new sites emerging daily. This burgeoning landscape underscores a pressing need for rapid and diverse website development, particularly to support advanced functionalities like Web3 interfaces and AI-generated content platforms. Traditional methods that manually convert visual designs into functional code are not only time-consuming but also error-prone, especially challenging for non-experts. In this paper, we introduce “UI2HTML” an innovative system that harnesses the capabilities of Web Real-Time Communication and Large Language Models (LLMs) to convert website layout designs into functional user interface (UI) code. The UI2HTML system employs a sophisticated divide-and-conquer approach, augmented by Chain of Thought reasoning, to enhance the processing and accurate analysis of UI designs. It efficiently captures real-time video and audio inputs from product managers via mobile devices, utilizing advanced image processing algorithms like OpenCV to extract and categorize UI elements. This rich data, complemented by audio descriptions of UI components, is processed by backend cloud services employing Multimodal Large Language Models (MLLMs). These AI agents interpret the multimodal data to generate requirement documents and initial software architecture drafts, effectively automating the translation of webpage designs into executable code. Our comprehensive evaluation demonstrates that UI2HTML significantly outperforms existing methods in terms of visual similarity and functional accuracy through extensive testing across real-world datasets and various MLLM configurations. 
By offering a robust solution for the automated generation of UI code from screenshots, UI2HTML sets a new benchmark in the field, particularly beneficial in today’s fast-evolving digital environment.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"32 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143845614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}