{"title":"SmartUpdater: Enabling Transparent, Automated, and Secure Maintenance of Stateful Smart Contracts","authors":"Xiaoli Zhang;Yiqiao Song;Yuefeng Du;Chengjun Cai;Hongbing Cheng;Ke Xu;Qi Li","doi":"10.1109/TSE.2025.3548730","DOIUrl":"10.1109/TSE.2025.3548730","url":null,"abstract":"Smart contracts in the Ethereum system are stored in a tamper-resistant manner, which complicates the maintenance needed to offer new functionalities or fix security vulnerabilities. Previous contract maintenance approaches mainly focus on logic modification using delegatecall-based patterns. While popular, they fail to handle data state updates (such as storage layout changes), leading to impracticality and security risks in real-world applications. To address these challenges, this paper introduces SmartUpdater, a novel toolchain designed for transparent, automated, and secure maintenance of stateful smart contracts. SmartUpdater employs a hyperproxy-based contract maintenance pattern, where the hyperproxy serves as a constant entry point and ensures that any state/logic modifications remain transparent to end users. SmartUpdater automates the maintenance process in terms of development streamlining, gas cost efficiency, and state migration verifiability. In extensive evaluations, we show that SmartUpdater reduces gas consumption in contract maintenance compared with existing maintenance approaches. The evaluations highlight the potential of SmartUpdater to significantly simplify the maintenance process for developers.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 4","pages":"1266-1283"},"PeriodicalIF":6.5,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
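The hyperproxy pattern described above can be sketched in plain Python to show the core idea: a constant entry point whose logic and storage layout can be swapped while callers remain unaffected. This is a minimal illustrative sketch under assumed names (TokenV1, TokenV2, Proxy), not SmartUpdater's actual Solidity artifacts.

```python
# Conceptual sketch (not the paper's tooling): a proxy keeps a stable entry
# point while its logic and storage layout are replaced/migrated in place.
# All class and function names here are hypothetical illustrations.

class TokenV1:
    layout = ["balances"]
    def transfer(self, storage, sender, receiver, amount):
        balances = storage["balances"]
        assert balances.get(sender, 0) >= amount, "insufficient balance"
        balances[sender] = balances.get(sender, 0) - amount
        balances[receiver] = balances.get(receiver, 0) + amount

class TokenV2:
    layout = ["balances", "frozen"]          # storage layout change
    def transfer(self, storage, sender, receiver, amount):
        assert sender not in storage["frozen"], "sender is frozen"
        TokenV1.transfer(self, storage, sender, receiver, amount)

def migrate_v1_to_v2(storage):
    """State migration executed once during the upgrade."""
    storage.setdefault("frozen", set())
    return storage

class Proxy:
    """Constant entry point: callers never change how they invoke the contract."""
    def __init__(self, logic, storage):
        self.logic, self.storage = logic, storage
    def call(self, method, *args):
        return getattr(self.logic, method)(self.storage, *args)
    def upgrade(self, new_logic, migration=None):
        if migration:
            self.storage = migration(self.storage)
        self.logic = new_logic

proxy = Proxy(TokenV1(), {"balances": {"alice": 10}})
proxy.call("transfer", "alice", "bob", 3)
proxy.upgrade(TokenV2(), migrate_v1_to_v2)   # logic + state change, same entry point
proxy.call("transfer", "alice", "bob", 1)
print(proxy.storage)
```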
{"title":"SecureFalcon: Are We There Yet in Automated Software Vulnerability Detection With LLMs?","authors":"Mohamed Amine Ferrag;Ammar Battah;Norbert Tihanyi;Ridhi Jain;Diana Maimuţ;Fatima Alwahedi;Thierry Lestable;Narinderjit Singh Thandi;Abdechakour Mechri;Merouane Debbah;Lucas C. Cordeiro","doi":"10.1109/TSE.2025.3548168","DOIUrl":"10.1109/TSE.2025.3548168","url":null,"abstract":"Software vulnerabilities can cause numerous problems, including crashes, data loss, and security breaches. These issues greatly compromise quality and can negatively impact the market adoption of software applications and systems. Traditional bug-fixing methods, such as static analysis, often produce false positives. While bounded model checking, a form of Formal Verification (FV), can provide more accurate outcomes than static analyzers, it demands substantial resources and significantly hinders developer productivity. Can Machine Learning (ML) achieve accuracy comparable to FV methods and be used in popular instant code completion frameworks in near real-time? In this paper, we introduce SecureFalcon, an innovative model architecture with only 121 million parameters, derived from the Falcon-40B model and explicitly tailored for classifying software vulnerabilities. To achieve the best performance, we trained our model using two datasets, namely the FormAI dataset and FalconVulnDB. FalconVulnDB is a combination of recent public datasets, namely the SySeVR framework, Draper VDISC, Bigvul, Diversevul, SARD Juliet, and ReVeal datasets. These datasets cover the top 25 most dangerous software weaknesses, such as CWE-119, CWE-120, CWE-476, CWE-122, CWE-190, CWE-121, CWE-78, CWE-787, CWE-20, and CWE-762. SecureFalcon achieves 94% accuracy in binary classification and up to 92% in multiclassification, with instant CPU inference times. It outperforms existing models such as BERT, RoBERTa, CodeBERT, and traditional ML algorithms, promising to push the boundaries of software vulnerability detection and instant code completion frameworks.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 4","pages":"1248-1265"},"PeriodicalIF":6.5,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143569515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
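As a rough illustration of how such a fine-tuned vulnerability classifier could be queried (not the authors' released code; the checkpoint id below is hypothetical), a sequence-classification model can label a C snippet in a single forward pass:

```python
# Minimal sketch of querying a fine-tuned classifier for vulnerability detection,
# in the style described in the abstract. The checkpoint name is hypothetical;
# the real SecureFalcon weights and label mapping may differ.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "your-org/securefalcon-121m"   # hypothetical model id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

code = """
void copy(char *src) {
    char buf[8];
    strcpy(buf, src);   /* potential CWE-120 buffer overflow */
}
"""

inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
label = logits.argmax(dim=-1).item()        # assumed mapping: 0 = not vulnerable, 1 = vulnerable
print("vulnerable" if label == 1 else "not vulnerable")
```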
{"title":"Improving Retrieval-Augmented Deep Assertion Generation via Joint Training","authors":"Quanjun Zhang;Chunrong Fang;Yi Zheng;Ruixiang Qian;Shengcheng Yu;Yuan Zhao;Jianyi Zhou;Yun Yang;Tao Zheng;Zhenyu Chen","doi":"10.1109/TSE.2025.3545970","DOIUrl":"10.1109/TSE.2025.3545970","url":null,"abstract":"Unit testing attempts to validate the correctness of the basic units of the software system under test and plays a crucial role in software development and testing. However, testing experts have to spend substantial effort to write unit test cases manually. Very recent work proposes a retrieve-and-edit approach to automatically generate unit test oracles, i.e., assertions. Despite being promising, it is still far from perfect due to some limitations, such as splitting assertion retrieval and generation into two separate components that cannot benefit from each other. In this paper, we propose AG-RAG, a retrieval-augmented automated assertion generation (AG) approach that leverages external codebases and joint training to address various technical limitations of prior work. Inspired by the plastic surgery hypothesis, AG-RAG attempts to combine relevant unit tests and advanced pre-trained language models (PLMs) with retrieval-augmented fine-tuning. The key insight of AG-RAG is to simultaneously optimize the retriever and the generator as a whole pipeline with a joint training strategy, enabling them to learn from each other. In particular, AG-RAG builds a dense retriever to search for relevant test-assert pairs (TAPs) with semantic matching and a retrieval-augmented generator to synthesize accurate assertions with the focal-test and retrieved TAPs as input. Besides, AG-RAG leverages a code-aware language model, CodeT5, as the cornerstone to facilitate both the assertion retrieval and generation tasks. Furthermore, AG-RAG designs a joint training strategy that allows the retriever to learn from the feedback provided by the generator. This unified design fully adapts both components specifically for retrieving more useful TAPs, thereby generating accurate assertions. AG-RAG is a generic framework that can be adapted to various off-the-shelf PLMs. We extensively evaluate AG-RAG against six state-of-the-art AG approaches on two benchmarks and three metrics. Experimental results show that AG-RAG significantly outperforms previous AG approaches on all benchmarks and metrics, e.g., improving the most recent baseline EditAS by 20.82% and 26.98% in terms of accuracy. AG-RAG also correctly generates 1739 and 2866 unique assertions that all baselines fail to generate, 3.45X and 9.20X more than EditAS. We further demonstrate the positive contribution of our joint training strategy, e.g., AG-RAG improving a variant without the retriever by an average accuracy of 14.11%. Besides, adopting other PLMs can provide substantial advancement, e.g., AG-RAG with four different PLMs improving EditAS by an average accuracy of 9.02%, highlighting the generalizability of our framework. Overall, our work demonstrates the promising potential of jointly fine-tuning the PLM-based retriever and generator to predict accurate assertions by incorporating external knowledge sources, thereby reducing the manual effort of unit testing.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 4","pages":"1232-1247"},"PeriodicalIF":6.5,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143506985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
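The retrieve-then-generate flow described in the abstract can be sketched as follows, using the public Salesforce/codet5-base checkpoint as a stand-in; a real AG-RAG system would jointly fine-tune both the retriever and the generator, and the toy TAP corpus and prompt format here are illustrative assumptions.

```python
# Sketch of the retrieve-then-generate flow (not the authors' code): a dense
# retriever picks the most similar test-assert pair (TAP), which is appended
# to the focal-test before generation.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

name = "Salesforce/codet5-base"
tok = AutoTokenizer.from_pretrained(name)
gen = T5ForConditionalGeneration.from_pretrained(name)

def embed(text):
    """Mean-pooled encoder embedding used as a simple dense retriever."""
    ids = tok(text, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        enc = gen.encoder(**ids).last_hidden_state
    return enc.mean(dim=1).squeeze(0)

corpus = [  # toy TAP corpus standing in for the external codebase
    "testAdd: int r = calc.add(2, 3); <ASSERT> assertEquals(5, r);",
    "testPop: stack.push(1); int v = stack.pop(); <ASSERT> assertEquals(1, v);",
]
corpus_emb = torch.stack([embed(t) for t in corpus])

focal_test = "testSub: int r = calc.sub(7, 3); <ASSERT>"
query = embed(focal_test)
scores = torch.nn.functional.cosine_similarity(query.unsqueeze(0), corpus_emb)
retrieved = corpus[int(scores.argmax())]

prompt = focal_test + " </s> " + retrieved      # focal-test + retrieved TAP
ids = tok(prompt, return_tensors="pt", truncation=True)
out = gen.generate(**ids, max_new_tokens=32)    # base model; needs fine-tuning for real use
print(tok.decode(out[0], skip_special_tokens=True))
```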
{"title":"Robotic Visual GUI Testing for Truly Non-Intrusive Test Automation of Touch Screen Applications","authors":"Ju Qian;Guizhou Lv;Yiming Jin;Zhengyu Shang;Shuoyan Yan;Yan Wang;Lin Chen","doi":"10.1109/TSE.2025.3544441","DOIUrl":"10.1109/TSE.2025.3544441","url":null,"abstract":"Test automation intrusive to the devices under test is difficult to apply on closed or uncommon touch screen systems, e.g., a Switch game console or a digital instrument running a self-defined operating system. There is a lack of non-intrusive test automation techniques for situations where intrusive testing is impossible or not easy to apply. This paper presents RoScript, a novel robotic visual GUI testing system for truly non-intrusive test automation of touch screen applications. RoScript expresses GUI actions in visual test scripts and executes them via a physical robot. A key innovation of RoScript is a test engine armed with environment calibration techniques to achieve automated test execution without manually setting any environment parameter or adjusting the robot arms for a new subject under test. Additionally, two complementary computer vision-based methods are also introduced to record test scripts from videos of human actions on a touch screen. The RoScript test automation does not rely on the internal system of a device under test, making it truly non-intrusive and suitable for touch screen applications running on almost any platform. We evaluated RoScript on a diverse range of devices--including three Android/iOS phones, a Windows tablet, a Linux-based Raspberry Pi, a GoPro camera, and a Switch game console--across over 1100 GUI actions in 160 test scenarios. The results demonstrate RoScript’s high accuracy in test execution: 94% for executing test scripts and 97% for replicating GUI actions. Furthermore, RoScript accurately recorded about 85% of human touch screen actions into test code. These results highlight RoScript’s potential as a truly non-intrusive, cross-platform solution for GUI test automation.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 4","pages":"1205-1231"},"PeriodicalIF":6.5,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143470760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
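The visual side of such non-intrusive testing can be approximated with off-the-shelf template matching; the sketch below locates a widget in a camera frame and computes a tap point, with the robot driver and pixel-to-world calibration left as hypothetical placeholders (RoScript's actual pipeline calibrates these automatically).

```python
# Illustrative sketch of the vision step in robotic GUI testing: find a widget
# in a photographed screen via template matching, then hand coordinates to a
# robot driver. Image files and the RobotArm driver are assumed inputs.
import cv2

screen = cv2.imread("screen_photo.png", cv2.IMREAD_GRAYSCALE)   # camera frame of the device
widget = cv2.imread("login_button.png", cv2.IMREAD_GRAYSCALE)   # visual-script asset

result = cv2.matchTemplate(screen, widget, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)
if score < 0.8:
    raise RuntimeError(f"widget not found (best score {score:.2f})")

h, w = widget.shape
center = (top_left[0] + w // 2, top_left[1] + h // 2)  # pixel center of the match

# A calibration step would map image pixels to robot workspace coordinates.
# robot = RobotArm(port="/dev/ttyUSB0")        # hypothetical driver
# robot.tap(*pixel_to_world(center))           # hypothetical calibration mapping
print("tap at pixel", center)
```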
{"title":"Automated Co-Evolution of Metamodels and Code","authors":"Zohra Kaouter Kebaili;Djamel Eddine Khelladi;Mathieu Acher;Olivier Barais","doi":"10.1109/TSE.2025.3540545","DOIUrl":"10.1109/TSE.2025.3540545","url":null,"abstract":"Context. In Software Engineering, Model-Driven Engineering (MDE) is a methodology that considers metamodels as a cornerstone. As an abstract artifact, a metamodel plays a significant role in the specification of a software language, particularly in generating other artifacts of a lower abstraction level, such as code. Developers then enrich the generated code to build their language services and tooling, e.g., editors and checkers. Problem. When a metamodel evolves, the generated code is automatically updated. As a consequence, the developers’ additional code is impacted and needs to be co-evolved accordingly. Contribution. This paper proposes a new, fully automatic code co-evolution approach that follows the evolution of the Ecore metamodel. The approach relies on pattern matching of the errors in the additional code. This process analyzes the abstraction gap between the evolved metamodel elements and the code errors in order to co-evolve them. Evaluation and Results. We evaluated our approach on nine Eclipse projects from OCL, Modisco, and Papyrus over several evolved versions of three metamodels. Results show that we automatically co-evolved 771 errors due to metamodel evolution, with 631 matched and applied resolutions. Our approach reached an average precision of 82% and an average recall of 81%, ranging from 48% to 100% for precision and recall, respectively. To check the effect of the co-evolution and its behavioral correctness, we rely on generated test cases before and after co-evolution. We observed that the percentage of passing, failing, and erroneous tests remained the same, with insignificant variations in some projects, thus suggesting the behavioral correctness of the co-evolution. Moreover, we conducted a comparison with quick fixes, the usual tool for correcting code errors in an IDE. We found that our automatic co-evolution approach outperforms quick fixes, which lack the context of the metamodel evolution. Finally, we also compared our approach with the state-of-the-art semi-automatic co-evolution approach. As expected, precision and recall are slightly better with semi-automation, but at the cost of manual intervention, which our automatic co-evolution alleviates.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 4","pages":"1067-1085"},"PeriodicalIF":6.5,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143462505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
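The error-pattern-matching idea can be illustrated with a toy resolver that maps a compiler error caused by a metamodel rename to a textual resolution; the patterns, error message, and rename table below are hypothetical and far simpler than the paper's Ecore-based approach.

```python
# Toy sketch of pattern matching over errors in the developers' additional code:
# map a compiler error caused by a metamodel evolution (here, a renamed element)
# to a resolution. Patterns, errors, and the rename table are hypothetical.
import re

renames = {"getName": "getIdentifier"}   # from an assumed metamodel evolution trace

patterns = [
    # e.g., "The method getName() is undefined for the type Person"
    (re.compile(r"The method (\w+)\(\) is undefined for the type (\w+)"),
     lambda m: f"replace call {m.group(1)}() with {renames[m.group(1)]}() in uses of {m.group(2)}"
     if m.group(1) in renames else None),
]

def resolve(error_message):
    for pattern, resolution in patterns:
        m = pattern.search(error_message)
        if m and (fix := resolution(m)):
            return fix
    return "no automatic resolution (manual co-evolution needed)"

print(resolve("The method getName() is undefined for the type Person"))
```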
{"title":"Understanding Security Issues in the DAO Governance Process","authors":"Junjie Ma;Muhui Jiang;Jinan Jiang;Xiapu Luo;Yufeng Hu;Yajin Zhou;Qi Wang;Fengwei Zhang","doi":"10.1109/TSE.2025.3543280","DOIUrl":"10.1109/TSE.2025.3543280","url":null,"abstract":"The Decentralized Autonomous Organization (DAO) has emerged as a popular governance solution for decentralized applications (dApps), enabling them to manage their members across the world. This structure ensures that no single entity can arbitrarily control the dApp without approval from the majority of members. However, despite their advantages, DAOs face several challenges within their governance processes that can compromise their integrity and potentially lead to the loss of dApp assets. In this paper, we first provided an overview of the DAO governance process within the blockchain. Next, we identified issues within 3 key components of the governance process: the Governance Contract, Documentation, and Proposal. Regarding the Governance Contract, malicious developers could embed backdoors or malicious code to manipulate the governance process. In terms of Documentation, inadequate or unclear documentation from developers may prevent members from effectively participating, increasing the risk of undetected governance attacks or enabling a small group of members to dominate the process. Lastly, with Proposals, members could submit malicious proposals with embedded malicious code in an attempt to gain control of the DAO. To address these issues, we developed automated methods to detect such vulnerabilities. To investigate the prevalence of these issues within the current DAO ecosystem, we constructed a state-of-the-art dataset that includes 3,348 DAOs, 144 pieces of documentation, and 65,436 proposals across 9 different blockchains. Our analysis reveals that many DAO developers and members have not given sufficient attention to these issues. For the Governance Contract, 176 DAOs allow external entities to control their governance contracts, while one DAO permits developers to arbitrarily change the contract's logic. In terms of Documentation, only 71 DAOs provide adequate guidance for their members on governance processes. As for Proposals, over 90% of the examined proposals (32,500) fail to provide consistent descriptions and code for their members, highlighting a significant gap in transparency within the DAO governance process. For a better DAO governance ecosystem, DAO developers and members can utilize these methods to identify and address issues within the governance process.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 4","pages":"1188-1204"},"PeriodicalIF":6.5,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
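One of the automated checks described above, flagging proposals whose on-chain actions are inconsistent with their descriptions, might look roughly like the following; the proposal structure, field names, and addresses are hypothetical, not the paper's dataset schema.

```python
# Illustrative sketch of a proposal consistency check: flag proposals whose
# on-chain actions reference target addresses never mentioned in the
# human-readable description. All data below is made-up example input.
def inconsistent_targets(proposal):
    described = proposal["description"].lower()
    return [action["target"] for action in proposal["actions"]
            if action["target"].lower() not in described]

proposal = {
    "description": "Transfer 100 tokens from the treasury to the grants multisig "
                   "0xAbC0000000000000000000000000000000000001.",
    "actions": [
        {"target": "0xAbC0000000000000000000000000000000000001", "calldata": "0xa9059cbb..."},
        {"target": "0xDeF0000000000000000000000000000000000002", "calldata": "0x095ea7b3..."},  # undisclosed target
    ],
}

flagged = inconsistent_targets(proposal)
if flagged:
    print("suspicious: undisclosed call targets", flagged)
```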
{"title":"SoapFL: A Standard Operating Procedure for LLM-Based Method-Level Fault Localization","authors":"Yihao Qin;Shangwen Wang;Yiling Lou;Jinhao Dong;Kaixin Wang;Xiaoling Li;Xiaoguang Mao","doi":"10.1109/TSE.2025.3543187","DOIUrl":"10.1109/TSE.2025.3543187","url":null,"abstract":"Fault Localization (FL) is an essential step during the debugging process. With their strong code-comprehension capabilities, recent Large Language Models (LLMs) have demonstrated promising performance in diagnosing bugs in code. Nevertheless, due to LLMs’ limited performance in handling long contexts, existing LLM-based fault localization remains restricted to localizing bugs within a small code scope (i.e., a method or a class) and struggles to diagnose bugs in a large code scope (i.e., an entire software system). To address this limitation, this paper presents SoapFL, which builds an LLM-driven standard operating procedure (SOP) to automatically localize buggy methods in the entire software system. By simulating the behavior of a human developer, SoapFL models the FL task as a three-step process involving comprehension, navigation, and confirmation. Within specific steps, SoapFL provides useful test behavior or coverage information to the LLM through program analysis. In particular, we adopt a series of auxiliary strategies, such as Test Behavior Tracking, Document-Guided Search, and Multi-Round Dialogue, to overcome the challenges in each step. The evaluation on the widely used Defects4J-V1.2.0 benchmark shows that SoapFL can localize 175 out of 395 bugs within Top-1, outperforming the other LLM-based approaches and exhibiting complementarity to state-of-the-art learning-based techniques. Additionally, we confirm the indispensability of the components in SoapFL with an ablation study and demonstrate the usability of SoapFL through a user study. Finally, the cost analysis shows that SoapFL spends an average of only 0.081 dollars and 92 seconds on a single bug.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 4","pages":"1173-1187"},"PeriodicalIF":6.5,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
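A high-level sketch of the comprehension, navigation, and confirmation steps is shown below; ask_llm is a hypothetical stand-in for any chat-model client, and the prompts are simplified rather than the paper's actual SOP prompts.

```python
# High-level sketch of a comprehension -> navigation -> confirmation workflow
# for method-level fault localization. `ask_llm` and the prompts are
# hypothetical placeholders, not SoapFL's implementation.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def localize(bug_report, failing_tests, covered_classes, methods_of):
    # 1) Comprehension: summarize the failure from the report and test behavior.
    summary = ask_llm(f"Summarize the likely fault given:\n{bug_report}\n{failing_tests}")

    # 2) Navigation: narrow down suspicious classes, then collect their covered methods.
    classes = ask_llm(f"{summary}\nWhich of these covered classes look suspicious?\n{covered_classes}")
    candidates = [m for cls in classes.split(",") for m in methods_of(cls.strip())]

    # 3) Confirmation: rank candidate methods, possibly over multiple dialogue rounds.
    ranking = ask_llm(f"{summary}\nRank these methods by suspiciousness:\n{candidates}")
    return ranking
```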
{"title":"What Makes a Great Software Quality Assurance Engineer?","authors":"Roselane Silva Farias;Iftekhar Ahmed;Eduardo Santana de Almeida","doi":"10.1109/TSE.2025.3542763","DOIUrl":"10.1109/TSE.2025.3542763","url":null,"abstract":"Software Quality Assurance (SQA) Engineers play a critical role in evaluating products throughout the software development lifecycle to ensure that the outcomes of each phase and the final product possess the desired quality standards. In general, a great SQA engineer requires a different set of abilities from development engineers to effectively oversee the entire product development process. While recent empirical studies have explored the attributes of software engineers and managers, the quality assurance role is overlooked. As software quality gains increasing priority in the development cycles, both employers seeking skilled professionals and new graduates aspiring to excel in Software Quality Assurance (SQA) roles face a critical question: What makes a great SQA Engineer? To address this gap, we conducted 25 semi-structured interviews and surveyed 363 SQA engineers from diverse companies worldwide. We use the data collected from these activities to derive a comprehensive set of attributes for great SQA Engineers, categorized into five key areas: personal, social, technical, management, and decision-making attributes. Among these, curiosity, effective communication, and critical thinking emerged as defining characteristics of great SQA engineers. These findings offer valuable insights for future research with SQA practitioners, contextual considerations, and practical implications for research and practice.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 4","pages":"1153-1172"},"PeriodicalIF":6.5,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trustworthy Distributed Certification of Program Execution","authors":"Alex Wolf;Marco Edoardo Palma;Pasquale Salza;Harald C. Gall","doi":"10.1109/TSE.2025.3541810","DOIUrl":"10.1109/TSE.2025.3541810","url":null,"abstract":"Verifying the execution of a program is complicated and often limited by the inability to validate the code's correctness. It is a crucial aspect of scientific research, where it is needed to ensure the reproducibility and validity of experimental results. Similarly, in customer software testing, it is difficult for customers to verify that their specific program version was tested or executed at all. Existing state-of-the-art solutions, such as hardware-based approaches, constraint solvers, and verifiable computation systems, do not provide definitive proof of execution, which hinders reliable testing and analysis of program results. In this paper, we propose an innovative approach that combines a prototype programming language called Mona with a certification protocol, OCCP, to enable the distributed and decentralized re-execution of program segments. Our protocol allows for the certification of program segments in a distributed, immutable, and trustworthy system without the need for naive re-execution, resulting in significant improvements in terms of time and computational resources used. We also explore the use of blockchain technology to manage the protocol workflow, following other approaches in this space. Our approach offers a promising solution to the challenges of program execution verification and opens up opportunities for further research and development in this area. Our findings demonstrate the efficiency of our approach in reducing the number of program executions by up to 20-fold, while maintaining resilience against various malicious attacks compared to existing state-of-the-art methods, thus improving the efficiency of certifying program executions. Additionally, our approach handles up to 40% of malicious workers effectively, showcasing resilience in detecting and mitigating malicious behavior. In the EquivalentRegistersAttack scenario, it successfully identifies divergent executions even when register values and results appear identical. Moreover, our findings highlight improvements in time and gas efficiency for longer-running problems (scaled with a multiplier of 1,000) compared to baseline methods. Specifically, adopting an informed step size reduces execution time by up to 43-fold and gas costs by up to 12-fold compared to the baseline. Similarly, the informed step size approach reduces execution time by up to 6-fold and gas costs by up to 26-fold compared to a non-informed variation using a step size of 1,000.","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"51 4","pages":"1134-1152"},"PeriodicalIF":6.5,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143417835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
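The segment-wise certification idea can be illustrated with a toy deterministic interpreter: a prover publishes states at checkpoint boundaries, and a worker certifies one segment by re-executing only that slice and comparing states. The program, interpreter, and step size below are hypothetical simplifications of the Mona/OCCP setup.

```python
# Toy sketch of segment-wise certification without full naive re-execution:
# checkpoints split the run into segments that distributed workers can verify
# independently. Everything here is a hypothetical simplification.
def run(program, state, start, end):
    """Deterministically execute steps [start, end) of a simple accumulator program."""
    acc = dict(state)
    for i in range(start, end):
        op, arg = program[i]
        acc["x"] = acc["x"] + arg if op == "add" else acc["x"] * arg
    return acc

program = [("add", 3), ("mul", 2), ("add", 5), ("mul", 4), ("add", 1), ("mul", 2)]
step = 2                                            # stand-in for an "informed" step size
checkpoints = [{"x": 0}]
for s in range(0, len(program), step):              # prover publishes intermediate states
    checkpoints.append(run(program, checkpoints[-1], s, min(s + step, len(program))))

def certify(segment_index):
    """A worker re-executes one segment and checks the claimed end state."""
    start, end = segment_index * step, min((segment_index + 1) * step, len(program))
    recomputed = run(program, checkpoints[segment_index], start, end)
    return recomputed == checkpoints[segment_index + 1]

print(all(certify(i) for i in range(len(checkpoints) - 1)))
```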