{"title":"Defending against Poisoning Backdoor Attacks on Federated Meta-learning","authors":"Chien-Lun Chen, Sara Babakniya, Marco Paolieri, L. Golubchik","doi":"10.1145/3523062","DOIUrl":"https://doi.org/10.1145/3523062","url":null,"abstract":"Federated learning allows multiple users to collaboratively train a shared classification model while preserving data privacy. This approach, where model updates are aggregated by a central server, was shown to be vulnerable to poisoning backdoor attacks: a malicious user can alter the shared model to arbitrarily classify specific inputs from a given class. In this article, we analyze the effects of backdoor attacks on federated meta-learning, where users train a model that can be adapted to different sets of output classes using only a few examples. While the ability to adapt could, in principle, make federated learning frameworks more robust to backdoor attacks (when new training examples are benign), we find that even one-shot attacks can be very successful and persist after additional training. To address these vulnerabilities, we propose a defense mechanism inspired by matching networks, where the class of an input is predicted from the similarity of its features with a support set of labeled examples. 
Removing the decision logic from the model shared with the federation greatly reduces the success and persistence of backdoor attacks.","PeriodicalId":123526,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology (TIST)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124219197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
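The matching-networks decision rule that inspires this defense can be sketched in a few lines: a query input is classified by attention-weighted similarity of its features to a labeled support set, rather than by decision logic baked into the shared model. This is a minimal illustration of the general technique, not the paper's implementation; all names are illustrative.

```python
import numpy as np

def matching_predict(query_feat, support_feats, support_labels):
    """Predict the class of a query from cosine similarity to a
    labeled support set (the matching-networks decision rule)."""
    q = query_feat / np.linalg.norm(query_feat)
    s = support_feats / np.linalg.norm(support_feats, axis=1, keepdims=True)
    sims = s @ q                                 # cosine similarity to each support example
    weights = np.exp(sims) / np.exp(sims).sum()  # softmax attention over the support set
    classes = np.unique(support_labels)
    scores = [weights[support_labels == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]
```

Because the support set stays local, a poisoned shared feature extractor alone cannot dictate the final label, which is the intuition behind the defense.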
{"title":"Introduction to the Special Issue on the Federated Learning: Algorithms, Systems, and Applications: Part 1","authors":"Qian Yang, Yongxin Tong, Yang Liu, Yangqiu Song, Hao Peng, B. Faltings","doi":"10.1145/3514223","DOIUrl":"https://doi.org/10.1145/3514223","url":null,"abstract":"We are delighted to present this special issue on Federated Learning: Algorithms, Systems, and Applications. Federated learning enables multiple clients to collaboratively learn a shared model while the data remains distributed across clients instead of in centralized storage. It allows governments and businesses to design lower-latency and less power-consuming models while ensuring data privacy, which is crucial for the development of systems and applications such as healthcare systems, the Internet of Vehicles, and smart cities. Since stricter regulations on privacy and security exacerbate the data fragmentation and isolation problem, where data holders are unwilling to share, or prohibited from sharing, their raw data freely, emerging frameworks based on federated learning are required to solve these problems. The purpose of this special issue is to provide a forum for researchers and practitioners to present their latest research findings and engineering experiences in the theoretical foundations, empirical studies, and novel applications of federated learning for next-generation intelligent systems. This special issue consists of two parts. In Part 1, the guest editors selected 16 contributions that cover varying topics within this theme, ranging from algorithmic-cryptographic co-designed deep neural networks to personalized humor recognition. Zhou et al. in “Towards Scalable and Privacy-preserving Deep Neural Network via Algorithmic-cryptographic Co-design” proposed a scalable and privacy-preserving deep neural network learning framework designed from an algorithmic-cryptographic co-perspective, which improves both computational efficiency and privacy. Antunes et al. 
in “Federated Learning for Healthcare: Systematic Review and Architecture Proposal” presented a systematic literature review of current research on federated learning (FL) in the context of electronic health record data for healthcare applications. They highlighted some proposed solutions and the respective machine learning methods and discussed a general architecture for FL in healthcare. Liu et al. in “Federated Social Recommendation with Graph Neural Network” proposed a novel federated social recommendation framework based on graph neural networks, which is capable of handling heterogeneity and protecting user privacy in social recommendation. Jiang et al. in “Federated Dynamic Graph Neural Networks with Secure Aggregation for Video-based Distributed Surveillance” introduced Federated Dynamic Graph Neural Network (Feddy), a distributed and secure framework to learn object representations from graph sequences, which leverages the strengths of graph neural networks and is trained in a federated learning manner.","PeriodicalId":123526,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology (TIST)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134389429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preface to Federated Learning: Algorithms, Systems, and Applications: Part 2","authors":"Qian Yang, Yongxin Tong, Yang Liu, Yangqiu Song, Hao Peng, B. Faltings","doi":"10.1145/3536420","DOIUrl":"https://doi.org/10.1145/3536420","url":null,"abstract":"We are delighted to present this special issue on Federated Learning: Algorithms, Systems, and Applications. Federated learning (FL) enables multiple clients to collaboratively learn a shared model while the data remains distributed across clients instead of in centralized storage. It allows governments and businesses to design lower-latency and less power-consuming models while ensuring data privacy, which is crucial for the development of systems and applications such as healthcare systems, the Internet of Vehicles (IoV), and smart cities. Since stricter regulations on privacy and security exacerbate the data fragmentation and isolation problem, where data holders are unwilling to share, or prohibited from sharing, their raw data freely, emerging frameworks based on federated learning are required to solve these problems. The purpose of this special issue is to provide a forum for researchers and practitioners to present their latest research findings and engineering experiences in the theoretical foundations, empirical studies, and novel applications of federated learning for next-generation intelligent systems. This special issue consists of two parts. In Part 2, the guest editors selected 11 contributions that cover varying topics within this theme, ranging from privacy-aware IoV service deployment with federated learning in cloud-edge computing to federated multi-task graph learning. This part presents the latest progress in federated learning, which may suggest new directions for your research. You can also learn the basic ideas and methods of federated learning and draw inspiration from how these authors approach their problems. 
In addition, the structure and figure layouts of these articles may provide a template for your own papers. Xu et al., in “PSDF: Privacy-Aware IoV Service Deployment with Federated Learning in Cloud-Edge Computing,” propose a method for privacy-aware IoV service deployment with federated learning that copes with the dynamic service deployment problem for IoV in cloud-edge computing while protecting the privacy of edge servers. Zhong et al., in “FLEE: A Hierarchical Federated Learning Framework for Distributed Deep Neural Network over Cloud, Edge and End Device,” comprehensively consider the various data distributions on end devices and edges, proposing a hierarchical federated learning framework, FLEE, which can realize dynamic model updates without redeploying models. Dang et al., in “Federated Learning for Electronic Health Records,” survey existing work on FL applications to electronic health records (EHRs) and evaluate the performance of current state-of-the-art FL algorithms on two EHR machine learning tasks of significant clinical importance, using a real-world multi-center EHR dataset.","PeriodicalId":123526,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology (TIST)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133242734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PSDF: Privacy-aware IoV Service Deployment with Federated Learning in Cloud-Edge Computing","authors":"Xiaolong Xu, Wentao Liu, Yulan Zhang, Xuyun Zhang, Wanchun Dou, Lianyong Qi, Md Zakirul Alam Bhuiyan","doi":"10.1145/3501810","DOIUrl":"https://doi.org/10.1145/3501810","url":null,"abstract":"Through the collaboration of cloud and edge, cloud-edge computing allows the edge, located close to end-users, to take over the cloud's non-computationally-intensive service processing, reducing communication overhead and satisfying the low-latency requirements of the Internet of Vehicles (IoV). With cloud-edge computing, computing tasks in IoV can be delivered to edge servers (ESs) instead of the cloud and rely on the services deployed on ESs for processing. Due to the limited storage and computing resources of ESs, how to dynamically deploy a subset of services to the edge remains an open problem. Moreover, service deployment decisions often require the transmission of local service requests from ESs to the cloud, which increases the risk of privacy leakage. In this article, a method for privacy-aware IoV service deployment with federated learning in cloud-edge computing, named PSDF, is proposed. Technically, federated learning secures the distributed training of the deployment decision network on each ES through the exchange and aggregation of model weights, avoiding transmission of the original data. Meanwhile, homomorphic encryption is applied to the uploaded weights before model aggregation on the cloud. Besides, a service deployment scheme based on deep deterministic policy gradient is proposed. 
Finally, the performance of PSDF is evaluated through extensive experiments.","PeriodicalId":123526,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology (TIST)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127412669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
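The aggregation step that PSDF protects can be illustrated with plain FedAvg-style weighted averaging of model weights; in PSDF the uploaded weights are homomorphically encrypted before this step, which is omitted here for brevity. The function name and setup are illustrative, not from the paper.

```python
import numpy as np

def aggregate(weights_list, sample_counts):
    """FedAvg-style aggregation: average client model weights,
    weighted by each client's local sample count. In PSDF the
    uploaded weights would be homomorphically encrypted first;
    here they are combined in the clear for illustration."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(weights_list, sample_counts))
```

The same weighted sum carries over to the encrypted setting because additive homomorphic schemes support exactly these linear combinations on ciphertexts.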
{"title":"Modeling Continuous Time Sequences with Intermittent Observations using Marked Temporal Point Processes","authors":"Vinayak Gupta, Srikanta J. Bedathur, Sourangshu Bhattacharya, A. De","doi":"10.1145/3545118","DOIUrl":"https://doi.org/10.1145/3545118","url":null,"abstract":"A large fraction of data generated via human activities, such as online purchases, health records, and spatial mobility, can be represented as a sequence of events in continuous time. Training deep learning models on these continuous-time event sequences is non-trivial, as it involves modeling the ever-increasing event timestamps, inter-event time gaps, event types, and the influences between events within and across different sequences. In recent years, neural enhancements to marked temporal point processes (MTPP) have emerged as a powerful framework to model the underlying generative mechanism of asynchronous events localized in continuous time. However, most existing models and inference methods in the MTPP framework consider only the complete-observation scenario, i.e., the event sequence being modeled is fully observed with no missing events – an ideal setting that is rarely applicable in real-world applications. A recent line of work that considers missing events while training MTPPs relies on supervised learning techniques, which require a missing-or-observed label for each event in a sequence; this further restricts practicability, as in many scenarios the details of missing events are not known a priori. In this work, we provide a novel unsupervised model and inference method for learning MTPPs in the presence of event sequences with missing events. Specifically, we first model the generative processes of observed events and missing events using two MTPPs, where the missing events are represented as latent random variables. 
Then, we devise an unsupervised training method that jointly learns both MTPPs by means of variational inference. Such a formulation can effectively impute the missing data among the observed events, which in turn enhances its predictive prowess, and can identify the optimal positions of missing events in a sequence. Experiments with eight real-world datasets show that our model, IMTPP, outperforms the state-of-the-art MTPP frameworks for event prediction and missing-data imputation, and provides stable optimization.","PeriodicalId":123526,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology (TIST)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133976554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
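For orientation, the simplest member of the MTPP family is a homogeneous Poisson process with i.i.d. categorical marks: constant intensity, mark distribution independent of history. The neural MTPPs in the paper generalize this by making both history-dependent. A minimal sketch, with illustrative names:

```python
import numpy as np

def sample_mtpp(rate, mark_probs, t_max, rng):
    """Sample a marked temporal point process with constant intensity
    (a homogeneous Poisson process) and i.i.d. categorical marks,
    the simplest special case of the MTPP framework."""
    t, events = 0.0, []
    marks = np.arange(len(mark_probs))
    while True:
        t += rng.exponential(1.0 / rate)   # inter-event gap ~ Exp(rate)
        if t > t_max:
            break
        events.append((t, int(rng.choice(marks, p=mark_probs))))
    return events
```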
{"title":"AggEnhance: Aggregation Enhancement by Class Interior Points in Federated Learning with Non-IID Data","authors":"Jinxiang Ou, Yunheng Shen, Feng Wang, Qiao Liu, Xuegong Zhang, Hairong Lv","doi":"10.1145/3544495","DOIUrl":"https://doi.org/10.1145/3544495","url":null,"abstract":"Federated learning (FL) is a privacy-preserving paradigm for multi-institutional collaborations, where aggregation is an essential procedure after training on the local datasets. Conventional aggregation algorithms often apply a weighted average of the updates generated by distributed machines to update the global model. However, when the data distributions are non-IID, the large discrepancy between local updates can lead to a poor averaged result and slower convergence, i.e., more iterations are required to achieve a given performance. To solve this problem, this article proposes a novel method named AggEnhance for enhancing the aggregation, in which we synthesize a group of reliable samples from the local models and tune the aggregated result on them. These samples, named class interior points (CIPs) in this work, bound the relevant decision boundaries and thereby ensure the performance of the aggregated result. To the best of our knowledge, this is the first work to explicitly design an enhancement method for the aggregation step in prevailing FL pipelines. A series of experiments on real data demonstrate that our method noticeably improves convergence in non-IID scenarios. In particular, our approach reduces the number of iterations by 31.87% on average on the CIFAR10 dataset and by 43.90% on the PASCAL VOC dataset. Since our method does not modify other procedures of FL pipelines, it is easy to apply to most existing FL frameworks. 
Furthermore, it does not require additional data to be transmitted from the local clients to the global server, thus retaining the same security level as the original FL algorithms.","PeriodicalId":123526,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology (TIST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131393691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
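The AggEnhance idea of averaging local models and then tuning the result on synthesized samples can be sketched for a linear classifier. CIP synthesis itself is the paper's contribution and is not reproduced here; the synthetic points below are simply assumed given, and all names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def aggregate_and_tune(local_ws, cips_x, cips_y, lr=0.5, steps=100):
    """Average the local linear-classifier weights, then tune the result
    on a small set of synthesized 'class interior points' (CIPs), in the
    spirit of AggEnhance. cips_x / cips_y stand in for the output of the
    paper's CIP synthesis step, which is not reproduced here."""
    w = np.mean(local_ws, axis=0)          # plain FedAvg-style aggregation
    for _ in range(steps):                 # logistic-regression tuning on CIPs
        p = sigmoid(cips_x @ w)
        w -= lr * cips_x.T @ (p - cips_y) / len(cips_y)
    return w
```

The point of the sketch: when two divergent local models average to a useless global model, a few reliable boundary-constraining samples are enough to pull the averaged weights back to a working decision boundary.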
{"title":"Steering-by-example for Progressive Visual Analytics","authors":"Marius Hogräfer, M. Angelini, G. Santucci, Hans-Jörg Schulz","doi":"10.1145/3531229","DOIUrl":"https://doi.org/10.1145/3531229","url":null,"abstract":"Progressive visual analytics allows users to interact with early, partial results of long-running computations on large datasets. In this context, computational steering is often brought up as a means to prioritize the progressive computation. This is meant to focus computational resources on data subspaces of interest so as to ensure their computation is completed before all others. Yet, current approaches to select a region of the view space and then to prioritize its corresponding data subspace either require a one-to-one mapping between view and data space, or they need to establish and maintain computationally costly index structures to trace complex mappings between view and data space. We present steering-by-example, a novel interactive steering approach for progressive visual analytics, which allows prioritizing data subspaces for the progression by generating a relaxed query from a set of selected data items. Our approach works independently of the particular visualization technique and without additional index structures. 
First benchmark results show that steering-by-example considerably improves Precision and Recall for prioritizing unprocessed data for a selected view region, clearly outperforming random uniform sampling.","PeriodicalId":123526,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology (TIST)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116463977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
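One plausible reading of "generating a relaxed query from a set of selected data items" is a per-attribute range query widened by a slack factor, used to move matching unprocessed items to the front of the progression. A sketch under that assumption (not the authors' exact query-relaxation scheme; names are illustrative):

```python
import numpy as np

def relaxed_query(selected, slack=0.1):
    """Build a relaxed range query (per-attribute min/max, widened by
    `slack` times the attribute span) from the selected data items."""
    lo, hi = selected.min(axis=0), selected.max(axis=0)
    span = hi - lo
    return lo - slack * span, hi + slack * span

def prioritize(unprocessed, query):
    """Reorder unprocessed items so those matching the query come first,
    steering the progression toward the user's selection."""
    lo, hi = query
    inside = np.all((unprocessed >= lo) & (unprocessed <= hi), axis=1)
    order = np.argsort(~inside, kind="stable")  # matches first, stable order
    return unprocessed[order]
```

Note that this works directly in data space, consistent with the paper's goal of avoiding a one-to-one view-to-data mapping or auxiliary index structures.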
{"title":"Privacy-preserving Collaborative Filtering by Distributed Mediation","authors":"Tamir Tassa, Alon Ben Horin","doi":"10.1145/3542950","DOIUrl":"https://doi.org/10.1145/3542950","url":null,"abstract":"Recommender systems have become very influential in our everyday decision making, e.g., helping us choose a movie from a content platform, or offering us suitable products on e-commerce websites. While most vendors who utilize recommender systems rely exclusively on training data consisting of past transactions that took place through them, it would be beneficial to base recommendations on the rating data of more than one vendor. However, enlarging the training data by means of sharing information between different vendors may jeopardize the privacy of users. We devise here secure multi-party protocols that enable the practice of Collaborative Filtering (CF) in a manner that preserves the privacy of the vendors and users. Shmueli and Tassa [38] introduced privacy-preserving protocols of CF that involved a mediator; namely, an external entity that assists in performing the computations. They demonstrated the significant advantages of mediation in that context. We take here the mediation approach into the next level by using several independent mediators. Such distributed mediation maintains all of the advantages that were identified by Shmueli and Tassa, and offers additional ones, in comparison with the single-mediator protocols: stronger security and dramatically shorter runtimes. In addition, while all prior art assumed limited and unrealistic settings, in which each user can purchase any given item through only one vendor, we consider here a general and more realistic setting, which encompasses all previously considered settings, where users can choose between different competing vendors. 
We demonstrate the appealing performance of our protocols through extensive experimentation.","PeriodicalId":123526,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology (TIST)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127401729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
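The standard building block behind multi-mediator secure computation is additive secret sharing: a rating is split so that any proper subset of mediators sees only uniformly random values, while the full set can reconstruct it. A minimal sketch of the generic primitive (not the paper's full protocol):

```python
import random

MODULUS = 2**31 - 1  # a prime; shares live in the field Z_p

def share(value, n_mediators, rng=random):
    """Split a value into n additive shares mod a prime: any n-1 shares
    are uniformly random, but summing all n recovers the value."""
    shares = [rng.randrange(MODULUS) for _ in range(n_mediators - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod the prime."""
    return sum(shares) % MODULUS
```

Distributing shares across several independent mediators means no single mediator learns any rating, which is the security advantage over the single-mediator protocols the article builds on.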
{"title":"Performance Evaluation of Aggregation-based Group Recommender Systems for Ephemeral Groups","authors":"E. Ceh-Varela, H. Cao, Hady W. Lauw","doi":"10.1145/3542804","DOIUrl":"https://doi.org/10.1145/3542804","url":null,"abstract":"Recommender Systems (RecSys) provide suggestions in many decision-making processes. Given that many real-world activities are performed by groups of people (e.g., a group of people attending a conference looking for a place to dine), the need for group recommendations has increased. A wide range of Group Recommender Systems (GRecSys) has been developed to aggregate individual preferences into group preferences. We analyze 175 studies related to GRecSys. Previous works evaluate their systems using different types of groups (in size and cohesiveness), and most such works test their systems using only one type of item, called Experience Goods (EG). As a consequence, it is hard to draw consistent conclusions about the performance of GRecSys. We present the aggregation strategies and aggregation functions that GRecSys commonly use to aggregate group members’ preferences. This study experimentally compares the performance (i.e., accuracy, ranking quality, and usefulness) of eight representative RecSys for group recommendations on ephemeral groups, using four metrics: Hit Ratio, Normalized Discounted Cumulative Gain, Diversity, and Coverage. Moreover, we use two different aggregation strategies, 10 different aggregation functions, and two different types of items (EG and Search Goods (SG)) on real-life datasets. The results show that the evaluation of GRecSys needs to use both EG and SG item types, because the different characteristics of the datasets lead to different performance. GRecSys using Singular Value Decomposition or Neural Collaborative Filtering methods work better than others. 
It is observed that the Average aggregation function produces the best results.","PeriodicalId":123526,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology (TIST)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128278726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
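The Average aggregation function singled out in this evaluation is straightforward: the group's predicted score for an item is the mean of its members' individually predicted scores. A minimal sketch with illustrative names:

```python
def average_aggregation(member_scores):
    """Average aggregation strategy: the group's predicted rating for
    each item is the mean of its members' predicted ratings.
    `member_scores` is a list of {item: predicted_rating} dicts,
    one per group member, all covering the same items."""
    return {item: sum(s[item] for s in member_scores) / len(member_scores)
            for item in member_scores[0]}
```

Other aggregation functions studied in this line of work (e.g., Least Misery, which takes the minimum instead of the mean) differ only in the reduction applied per item.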
{"title":"Budget Distributed Support Vector Machine for Non-ID Federated Learning Scenarios","authors":"Á. Navia-Vázquez, Roberto Díaz-Morales, Marcos Fernández Díaz","doi":"10.1145/3539734","DOIUrl":"https://doi.org/10.1145/3539734","url":null,"abstract":"In recent years, there has been remarkable growth in Federated Learning (FL) approaches because they have proven to be very effective in training large Machine Learning (ML) models while also preserving data confidentiality, as recommended by the GDPR or other business confidentiality restrictions that may apply. Despite the success of FL, performance is greatly reduced when data is not distributed identically (non-ID) across participants, as local model updates tend to diverge from the optimal global solution, making the model averaging procedure in the aggregator less effective. Kernel methods such as Support Vector Machines (SVMs) have not seen an equivalent evolution in the area of privacy-preserving edge computing because they suffer from inherent computational, privacy, and scalability issues. Furthermore, non-linear SVMs do not naturally lead to federated schemes, since locally trained models cannot be passed to the aggregator without revealing training data (they are built on Support Vectors), and the global model cannot be updated at every worker using gradient descent. In this article, we explore the use of a particular controlled-complexity (“Budget”) Distributed SVM (BDSVM) in the FL scenario with non-ID data, which is the least favorable situation but very common in practice. The proposed BDSVM algorithm is as follows: model weights are broadcast to workers, which locally update some kernel Gram matrices computed according to a common architectural base and send them back to the aggregator, which finally combines them, updates the global model, and repeats the procedure until a convergence criterion is met. 
Experimental results on synthetic 2D datasets show that the proposed method can obtain maximal-margin decision boundaries even when the data is non-ID distributed. Further experiments on real-world datasets with non-ID data distributions show that the proposed algorithm provides better performance with lower communication requirements than a comparable Multilayer Perceptron (MLP) trained using FedAvg. The advantage is more pronounced for larger numbers of edge devices. We have also demonstrated the robustness of the proposed method against information leakage, membership inference attacks, and situations with dropout or straggler participants. Finally, in experiments run on separate processes/machines interconnected via the cloud messaging service developed in the context of the EU-H2020 MUSKETEER project, BDSVM is able to train better models than FedAvg in about half the time.","PeriodicalId":123526,"journal":{"name":"ACM Transactions on Intelligent Systems and Technology (TIST)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128736950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
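The BDSVM loop described in the abstract (workers compute kernel Gram statistics over a common architectural base and send only those to the aggregator, never raw data or support vectors) can be sketched with a kernel-ridge stand-in for the SVM objective. The paper's exact update rule differs, and all names here are illustrative.

```python
import numpy as np

def rbf(X, B, gamma=1.0):
    """RBF kernel between data X and a shared set of basis points B
    (the 'common architectural base' fixing the model's budget)."""
    d2 = ((X[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def local_stats(X, y, B):
    """Each worker sends only Gram-matrix statistics over the common
    basis, never its raw data or support vectors."""
    K = rbf(X, B)
    return K.T @ K, K.T @ y

def aggregate_and_solve(stats, n_basis, reg=1e-3):
    """Aggregator sums the workers' statistics and solves a regularized
    least-squares problem for the global budget model (a kernel-ridge
    stand-in for the SVM objective)."""
    A = sum(s[0] for s in stats) + reg * np.eye(n_basis)
    b = sum(s[1] for s in stats)
    return np.linalg.solve(A, b)
```

Because the Gram statistics are additive across workers, this aggregation is exact even when each worker holds a non-ID slice of the data, which is the property the budget formulation exploits.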