{"title":"OffsetNet: Towards Efficient Multiple Object Tracking, Detection, and Segmentation","authors":"Wei Zhang;Jiaming Li;Meng Xia;Xu Gao;Xiao Tan;Yifeng Shi;Zhenhua Huang;Guanbin Li","doi":"10.1109/TPAMI.2024.3485644","DOIUrl":"10.1109/TPAMI.2024.3485644","url":null,"abstract":"Offset-based representation has emerged as a promising approach for modeling semantic relations between pixels and object motion, demonstrating efficacy across various computer vision tasks. In this paper, we introduce a novel one-stage multi-tasking network tailored to extend the offset-based approach to MOTS. Our proposed framework, named OffsetNet, is designed to concurrently address amodal bounding box detection, instance segmentation, and tracking. It achieves this by formulating these three tasks within a unified pixel-offset-based representation, thereby achieving excellent efficiency and encouraging mutual collaboration. OffsetNet offers several remarkable properties: first, the encoder is empowered by a novel Memory Enhanced Linear Self-Attention (MELSA) block to efficiently aggregate spatial-temporal features; second, all tasks are decoupled fairly using three lightweight decoders that operate in a one-shot manner; third, a novel cross-frame offset prediction module is proposed to enhance the robustness of tracking against occlusions. With these merits, OffsetNet achieves 76.83% HOTA on the KITTI MOTS benchmark, which is the best result without relying on 3D detection. Furthermore, OffsetNet achieves 74.83% HOTA at 50 FPS on the KITTI MOT benchmark, which is nearly 3.3 times faster than CenterTrack with better performance. 
We hope our approach will serve as a solid baseline and encourage future research in this field.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 2","pages":"949-960"},"PeriodicalIF":0.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142577354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
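The pixel-offset idea the abstract describes can be pictured generically: each detection predicts an offset pointing back to its position in the previous frame, and association becomes nearest-neighbor matching. The sketch below is an illustrative assumption (greedy matching, made-up function name and threshold), not OffsetNet's actual cross-frame offset prediction module:

```python
import numpy as np

def offset_associate(prev_centers, curr_centers, offsets, max_dist=5.0):
    """Greedily match current detections to previous-frame tracks by
    projecting each detection back via its predicted cross-frame offset."""
    projected = curr_centers - offsets  # estimated previous-frame positions
    matches, used = {}, set()
    for j, p in enumerate(projected):
        dists = np.linalg.norm(prev_centers - p, axis=1)
        for i in np.argsort(dists):     # closest unclaimed track wins
            if dists[i] <= max_dist and i not in used:
                matches[j] = int(i)
                used.add(int(i))
                break
    return matches                      # {detection index: track index}
```

In a real tracker the offsets come from a learned head and unmatched detections spawn new tracks; this sketch only shows the association step.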
{"title":"360SFUDA++: Towards Source-Free UDA for Panoramic Segmentation by Learning Reliable Category Prototypes","authors":"Xu Zheng;Peng Yuan Zhou;Athanasios V. Vasilakos;Lin Wang","doi":"10.1109/TPAMI.2024.3490619","DOIUrl":"10.1109/TPAMI.2024.3490619","url":null,"abstract":"In this paper, we address the challenging source-free unsupervised domain adaptation (SFUDA) for pinhole-to-panoramic semantic segmentation, given only a pinhole image pre-trained model (i.e., source) and unlabeled panoramic images (i.e., target). Tackling this problem is non-trivial due to three critical challenges: 1) semantic mismatches from the distinct Field-of-View (FoV) between domains, 2) style discrepancies inherent in the UDA problem, and 3) inevitable distortion of the panoramic images. To tackle these problems, we propose <b>360SFUDA++</b>, which effectively extracts knowledge from the source pinhole model with only unlabeled panoramic images and transfers the reliable knowledge to the target panoramic domain. Specifically, we first utilize Tangent Projection (TP), as it has less distortion, and meanwhile split the equirectangular projection (ERP) into patches with fixed FoV projection (FFP) to mimic the pinhole images. Both projections are shown effective in extracting knowledge from the source model. However, as the distinct projections make it less possible to directly transfer knowledge between domains, we then propose the Reliable Panoramic Prototype Adaptation Module (RP<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>AM) to transfer knowledge at both prediction and prototype levels. RP<inline-formula><tex-math>$^{2}$</tex-math></inline-formula>AM selects the confident knowledge and integrates panoramic prototypes for reliable knowledge adaptation. Moreover, we introduce the Cross-projection Dual Attention Module (CDAM), which better aligns the spatial and channel characteristics across projections at the feature level between domains. 
Both knowledge extraction and transfer processes are synchronously updated to reach the best performance. Extensive experiments on the synthetic and real-world benchmarks, including outdoor and indoor scenarios, demonstrate that our 360SFUDA++ achieves significantly better performance than prior SFUDA methods.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 2","pages":"1190-1204"},"PeriodicalIF":0.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142577336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
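The fixed-FoV patching step can be illustrated in a much-simplified form that ignores spherical geometry: an ERP image is sliced into equal-width strips along longitude so each strip roughly mimics a narrow-FoV pinhole view. Function name and patch scheme are assumptions for illustration, not the paper's FFP implementation:

```python
import numpy as np

def erp_to_ffp_patches(erp, n_patches):
    """Slice an equirectangular (ERP) image into equal-width patches
    along the longitude (width) axis to mimic narrow-FoV pinhole views."""
    h, w = erp.shape[:2]
    pw = w // n_patches  # patch width in pixels
    return [erp[:, i * pw:(i + 1) * pw] for i in range(n_patches)]
```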
{"title":"The Decoupling Concept Bottleneck Model","authors":"Rui Zhang;Xingbo Du;Junchi Yan;Shihua Zhang","doi":"10.1109/TPAMI.2024.3489597","DOIUrl":"10.1109/TPAMI.2024.3489597","url":null,"abstract":"The Concept Bottleneck Model (CBM) is an interpretable neural network that leverages high-level concepts to explain model decisions and conduct human-machine interaction. However, in real-world scenarios, the deficiency of informative concepts can impede the model's interpretability and subsequent interventions. This paper proves that insufficient concept information can lead to an inherent dilemma of concept and label distortions in CBM. To address this challenge, we propose the Decoupling Concept Bottleneck Model (DCBM), which comprises two phases: 1) DCBM for prediction and interpretation, which decouples heterogeneous information into explicit and implicit concepts while maintaining high label and concept accuracy, and 2) DCBM for human-machine interaction, which automatically corrects labels and traces wrong concepts via mutual information estimation. The construction of the interaction system can be formulated as a lightweight min-max optimization problem. Extensive experiments demonstrate the success of alleviating concept/label distortions, especially when concepts are insufficient. In particular, we propose the Concept Contribution Score (CCS) to quantify the interpretability of DCBM. Numerical results demonstrate that CCS can be guaranteed by the Jensen-Shannon divergence constraint in DCBM. 
Moreover, DCBM expresses two effective human-machine interactions, including forward intervention and backward rectification, to further promote concept/label accuracy via interaction with human experts.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 2","pages":"1250-1265"},"PeriodicalIF":0.0,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142562962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
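The concept-bottleneck pipeline DCBM builds on (input features → predicted concepts → label), and the forward-intervention interaction where an expert overrides concepts before the label head, can be sketched generically. The sigmoid concept head, linear label head, and all names below are illustrative assumptions, not DCBM itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbm_forward(x, w_concept, w_label, interventions=None):
    """Predict concepts from input features, optionally let a human
    expert override selected concepts, then predict the label from
    concepts alone (the defining property of a concept bottleneck)."""
    c = sigmoid(w_concept @ x)               # concept activations in [0, 1]
    if interventions:
        for k, v in interventions.items():   # expert-corrected concepts
            c[k] = v
    return c, w_label @ c                    # label scores depend only on c
```

Because the label head sees only the concept vector, correcting a wrong concept directly changes the downstream prediction, which is what makes intervention meaningful.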
{"title":"Adaptive Learning for Dynamic Features and Noisy Labels","authors":"Shilin Gu;Chao Xu;Dewen Hu;Chenping Hou","doi":"10.1109/TPAMI.2024.3489217","DOIUrl":"10.1109/TPAMI.2024.3489217","url":null,"abstract":"Applying current machine learning algorithms in complex and open environments remains challenging, especially when different changing elements are coupled and the training data is scarce. For example, in the activity recognition task, the motion sensors may change position or fall off due to the intensity of the activity, leading to changes in feature space and finally resulting in label noise. Learning from such a problem where the dynamic features are coupled with noisy labels is crucial but rarely studied, particularly when the noisy samples in new feature space are limited. In this paper, we tackle the above problem by proposing a novel two-stage algorithm, called Adaptive Learning for Dynamic features and Noisy labels (ALDN). Specifically, optimal transport is first modified to map the previously learned heterogeneous model to the prior model of the current stage. Then, to fully reuse the mapped prior model, we add a simple yet efficient regularizer as the consistency constraint to assist both the estimation of the noise transition matrix and the model training in the current stage. Finally, two implementations with direct (ALDN-D) and indirect (ALDN-ID) constraints are illustrated for better investigation. More importantly, we provide theoretical guarantees for risk minimization of ALDN-D and ALDN-ID. 
Extensive experiments validate the effectiveness of the proposed algorithms.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 2","pages":"1219-1237"},"PeriodicalIF":0.0,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142559851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
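The noise transition matrix that ALDN estimates plays its standard role of mapping clean-label probabilities to noisy-label probabilities. The identity below is the generic forward-correction relation, not ALDN's estimation procedure, and the numbers are made up:

```python
import numpy as np

# T[i, j] = P(noisy label = j | clean label = i); each row sums to 1.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def noisy_posterior(clean_posterior, T):
    """Push a clean-label posterior through the noise transition matrix,
    as done when training against noisy labels with forward correction."""
    return clean_posterior @ T

p_noisy = noisy_posterior(np.array([0.7, 0.3]), T)
```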
{"title":"Recent Advances in Optimal Transport for Machine Learning","authors":"Eduardo Fernandes Montesuma;Fred Maurice Ngolè Mboula;Antoine Souloumiac","doi":"10.1109/TPAMI.2024.3489030","DOIUrl":"10.1109/TPAMI.2024.3489030","url":null,"abstract":"Recently, Optimal Transport has been proposed as a probabilistic framework in Machine Learning for comparing and manipulating probability distributions. This is rooted in its rich history and theory, and has offered new solutions to different problems in machine learning, such as generative modeling and transfer learning. In this survey we explore contributions of Optimal Transport for Machine Learning over the period 2012 – 2023, focusing on four sub-fields of Machine Learning: supervised, unsupervised, transfer and reinforcement learning. We further highlight the recent development in computational Optimal Transport and its extensions, such as partial, unbalanced, Gromov and Neural Optimal Transport, and its interplay with Machine Learning practice.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 2","pages":"1161-1180"},"PeriodicalIF":0.0,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142559853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
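Among the computational tools such surveys cover, entropic regularization solved by Sinkhorn-Knopp iterations is the workhorse of modern Optimal Transport in machine learning. A minimal NumPy sketch (fixed iteration count and regularization strength are arbitrary choices for illustration):

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=500):
    """Entropically regularized optimal transport between histograms
    a and b under cost matrix C, via alternating Sinkhorn scalings."""
    K = np.exp(-C / reg)                 # Gibbs kernel of the cost
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # rescale to match column marginals
        u = a / (K @ v)                  # rescale to match row marginals
    return u[:, None] * K * v[None, :]   # transport plan
```

The returned plan has (approximately) marginals a and b, and shrinking `reg` drives it toward the unregularized OT solution at the cost of slower, less stable iterations.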
{"title":"Efficient Analysis of Overdispersed Data Using an Accurate Computation of the Dirichlet Multinomial Distribution","authors":"Sherenaz Al-Haj Baddar;Alessandro Languasco;Mauro Migliardi","doi":"10.1109/TPAMI.2024.3489645","DOIUrl":"10.1109/TPAMI.2024.3489645","url":null,"abstract":"Modeling count data using suitable statistical distributions has been instrumental for analyzing the patterns it conveys. However, failing to address critical aspects, like overdispersion, jeopardizes the effectiveness of such an analysis. In this paper, overdispersed count data is modeled using the Dirichlet Multinomial (<b>DM</b>) distribution by maximizing its likelihood using a fixed-point iteration algorithm. This is achieved by estimating the <b>DM</b> distribution parameters while comparing the recent Languasco-Migliardi (<b>LM</b>) and the Yu-Shaw (<b>YS</b>) procedures, which address the well-known computational difficulties of evaluating its log-likelihood. Experiments were conducted using multiple datasets from different domains spanning polls, images, and IoT network traffic. They all showed the superiority of the <b>LM</b> procedure, as it succeeded at estimating the <b>DM</b> parameters at the designated level of accuracy in all experiments, while the <b>YS</b> procedure failed to produce sufficiently accurate results (or any results at all) in several experiments. 
Moreover, the <b>LM</b> procedure achieved a speedup that ranged from 2-fold to 20-fold over <b>YS</b>.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 2","pages":"1181-1189"},"PeriodicalIF":0.0,"publicationDate":"2024-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142559852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
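For context, the Dirichlet Multinomial log-likelihood whose accurate evaluation the LM and YS procedures target can be written directly in terms of log-gamma functions. This is the textbook naive form, not the LM algorithm; it is exact for small counts but is precisely the expression whose cost and accuracy degrade at scale:

```python
from math import lgamma

def dm_logpmf(x, alpha):
    """Naive Dirichlet Multinomial log-pmf of one count vector x with
    concentration parameters alpha, via log-gamma terms:
    log C(n; x) + log B-ratio of (alpha + x) against alpha."""
    n, a_sum = sum(x), sum(alpha)
    out = lgamma(n + 1) + lgamma(a_sum) - lgamma(n + a_sum)
    for xk, ak in zip(x, alpha):
        out += lgamma(xk + ak) - lgamma(ak) - lgamma(xk + 1)
    return out
```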
{"title":"Unsupervised Degradation Representation Learning for Unpaired Restoration of Images and Point Clouds","authors":"Longguang Wang;Yulan Guo;Yingqian Wang;Xiaoyu Dong;Qingyu Xu;Jungang Yang;Wei An","doi":"10.1109/TPAMI.2024.3471571","DOIUrl":"10.1109/TPAMI.2024.3471571","url":null,"abstract":"Restoration tasks in low-level vision aim to restore high-quality (HQ) data from their low-quality (LQ) observations. To circumvent the difficulty of acquiring paired data in real scenarios, unpaired approaches that aim to restore HQ data solely from unpaired data are drawing increasing interest. Since restoration tasks are tightly coupled with the degradation model, unknown and highly diverse degradations in real scenarios make learning from unpaired data quite challenging. In this paper, we propose a degradation representation learning scheme to address this challenge. By learning to distinguish various degradations in the representation space, our degradation representations can extract implicit degradation information in an unsupervised manner. Moreover, to handle diverse degradations, we develop degradation-aware (DA) convolutions with flexible adaptation to various degradations to fully exploit the degradation information in the learned representations. Based on our degradation representations and DA convolutions, we introduce a generic framework for unpaired restoration tasks. Building on this framework, we propose UnIRnet and UnPRnet for unpaired image and point cloud restoration tasks, respectively. It is demonstrated that our degradation representation learning scheme can extract discriminative representations to obtain accurate degradation information. 
Experiments on unpaired image and point cloud restoration tasks show that our UnIRnet and UnPRnet achieve state-of-the-art performance.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 1","pages":"1-18"},"PeriodicalIF":0.0,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142549775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Noise Self-Regression: A New Learning Paradigm to Enhance Low-Light Images Without Task-Related Data","authors":"Zhao Zhang;Suiyi Zhao;Xiaojie Jin;Mingliang Xu;Yi Yang;Shuicheng Yan;Meng Wang","doi":"10.1109/TPAMI.2024.3487361","DOIUrl":"10.1109/TPAMI.2024.3487361","url":null,"abstract":"Deep learning-based low-light image enhancement (LLIE) is a task of leveraging deep neural networks to enhance the image illumination while keeping the image content unchanged. From the perspective of training data, existing methods complete the LLIE task driven by one of the following three data types: paired data, unpaired data and zero-reference data. Each type of these data-driven methods has its own advantages, e.g., zero-reference data-based methods have very low requirements on training data and can meet the human needs in many scenarios. In this paper, we leverage pure Gaussian noise to complete the LLIE task, which further reduces the requirements for training data in LLIE tasks and can be used as another alternative in practical use. Specifically, we propose Noise SElf-Regression (NoiSER), which, without access to any task-related data, simply learns a convolutional neural network equipped with an instance-normalization layer by taking a random noise image, <inline-formula><tex-math>$\\mathcal {N}(0,\\sigma ^{2})$</tex-math></inline-formula> for each pixel, as both input and output for each training pair; the low-light image is then fed to the trained network to predict the normal-light image. 
Technically, an intuitive explanation for its effectiveness is as follows: 1) the self-regression reconstructs the contrast between adjacent pixels of the input image, 2) the instance-normalization layer may naturally remediate the overall magnitude/lighting of the input image, and 3) the <inline-formula><tex-math>$\\mathcal {N}(0,\\sigma ^{2})$</tex-math></inline-formula> assumption for each pixel forces the output image to follow the well-known gray-world hypothesis (Buchsbaum, 1980) when the image size is large enough. Compared to current state-of-the-art LLIE methods with access to different task-related data, NoiSER is highly competitive in enhancement quality, yet with a much smaller model size and much lower training and inference cost. In addition, the experiments also demonstrate that NoiSER has great potential in overexposure suppression and joint processing with other restoration tasks.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 2","pages":"1073-1088"},"PeriodicalIF":0.0,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142523961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
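Point 2 of the intuition above, that instance normalization remediates overall magnitude/lighting, is easy to see in isolation: per-image normalization maps an under-exposed image and a globally brightened copy of it to the same array. This framework-free sketch illustrates only that property, not NoiSER's trained network:

```python
import numpy as np

def instance_normalize(img, eps=1e-5):
    """Normalize one image to zero mean and unit variance, removing
    its global brightness offset and contrast scale."""
    return (img - img.mean()) / (img.std() + eps)

rng = np.random.default_rng(0)
dark = rng.normal(0.1, 0.05, (8, 8))   # an "under-exposed" image
bright = dark + 0.7                    # same content, globally brightened
```

After normalization the two images coincide, which is why a purely noise-trained network with instance normalization can still correct global lighting.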
{"title":"PSRR-MaxpoolNMS++: Fast Non-Maximum Suppression With Discretization and Pooling","authors":"Tianyi Zhang;Chunyun Chen;Yun Liu;Xue Geng;Mohamed M. Sabry Aly;Jie Lin","doi":"10.1109/TPAMI.2024.3485898","DOIUrl":"10.1109/TPAMI.2024.3485898","url":null,"abstract":"Non-maximum suppression (NMS) is an essential post-processing step for object detection. The de-facto standard for NMS, namely GreedyNMS, is not parallelizable and could thus be the performance bottleneck in object detection pipelines. MaxpoolNMS is introduced as a fast and parallelizable alternative to GreedyNMS. However, MaxpoolNMS is only capable of replacing GreedyNMS at the first stage of two-stage detectors like Faster R-CNN. To address this issue, we observe that MaxpoolNMS employs the process of <i>box coordinate discretization</i> followed by <i>local score argmax calculation</i> to discard the nested-loop pipeline in GreedyNMS and enable parallelizable implementations. In this paper, we introduce a simple <i>Relationship Recovery</i> module and a <i>Pyramid Shifted MaxpoolNMS</i> module to improve the above two stages, respectively. With these two modules, our <b>PSRR-MaxpoolNMS</b> is a generic and parallelizable approach, which can completely replace GreedyNMS at all stages in all detectors. Furthermore, we extend PSRR-MaxpoolNMS to the more powerful <b>PSRR-MaxpoolNMS++</b>. As for <i>box coordinate discretization</i>, we propose <i>Density-based Discretization</i> for better adherence to the target density of the suppression. As for <i>local score argmax calculation</i>, we propose an <i>Adjacent Scale Pooling</i> scheme for mining out the duplicated box pairs more accurately and efficiently. Extensive experiments demonstrate that both our PSRR-MaxpoolNMS and PSRR-MaxpoolNMS++ outperform MaxpoolNMS by a large margin. 
Additionally, PSRR-MaxpoolNMS++ not only surpasses PSRR-MaxpoolNMS but also attains competitive accuracy and much better efficiency when compared with GreedyNMS. Therefore, PSRR-MaxpoolNMS++ is a parallelizable NMS solution that can effectively replace GreedyNMS at all stages in all detectors.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 2","pages":"978-993"},"PeriodicalIF":0.0,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142523962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
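For reference, the GreedyNMS baseline that PSRR-MaxpoolNMS++ replaces is the familiar serial loop whose data-dependent iterations resist parallelization. A standard NumPy rendering (boxes as [x1, y1, x2, y2] rows):

```python
import numpy as np

def greedy_nms(boxes, scores, iou_thr=0.5):
    """Serial GreedyNMS: repeatedly keep the highest-scoring box and
    suppress all remaining boxes overlapping it above iou_thr."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # indices by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of box i with every lower-scoring survivor.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thr]   # drop suppressed boxes
    return keep
```

Each pass depends on which boxes survived the previous pass, which is exactly the nested-loop dependency that pooling-based NMS variants discretize away.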
{"title":"Disentangling Before Composing: Learning Invariant Disentangled Features for Compositional Zero-Shot Learning","authors":"Tian Zhang;Kongming Liang;Ruoyi Du;Wei Chen;Zhanyu Ma","doi":"10.1109/TPAMI.2024.3487222","DOIUrl":"10.1109/TPAMI.2024.3487222","url":null,"abstract":"Compositional Zero-Shot Learning (CZSL) aims to recognize novel compositions using knowledge learned from seen attribute-object compositions in the training set. Previous works mainly project an image and its corresponding composition into a common embedding space to measure their compatibility score. However, both attributes and objects share the visual representations learned above, leading the model to exploit spurious correlations and bias towards seen compositions. Instead, we reconsider CZSL as an out-of-distribution generalization problem. If an object is treated as a domain, we can learn object-invariant features to recognize attributes attached to any object reliably, and vice versa. Specifically, we propose an invariant feature learning framework to align different domains at the representation and gradient levels to capture the intrinsic characteristics associated with the tasks. To further facilitate and encourage the disentanglement of attributes and objects, we propose an “encoding-reshuffling-decoding” process to help the model avoid spurious correlations by randomly regrouping the disentangled features into synthetic features. Ultimately, our method improves generalization by learning to disentangle features that represent two independent factors of attributes and objects. 
Experiments demonstrate that the proposed method achieves state-of-the-art or competitive performance in both closed-world and open-world scenarios.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 2","pages":"1132-1147"},"PeriodicalIF":0.0,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142523959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
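The "encoding-reshuffling-decoding" step can be pictured as regrouping disentangled features across samples: each sample's attribute features are paired with another sample's object features, yielding synthetic compositions that break spurious attribute-object correlations. This toy sketch assumes the features are already disentangled into two blocks, which in the actual method is what the invariant feature learning provides:

```python
import numpy as np

def reshuffle(attr_feats, obj_feats, rng):
    """Pair each sample's attribute features with a randomly chosen
    sample's object features to form synthetic composition features."""
    perm = rng.permutation(len(obj_feats))
    synthetic = np.concatenate([attr_feats, obj_feats[perm]], axis=1)
    return synthetic, perm
```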