Title: Inhibitory Components in Muscle Synergies Factorized by The Rectified Latent Variable Model from Electromyographic Data.
Authors: Xiaoyu Guo, Subing Huang, Borong He, Chuanlin Lan, Jodie J Xie, Kelvin Y S Lau, Tomohiko Takei, Arthur D P Mak, Roy T H Cheung, Kazuhiko Seki, Vincent C K Cheung, Rosa H M Chan
IEEE Journal of Biomedical and Health Informatics, DOI: 10.1109/JBHI.2024.3453603, published 2024-10-09.

Abstract: Non-negative matrix factorization (NMF), widely used in motor neuroscience to identify muscle synergies from electromyographic signals (EMGs), extracts only non-negative synergies and therefore cannot identify potential negative components (NegCps) in synergies underpinned by inhibitory spinal interneurons. To overcome this constraint, we propose the rectified latent variable model (RLVM) for extracting muscle synergies. RLVM uses an autoencoder neural network whose weight matrix may contain negative entries, while its latent variables must remain non-negative. When EMGs are the model inputs, the weight matrix and latent variables represent muscle synergies and their temporal activation coefficients, respectively. We compared the performance of NMF and RLVM in identifying muscle synergies in simulated and experimental datasets. In simulation, RLVM performed better in identifying the muscle-synergy subspace, and NMF correlated well with the ground truth. Finally, we applied RLVM to a previously published experimental dataset comprising EMGs from upper-limb muscles and spike recordings of spinal premotor interneurons (PreM-INs) collected from two macaque monkeys during grasping tasks. RLVM and NMF synergies were highly similar, but a few small negative muscle components were observed in the RLVM synergies. The muscles with NegCps identified by RLVM exhibited near-zero values in their corresponding NMF synergies. Importantly, the NegCps of RLVM synergies corresponded with the muscle connectivity of PreM-INs with inhibitory muscle fields, as identified by spike-triggered averaging of EMGs. Our results demonstrate the feasibility of RLVM for extracting potential inhibitory muscle-synergy components from EMGs.
{"title":"Automated Quantification of HER2 Amplification Levels Using Deep Learning.","authors":"Ching-Wei Wang, Kai-Lin Chu, Ting-Sheng Su, Keng-Wei Liu, Yi-Jia Lin, Tai-Kuang Chao","doi":"10.1109/JBHI.2024.3476554","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3476554","url":null,"abstract":"<p><p>HER2 assessment is necessary for patient selection in anti-HER2 targeted treatment. However, manual assessment of HER2 amplification is time-costly, labor-intensive, highly subjective and error-prone. Challenges in HER2 analysis in fluorescence in situ hybridization (FISH) and dual in situ hybridization (DISH) images include unclear and blurry cell boundaries, large variations in cell shapes and signals, overlapping and clustered cells and sparse label issues with manual annotations only on cells with high confidences, producing subjective assessment scores according to the individual choices on cell selection. To address the above-mentioned issues, we have developed a soft-sampling cascade deep learning model and a signal detection model in quantifying CEN17 and HER2 of cells to assist assessment of HER2 amplification status for patient selection of HER2 targeting therapy to breast cancer. In evaluation with two different kinds of clinical datasets, including a FISH data set and a DISH data set, the proposed method achieves high accuracy, recall and F1-score for both datasets in instance segmentation of HER2 related cells that must contain both CEN17 and HER2 signals. Moreover, the proposed method is demonstrated to significantly outperform seven state of the art recently published deep learning methods, including contour proposal network (CPN), soft label-based FCN (SL-FCN), modified fully convolutional network (M-FCN), bilayer convolutional network (BCNet), SOLOv2, Cascade R-CNN and DeepLabv3+ with three different backbones (p ≤ 0.01). Clinically, anti-HER2 therapy can also be applied to gastric cancer patients. We applied the developed model to assist in HER2 DISH amplification assessment for gastric cancer patients, and it also showed promising predictive results (accuracy 97.67 ±1.46%, precision 96.15 ±5.82%, respectively).</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142390168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Trustworthy Curriculum Learning Guided Multi-Target Domain Adaptation Network for Autism Spectrum Disorder Classification.
Authors: Jiale Dun, Jun Wang, Juncheng Li, Qianhui Yang, Wenlong Hang, Xiaofeng Lu, Shihui Ying, Jun Shi
IEEE Journal of Biomedical and Health Informatics, DOI: 10.1109/JBHI.2024.3476076, published 2024-10-08.

Abstract: Domain adaptation has demonstrated success in the classification of multi-center autism spectrum disorder (ASD). However, current domain adaptation methods primarily focus on classifying data in a single target domain with the assistance of one or multiple source domains, and lack the capability to address the clinical scenario of identifying ASD in multiple target domains. In response to this limitation, we propose a Trustworthy Curriculum Learning Guided Multi-Target Domain Adaptation (TCL-MTDA) network for identifying ASD in multiple target domains. To effectively handle the varying degrees of data shift across target domains, we propose a trustworthy curriculum learning procedure based on the Dempster-Shafer (D-S) theory of evidence. Additionally, a domain-contrastive adaptation method is integrated into the TCL-MTDA process to align data distributions between source and target domains, facilitating the learning of domain-invariant features. The proposed TCL-MTDA method is evaluated on 437 subjects (220 ASD patients and 217 normal controls) from the Autism Brain Imaging Data Exchange (ABIDE). Experimental results validate the effectiveness of our method in multi-target ASD classification, achieving an average accuracy of 71.46% (95% CI: 68.85%-74.06%) across four target domains and significantly outperforming most baseline methods (p < 0.05).
Title: Cascaded Inner-Outer Clip Retformer for Ultrasound Video Object Segmentation.
Authors: Jialu Li, Lei Zhu, Zhaohu Xing, Baoliang Zhao, Ying Hu, Faqin Lv, Qiong Wang
IEEE Journal of Biomedical and Health Informatics, DOI: 10.1109/JBHI.2024.3464732, published 2024-10-07.

Abstract: Computer-aided ultrasound (US) imaging is an important prerequisite for early clinical diagnosis and treatment. Due to the poor quality of US images and blurry tumor areas, recent memory-based video object segmentation (VOS) models achieve frame-level segmentation by performing intensive similarity matching among past frames, which inevitably results in computational redundancy. Furthermore, the attention mechanism used in recent models allocates the same attention level across all spatial-temporal memory features without distinction, which may degrade accuracy. In this paper, we first build a large annotated benchmark dataset for breast lesion segmentation in ultrasound videos, and then propose a lightweight clip-level VOS framework that achieves higher segmentation accuracy while maintaining speed. The Inner-Outer Clip Retformer is proposed to extract spatial-temporal tumor features in parallel. Specifically, the Outer Clip Retformer extracts tumor movement features from past video clips to locate the tumor position in the current clip, while the Inner Clip Retformer extracts detailed tumor features from the current clip to produce more accurate segmentation results. A Clip Contrastive loss function is further proposed to align the extracted tumor features along both the spatial and temporal dimensions, improving segmentation accuracy. In addition, a Global Retentive Memory is proposed to maintain complementary tumor features at lower computational cost, generating coherent temporal movement features. In this way, our model significantly improves spatial-temporal perception without adding a large number of parameters, achieving more accurate segmentation results while maintaining a faster segmentation speed. Finally, extensive experiments on several video object segmentation datasets show that our framework outperforms state-of-the-art segmentation methods.
Title: HMDA: A Hybrid Model with Multi-scale Deformable Attention for Medical Image Segmentation.
Authors: Mengmeng Wu, Tiantian Liu, Xin Dai, Chuyang Ye, Jinglong Wu, Shintaro Funahashi, Tianyi Yan
IEEE Journal of Biomedical and Health Informatics, DOI: 10.1109/JBHI.2024.3469230, published 2024-10-07.

Abstract: Transformers have been applied to medical image segmentation tasks owing to their excellent long-range modeling capability, compensating for the inability of convolutional neural networks (CNNs) to extract global features. However, the standard self-attention modules in Transformers, with their uniform and inflexible attention distribution, frequently cause unnecessary computational redundancy on high-dimensional data and impede the model's capacity to concentrate precisely on salient image regions. Additionally, achieving effective explicit interaction between the spatially detailed features captured by CNNs and the long-range contextual features provided by Transformers remains challenging. In this work, we propose a hybrid Transformer and CNN architecture with Multi-scale Deformable Attention (HMDA), designed to address these issues effectively. Specifically, we introduce a Multi-scale Spatially Adaptive Deformable Attention (MSADA) mechanism, which attends to a small set of key sampling points around a reference point within the multi-scale features to achieve better performance. In addition, we propose the Cross Attention Bridge (CAB) module, which integrates multi-scale Transformer and local features through channel-wise cross attention, enriching feature synthesis. HMDA is validated on multiple datasets, and the results demonstrate the effectiveness of our approach, which achieves competitive results compared with previous methods.
{"title":"TBE-Net: A Deep Network Based on Tree-like Branch Encoder for Medical Image Segmentation.","authors":"Shukai Yang, Xiaoqian Zhang, Youdong He, Yufeng Chen, Ying Zhou","doi":"10.1109/JBHI.2024.3468904","DOIUrl":"10.1109/JBHI.2024.3468904","url":null,"abstract":"<p><p>In recent years, encoder-decoder-based network structures have been widely used in designing medical image segmentation models. However, these methods still face some limitations: 1) The network's feature extraction capability is limited, primarily due to insufficient attention to the encoder, resulting in a failure to extract rich and effective features. 2) Unidirectional stepwise decoding of smaller-sized feature maps restricts segmentation performance. To address the above limitations, we propose an innovative Tree-like Branch Encoder Network (TBE-Net), which adopts a tree-like branch encoder to better perform feature extraction and preserve feature information. Additionally, we introduce the Depth and Width Expansion (D-WE) module to expand the network depth and width at low parameter cost, thereby enhancing network performance. Furthermore, we design a Deep Aggregation Module (DAM) to better aggregate and process encoder features. Subsequently, we directly decode the aggregated features to generate the segmentation map. The experimental results show that, compared to other advanced algorithms, our method, with the lowest parameter cost, achieved improvements in the IoU metric on the TNBC, PH2, CHASE-DB1, STARE, and COVID-19-CT-Seg datasets by 1.6%, 0.46%, 0.81%, 1.96%, and 0.86%, respectively.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142390173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A novel recognition and classification approach for motor imagery based on spatio-temporal features.
Authors: Renjie Lv, Wenwen Chang, Guanghui Yan, Wenchao Nie, Lei Zheng, Bin Guo, Muhammad Tariq Sadiq
IEEE Journal of Biomedical and Health Informatics, DOI: 10.1109/JBHI.2024.3464550, published 2024-10-07.

Abstract: Motor imagery, as a brain-machine interface paradigm, holds vast potential in the field of medical rehabilitation. Given the challenges posed by the non-stationarity and low signal-to-noise ratio of EEG signals, effectively extracting features from motor imagery signals for accurate recognition is a key focus of motor imagery brain-machine interface technology. This paper proposes a motor imagery EEG classification model that combines functional brain networks with graph convolutional networks. First, functional brain networks are constructed using different brain functional connectivity metrics, and graph-theoretic features are calculated to analyze the characteristics of brain networks under different motor tasks in depth. The constructed functional brain networks are then combined with graph convolutional networks to classify and recognize motor imagery tasks. The functional connectivity analysis reveals that connectivity strength during the both-fists task is significantly higher than in the other motor imagery tasks, and that connectivity strength during actual movement is generally greater than during motor imagery. In experiments on the PhysioNet public dataset, the proposed model achieved a classification accuracy of 88.39% under multi-subject conditions, significantly outperforming traditional methods. Under single-subject conditions, the model effectively addressed individual variability, achieving an average classification accuracy of 99.31%. These results indicate that the proposed model not only performs excellently in motor imagery classification but also provides new insights into the functional connectivity characteristics of different motor tasks and their corresponding brain regions.
{"title":"Acoustic COVID-19 Detection Using Multiple Instance Learning.","authors":"Michael Reiter, Pernkopf Franz","doi":"10.1109/JBHI.2024.3474975","DOIUrl":"10.1109/JBHI.2024.3474975","url":null,"abstract":"<p><p>In the COVID-19 pandemic, a rigorous testing scheme was crucial. However, tests can be time-consuming and expensive. A machine learning-based diagnostic tool for audio recordings could enable widespread testing at low costs. In order to achieve comparability between such algorithms, the DiCOVA challenge was created. It is based on the Coswara dataset offering the recording categories cough, speech, breath and vowel phonation. Recording durations vary greatly, ranging from one second to over a minute. A base model is pre-trained on random, short time intervals. Subsequently, a Multiple Instance Learning (MIL) model based on self-attention is incorporated to make collective predictions for multiple time segments within each audio recording, taking advantage of longer durations. In order to compete in the fusion category of the DiCOVA challenge, we utilize a linear regression approach among other fusion methods to combine predictions from the most successful models associated with each sound modality. The application of the MIL approach significantly improves generalizability, leading to an AUC ROC score of 86.6% in the fusion category. By incorporating previously unused data, including the sound modality 'sustained vowel phonation' and patient metadata, we were able to significantly improve our previous results reaching a score of 92.2%.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142375328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BioSAM: Generating SAM Prompts From Superpixel Graph for Biological Instance Segmentation.","authors":"Miaomiao Cai, Xiaoyu Liu, Zhiwei Xiong, Xuejin Chen","doi":"10.1109/JBHI.2024.3474706","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3474706","url":null,"abstract":"<p><p>Proposal-free instance segmentation methods have significantly advanced the field of biological image analysis. Recently, the Segment Anything Model (SAM) has shown an extraordinary ability to handle challenging instance boundaries. However, directly applying SAM to biological images that contain instances with complex morphologies and dense distributions fails to yield satisfactory results. In this work, we propose BioSAM, a new biological instance segmentation framework generating SAM prompts from a superpixel graph. Specifically, to avoid over-merging, we first generate sufficient superpixels as graph nodes and construct an initialized graph. We then generate initial prompts from each superpixel and aggregate them through a graph neural network (GNN) by predicting the relationship of superpixels to avoid over-segmentation. We employ the SAM encoder embeddings and the SAM-assisted superpixel similarity as new features for the graph to enhance its discrimination capability. With the graph-based prompt aggregation, we utilize the aggregated prompts in SAM to refine the segmentation and generate more accurate instance boundaries. Comprehensive experiments on four representative biological datasets demonstrate that our proposed method outperforms state-of-the-art methods.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142375329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transformer<sup>3</sup>: A Pure Transformer Framework for fMRI-Based Representations of Human Brain Function.","authors":"Xiaoxi Tian, Hao Ma, Yun Guan, Le Xu, Jiangcong Liu, Lixia Tian","doi":"10.1109/JBHI.2024.3471186","DOIUrl":"10.1109/JBHI.2024.3471186","url":null,"abstract":"<p><p>Effective representation learning is essential for neuroimage-based individualized predictions. Numerous studies have been performed on fMRI-based individualized predictions, leveraging sample-wise, spatial, and temporal interdependencies hidden in fMRI data. However, these studies failed to fully utilize the effective information hidden in fMRI data, as only one or two types of the interdependencies were analyzed. To effectively extract representations of human brain function through fully leveraging the three types of the interdependencies, we establish a pure transformer-based framework, Transformer3, leveraging transformer's strong ability to capture interdependencies within the input data. Transformer<sup>3</sup> consists mainly of three transformer modules, with the Batch Transformer module used for addressing sample-wise similarities and differences, the Region Transformer module used for handling complex spatial interdependencies among brain regions, and the Time Transformer module used for capturing temporal interdependencies across time points. Experiments on age, IQ, and sex predictions based on two public datasets demonstrate the effectiveness of the proposed Transformer3. As the only hypothesis is that sample-wise, spatial, and temporal interdependencies extensively exist within the input data, the proposed Transformer<sup>3</sup> can be widely used for representation learning based on multivariate time-series. Furthermore, the pure transformer framework makes it quite convenient for understanding the driving factors underlying the predictive models based on Transformer<sup>3</sup>.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142375330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}