{"title":"Stacking based ensemble learning framework for identification of nitrotyrosine sites.","authors":"Aiman Parvez, Syed Danish Ali, Hilal Tayara, Kil To Chong","doi":"10.1016/j.compbiomed.2024.109200","DOIUrl":"https://doi.org/10.1016/j.compbiomed.2024.109200","url":null,"abstract":"<p><p>Protein nitrotyrosine is an essential post-translational modification that results from the nitration of tyrosine amino acid residues. This modification is known to be associated with the regulation and characterization of several biological functions and diseases. Therefore, accurate identification of nitrotyrosine sites plays a significant role in elucidating the progression of associated biological processes. In this regard, we report an accurate computational tool known as iNTyro-Stack for the identification of protein nitrotyrosine sites. iNTyro-Stack is a machine-learning model based on a stacking algorithm. The base classifiers in stacking are selected based on the highest performance. The feature map employed is a linear combination of amino acid composition encoding schemes, including the composition of k-spaced amino acid pairs and tri-peptide composition. The recursive feature elimination technique is used for significant feature selection. The performance of the proposed method is evaluated using k-fold cross-validation and independent testing approaches. iNTyro-Stack achieved an accuracy of 86.3% and a Matthews correlation coefficient (MCC) of 72.6% in cross-validation. Its generalization capability was further validated on an imbalanced independent test set, where it attained an accuracy of 69.32%. iNTyro-Stack outperforms existing state-of-the-art methods across both evaluation techniques. 
A GitHub repository was created to reproduce the method and results of iNTyro-Stack, accessible at: https://github.com/waleed551/iNTyro-Stack/.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142375267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
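The pipeline described above (performance-selected base classifiers fed into a stacking meta-learner, a composition-based feature map, and recursive feature elimination) can be sketched as follows. This is an illustrative stand-in, not the published iNTyro-Stack configuration: the base learners, feature counts, and synthetic data are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for a CKSAAP + tri-peptide composition feature map.
X, y = make_classification(n_samples=200, n_features=120, n_informative=20,
                           random_state=0)

# Stacking: base classifiers feed a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)

# Recursive feature elimination ahead of the ensemble.
model = make_pipeline(
    RFE(LogisticRegression(max_iter=1000), n_features_to_select=40, step=10),
    stack,
)
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
print(f"5-fold CV accuracy: {acc:.3f}")
```

In a real reimplementation, the base classifiers would be chosen by ranking candidate models on validation performance, as the abstract describes.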
{"title":"Two-stage deep learning framework for occlusal crown depth image generation.","authors":"Junghyun Roh, Junhwi Kim, Jimin Lee","doi":"10.1016/j.compbiomed.2024.109220","DOIUrl":"https://doi.org/10.1016/j.compbiomed.2024.109220","url":null,"abstract":"<p><p>The generation of depth images of occlusal dental crowns is complicated by the need for customization in each case. To decrease the workload of skilled dental technicians, various computer vision models have been used to generate realistic occlusal crown depth images with definite crown surface structures that can ultimately be reconstructed to three-dimensional crowns and directly used in patient treatment. However, it has remained difficult to generate images of the structure of dental crowns in a fluid position using computer vision models. In this paper, we propose a two-stage model for generating depth images of occlusal crowns in diverse positions. The model is divided into two parts: segmentation and inpainting to obtain both shape and surface structure accuracy. The segmentation network focuses on the position and size of the crowns, which allows the model to adapt to diverse targets. The inpainting network based on a GAN generates curved structures of the crown surfaces based on the target jaw image and a binary mask made by the segmentation network. The performance of the model is evaluated via quantitative metrics for the area detection and pixel-value metrics. Compared to the baseline model, the proposed method reduced the MSE score from 0.007001 to 0.002618 and increased the DICE score from 0.9333 to 0.9648. These results indicate that the model performed better on the binary mask, owing to the added segmentation network, and on the internal surface structure, through the use of the inpainting network. 
The results also demonstrated an improved ability of the proposed model to restore realistic details compared to other models.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142375268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
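The two-stage composition can be illustrated with a minimal NumPy sketch: a stage-one mask localizes the crown region, and a stage-two generator fills only that region of the target jaw image. The toy "generator" below is a placeholder for the paper's GAN inpainting network, and all shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
jaw = rng.random((64, 64)).astype(np.float32)   # target jaw depth image
mask = np.zeros((64, 64), dtype=np.float32)     # stage 1: predicted crown mask
mask[20:44, 20:44] = 1.0

def toy_generator(image, mask):
    """Placeholder for the GAN inpainting network: fills the masked
    region with the mean depth of the surrounding context."""
    return np.full_like(image, image[mask == 0].mean())

# Stage 2: composite the generated surface into the masked region only,
# leaving the rest of the jaw depth image untouched.
crown_depth = jaw * (1 - mask) + toy_generator(jaw, mask) * mask
```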
{"title":"Shuffled ECA-Net for stress detection from multimodal wearable sensor data.","authors":"Namho Kim, Seongjae Lee, Junho Kim, So Yoon Choi, Sung-Min Park","doi":"10.1016/j.compbiomed.2024.109217","DOIUrl":"https://doi.org/10.1016/j.compbiomed.2024.109217","url":null,"abstract":"<p><strong>Background: </strong>Recently, stress has been recognized as a key factor in the emergence of individual and social issues. Numerous attempts have been made to develop sensor-augmented psychological stress detection techniques, although existing methods are often impractical or overly subjective. To overcome these limitations, we acquired a dataset utilizing both wireless wearable multimodal sensors and salivary cortisol tests for supervised learning. We also developed a novel deep neural network (DNN) model that maximizes the benefits of sensor fusion.</p><p><strong>Method: </strong>We devised a DNN involving a shuffled efficient channel attention (ECA) module called a shuffled ECA-Net, which achieves advanced feature-level sensor fusion by considering inter-modality relationships. Through an experiment involving salivary cortisol tests on 26 participants, we acquired multiple bio-signals including electrocardiograms, respiratory waveforms, and electrogastrograms in both relaxed and stressed mental states. A training dataset was generated from the obtained data. Using the dataset, our proposed model was optimized and evaluated ten times through five-fold cross-validation, while varying a random seed.</p><p><strong>Results: </strong>Our proposed model achieved acceptable performance in stress detection, showing 0.916 accuracy, 0.917 sensitivity, 0.916 specificity, 0.914 F1-score, and 0.964 area under the receiver operating characteristic curve (AUROC). 
Furthermore, we demonstrated that combining multiple bio-signals with a shuffled ECA module can more accurately detect psychological stress.</p><p><strong>Conclusions: </strong>We believe that our proposed model, coupled with the evidence for the viability of multimodal sensor fusion and a shuffled ECA-Net, would significantly contribute to the resolution of stress-related issues.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142375249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
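A minimal NumPy sketch of the two ingredients named above, channel shuffle and efficient channel attention (ECA), applied to a multimodal feature map. The uniform 1-D kernel stands in for the module's learned convolution weights, and the channel/time dimensions are illustrative assumptions.

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups so attention can mix modalities."""
    c, t = x.shape
    return x.reshape(groups, c // groups, t).transpose(1, 0, 2).reshape(c, t)

def eca(x, k=3):
    """Efficient channel attention: pool each channel to one descriptor,
    run a 1-D conv of size k across channels (uniform kernel here; learned
    in the real module), sigmoid-gate, and rescale the channels."""
    c, _ = x.shape
    desc = x.mean(axis=1)                      # global average pooling
    padded = np.pad(desc, k // 2, mode="edge")
    conv = np.array([padded[i:i + k].mean() for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))         # sigmoid
    return x * gate[:, None]

rng = np.random.default_rng(0)
x = rng.random((8, 100))        # e.g. 8 feature channels from ECG/respiration/EGG
xs = channel_shuffle(x, groups=4)
out = eca(xs)
```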
{"title":"Lightweight medical image segmentation network with multi-scale feature-guided fusion.","authors":"Zhiqin Zhu, Kun Yu, Guanqiu Qi, Baisen Cong, Yuanyuan Li, Zexin Li, Xinbo Gao","doi":"10.1016/j.compbiomed.2024.109204","DOIUrl":"https://doi.org/10.1016/j.compbiomed.2024.109204","url":null,"abstract":"<p><p>In the field of computer-aided medical diagnosis, it is crucial to adapt medical image segmentation to limited computing resources. There is tremendous value in developing accurate, real-time vision processing models that require minimal computational resources. When building lightweight models, there is always a trade-off between computational cost and segmentation performance. Performance often suffers when applying models to meet resource-constrained scenarios characterized by computation, memory, or storage constraints. This remains an ongoing challenge. This paper proposes a lightweight network for medical image segmentation. It introduces a lightweight transformer, proposes a simplified core feature extraction network to capture more semantic information, and builds a multi-scale feature interaction guidance framework. The fusion module embedded in this framework is designed to address spatial and channel complexities. Through the multi-scale feature interaction guidance framework and fusion module, the proposed network achieves robust semantic information extraction from low-resolution feature maps and rich spatial information retrieval from high-resolution feature maps while ensuring segmentation performance. This significantly reduces the parameter requirements for maintaining deep features within the network, resulting in faster inference and reduced floating-point operations (FLOPs) and parameter counts. Experimental results on ISIC2017 and ISIC2018 datasets confirm the effectiveness of the proposed network in medical image segmentation tasks. 
For instance, on the ISIC2017 dataset, the proposed network achieved a segmentation accuracy of 82.33 % mIoU, and a speed of 71.26 FPS on 256 × 256 images using a GeForce RTX 3090 GPU. Furthermore, the proposed network is extremely lightweight, containing only 0.524M parameters. The corresponding source codes are available at https://github.com/CurbUni/LMIS-lightweight-network.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142375248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
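For binary masks like those in the ISIC skin-lesion datasets, the reported mIoU is the IoU averaged over foreground and background. A minimal sketch (not the authors' evaluation code):

```python
import numpy as np

def miou_binary(pred, target):
    """Mean IoU over the two classes of a binary segmentation mask."""
    ious = []
    for cls in (0, 1):
        p, t = pred == cls, target == cls
        union = np.logical_or(p, t).sum()
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union if union else 1.0)
    return float(np.mean(ious))

pred = np.zeros((4, 4), dtype=int)
pred[:2] = 1                      # model predicts the top half as lesion
target = np.zeros((4, 4), dtype=int)
target[:, :2] = 1                 # ground truth is the left half
score = miou_binary(pred, target) # each class overlaps 4 of 12 pixels -> 1/3
```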
{"title":"Portable noninvasive technologies for early breast cancer detection: A systematic review.","authors":"Shadrack O Aboagye, John A Hunt, Graham Ball, Yang Wei","doi":"10.1016/j.compbiomed.2024.109219","DOIUrl":"https://doi.org/10.1016/j.compbiomed.2024.109219","url":null,"abstract":"<p><p>Breast cancer remains a leading cause of cancer mortality worldwide, with early detection crucial for improving outcomes. This systematic review evaluates recent advances in portable non-invasive technologies for early breast cancer detection, assessing their methods, performance, and potential for clinical implementation. A comprehensive literature search was conducted across major databases for relevant studies published between 2015 and 2024. Data on technology types, detection methods, and diagnostic performance were extracted and synthesized from 41 included studies. The review examined microwave imaging, electrical impedance tomography (EIT), thermography, bioimpedance spectroscopy (BIS), and pressure sensing technologies. Microwave imaging and EIT showed the most promise, with some studies reporting sensitivities and specificities over 90 %. However, most technologies are still in early stages of development with limited large-scale clinical validation. These innovations could complement existing gold standards, potentially improving screening rates and outcomes, especially in underserved populations, while decreasing screening waiting times in developed countries. 
Further research is therefore needed to validate their clinical efficacy, address implementation challenges, and assess their impact on patient outcomes before widespread adoption can be recommended.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142371209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On-site burn severity assessment using smartphone-captured color burn wound images.","authors":"Xiayu Xu, Qilong Bu, Jingmeng Xie, Hang Li, Feng Xu, Jing Li","doi":"10.1016/j.compbiomed.2024.109171","DOIUrl":"https://doi.org/10.1016/j.compbiomed.2024.109171","url":null,"abstract":"<p><p>Accurate assessment of burn severity is crucial for the management of burn injuries. Currently, clinicians mainly rely on visual inspection to assess burns, characterized by notable inter-observer discrepancies. In this study, we introduce an innovative analysis platform using color burn wound images for automatic burn severity assessment. To do this, we propose a novel joint-task deep learning model, which is capable of simultaneously segmenting both burn regions and body parts, the two crucial components in calculating the percentage of total body surface area (%TBSA). An asymmetric attention mechanism is introduced, allowing attention guidance from the body part segmentation task to the burn region segmentation task. A user-friendly mobile application is developed to facilitate a fast assessment of burn severity in clinical settings. The proposed framework was evaluated on a dataset comprising 1340 color burn wound images captured on-site in clinical settings. The average Dice coefficients for burn depth segmentation and body part segmentation are 85.12 % and 85.36 %, respectively. The R<sup>2</sup> for %TBSA assessment is 0.9136. The source codes for the joint-task framework and the application are released on GitHub (https://github.com/xjtu-mia/BurnAnalysis). 
The proposed platform holds the potential to be widely used in clinical settings to facilitate fast and precise burn assessment.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142371207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
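The %TBSA computation from the two segmentation outputs can be sketched as below. The per-body-part surface fractions use the textbook "rule of nines" as an assumption; the paper's exact mapping from body-part masks to surface fractions is not reproduced here.

```python
import numpy as np

# Rule-of-nines surface fractions (assumption, illustrative subset).
RULE_OF_NINES = {"arm": 0.09, "leg": 0.18, "trunk": 0.36, "head": 0.09}

def tbsa_percent(burn_mask, part_mask, part):
    """%TBSA contribution of one visible body part: the burned fraction
    of the part's pixels, scaled by that part's share of body surface."""
    visible = part_mask.sum()
    if visible == 0:
        return 0.0
    frac_burned = np.logical_and(burn_mask, part_mask).sum() / visible
    return 100.0 * RULE_OF_NINES[part] * frac_burned

burn = np.ones((10, 10), dtype=bool)   # toy case: the entire visible arm is burned
part = np.ones((10, 10), dtype=bool)
print(tbsa_percent(burn, part, "arm"))
```

Summing this contribution over all segmented body parts in the photo yields the overall %TBSA estimate.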
{"title":"Pan-cancer characterization of cellular senescence reveals its inter-tumor heterogeneity associated with the tumor microenvironment and prognosis.","authors":"Kang Li, Chen Guo, Rufeng Li, Yufei Yao, Min Qiang, Yuanyuan Chen, Kangsheng Tu, Yungang Xu","doi":"10.1016/j.compbiomed.2024.109196","DOIUrl":"https://doi.org/10.1016/j.compbiomed.2024.109196","url":null,"abstract":"<p><p>Cellular senescence (CS) is characterized by the irreversible cell cycle arrest and plays a key role in aging and diseases, such as cancer. Recent years have witnessed the burgeoning exploration of the intricate relationship between CS and cancer, with CS recognized as either a suppressing or promoting factor and officially acknowledged as one of the 14 cancer hallmarks. However, a comprehensive characterization remains absent from elucidating the divergences of this relationship across different cancer types and its involvement in the multi-facets of tumor development. Here we systematically assessed the cellular senescence of over 10,000 tumor samples from 33 cancer types, starting by defining a set of cancer-associated CS signatures and deriving a quantitative metric representing the CS status, called CS score. We then investigated the CS heterogeneity and its intricate relationship with the prognosis, immune infiltration, and therapeutic responses across different cancers. As a result, cellular senescence demonstrated two distinct prognostic groups: the protective group with eleven cancers, such as LIHC, and the risky group with four cancers, including STAD. Subsequent in-depth investigations between these two groups unveiled the potential molecular and cellular mechanisms underlying the distinct effects of cellular senescence, involving the divergent activation of specific pathways and variances in immune cell infiltrations. 
These results were further supported by the disparate associations of CS status with the responses to immuno- and chemo-therapies observed between the two groups. Overall, our study offers a deeper understanding of inter-tumor heterogeneity of cellular senescence associated with the tumor microenvironment and cancer prognosis.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142371208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
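One simple way to derive a per-sample CS score from a signature gene set is a mean z-score across the signature genes, sketched below. The paper's exact scoring formula is not reproduced here, so treat this as an illustrative stand-in with synthetic data.

```python
import numpy as np

def cs_score(expr, signature_idx):
    """Per-sample senescence score: mean z-score (computed gene-wise
    across samples) of the signature genes. One simple scoring choice,
    not necessarily the published CS-score definition."""
    z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
    return z[signature_idx].mean(axis=0)

rng = np.random.default_rng(0)
expr = rng.random((50, 30))             # 50 genes x 30 tumor samples (synthetic)
scores = cs_score(expr, np.arange(10))  # first 10 genes as a toy CS signature
```

Stratifying samples by such a score is what enables the downstream comparisons of prognosis, immune infiltration, and therapy response across cancer types.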
Rolando de la Cruz, Marc Lavielle, Cristian Meza, Vicente Núñez-Antón
{"title":"A joint analysis proposal of nonlinear longitudinal and time-to-event right-, interval-censored data for modeling pregnancy miscarriage.","authors":"Rolando de la Cruz, Marc Lavielle, Cristian Meza, Vicente Núñez-Antón","doi":"10.1016/j.compbiomed.2024.109186","DOIUrl":"https://doi.org/10.1016/j.compbiomed.2024.109186","url":null,"abstract":"<p><p>Pregnancy in-vitro fertilization (IVF) cases are associated with adverse first-trimester outcomes in comparison to spontaneously achieved pregnancies. Human chorionic gonadotrophin β subunit (β-HCG) is a well-known biomarker for the diagnosis and monitoring of pregnancy after IVF. Low levels of β-HCG during this period are related to miscarriage, ectopic pregnancy, and IVF procedure failures. Longitudinal profiles of β-HCG can be used to distinguish between normal and abnormal pregnancies and to assist and guide the clinician in better management and monitoring of post-IVF pregnancies. Therefore, assessing the association between longitudinally measured β-HCG serum concentration and time to early miscarriage is of crucial interest to clinicians. A common joint modeling approach is to use the longitudinal β-HCG trajectory to determine the risk of miscarriage. This work was motivated by a follow-up study with normal and abnormal pregnancies where β-HCG serum concentrations were measured in 173 young women during a gestational age of 9-86 days in Santiago, Chile. Some women experienced a miscarriage event, and their exact event times were unknown, so we have interval-censored data, with the event occurring between the last time of the observed measurement and ten days later. However, for women in the normal pregnancy group, that is, those carrying a pregnancy to full term, right-censored data are observed. 
Estimation procedures are based on the Stochastic Approximation of the Expectation-Maximization (SAEM) algorithm.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142371195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
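The two censoring schemes described above contribute different terms to the likelihood that SAEM maximizes: a right-censored woman contributes the survival probability past her last visit, and an interval-censored woman contributes the probability mass between her last visit and ten days later. A minimal sketch, with an exponential survival function standing in for the fitted joint model (the rate value is illustrative):

```python
import math

def log_lik_censored(S, t_last, t_upper=None):
    """Censored-data log-likelihood contribution for one woman:
    right-censored (full-term pregnancy): log S(t_last);
    interval-censored (miscarriage between the last visit and
    t_upper, here ten days later): log[S(t_last) - S(t_upper)]."""
    if t_upper is None:
        return math.log(S(t_last))
    return math.log(S(t_last) - S(t_upper))

# Exponential survival as a stand-in for the joint model's survival submodel.
lam = 0.01
S = lambda t: math.exp(-lam * t)
ll_right = log_lik_censored(S, 86)              # carried to full term
ll_interval = log_lik_censored(S, 30, 30 + 10)  # miscarriage in (30, 40]
```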
{"title":"MediAlbertina: An European Portuguese medical language model.","authors":"Miguel Nunes, João Boné, João C Ferreira, Pedro Chaves, Luis B Elvas","doi":"10.1016/j.compbiomed.2024.109233","DOIUrl":"https://doi.org/10.1016/j.compbiomed.2024.109233","url":null,"abstract":"<p><strong>Background: </strong>Patient medical information often exists in unstructured text containing abbreviations and acronyms deemed essential to conserve time and space but posing challenges for automated interpretation. Leveraging the efficacy of Transformers in natural language processing, our objective was to use the knowledge acquired by a language model and continue its pre-training to develop a European Portuguese (PT-PT) healthcare-domain language model.</p><p><strong>Methods: </strong>After carrying out a filtering process, Albertina PT-PT 900M was selected as our base language model, and we continued its pre-training using more than 2.6 million electronic medical records from Portugal's largest public hospital. MediAlbertina 900M has been created through domain adaptation on this data using masked language modelling.</p><p><strong>Results: </strong>The comparison with our baseline was made using both perplexity, which decreased from about 20 to 1.6, and the fine-tuning and evaluation of information extraction models such as Named Entity Recognition and Assertion Status. MediAlbertina PT-PT outperformed Albertina PT-PT in both tasks by 4-6% on recall and f1-score.</p><p><strong>Conclusions: </strong>This study contributes the first publicly available medical language model trained with PT-PT data. It underscores the efficacy of domain adaptation and helps the scientific community overcome the obstacles faced by non-English languages. 
By fine-tuning MediAlbertina for PT-PT medical tasks, further steps can be taken to assist physicians, such as creating decision support systems or building medical timelines for patient profiling.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142371206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
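The perplexity comparison reported in the Results follows directly from the mean cross-entropy loss via ppl = exp(loss). A tiny sketch; the loss values below are illustrative numbers chosen to match the reported before/after perplexities, not measured values:

```python
import math

def perplexity(mean_ce_loss):
    """Perplexity is the exponential of the mean cross-entropy loss."""
    return math.exp(mean_ce_loss)

base_ppl = perplexity(3.0)      # ~20, comparable to the baseline reported
adapted_ppl = perplexity(0.47)  # ~1.6, comparable to the adapted model
```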
{"title":"Towards dental diagnostic systems: Synergizing wavelet transform with generative adversarial networks for enhanced image data fusion.","authors":"Abdullah A Al-Haddad, Luttfi A Al-Haddad, Sinan A Al-Haddad, Alaa Abdulhady Jaber, Zeashan Hameed Khan, Hafiz Zia Ur Rehman","doi":"10.1016/j.compbiomed.2024.109241","DOIUrl":"https://doi.org/10.1016/j.compbiomed.2024.109241","url":null,"abstract":"<p><p>The advent of precision diagnostics in pediatric dentistry is shifting towards ensuring early detection of dental diseases, a critical factor in safeguarding the oral health of the younger population. In this study, an innovative approach is introduced, wherein Discrete Wavelet Transform (DWT) and Generative Adversarial Networks (GANs) are synergized within an Image Data Fusion (IDF) framework to enhance the accuracy of dental disease diagnosis through dental diagnostic systems. Dental panoramic radiographs from pediatric patients were utilized to demonstrate how the integration of DWT and GANs can significantly improve the informativeness of dental images. In the IDF process, the original images, GAN-augmented images, and wavelet-transformed images are combined to create a comprehensive dataset. DWT was employed for the decomposition of images into frequency components to enhance the visibility of subtle pathological features. Simultaneously, GANs were used to augment the dataset with high-quality, synthetic radiographic images indistinguishable from real ones, to provide robust data training. These integrated images are then fed into an Artificial Neural Network (ANN) for the classification of dental diseases. The utilization of the ANN in this context demonstrates the system's robustness, achieving an accuracy of 0.897, a precision of 0.905, a recall of 0.897, and a specificity of 0.968. 
Additionally, this study explores the feasibility of embedding the diagnostic system into dental X-ray scanners by leveraging lightweight models and cloud-based solutions to minimize resource constraints. Such integration is posited to revolutionize dental care by providing real-time, accurate disease detection capabilities, which significantly reduce diagnostic delays and enhance treatment outcomes.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142371211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
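The DWT decomposition step can be sketched with a one-level 2-D Haar transform; detail bands emphasize edges and subtle features, which is what the abstract credits with improving the visibility of pathology. This averaging variant is a minimal stand-in for a library implementation (e.g. PyWavelets), and the image is synthetic:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT (averaging variant): returns the
    approximation band LL and detail bands LH, HL, HH, each at
    half resolution. Assumes even image dimensions."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row-wise average
    d = (img[0::2, :] - img[1::2, :]) / 2   # row-wise difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

rng = np.random.default_rng(0)
radiograph = rng.random((64, 64))        # stand-in panoramic radiograph
ll, lh, hl, hh = haar_dwt2(radiograph)   # each band is 32 x 32
```

In the IDF framework, these wavelet components would then be pooled with the originals and the GAN-augmented images to form the fused training set for the ANN classifier.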