{"title":"A Transparent Ultrasound Array for Real-Time Optical, Ultrasound, and Photoacoustic Imaging.","authors":"Haoyang Chen, Sumit Agrawal, Mohamed Osman, Josiah Minotto, Shubham Mirg, Jinyun Liu, Ajay Dangi, Quyen Tran, Thomas Jackson, Sri-Rajasekhar Kothapalli","doi":"10.34133/2022/9871098","DOIUrl":"10.34133/2022/9871098","url":null,"abstract":"<p><p><i>Objective and Impact Statement.</i> Simultaneous imaging of ultrasound and optical contrasts can help map structural, functional, and molecular biomarkers inside living subjects with high spatial resolution. There is a need to develop a platform to facilitate this multimodal imaging capability to improve diagnostic sensitivity and specificity. <i>Introduction</i>. Currently, combining ultrasound, photoacoustic, and optical imaging modalities is challenging because conventional ultrasound transducer arrays are optically opaque. As a result, complex geometries are used to coalign both optical and ultrasound waves in the same field of view. <i>Methods</i>. One elegant solution is to make the ultrasound transducer transparent to light. Here, we demonstrate a novel transparent ultrasound transducer (TUT) linear array fabricated using a transparent lithium niobate piezoelectric material for real-time multimodal imaging. <i>Results</i>. The TUT-array consists of 64 elements and centered at ~6 MHz frequency. We demonstrate a quad-mode ultrasound, Doppler ultrasound, photoacoustic, and fluorescence imaging in real-time using the TUT-array directly coupled to the tissue mimicking phantoms. <i>Conclusion</i>. 
The TUT-array successfully demonstrated multimodal imaging capability and has potential applications in diagnosing cancer, neurological, and vascular diseases, as well as in image-guided endoscopy and wearable imaging.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":"2022 ","pages":"9871098"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521654/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
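The array's ~6 MHz center frequency fixes the acoustic wavelength in tissue, which in turn guides element pitch. A minimal sketch, assuming a textbook soft-tissue sound speed of 1540 m/s and the common half-wavelength pitch rule (both assumptions, not values stated in the abstract):

```python
# Wavelength and a half-wavelength element-pitch estimate for a
# 6 MHz array. c = 1540 m/s is an assumed soft-tissue value.

def wavelength_m(center_freq_hz, sound_speed_m_s=1540.0):
    """Acoustic wavelength = sound speed / frequency."""
    return sound_speed_m_s / center_freq_hz

def half_wavelength_pitch_mm(center_freq_hz, sound_speed_m_s=1540.0):
    """Half-wavelength pitch (mm), a common choice to suppress grating lobes."""
    return 1e3 * wavelength_m(center_freq_hz, sound_speed_m_s) / 2.0

lam = wavelength_m(6e6)                 # ~0.257 mm wavelength
pitch = half_wavelength_pitch_mm(6e6)   # ~0.128 mm pitch
print(round(lam * 1e3, 4), round(pitch, 4))
```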
BME frontiers | Pub Date: 2022-06-08 | eCollection Date: 2022-01-01 | DOI: 10.34133/2022/9840678
Alejandra Gonzalez-Calle, Runze Li, Isaac Asante, Juan Carlos Martinez-Camarillo, Stan Louie, Qifa Zhou, Mark S Humayun
{"title":"Development of Moderate Intensity Focused Ultrasound (MIFU) for Ocular Drug Delivery.","authors":"Alejandra Gonzalez-Calle, Runze Li, Isaac Asante, Juan Carlos Martinez-Camarillo, Stan Louie, Qifa Zhou, Mark S Humayun","doi":"10.34133/2022/9840678","DOIUrl":"10.34133/2022/9840678","url":null,"abstract":"<p><p>The purpose of this study is to develop a method for delivering antiinflammatory agents of high molecular weight (e.g., Avastin) into the posterior segment that does not require injections into the eye (i.e., intravitreal injections; IVT). Diseases affecting the posterior segment of the eye are currently treated with monthly to bimonthly intravitreal injections, which can predispose patients to severe albeit rare complications like endophthalmitis, retinal detachment, traumatic cataract, and/or increased intraocular. In this study, we show that one time moderate intensity focused ultrasound (MIFU) treatment can facilitate the penetration of large molecules across the scleral barrier, showing promising evidence that this is a viable method to deliver high molecular weight medications not invasively. To validate the efficacy of the drug delivery system, IVT injections of vascular endothelial growth factor (VEGF) were used to create an animal model of retinopathy. The creation of this model allowed us to test anti-VEGF medications and evaluate the efficacy of the treatment. 
In vivo testing showed that animals treated with our MIFU device showed improvement in retinal tortuosity and clinical dilation compared to the control group, as evaluated on fluorescein angiogram (FA) images.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":"2022 ","pages":"9840678"},"PeriodicalIF":0.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521715/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BME frontiers | Pub Date: 2022-06-08 | eCollection Date: 2022-01-01 | DOI: 10.34133/2022/9765095
Heqin Zhu, Qingsong Yao, Li Xiao, S Kevin Zhou
{"title":"Learning to Localize Cross-Anatomy Landmarks in X-Ray Images with a Universal Model.","authors":"Heqin Zhu, Qingsong Yao, Li Xiao, S Kevin Zhou","doi":"10.34133/2022/9765095","DOIUrl":"10.34133/2022/9765095","url":null,"abstract":"<p><p><i>Objective and Impact Statement</i>. In this work, we develop a universal anatomical landmark detection model which learns once from multiple datasets corresponding to different anatomical regions. Compared with the conventional model trained on a single dataset, this universal model not only is more light weighted and easier to train but also improves the accuracy of the anatomical landmark location. <i>Introduction</i>. The accurate and automatic localization of anatomical landmarks plays an essential role in medical image analysis. However, recent deep learning-based methods only utilize limited data from a single dataset. It is promising and desirable to build a model learned from different regions which harnesses the power of big data. <i>Methods</i>. Our model consists of a local network and a global network, which capture local features and global features, respectively. The local network is a fully convolutional network built up with depth-wise separable convolutions, and the global network uses dilated convolution to enlarge the receptive field to model global dependencies. <i>Results</i>. We evaluate our model on four 2D X-ray image datasets totaling 1710 images and 72 landmarks in four anatomical regions. Extensive experimental results show that our model improves the detection accuracy compared to the state-of-the-art methods. <i>Conclusion</i>. Our model makes the first attempt to train a single network on multiple datasets for landmark detection. 
Experimental results qualitatively and quantitatively show that our proposed model performs better than other models trained on multiple datasets and even better than models trained on a single dataset separately.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":"2022 ","pages":"9765095"},"PeriodicalIF":5.0,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521670/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
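The global network above relies on dilated convolutions to enlarge the receptive field. A minimal sketch of why that works, in pure Python; the kernel sizes and dilation rates below are illustrative, not the paper's actual configuration:

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 convolutions.

    layers: list of (kernel_size, dilation) pairs.
    Each stride-1 layer adds (kernel_size - 1) * dilation samples
    to the receptive field.
    """
    rf = 1
    for kernel_size, dilation in layers:
        rf += (kernel_size - 1) * dilation
    return rf

# Illustrative stacks: plain 3-wide convs vs. exponentially dilated ones.
plain = [(3, 1)] * 4
dilated = [(3, 1), (3, 2), (3, 4), (3, 8)]

print(receptive_field(plain))    # 9
print(receptive_field(dilated))  # 31
```

Same depth and parameter count, but the dilated stack sees a 31-sample window instead of 9, which is how the global network captures long-range dependencies cheaply.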
BME frontiers | Pub Date: 2022-06-03 | eCollection Date: 2022-01-01 | DOI: 10.34133/2022/9854084
Joshua K Peeples, Julie F Jameson, Nisha M Kotta, Jonathan M Grasman, Whitney L Stoppel, Alina Zare
{"title":"Jointly Optimized Spatial Histogram UNET Architecture (JOSHUA) for Adipose Tissue Segmentation.","authors":"Joshua K Peeples, Julie F Jameson, Nisha M Kotta, Jonathan M Grasman, Whitney L Stoppel, Alina Zare","doi":"10.34133/2022/9854084","DOIUrl":"10.34133/2022/9854084","url":null,"abstract":"<p><p><i>Objective</i>. We aim to develop a machine learning algorithm to quantify adipose tissue deposition at surgical sites as a function of biomaterial implantation. <i>Impact Statement</i>. To our knowledge, this study is the first investigation to apply convolutional neural network (CNN) models to identify and segment adipose tissue in histological images from silk fibroin biomaterial implants. <i>Introduction</i>. When designing biomaterials for the treatment of various soft tissue injuries and diseases, one must consider the extent of adipose tissue deposition. In this work, we analyzed adipose tissue accumulation in histological images of sectioned silk fibroin-based biomaterials excised from rodents following subcutaneous implantation for 1, 2, 4, or 8 weeks. Current strategies for quantifying adipose tissue after biomaterial implantation are often tedious and prone to human bias during analysis. <i>Methods</i>. We used CNN models with novel spatial histogram layer(s) that can more accurately identify and segment regions of adipose tissue in hematoxylin and eosin (H&E) and Masson's trichrome stained images, allowing for determination of the optimal biomaterial formulation. We compared the method, Jointly Optimized Spatial Histogram UNET Architecture (JOSHUA), to the baseline UNET model and an extension of the baseline model, attention UNET, as well as to versions of the models with a supplemental attention-inspired mechanism (JOSHUA+ and UNET+). <i>Results</i>. The inclusion of histogram layer(s) in our models shows improved performance through qualitative and quantitative evaluation. <i>Conclusion</i>. 
Our results demonstrate that the proposed methods, JOSHUA and JOSHUA+, are highly beneficial for adipose tissue identification and localization. The new histological dataset and code used in our experiments are publicly available.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":"2022 ","pages":"9854084"},"PeriodicalIF":0.0,"publicationDate":"2022-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521712/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
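The spatial histogram layer at the heart of JOSHUA soft-bins feature values so that binning stays differentiable. A minimal pure-Python sketch with Gaussian binning; in the published layer the bin centers and widths are learned, whereas here they are fixed purely for illustration:

```python
import math

def soft_histogram(values, centers, width):
    """Soft-binned histogram: each value contributes to every bin with
    a Gaussian weight exp(-(v - center)^2 / width^2), normalized by the
    number of values. Learned centers/widths in the paper's layer are
    replaced by fixed ones here for illustration."""
    counts = [0.0] * len(centers)
    for v in values:
        for i, c in enumerate(centers):
            counts[i] += math.exp(-((v - c) ** 2) / (width ** 2))
    n = len(values)
    return [c / n for c in counts]

# Three feature values, mostly near 0: the first bin dominates.
hist = soft_histogram([0.1, 0.12, 0.9], centers=[0.0, 0.5, 1.0], width=0.25)
print([round(h, 3) for h in hist])
```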
BME frontiers | Pub Date: 2022-05-11 | eCollection Date: 2022-01-01 | DOI: 10.34133/2022/9764501
Jiapu Li, Yuqing Ma, Tao Zhang, K Kirk Shung, Benpeng Zhu
{"title":"Recent Advancements in Ultrasound Transducer: From Material Strategies to Biomedical Applications.","authors":"Jiapu Li, Yuqing Ma, Tao Zhang, K Kirk Shung, Benpeng Zhu","doi":"10.34133/2022/9764501","DOIUrl":"https://doi.org/10.34133/2022/9764501","url":null,"abstract":"<p><p>Ultrasound is extensively studied for biomedical engineering applications. As the core part of the ultrasonic system, the ultrasound transducer plays a significant role. For the purpose of meeting the requirement of precision medicine, the main challenge for the development of ultrasound transducer is to further enhance its performance. In this article, an overview of recent developments in ultrasound transducer technologies that use a variety of material strategies and device designs based on both the piezoelectric and photoacoustic mechanisms is provided. Practical applications are also presented, including ultrasound imaging, ultrasound therapy, particle/cell manipulation, drug delivery, and nerve stimulation. Finally, perspectives and opportunities are also highlighted.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":"2022 ","pages":"9764501"},"PeriodicalIF":0.0,"publicationDate":"2022-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521713/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BME frontiers | Pub Date: 2022-04-26 | eCollection Date: 2022-01-01 | DOI: 10.34133/2022/9765307
Shuwei Shen, Mengjuan Xu, Fan Zhang, Pengfei Shao, Honghong Liu, Liang Xu, Chi Zhang, Peng Liu, Peng Yao, Ronald X Xu
{"title":"A Low-Cost High-Performance Data Augmentation for Deep Learning-Based Skin Lesion Classification.","authors":"Shuwei Shen, Mengjuan Xu, Fan Zhang, Pengfei Shao, Honghong Liu, Liang Xu, Chi Zhang, Peng Liu, Peng Yao, Ronald X Xu","doi":"10.34133/2022/9765307","DOIUrl":"10.34133/2022/9765307","url":null,"abstract":"<p><p><i>Objective and Impact Statement</i>. There is a need to develop high-performance and low-cost data augmentation strategies for intelligent skin cancer screening devices that can be deployed in rural or underdeveloped communities. The proposed strategy can not only improve the classification performance of skin lesions but also highlight the potential regions of interest for clinicians' attention. This strategy can also be implemented in a broad range of clinical disciplines for early screening and automatic diagnosis of many other diseases in low resource settings. <i>Methods</i>. We propose a high-performance data augmentation strategy of search space 10<sup>1</sup>, which can be combined with any model through a plug-and-play mode and search for the best argumentation method for a medical database with low resource cost. <i>Results</i>. With EfficientNets as a baseline, the best BACC of HAM10000 is 0.853, outperforming the other published models of \"single-model and no-external-database\" for ISIC 2018 Lesion Diagnosis Challenge (Task 3). The best average AUC performance on ISIC 2017 achieves 0.909 (±0.015), exceeding most of the ensembling models and those using external datasets. Performance on Derm7pt archives the best BACC of 0.735 (±0.018) ahead of all other related studies. Moreover, the model-based heatmaps generated by Grad-CAM++ verify the accurate selection of lesion features in model judgment, further proving the scientific rationality of model-based diagnosis. <i>Conclusion</i>. The proposed data augmentation strategy greatly reduces the computational cost for clinically intelligent diagnosis of skin lesions. 
It may also facilitate further research in low-cost, portable, and AI-based mobile devices for skin cancer screening and therapeutic guidance.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":"2022 ","pages":"9765307"},"PeriodicalIF":5.0,"publicationDate":"2022-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521644/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
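The plug-and-play idea above amounts to wrapping model training in a scorer and searching a small augmentation-policy space. A minimal sketch with hypothetical op names and a mock scorer standing in for validation BACC; the paper's actual op set and search procedure are not specified here:

```python
import itertools

# Hypothetical augmentation ops; the real strategy searches its own space.
OPS = ["flip", "rotate", "color_jitter", "cutout"]

def search_best_policy(score_fn, max_ops=2):
    """Try every policy of up to `max_ops` ops and keep the best scorer.
    Plug-and-play: score_fn wraps whatever model/dataset is being tuned,
    so the search itself is model-agnostic."""
    best_policy, best_score = (), float("-inf")
    for r in range(1, max_ops + 1):
        for policy in itertools.combinations(OPS, r):
            s = score_fn(policy)
            if s > best_score:
                best_policy, best_score = policy, s
    return best_policy, best_score

# Mock scorer: pretend flip + cutout gives the best validation metric.
def mock_score(policy):
    return len(set(policy) & {"flip", "cutout"})

print(search_best_policy(mock_score))  # (('flip', 'cutout'), 2)
```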
BME frontiers | Pub Date: 2022-04-12 | eCollection Date: 2022-01-01 | DOI: 10.34133/2022/9813062
Alina Dubatovka, Joachim M Buhmann
{"title":"Automatic Detection of Atrial Fibrillation from Single-Lead ECG Using Deep Learning of the Cardiac Cycle.","authors":"Alina Dubatovka, Joachim M Buhmann","doi":"10.34133/2022/9813062","DOIUrl":"10.34133/2022/9813062","url":null,"abstract":"<p><p><i>Objective and Impact Statement</i>. Atrial fibrillation (AF) is a serious medical condition that requires effective and timely treatment to prevent stroke. We explore deep neural networks (DNNs) for learning cardiac cycles and reliably detecting AF from single-lead electrocardiogram (ECG) signals. <i>Introduction</i>. Electrocardiograms are widely used for diagnosis of various cardiac dysfunctions including AF. The huge amount of collected ECGs and recent algorithmic advances to process time-series data with DNNs substantially improve the accuracy of the AF diagnosis. DNNs, however, are often designed as general purpose black-box models and lack interpretability of their decisions. <i>Methods</i>. We design a three-step pipeline for AF detection from ECGs. First, a recording is split into a sequence of individual heartbeats based on R-peak detection. Individual heartbeats are then encoded using a DNN that extracts interpretable features of a heartbeat by disentangling the duration of a heartbeat from its shape. Second, the sequence of heartbeat codes is passed to a DNN to combine a signal-level representation capturing heart rhythm. Third, the signal representations are passed to a DNN for detecting AF. <i>Results</i>. Our approach demonstrates a superior performance to existing ECG analysis methods on AF detection. Additionally, the method provides interpretations of the features extracted from heartbeats by DNNs and enables cardiologists to study ECGs in terms of the shapes of individual heartbeats and rhythm of the whole signals. <i>Conclusion</i>. 
By considering ECGs on two levels and employing DNNs for modelling of cardiac cycles, this work presents a method for reliable detection of AF from single-lead ECGs.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":"2022 ","pages":"9813062"},"PeriodicalIF":5.0,"publicationDate":"2022-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521743/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
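The first pipeline step above splits the ECG at R-peaks and encodes each heartbeat so that duration is disentangled from shape. A minimal sketch in pure Python, using linear resampling to a fixed length as the "shape" and the sample count as the "duration"; the paper uses a learned DNN encoder rather than plain resampling:

```python
def split_beats(signal, r_peaks):
    """Split a 1-D signal into beats between consecutive R-peak indices."""
    return [signal[r_peaks[i]:r_peaks[i + 1]] for i in range(len(r_peaks) - 1)]

def resample(beat, n=8):
    """Linearly resample a beat to fixed length n, so its shape (the
    resampled curve) is separated from its duration (len(beat))."""
    m = len(beat)
    out = []
    for i in range(n):
        pos = i * (m - 1) / (n - 1)
        lo = int(pos)
        hi = min(lo + 1, m - 1)
        frac = pos - lo
        out.append(beat[lo] * (1 - frac) + beat[hi] * frac)
    return out

# Toy "ECG" with R-peaks at indices 1, 5, 10 (amplitudes 1, 2, 3).
sig = [0, 1, 0, 0, 0, 2, 0, 0, 0, 0, 3, 0]
beats = split_beats(sig, r_peaks=[1, 5, 10])
features = [(len(b), resample(b, 4)) for b in beats]  # (duration, shape)
print(features)
```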
BME frontiers | Pub Date: 2022-04-07 | eCollection Date: 2022-01-01 | DOI: 10.34133/2022/9872028
Zheng Cao, Xiang Pan, Hongyun Yu, Shiyuan Hua, Da Wang, Danny Z Chen, Min Zhou, Jian Wu
{"title":"A Deep Learning Approach for Detecting Colorectal Cancer via Raman Spectra.","authors":"Zheng Cao, Xiang Pan, Hongyun Yu, Shiyuan Hua, Da Wang, Danny Z Chen, Min Zhou, Jian Wu","doi":"10.34133/2022/9872028","DOIUrl":"https://doi.org/10.34133/2022/9872028","url":null,"abstract":"<p><p><i>Objective and Impact Statement.</i> Distinguishing tumors from normal tissues is vital in the intraoperative diagnosis and pathological examination. In this work, we propose to utilize Raman spectroscopy as a novel modality in surgery to detect colorectal cancer tissues. <i>Introduction.</i> Raman spectra can reflect the substance components of the target tissues. However, the feature peak is slight and hard to detect due to environmental noise. Collecting a high-quality Raman spectroscopy dataset and developing effective deep learning detection methods are possibly viable approaches. <i>Methods.</i> First, we collect a large Raman spectroscopy dataset from 26 colorectal cancer patients with the Raman shift ranging from 385 to 1545 cm<math><msup><mrow><mtext> </mtext></mrow><mrow><mo>-</mo><mn>1</mn></mrow></msup></math>. Second, a one-dimensional residual convolutional neural network (1D-ResNet) architecture is designed to classify the tumor tissues of colorectal cancer. Third, we visualize and interpret the fingerprint peaks found by our deep learning model. <i>Results.</i> Experimental results show that our deep learning method achieves 98.5% accuracy in the detection of colorectal cancer and outperforms traditional methods. <i>Conclusion.</i> Overall, Raman spectra are a novel modality for clinical detection of colorectal cancer. 
Our proposed ensemble 1D-ResNet could effectively classify the Raman spectra obtained from colorectal tumor tissues or normal tissues.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":"2022 ","pages":"9872028"},"PeriodicalIF":0.0,"publicationDate":"2022-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521640/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
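A 1D-ResNet is built from residual blocks whose skip connection adds the input back to the convolved signal, so deep stacks can still propagate gradients. A minimal fixed-kernel sketch in pure Python; the paper's blocks use learned kernels, many channels, and normalization, none of which are modeled here:

```python
def conv1d_same(x, kernel):
    """1-D convolution with zero padding; output length == input length."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(x))]

def relu(x):
    return [max(0.0, v) for v in x]

def residual_block_1d(x, kernel):
    """Minimal 1-D residual block: y = relu(conv(x)) + x.
    Illustrates the skip connection only; real blocks learn the kernel."""
    return [a + b for a, b in zip(relu(conv1d_same(x, kernel)), x)]

spectrum = [0.0, 1.0, 4.0, 1.0, 0.0]   # toy Raman intensity profile
smooth = [0.25, 0.5, 0.25]             # fixed smoothing kernel
print(residual_block_1d(spectrum, smooth))  # [0.25, 2.5, 6.5, 2.5, 0.25]
```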
BME frontiers | Pub Date: 2022-04-05 | eCollection Date: 2022-01-01 | DOI: 10.34133/2022/9867230
Chih-Yen Chien, Yaoheng Yang, Yan Gong, Yimei Yue, Hong Chen
{"title":"Blood-Brain Barrier Opening by Individualized Closed-Loop Feedback Control of Focused Ultrasound.","authors":"Chih-Yen Chien, Yaoheng Yang, Yan Gong, Yimei Yue, Hong Chen","doi":"10.34133/2022/9867230","DOIUrl":"10.34133/2022/9867230","url":null,"abstract":"<p><p><i>Objective and Impact Statement</i>. To develop an approach for individualized closed-loop feedback control of microbubble cavitation to achieve safe and effective focused ultrasound in combination with microbubble-induced blood-brain barrier opening (FUS-BBBO). <i>Introduction</i>. FUS-BBBO is a promising strategy for noninvasive and localized brain drug delivery with a growing number of clinical studies currently ongoing. Real-time cavitation monitoring and feedback control are critical to achieving safe and effective FUS-BBBO. However, feedback control algorithms used in the past were either open-loop or without consideration of baseline cavitation level difference among subjects. <i>Methods</i>. This study performed feedback-controlled FUS-BBBO by defining the target cavitation level based on the baseline stable cavitation level of an individual subject with \"dummy\" FUS sonication. The dummy FUS sonication applied FUS with a low acoustic pressure for a short duration in the presence of microbubbles to define the baseline stable cavitation level that took into consideration of individual differences in the detected cavitation emissions. FUS-BBBO was then achieved through two sonication phases: ramping-up phase to reach the target cavitation level and maintaining phase to control the stable cavitation level at the target cavitation level. <i>Results</i>. Evaluations performed in wild-type mice demonstrated that this approach achieved effective and safe trans-BBB delivery of a model drug. The drug delivery efficiency increased as the target cavitation level increased from 0.5 dB to 2 dB without causing vascular damage. 
Increasing the target cavitation level to 3 dB and 4 dB increased the probability of tissue damage. <i>Conclusions</i>. Safe and effective brain drug delivery was achieved using the individualized closed-loop feedback-controlled FUS-BBBO.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":"2022 ","pages":"9867230"},"PeriodicalIF":0.0,"publicationDate":"2022-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521637/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
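The two sonication phases above can be sketched as a simple control loop: a low-pressure "dummy" sonication sets the individual baseline, pressure ramps until baseline + offset is reached, then a proportional update holds the target. The plant below is a toy linear cavitation response and all gains are illustrative, not the paper's controller parameters:

```python
def run_feedback_fus(measure_cavitation_db, target_offset_db,
                     start_pressure=0.1, ramp_step=0.05,
                     n_maintain=20, gain=0.02):
    """Two-phase feedback sketch: (1) ramping-up phase until the
    measured stable cavitation level reaches baseline + offset;
    (2) maintaining phase with proportional pressure updates."""
    baseline = measure_cavitation_db(start_pressure)   # "dummy" sonication
    target = baseline + target_offset_db               # individualized target
    pressure = start_pressure
    while measure_cavitation_db(pressure) < target:    # ramping-up phase
        pressure += ramp_step
    for _ in range(n_maintain):                        # maintaining phase
        err = target - measure_cavitation_db(pressure)
        pressure += gain * err                         # proportional update
    return pressure, measure_cavitation_db(pressure)

# Toy linear "plant": cavitation level (dB) grows with pressure.
measure = lambda p: 10.0 * p
pressure, level = run_feedback_fus(measure, target_offset_db=2.0)
print(round(pressure, 2), round(level, 2))
```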
{"title":"Deep Segmentation Feature-Based Radiomics Improves Recurrence Prediction of Hepatocellular Carcinoma.","authors":"Jifei Wang, Dasheng Wu, Meili Sun, Zhenpeng Peng, Yingyu Lin, Hongxin Lin, Jiazhao Chen, Tingyu Long, Zi-Ping Li, Chuanmiao Xie, Bingsheng Huang, Shi-Ting Feng","doi":"10.34133/2022/9793716","DOIUrl":"https://doi.org/10.34133/2022/9793716","url":null,"abstract":"<p><p><i>Objective and Impact Statement</i>. This study developed and validated a deep semantic segmentation feature-based radiomics (DSFR) model based on preoperative contrast-enhanced computed tomography (CECT) combined with clinical information to predict early recurrence (ER) of single hepatocellular carcinoma (HCC) after curative resection. ER prediction is of great significance to the therapeutic decision-making and surveillance strategy of HCC. <i>Introduction</i>. ER prediction is important for HCC. However, it cannot currently be adequately determined. <i>Methods</i>. Totally, 208 patients with single HCC after curative resection were retrospectively recruited into a model-development cohort (<math><mi>n</mi><mo>=</mo><mn>180</mn></math>) and an independent validation cohort (<math><mi>n</mi><mo>=</mo><mn>28</mn></math>). DSFR models based on different CT phases were developed. The optimal DSFR model was incorporated with clinical information to establish a DSFR-C model. An integrated nomogram based on the Cox regression was established. The DSFR signature was used to stratify high- and low-risk ER groups. <i>Results</i>. A portal phase-based DSFR model was selected as the optimal model (area under receiver operating characteristic curve (AUC): development cohort, 0.740; validation cohort, 0.717). The DSFR-C model achieved AUCs of 0.782 and 0.744 in the development and validation cohorts, respectively. 
In the development and validation cohorts, the integrated nomogram achieved C-index of 0.748 and 0.741 and time-dependent AUCs of 0.823 and 0.822, respectively, for recurrence-free survival (RFS) prediction. The RFS difference between the risk groups was statistically significant (<math><mi>P</mi><mo><</mo><mn>0.0001</mn></math> and <math><mi>P</mi><mo>=</mo><mn>0.045</mn></math> in the development and validation cohorts, respectively). <i>Conclusion</i>. CECT-based DSFR can predict ER in single HCC after curative resection, and its combination with clinical information further improved the performance for ER prediction.</p>","PeriodicalId":72430,"journal":{"name":"BME frontiers","volume":"2022 ","pages":"9793716"},"PeriodicalIF":0.0,"publicationDate":"2022-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10521680/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41241399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
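The nomogram above is scored by the C-index, which for right-censored survival data is the fraction of comparable patient pairs that the risk score orders correctly. A minimal pure-Python sketch of Harrell's C-index on a toy cohort (the cohort values are invented for illustration):

```python
def c_index(times, events, risk_scores):
    """Harrell's concordance index. A pair (i, j) is comparable when
    times[i] < times[j] and subject i had an observed event
    (events[i] == 1, vs. 0 for censoring). The pair is concordant when
    the earlier-failing subject has the higher risk; risk ties count 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: shorter recurrence-free survival gets a higher risk score.
times = [5, 10, 15, 20]      # months to recurrence or censoring
events = [1, 1, 0, 1]        # 1 = recurrence observed, 0 = censored
risk = [0.9, 0.7, 0.4, 0.2]  # model risk scores
print(c_index(times, events, risk))  # → 1.0 (perfectly ordered)
```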