{"title":"2D/3D deformable registration for endoscopic camera images using self-supervised offline learning of intraoperative pneumothorax deformation","authors":"Tomoki Oya , Yuka Kadomatsu , Toyofumi Fengshi Chen-Yoshikawa , Megumi Nakao","doi":"10.1016/j.compmedimag.2024.102418","DOIUrl":"10.1016/j.compmedimag.2024.102418","url":null,"abstract":"<div><p>Shape registration of patient-specific organ shapes to endoscopic camera images is expected to be a key to realizing image-guided surgery, and a variety of applications of machine learning methods have been considered. Because the number of training data available from clinical cases is limited, the use of synthetic images generated from a statistical deformation model has been attempted; however, the influence on estimation caused by the difference between synthetic images and real scenes is a problem. In this study, we propose a self-supervised offline learning framework for model-based registration using image features commonly obtained from synthetic images and real camera images. Because of the limited number of endoscopic images available for training, we use synthetic images generated from a nonlinear deformation model that represents possible intraoperative pneumothorax deformations. In order to address the difficulty of estimating deformed shapes and viewpoints from the common image features obtained from synthetic and real images, we attempted to reduce the registration error by adding the shading and distance information that can be obtained as prior knowledge in the synthetic image. Shape registration with real camera images is performed by learning the task of predicting the differential model parameters between two synthetic images.
The developed framework achieved registration accuracy with a mean absolute error of less than 10 mm and a mean distance of less than 5 mm in a thoracoscopic pulmonary cancer resection, confirming improved prediction accuracy compared with conventional methods.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102418"},"PeriodicalIF":5.4,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0895611124000958/pdfft?md5=3066bd94344d2f3879bdc4b7435a2810&pid=1-s2.0-S0895611124000958-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141851662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient multi-stage feedback attention for diverse lesion in cancer image segmentation","authors":"Dewa Made Sri Arsa , Talha Ilyas , Seok-Hwan Park , Leon Chua , Hyongsuk Kim","doi":"10.1016/j.compmedimag.2024.102417","DOIUrl":"10.1016/j.compmedimag.2024.102417","url":null,"abstract":"<div><p>In the domain of Computer-Aided Diagnosis (CAD) systems, the accurate identification of cancer lesions is paramount, given the life-threatening nature of cancer and the complexities inherent in its manifestation. This task is particularly arduous due to the often vague boundaries of cancerous regions, compounded by the presence of noise and the heterogeneity in the appearance of lesions, making precise segmentation a critical yet challenging endeavor. This study introduces an innovative iterative feedback mechanism tailored for the nuanced detection of cancer lesions in a variety of medical imaging modalities, offering a refining phase to adjust detection results. The core of our approach is the elimination of the need for an initial segmentation mask, a common limitation of iterative segmentation methods. Instead, we utilize a novel system where the feedback for refining segmentation is derived directly from the encoder–decoder architecture of our neural network model. This shift allows for more dynamic and accurate lesion identification. To further enhance the accuracy of our CAD system, we employ a multi-scale feedback attention mechanism to guide and refine the predicted mask in subsequent iterations. In parallel, we introduce a sophisticated weighted feedback loss function. This function synergistically combines global and iteration-specific loss considerations, thereby refining parameter estimation and improving the overall precision of the segmentation. We conducted comprehensive experiments across three distinct categories of medical imaging: colonoscopy, ultrasonography, and dermoscopic images.
The experimental results demonstrate that our method not only competes favorably with but also surpasses current state-of-the-art methods in various scenarios, including both standard and challenging out-of-domain tasks. This evidences the robustness and versatility of our approach in accurately identifying cancer lesions across a spectrum of medical imaging contexts. Our source code can be found at <span><span>https://github.com/dewamsa/EfficientFeedbackNetwork</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102417"},"PeriodicalIF":5.4,"publicationDate":"2024-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141716333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ScribSD+: Scribble-supervised medical image segmentation based on simultaneous multi-scale knowledge distillation and class-wise contrastive regularization","authors":"Yijie Qu , Tao Lu , Shaoting Zhang , Guotai Wang","doi":"10.1016/j.compmedimag.2024.102416","DOIUrl":"10.1016/j.compmedimag.2024.102416","url":null,"abstract":"<div><p>Although deep learning has achieved state-of-the-art performance for automatic medical image segmentation, it often requires a large number of pixel-level manual annotations for training. Obtaining these high-quality annotations is time-consuming and requires specialized knowledge, which hinders widespread applications that rely on such annotations to train models with good segmentation performance. Using scribble annotations can substantially reduce the annotation cost, but often leads to poor segmentation performance due to insufficient supervision. In this work, we propose a novel framework named ScribSD+ that is based on multi-scale knowledge distillation and class-wise contrastive regularization for learning from scribble annotations. For a student network supervised by scribbles and a teacher based on the Exponential Moving Average (EMA), we first introduce multi-scale prediction-level Knowledge Distillation (KD) that leverages soft predictions of the teacher network to supervise the student at multiple scales, and then propose class-wise contrastive regularization which encourages feature similarity within the same class and dissimilarity across different classes, thereby effectively improving the segmentation performance of the student network. Experimental results on the ACDC dataset for heart structure segmentation and a fetal MRI dataset for placenta and fetal brain segmentation demonstrate that our method significantly improves the student’s performance and outperforms five state-of-the-art scribble-supervised learning methods.
Consequently, the method has a potential for reducing the annotation cost in developing deep learning models for clinical diagnosis.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102416"},"PeriodicalIF":5.4,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141629743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comprehensive approach for evaluating lymphovascular invasion in invasive breast cancer: Leveraging multimodal MRI findings, radiomics, and deep learning analysis of intra- and peritumoral regions","authors":"Wen Liu , Li Li , Jiao Deng , Wei Li","doi":"10.1016/j.compmedimag.2024.102415","DOIUrl":"10.1016/j.compmedimag.2024.102415","url":null,"abstract":"<div><h3>Purpose</h3><p>To evaluate lymphovascular invasion (LVI) in breast cancer by comparing the diagnostic performance of preoperative multimodal magnetic resonance imaging (MRI)-based radiomics and deep-learning (DL) models.</p></div><div><h3>Methods</h3><p>This retrospective study included 262 patients with breast cancer—183 in the training cohort (144 LVI-negative and 39 LVI-positive cases) and 79 in the validation cohort (59 LVI-negative and 20 LVI-positive cases). Radiomics features were extracted from the intra- and peritumoral breast regions using multimodal MRI to generate gross tumor volume (GTV)_radiomics and gross tumor volume plus peritumoral volume (GPTV)_radiomics. Subsequently, DL models (GTV_DL and GPTV_DL) were constructed based on the GTV and GPTV to determine the LVI status. Finally, the most effective radiomics and DL models were integrated with imaging findings to establish a hybrid model, which was converted into a nomogram to quantify the LVI risk.</p></div><div><h3>Results</h3><p>The diagnostic efficiency of GPTV_DL was superior to that of GTV_DL (areas under the curve [AUCs], 0.771 and 0.720, respectively). Similarly, GPTV_radiomics outperformed GTV_radiomics (AUCs, 0.685 and 0.636, respectively). Univariate and multivariate logistic regression analyses revealed an association between LVI status and imaging findings, such as MRI-reported axillary lymph nodes and peritumoral edema (AUC, 0.665).
The hybrid model achieved the highest accuracy by combining GPTV_DL, GPTV_radiomics, and imaging findings (AUC, 0.872).</p></div><div><h3>Conclusion</h3><p>The diagnostic efficiency of the GPTV-derived radiomics and DL models surpassed that of the GTV-derived models. Furthermore, the hybrid model, which incorporated GPTV_DL, GPTV_radiomics, and imaging findings, demonstrated the effective determination of LVI status prior to surgery in patients with breast cancer.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102415"},"PeriodicalIF":5.4,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141692455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accurate segmentation of liver tumor from multi-modality non-contrast images using a dual-stream multi-level fusion framework","authors":"Chenchu Xu , Xue Wu , Boyan Wang , Jie Chen , Zhifan Gao , Xiujian Liu , Heye Zhang","doi":"10.1016/j.compmedimag.2024.102414","DOIUrl":"10.1016/j.compmedimag.2024.102414","url":null,"abstract":"<div><p>The use of multi-modality non-contrast images (i.e., T1FS, T2FS and DWI) for segmenting liver tumors provides a solution by eliminating the use of contrast agents and is crucial for clinical diagnosis. However, discovering the most useful information for fusing multi-modality images into an accurate segmentation remains a challenging task due to inter-modal interference. In this paper, we propose a dual-stream multi-level fusion framework (DM-FF) to, for the first time, accurately segment liver tumors from non-contrast multi-modality images directly. Our DM-FF first designs an attention-based encoder–decoder to effectively extract multi-level feature maps corresponding to a specified representation of each modality. Then, DM-FF creates two types of fusion modules, in which one module fuses learned features to obtain a shared representation across multi-modality images to exploit commonalities and improve the performance, and the other module fuses the decision evidence of the segmentation to discover differences between modalities and prevent interference caused by modality conflict. By integrating these components, DM-FF enables multi-modality non-contrast images to cooperate with each other and enables accurate segmentation. In an evaluation on 250 patients with different types of tumors from two MRI scanners, DM-FF achieves a Dice of 81.20% and improves performance (Dice by at least 11%) compared with eight state-of-the-art segmentation architectures.
The results indicate that our DM-FF significantly promotes the development and deployment of non-contrast liver tumor technology.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102414"},"PeriodicalIF":5.4,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141564962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Radiomic-based prediction of lesion-specific systemic treatment response in metastatic disease","authors":"Caryn Geady , Farnoosh Abbas-Aghababazadeh , Andres Kohan , Scott Schuetze , David Shultz , Benjamin Haibe-Kains","doi":"10.1016/j.compmedimag.2024.102413","DOIUrl":"10.1016/j.compmedimag.2024.102413","url":null,"abstract":"<div><p>Despite sharing the same histologic classification, individual tumors in multi-metastatic patients may present with different characteristics and varying sensitivities to anticancer therapies. In this study, we investigate the utility of radiomic biomarkers for prediction of lesion-specific treatment resistance in multi-metastatic leiomyosarcoma patients. Using a dataset of n=202 lung metastases (LM) from n=80 patients with 1648 pre-treatment computed tomography (CT) radiomics features and LM progression determined from follow-up CT, we developed a radiomic model to predict the progression of each lesion. Repeat experiments assessed the relative predictive performance across LM volume groups. Lesion-specific radiomic models indicate up to a 4.5-fold increase in predictive capacity compared with a no-skill classifier, with an area under the precision-recall curve of 0.70 for the most precise model (FDR = 0.05). Precision varied by administered drug and LM volume. The effect of LM volume was controlled by removing radiomic features at a volume-correlation coefficient threshold of 0.20.
Predicting lesion-specific responses using radiomic features represents a novel strategy by which to assess treatment response that acknowledges biological diversity within metastatic subclones, which could facilitate management strategies involving selective ablation of resistant clones in the setting of systemic therapy.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102413"},"PeriodicalIF":5.4,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141472247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fragment distance-guided dual-stream learning for automatic pelvic fracture segmentation","authors":"Bolun Zeng , Huixiang Wang , Leo Joskowicz , Xiaojun Chen","doi":"10.1016/j.compmedimag.2024.102412","DOIUrl":"10.1016/j.compmedimag.2024.102412","url":null,"abstract":"<div><p>Pelvic fracture is a complex and severe injury. Accurate diagnosis and treatment planning require the segmentation of the pelvic structure and the fractured fragments from preoperative CT scans. However, this segmentation is a challenging task, as the fragments from a pelvic fracture typically exhibit considerable variability and irregularity in their morphologies, locations, and quantities. In this study, we propose a novel dual-stream learning framework for the automatic segmentation and category labeling of pelvic fractures. Our method uniquely identifies pelvic fracture fragments in various quantities and locations using a dual-branch architecture that leverages distance learning from bone fragments. Moreover, we develop a multi-size feature fusion module that adaptively aggregates features from diverse receptive fields tailored to targets of different sizes and shapes, thus boosting segmentation performance. Extensive experiments on three pelvic fracture datasets from different medical centers demonstrated the accuracy and generalizability of the proposed method. It achieves a mean Dice coefficient and mean Sensitivity of 0.935<span><math><mo>±</mo></math></span>0.068 and 0.929<span><math><mo>±</mo></math></span>0.058 in the dataset FracCLINIC, and 0.955<span><math><mo>±</mo></math></span>0.072 and 0.912<span><math><mo>±</mo></math></span>0.125 in the dataset FracSegData, which are superior to those of other competing methods.
Our method optimizes the process of pelvic fracture segmentation, potentially serving as an effective tool for preoperative planning in the clinical management of pelvic fractures.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102412"},"PeriodicalIF":5.4,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141472246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Precision dose prediction for breast cancer patients undergoing IMRT: The Swin-UMamba-Channel Model","authors":"Hui Xie , Hua Zhang , Zijie Chen , Tao Tan","doi":"10.1016/j.compmedimag.2024.102409","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102409","url":null,"abstract":"<div><h3>Background</h3><p>Radiation therapy is one of the crucial treatment modalities for cancer. An excellent radiation therapy plan relies heavily on an outstanding dose distribution map, which is traditionally generated through repeated trials and adjustments by experienced physicists. However, this process is both time-consuming and labor-intensive, and it comes with a degree of subjectivity. Now, with the powerful capabilities of deep learning, we are able to predict dose distribution maps more accurately, effectively overcoming these challenges.</p></div><div><h3>Methods</h3><p>In this study, we propose a novel Swin-UMamba-Channel prediction model specifically designed for predicting the dose distribution of patients with left breast cancer undergoing radiotherapy after total mastectomy. This model integrates anatomical position information of organs and ray angle information, significantly enhancing prediction accuracy. Through iterative training of the generator (Swin-UMamba) and discriminator, the model can generate images that closely match the actual dose, assisting physicists in quickly creating DVH curves and shortening the treatment planning cycle. Our model exhibits excellent performance in terms of prediction accuracy, computational efficiency, and practicality, and its effectiveness has been further verified through comparative experiments with similar networks.</p></div><div><h3>Results</h3><p>The results of the study indicate that our model can accurately predict the clinical dose of breast cancer patients undergoing intensity-modulated radiation therapy (IMRT). 
The predicted dose range is from 0 to 50 Gy, and compared with actual data, it shows a high accuracy with an average Dice similarity coefficient of 0.86. Specifically, the average dose change rate for the planning target volume ranges from 0.28 % to 1.515 %, while the average dose change rates for the right and left lungs are 2.113 % and 0.508 %, respectively. Notably, due to their small sizes, the heart and spinal cord exhibit relatively higher average dose change rates, reaching 3.208 % and 1.490 %, respectively. In comparison with similar dose studies, our model demonstrates superior performance. Additionally, our model possesses fewer parameters, lower computational complexity, and shorter processing time, further enhancing its practicality and efficiency. These findings provide strong evidence for the accuracy and reliability of our model in predicting doses, offering significant technical support for IMRT in breast cancer patients.</p></div><div><h3>Conclusion</h3><p>This study presents a novel Swin-UMamba-Channel dose prediction model, and its results demonstrate its precise prediction of clinical doses for the target area of left breast cancer patients undergoing total mastectomy and IMRT. 
These remarkable achievements provide valuable reference data for subsequent plan optimization and quality control, paving a new path for the application of deep learning in","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102409"},"PeriodicalIF":5.7,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141323312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing trabecular CT scans based on deep learning with multi-strategy fusion","authors":"Peixuan Ge , Shibo Li , Yefeng Liang , Shuwei Zhang , Lihai Zhang , Ying Hu , Liang Yao , Pak Kin Wong","doi":"10.1016/j.compmedimag.2024.102410","DOIUrl":"10.1016/j.compmedimag.2024.102410","url":null,"abstract":"<div><p>Trabecular bone analysis plays a crucial role in understanding bone health and disease, with applications like osteoporosis diagnosis. This paper presents a comprehensive study on 3D trabecular computed tomography (CT) image restoration, addressing significant challenges in this domain. The research introduces a backbone model, Cascade-SwinUNETR, for single-view 3D CT image restoration. This model leverages deep layer aggregation with supervision and the capabilities of the Swin Transformer to excel at feature extraction. Additionally, this study introduces DVSR3D, a dual-view restoration model that achieves strong performance through deep feature fusion with attention mechanisms and autoencoders. Furthermore, an Unsupervised Domain Adaptation (UDA) method is introduced, allowing models to adapt to input data distributions without additional labels, holding significant potential for real-world medical applications and eliminating the need for invasive data collection procedures. The study also includes the curation of a new dual-view dataset for CT image restoration, addressing the scarcity of real human bone data in Micro-CT. Finally, the dual-view approach is validated through downstream medical bone microstructure measurements.
Our contributions open several paths for trabecular bone analysis, promising improved clinical outcomes in bone health assessment and diagnosis.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102410"},"PeriodicalIF":5.4,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141400693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An automatic radiomic-based approach for disease localization: A pilot study on COVID-19","authors":"Giulia Varriano , Vittoria Nardone , Simona Correra, Francesco Mercaldo, Antonella Santone","doi":"10.1016/j.compmedimag.2024.102411","DOIUrl":"10.1016/j.compmedimag.2024.102411","url":null,"abstract":"<div><p>Radiomics is an innovative field in Personalized Medicine that helps medical specialists in diagnosis and prognosis. Mainly, the application of Radiomics to medical images requires the definition and delimitation of the Region Of Interest (ROI) on the medical image to extract radiomic features. The aim of this preliminary study is to define an approach that automatically detects the specific areas indicative of a particular disease and examines them to minimize diagnostic errors associated with false positives and false negatives. This approach creates an <span><math><mrow><mi>n</mi><mo>×</mo><mi>n</mi></mrow></math></span> grid on the DICOM image sequence, and each cell in the matrix is associated with a region from which radiomic features can be extracted.</p><p>The proposed procedure uses the Model Checking technique and produces as output the medical diagnosis of the patient, i.e., whether the patient under analysis is affected or not by a specific disease. Furthermore, the matrix-based method also localizes where the disease marks appear. To evaluate the performance of the proposed methodology, a case study on COVID-19 disease is used. Both results on disease identification and localization seem very promising. Furthermore, this proposed approach yields better results compared to methods based on the extraction of features using the whole image as a single ROI, as evidenced by improvements in Accuracy and especially Recall.
Our approach supports the advancement of knowledge, interoperability and trust in the software tool, fostering collaboration among doctors, staff and Radiomics.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102411"},"PeriodicalIF":5.4,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141394565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}