Latent disentanglement in mesh variational autoencoders improves the diagnosis of craniofacial syndromes and aids surgical planning
Authors: Simone Foti, Alexander J. Rickart, Bongjin Koo, Eimear O’Sullivan, Lara S. van de Lande, Athanasios Papaioannou, Roman Khonsari, Danail Stoyanov, N.U. Owase Jeelani, Silvia Schievano, David J. Dunaway, Matthew J. Clarkson
Computer Methods and Programs in Biomedicine 256 (2024), Article 108395. DOI: 10.1016/j.cmpb.2024.108395. Published 2024-08-26.

Background and objective: The use of deep learning for shape analysis of the complexities of the human head holds great promise. However, a number of barriers have traditionally stood in the way of accurate modelling, especially when operating at both a global and a local level.

Methods: We discuss the application of the Swap Disentangled Variational Autoencoder (SD-VAE) to Crouzon, Apert and Muenke syndromes. The model is trained on a dataset of 3D head meshes of healthy and syndromic patients, enlarged with a novel data augmentation technique based on spectral interpolation. Thanks to its semantically meaningful and disentangled latent representation, SD-VAE is used to analyse and generate head shapes while accounting for the influence of different anatomical sub-units.

Results: Although syndrome classification is performed on the entire mesh, it is also possible, for the first time, to analyse the influence of each region of the head on the syndromic phenotype. By manipulating specific parameters of the generative model to produce procedure-specific new shapes, the outcome of a range of craniofacial surgical procedures can also be approximated.

Conclusion: This work opens new avenues to advance diagnosis, aid surgical planning and enable objective evaluation of surgical outcomes. Our code is available at github.com/simofoti/CraniofacialSD-VAE.
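The region-wise analysis described here relies on a latent code partitioned into per-anatomical-sub-unit sub-vectors, so that editing one region reduces to swapping or interpolating one slice of the code. A minimal sketch of that idea follows; the region names and the 12-dimensional layout are hypothetical illustrations, not the paper's actual parameterization.

```python
# Illustrative sketch (not the authors' code): with a disentangled latent
# vector split into per-region sub-vectors, region-specific shape editing
# is a swap or blend of a single slice. Layout below is hypothetical.
REGION_SLICES = {
    "cranium": slice(0, 4),
    "midface": slice(4, 8),
    "mandible": slice(8, 12),
}

def swap_region(z_a, z_b, region):
    """Return a copy of z_a whose `region` sub-vector is taken from z_b."""
    s = REGION_SLICES[region]
    out = list(z_a)
    out[s] = z_b[s]
    return out

def interpolate_region(z_a, z_b, region, t):
    """Blend only the `region` sub-vector of z_a toward z_b by factor t."""
    s = REGION_SLICES[region]
    out = list(z_a)
    out[s] = [(1 - t) * a + t * b for a, b in zip(z_a[s], z_b[s])]
    return out
```

Decoding such an edited code with the trained decoder would then yield a head mesh altered only in the chosen sub-unit, which is how procedure-specific shapes could be approximated.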
A holographic telementoring system depicting surgical instrument movements for real-time guidance in open surgeries
Authors: Malek Anabtawi, Dehlela Shabir, Jhasketan Padhan, Abdulla Al-Ansari, Omar M. Aboumarzouk, Zhigang Deng, Nikhil V. Navkar
Computer Methods and Programs in Biomedicine 256 (2024), Article 108396. DOI: 10.1016/j.cmpb.2024.108396. Published 2024-08-24.

Background and objective: During open surgeries, telementoring serves as a valuable tool for transferring surgical knowledge from a specialist surgeon (mentor) to an operating surgeon (mentee). Depicting the intended movements of the surgical instruments over the operative field improves understanding of the required tool-tissue interaction. The objective of this work is to develop a telementoring system tailored for open surgeries that enables the mentor to remotely demonstrate the necessary motions of surgical instruments to the mentee.

Methods: A remote telementoring system for open surgery was implemented. The system generates visual cues in the form of virtual surgical instrument motion augmented onto the live view of the operative field. These cues can be rendered both on conventional screens in the operating room and as dynamic holograms on a head-mounted display worn by the mentee. The technical performance of the system was evaluated with the operating room and the remote location geographically separated and connected via the Internet. Additionally, user studies were conducted to assess the system's effectiveness as a mentoring tool.

Results: The system took 307 ± 12 ms to transmit an operative field view at 1920 × 1080 resolution, along with depth information spanning 36 cm, from the operating room to the remote location; conversely, it took 145 ± 14 ms to send the motion of the virtual surgical instruments from the remote location back to the operating room. The user studies further demonstrated (a) the mentor's ability to annotate the operative field with an accuracy of 3.92 ± 2.1 mm, (b) the mentee's ability to comprehend and replicate the motion of surgical instruments in real time with an average deviation of 12.8 ± 3 mm, and (c) the efficacy of the rendered dynamic holograms in conveying the intended surgical instrument motion.

Conclusions: The study demonstrates the feasibility of transmitting virtual surgical instrument motion over the Internet from the mentor to the mentee and projecting it as holograms onto the live view of the operative field. This holds potential to enhance real-time collaboration between the mentor and the mentee during open surgery.
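The mentee-replication accuracy quoted above (an average deviation of 12.8 ± 3 mm) is a mean distance between the demonstrated and replicated instrument paths. A minimal sketch of such a metric, assuming time-aligned point correspondence between the two 3D trajectories (the paper's exact evaluation protocol may differ):

```python
import math

def mean_trajectory_deviation(path_ref, path_test):
    """Mean Euclidean distance between time-aligned 3D trajectory samples,
    e.g. a mentor's demonstrated instrument path versus the mentee's
    replication. Assumes one-to-one point correspondence."""
    assert len(path_ref) == len(path_test), "trajectories must be aligned"
    dists = [math.dist(p, q) for p, q in zip(path_ref, path_test)]
    return sum(dists) / len(dists)
```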
Graph-based cell pattern recognition for merging the multi-modal optical microscopic image of neurons
Authors: Wenwei Li, Wu Chen, Zimin Dai, Xiaokang Chai, Sile An, Zhuang Guan, Wei Zhou, Jianwei Chen, Hui Gong, Qingming Luo, Zhao Feng, Anan Li
Computer Methods and Programs in Biomedicine 256 (2024), Article 108392. DOI: 10.1016/j.cmpb.2024.108392. Published 2024-08-24.

A deep understanding of neuron structure and function is crucial for elucidating brain mechanisms and for diagnosing and treating disease. Optical microscopy, pivotal in neuroscience, illuminates neuronal shapes, projections, and electrical activities. To trace the projections of specific functional neurons, scientists have been developing optical multimodal imaging strategies that simultaneously capture dynamic in vivo signals and static ex vivo structures from the same neuron. However, the original position of neurons is highly susceptible to displacement during ex vivo imaging, presenting a significant challenge for integrating multimodal information at the single-neuron level. This study introduces a graph-model-based approach to cell image matching that enables precise, automated pairing of sparsely labeled neurons across different optical microscopic images. Utilizing neuron distribution as the matching feature mitigates modal differences, the high-order graph model addresses scale inconsistency, and nonlinear iteration resolves discrepancies in neuron density. The strategy was applied to a connectivity study of the mouse visual cortex, matching cells between two-photon calcium images and HD-fMOST brain-wide anatomical image sets. Experimental results demonstrate 96.67 % precision, an 85.29 % recall rate, and a 90.63 % F1 score, comparable to expert technicians. This study builds a bridge between functional and structural imaging, offering crucial technical support for neuron classification and circuitry analysis.
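The precision, recall, and F1 figures quoted for cell matching follow the standard definitions over sets of matched pairs. A minimal sketch, treating a match as a tuple of cell identifiers from the two modalities:

```python
def matching_scores(true_pairs, predicted_pairs):
    """Precision, recall and F1 for predicted cell matches, where each
    pair is (id_in_modality_A, id_in_modality_B)."""
    true_pairs, predicted_pairs = set(true_pairs), set(predicted_pairs)
    tp = len(true_pairs & predicted_pairs)          # correctly matched pairs
    precision = tp / len(predicted_pairs) if predicted_pairs else 0.0
    recall = tp / len(true_pairs) if true_pairs else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```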
Importance of the enhanced cooling system for more spherical ablation zones: Numerical simulation, ex vivo and in vivo validation
Authors: Qiao-Wei Du, Fan Xiao, Lin Zheng, Ren-dong Chen, Li-Nan Dong, Fang-Yi Liu, Zhi-Gang Cheng, Jie Yu, Ping Liang
Computer Methods and Programs in Biomedicine 257 (2024), Article 108383. DOI: 10.1016/j.cmpb.2024.108383. Published 2024-08-23.

Introduction: This study investigated the efficacy of a small-gauge microwave ablation (MWA) antenna with an enhanced cooling system (ECS) for generating more spherical ablation zones.

Methods: Two types of microwave ablation antennas were compared, one with the ECS and the other with a conventional cooling system (CCS). The finite element method was used to simulate in vivo ablation. Both antennas were used to create MWA zones for 5, 8, and 10 min at 50, 60, and 80 W in ex vivo bovine livers (n = 6), and for 5 min at 60 W in in vivo porcine livers (n = 16). The overtreatment ratio, ablation aspect ratio, carbonization area, and other characteristics of the antennas were measured and compared using numerical simulation and gross pathologic examination.

Results: In numerical simulation, the ECS antenna demonstrated a lower overtreatment ratio than the CCS antenna (1.38 vs 1.43 at 50 W 5 min, 1.19 vs 1.35 at 50 W 8 min, 1.13 vs 1.32 at 50 W 10 min, 1.28 vs 1.38 at 60 W 5 min, 1.14 vs 1.32 at 60 W 8 min, 1.10 vs 1.30 at 60 W 10 min). The experiments revealed that the ECS antenna generated ablation zones with a larger aspect ratio, i.e. more spherical zones (0.92 ± 0.03 vs 0.72 ± 0.01 at 50 W 5 min, 0.95 ± 0.02 vs 0.70 ± 0.01 at 50 W 8 min, 0.96 ± 0.01 vs 0.71 ± 0.04 at 50 W 10 min, 0.96 ± 0.01 vs 0.73 ± 0.02 at 60 W 5 min, 0.94 ± 0.03 vs 0.71 ± 0.03 at 60 W 8 min, 0.96 ± 0.02 vs 0.69 ± 0.04 at 60 W 10 min) and a smaller carbonization area (0.00 ± 0.00 cm² vs 0.54 ± 0.06 cm² at 50 W 5 min, 0.13 ± 0.03 cm² vs 0.61 ± 0.09 cm² at 50 W 8 min, 0.23 ± 0.05 cm² vs 0.73 ± 0.05 cm² at 50 W 10 min, 0.00 ± 0.00 cm² vs 1.59 ± 0.41 cm² at 60 W 5 min, 0.23 ± 0.22 cm² vs 2.11 ± 0.63 cm² at 60 W 8 min, 0.57 ± 0.09 cm² vs 2.55 ± 0.51 cm² at 60 W 10 min). Intraoperative ultrasound images revealed a hypoechoic area instead of a hyperechoic area near the antenna. Hematoxylin-eosin staining of the dissected tissue revealed a correspondence between the edge of the ablation zone and that of the hypoechoic area.

Conclusions: The ECS antenna can produce more spherical ablation zones with less charring and a clearer intraoperative ultrasound image of the ablation area than the CCS antenna.
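The aspect ratio used above to quantify sphericity can be read as the ratio of the transverse to the longitudinal diameter of an approximately ellipsoidal ablation zone, with 1.0 corresponding to a sphere. This reading, and the prolate-spheroid volume formula below, are illustrative assumptions, not the paper's exact definitions:

```python
import math

def ablation_aspect_ratio(d_transverse, d_longitudinal):
    """Aspect ratio of an ellipsoidal ablation zone: transverse over
    longitudinal diameter; approaches 1.0 as the zone becomes spherical.
    (Definition assumed from context.)"""
    return d_transverse / d_longitudinal

def prolate_spheroid_volume(d_longitudinal, d_transverse):
    """Volume of a prolate spheroid with the two transverse axes equal,
    a common idealization of an MWA ablation zone."""
    a, b = d_longitudinal / 2.0, d_transverse / 2.0
    return 4.0 / 3.0 * math.pi * a * b * b
```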
A diffusion model multi-scale feature fusion network for imbalanced medical image classification research
Authors: Zipiao Zhu, Yang Liu, Chang-An Yuan, Xiao Qin, Feng Yang
Computer Methods and Programs in Biomedicine 256 (2024), Article 108384. DOI: 10.1016/j.cmpb.2024.108384. Published 2024-08-23.

Background and objective: Medical image classification is a core task of traditional medical image analysis, but the available training data are highly imbalanced and the accuracy of medical image classification models is low. In view of these two common problems, this study aims to (i) effectively solve the poor training results caused by class-imbalanced datasets, and (ii) propose a network framework for improving medical image classification results that outperforms existing methods.

Methods: We propose the diffusion model multi-scale feature fusion network (DMSFF), which mainly uses a diffusion generative model to overcome imbalanced classes (DMOIC) on highly imbalanced medical image datasets. The generated images are processed with an image augmentation strategy through cropping (IASTC). Based on the resulting dataset, we design a multi-scale feature fusion network (MSFF) that fully exploits multiple hierarchical features. The DMSFF network can effectively address small, imbalanced samples and low accuracy in medical image classification.

Results: We evaluated the DMSFF network on the highly imbalanced medical image classification datasets APTOS2019 and ISIC2018. Compared with other classification models, DMSFF achieved significant improvements in classification accuracy and F1 score, reaching accuracy/F1 of 0.872/0.731 and 0.906/0.836 on the two datasets, respectively.

Conclusions: The proposed DMSFF architecture outperforms existing methods on both datasets, verifying the effectiveness of generative rebalancing for class-imbalanced datasets and of feature enhancement by multi-scale feature fusion. The method can be applied to other class-imbalanced datasets, where results should likewise improve.
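As context for the class-imbalance problem DMSFF targets: the simplest baseline is to reweight classes by inverse frequency in the loss, which the paper replaces with diffusion-based generative rebalancing. A small sketch of that baseline (illustrative only, not part of DMSFF):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class weights inversely proportional to class frequency,
    normalized so the most frequent class gets weight 1.0. A common
    baseline for imbalanced classification losses."""
    counts = Counter(labels)
    max_count = max(counts.values())
    return {cls: max_count / n for cls, n in counts.items()}
```

Generative rebalancing instead synthesizes new minority-class samples so the classifier trains on a more balanced distribution, avoiding the gradient-scaling distortions that extreme weights can introduce.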
ViT-MAENB7: An innovative breast cancer diagnosis model from 3D mammograms using advanced segmentation and classification process
Authors: Thippaluru Umamaheswari, Y. Murali Mohan Babu
Computer Methods and Programs in Biomedicine 257 (2024), Article 108373. DOI: 10.1016/j.cmpb.2024.108373. Published 2024-08-23.

Breast cancer is one of the most prevalent causes of death for women and is rapidly becoming the leading cause of mortality among women globally. Early detection allows patients to obtain appropriate therapy, increasing their probability of survival, and the adoption of 3-dimensional (3D) mammography for identifying breast abnormalities has reduced deaths dramatically. Accurate detection and classification of breast lumps in 3D mammography is nevertheless difficult, owing to factors such as inadequate contrast and normal fluctuations in tissue density, and several computer-aided diagnosis (CAD) solutions are under development to help radiologists classify breast abnormalities accurately. In this paper, a breast cancer diagnosis model is implemented to detect breast cancer in patients and thereby reduce death rates. 3D mammogram images gathered from the internet are first preprocessed using a median filter and image scaling: the median filter smooths out irregularities and removes noise or artifacts that may interfere with the detection of abnormalities, while image scaling adjusts the size and resolution of the images for better analysis. The preprocessed image is then segmented using the Adaptive Thresholding with Region Growing Fusion Model (AT-RGFM), which combines the advantages of thresholding and region-growing techniques to identify and delineate specific structures within the image, such as organs or tumors, by dividing it into meaningful regions based on intensity, color, texture, or other features. The Modified Garter Snake Optimization Algorithm (MGSOA) is used to optimize the segmentation parameters, enhancing the differentiation between parts of the image and leading to more accurate results. Then, the segmented image is fed into the det…
Metadata information and fundus image fusion neural network for hyperuricemia classification in diabetes
Authors: Jin Wei, Yupeng Xu, Hanying Wang, Tian Niu, Yan Jiang, Yinchen Shen, Li Su, Tianyu Dou, Yige Peng, Lei Bi, Xun Xu, Yufan Wang, Kun Liu
Computer Methods and Programs in Biomedicine 256 (2024), Article 108382. DOI: 10.1016/j.cmpb.2024.108382. Published 2024-08-23.

Objective: In diabetes mellitus patients, hyperuricemia may lead to the development of diabetic complications, including macrovascular and microvascular dysfunction. However, blood uric acid levels in diabetic patients are obtained by sampling peripheral blood, an invasive procedure ill-suited to routine monitoring. We therefore developed a deep learning algorithm to noninvasively detect hyperuricemia from retinal photographs and metadata of patients with diabetes and evaluated its performance in multiethnic populations and different subgroups.

Materials and methods: Because blood uric acid metabolism is directly related to the estimated glomerular filtration rate (eGFR), we first performed a regression task for the eGFR value before the hyperuricemia classification task and reintroduced the regressed eGFR values into the baseline information. We trained three deep learning models: (1) a metadata model adjusted for sex, age, body mass index, duration of diabetes, HbA1c, systolic blood pressure, and diastolic blood pressure; (2) an image model based on fundus photographs; and (3) a hybrid model combining the image and metadata models. Data from the Shanghai General Hospital Diabetes Management Center (ShDMC) were used to develop (6091 participants with diabetes) and internally validate (using 5-fold cross-validation) the models. External testing was performed on an independent dataset (UK Biobank) of 9327 participants with diabetes.

Results: For the eGFR regression task on the ShDMC dataset, the coefficient of determination (R²) was 0.684 ± 0.07 (95 % CI) for the image model, 0.501 ± 0.04 for the metadata model, and 0.727 ± 0.002 for the hybrid model; on the external UK Biobank dataset, R² was 0.647 ± 0.06, 0.627 ± 0.03, and 0.697 ± 0.07, respectively. Our method was demonstrably superior to previous methods. For hyperuricemia classification in ShDMC validation, the area under the curve (AUC) was 0.86 ± 0.013 for the image model, 0.86 ± 0.013 for the metadata model, and 0.92 ± 0.026 for the hybrid model; estimates on UK Biobank were 0.82 ± 0.017, 0.79 ± 0.024, and 0.89 ± 0.032, respectively.

Conclusion: A deep learning algorithm using fundus photographs shows potential as a noninvasive screening adjunct for hyperuricemia among individuals with diabetes, and combining patient metadata enables higher screening accuracy. Visualization showed that the network identifying hyperuricemia focuses mainly on the optic disc region of the fundus.
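The two-stage workflow described in the Methods, regressing eGFR first and then reinjecting the estimate into the metadata for classification, can be sketched abstractly. The function below takes the two models as callables; it is an illustrative pipeline shape, not the authors' implementation:

```python
# Illustrative sketch of the two-stage pipeline: stage 1 regresses eGFR
# from image features and metadata; the estimate is appended to the
# metadata before the stage-2 hyperuricemia classification.
def two_stage_predict(image_feats, metadata, egfr_model, hyperuricemia_model):
    egfr = egfr_model(image_feats, metadata)        # stage 1: regression
    fused_metadata = list(metadata) + [egfr]        # reinject eGFR estimate
    return hyperuricemia_model(image_feats, fused_metadata)  # stage 2
```

With this shape, the classifier always sees the regressed eGFR as an extra metadata feature, mirroring the abstract's description of reintroducing the regression output into the baseline information.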
ATOMMIC: An Advanced Toolbox for Multitask Medical Imaging Consistency to facilitate Artificial Intelligence applications from acquisition to analysis in Magnetic Resonance Imaging
Authors: Dimitrios Karkalousos, Ivana Išgum, Henk A. Marquering, Matthan W.A. Caan
Computer Methods and Programs in Biomedicine 256 (2024), Article 108377. DOI: 10.1016/j.cmpb.2024.108377. Published 2024-08-22.

Background and objectives: Artificial intelligence (AI) is revolutionizing Magnetic Resonance Imaging (MRI) along the acquisition and processing chain. Advanced AI frameworks have been applied to successive tasks such as image reconstruction, quantitative parameter map estimation, and image segmentation. However, existing frameworks are often designed to perform tasks independently of one another, or are focused on specific models or single datasets, limiting generalization. This work introduces the Advanced Toolbox for Multitask Medical Imaging Consistency (ATOMMIC), a novel open-source toolbox that streamlines AI applications for accelerated MRI reconstruction and analysis. ATOMMIC implements several tasks using deep learning (DL) models and enables MultiTask Learning (MTL) to perform related tasks in an integrated manner, targeting generalization in the MRI domain.

Methods: We conducted a comprehensive literature review and analyzed 12,479 GitHub repositories to assess the current landscape of AI frameworks for MRI. We then demonstrate how ATOMMIC standardizes workflows and improves data interoperability, enabling effective benchmarking of various DL models across MRI tasks and datasets. To showcase ATOMMIC's capabilities, we evaluated twenty-five DL models on eight publicly available datasets, covering accelerated MRI reconstruction, segmentation, quantitative parameter map estimation, and joint accelerated MRI reconstruction and segmentation using MTL.

Results: ATOMMIC's high-performance training and testing capabilities, with multi-GPU and mixed-precision support, enable efficient benchmarking of multiple models across various tasks. The framework's modular architecture implements each task through a collection of data loaders, models, loss functions, evaluation metrics, and pre-processing transformations, facilitating seamless integration of new tasks, datasets, and models. Our findings demonstrate that ATOMMIC supports MTL for multiple MRI tasks with harmonized complex-valued and real-valued data support, while maintaining active development and documentation. Task-specific evaluations demonstrate that physics-based models outperform other approaches in reconstructing highly accelerated acquisitions; these high-quality reconstruction models also show superior accuracy in estimating quantitative parameter maps. Furthermore, combining high-performing reconstruction models with robust segmentation networks through MTL improves performance in both tasks.

Conclusions: ATOMMIC advances MRI reconstruction and analysis by leveraging MTL and ensuring consistency across tasks, models, and datasets. This comprehensive framework serves as a versatile platform for researchers to use existing AI methods and develop new approaches in medical imaging.
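A multitask framework such as ATOMMIC must combine per-task losses, e.g. a reconstruction loss and a segmentation loss, into a single training objective. The simplest composition is a weighted sum, sketched below as an illustration (ATOMMIC's actual loss handling may differ):

```python
def multitask_loss(losses, weights):
    """Weighted sum of per-task losses, the most basic way an MTL
    objective combines e.g. reconstruction and segmentation terms.
    Both arguments map task name -> scalar."""
    assert set(losses) == set(weights), "every task needs a weight"
    return sum(weights[task] * losses[task] for task in losses)
```

In practice frameworks often expose these weights in a task configuration file, and more elaborate schemes (e.g. learned or uncertainty-based weighting) replace the fixed scalars.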
Evaluation of tumor budding with virtual panCK stains generated by novel multi-model CNN framework
Authors: Xingzhong Hou, Zhen Guan, Xianwei Zhang, Xiao Hu, Shuangmei Zou, Chunzi Liang, Lulin Shi, Kaitai Zhang, Haihang You
Computer Methods and Programs in Biomedicine 257 (2024), Article 108352. DOI: 10.1016/j.cmpb.2024.108352. Published 2024-08-22.

As the global incidence of cancer continues to rise rapidly, the need for swift and precise diagnoses has become increasingly pressing. Pathologists commonly rely on H&E-panCK stain pairs for various aspects of cancer diagnosis, including the detection of occult tumor cells and the evaluation of tumor budding. Nevertheless, conventional chemical staining methods suffer from notable drawbacks, such as time-intensive processes and irreversible staining outcomes. Virtual staining, leveraging generative adversarial networks (GANs), has emerged as a promising alternative: it aims to transform biopsy scans (often H&E) into other stain types. Despite notable progress in recent years, current state-of-the-art virtual staining models face challenges that hinder their efficacy, particularly in achieving accurate staining outcomes under specific conditions, and these limitations have impeded the practical integration of virtual staining into diagnostic practice. To produce virtual panCK stains capable of replacing chemical panCK, we propose an innovative multi-model framework. Our approach employs a combination of Mask-RCNN (for cell segmentation) and GAN models to extract the cytokeratin distribution from chemical H&E images, and introduces a tailored dynamic GAN model that converts H&E images into virtual panCK stains by integrating the derived cytokeratin distribution. The framework is motivated by the fact that the distinctive panCK pattern derives from the cytokeratin distribution. As a proof of concept, we employ our virtual panCK stains to evaluate tumor budding in 45 H&E whole-slide images taken from breast-cancer-invaded lymph nodes. Through thorough validation by both pathologists and the QuPath software, our virtual panCK stains demonstrate a remarkable level of accuracy; in stark contrast, the accuracy of state-of-the-art single-cycleGAN virtual panCK stains is negligible. To the best of our knowledge, this is the first multi-model virtual panCK framework and the first use of virtual panCK for tumor budding assessment. Our framework generates dependable virtual panCK stains with significantly improved efficiency, considerably reducing diagnostic turnaround times, and its outcomes are easily comprehensible even to pathologists who are not well-versed in computer technology. We believe this framework can advance the field of virtual staining and make significant strides toward improved cancer diagnosis.
On the application of hybrid deep 3D convolutional neural network algorithms for predicting the micromechanics of brain white matter
Authors: Xuehai Wu, Parameshwaran Pasupathy, Assimina A. Pelegri
Computer Methods and Programs in Biomedicine 256 (2024), Article 108381. DOI: 10.1016/j.cmpb.2024.108381. Published 2024-08-22.

Background: Material characterization of brain white matter (BWM) is difficult because of the anisotropy inherent in its three-dimensional microstructure and the various interactions between heterogeneous brain tissues (axon, myelin, and glia). Developing full-scale finite element models that accurately represent the relationship between the micro- and macroscale BWM is extremely challenging and computationally expensive. The anisotropic properties of the BWM microstructure, computed by building unit cells under frequency-domain viscoelasticity, comprise 36 individual constants each for the loss and storage moduli, and the architecture of each unit cell is arbitrary within an infinite dataset.

Methods: In this study, we extend our previous work on representative volume elements (RVEs) of the BWM microstructure in the frequency domain to develop 3D deep learning algorithms that predict the anisotropic composite properties. The deep 3D convolutional neural network (CNN) algorithms use a voxelization method to obtain geometry information from the 3D RVEs: the architecture information encoded in the voxelized locations serves as input data, cross-referenced against the RVEs' material properties (output data). We further improve the efficiency of the deep learning algorithms by incorporating parallel pathways, residual neural networks, and inception modules.

Results: This paper presents different CNN algorithms for predicting the anisotropic composite properties of BWM, with a quantitative analysis of the individual algorithms aimed at identifying optimal strategies for interpreting combined brain MRE and DTI measurements.

Significance: The proposed Multiscale 3D ResNet (M3DR) algorithm demonstrates high learning ability and outperforms baseline CNN algorithms in predicting BWM tissue properties. The hybrid M3DR framework also overcomes significant limitations of modeling brain tissue with finite elements alone, including high computational cost and mesh or simulation failure, and provides an efficient, streamlined platform for implementing complex boundary conditions, modeling intrinsic material properties, and imparting interfacial architecture information.
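The voxelization step the Methods describe, encoding an RVE's geometry as the input to a 3D CNN, can be sketched as an occupancy grid over sampled geometry points. Grid resolution and the binary encoding below are illustrative assumptions, not the paper's exact scheme:

```python
def voxelize(points, grid_shape, bounds_min, bounds_max):
    """Occupancy-grid voxelization of 3D points: each point marks the
    voxel it falls into as 1, sketching how RVE geometry can be encoded
    for a 3D CNN. Points outside the bounds clamp to the nearest voxel."""
    nx, ny, nz = grid_shape
    grid = [[[0] * nz for _ in range(ny)] for _ in range(nx)]
    for p in points:
        idx = []
        for d, (lo, hi, n) in enumerate(zip(bounds_min, bounds_max, grid_shape)):
            i = int((p[d] - lo) / (hi - lo) * n)   # map coordinate to voxel
            idx.append(min(max(i, 0), n - 1))      # clamp to grid
        grid[idx[0]][idx[1]][idx[2]] = 1
    return grid
```

A real pipeline would emit such grids as tensors (one channel per material phase, e.g. axon/myelin/glia) rather than a single binary channel.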