Medical Image Analysis: Latest Articles

PSFHS challenge report: Pubic symphysis and fetal head segmentation from intrapartum ultrasound images
IF 10.7 | CAS Tier 1, Medicine
Medical Image Analysis, Vol. 99, Article 103353 | Published: 2024-09-21 | DOI: 10.1016/j.media.2024.103353
Authors: Jieyun Bai, Zihao Zhou, Zhanhong Ou, Gregor Koehler, Raphael Stock, Klaus Maier-Hein, Marawan Elbatel, Robert Martí, Xiaomeng Li, Yaoyang Qiu, Panjie Gou, Gongping Chen, Lei Zhao, Jianxun Zhang, Yu Dai, Fangyijie Wang, Guénolé Silvestre, Kathleen Curran, Hongkun Sun, Jing Xu, Karim Lekadir
Abstract: Segmentation of fetal and maternal structures, particularly in intrapartum ultrasound imaging as advocated by the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) for monitoring labor progression, is a crucial first step for quantitative diagnosis and clinical decision-making. This requires specialized analysis by obstetrics professionals, in a task that (i) is highly time-consuming and costly and (ii) often yields inconsistent results. The utility of automatic segmentation algorithms for biometry has been proven, though existing results remain suboptimal. To push forward advancements in this area, the Grand Challenge on Pubic Symphysis-Fetal Head Segmentation (PSFHS) was held alongside the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023). This challenge aimed to enhance the development of automatic segmentation algorithms at an international scale, providing the largest dataset to date with 5,101 intrapartum ultrasound images collected from two ultrasound machines across three hospitals from two institutions. The scientific community's enthusiastic participation led to the selection of the top 8 out of 179 entries from 193 registrants in the initial phase to proceed to the competition's second stage. These algorithms have elevated the state of the art in automatic PSFHS from intrapartum ultrasound images. A thorough analysis of the results pinpointed ongoing challenges in the field and outlined recommendations for future work. The top solutions and the complete dataset remain publicly available, fostering further advancements in automatic segmentation and biometry for intrapartum ultrasound imaging.
Citations: 0
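The abstract does not list the challenge's evaluation metrics, but segmentation challenges of this kind are conventionally scored with overlap measures such as the Dice similarity coefficient. A minimal sketch of that metric (the function name and toy masks are illustrative, not taken from the challenge):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: a predicted pubic-symphysis mask vs. a slightly shifted ground truth.
pred = np.zeros((256, 256), dtype=np.uint8)
pred[100:150, 80:160] = 1
gt = np.zeros((256, 256), dtype=np.uint8)
gt[105:155, 85:165] = 1
print(f"Dice: {dice_score(pred, gt):.3f}")
```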
Fourier Convolution Block with global receptive field for MRI reconstruction
IF 10.7 | CAS Tier 1, Medicine
Medical Image Analysis, Vol. 99, Article 103349 | Published: 2024-09-20 | DOI: 10.1016/j.media.2024.103349
Authors: Haozhong Sun, Yuze Li, Zhongsen Li, Runyu Yang, Ziming Xu, Jiaqi Dou, Haikun Qi, Huijun Chen
Abstract: Reconstructing images from under-sampled Magnetic Resonance Imaging (MRI) signals significantly reduces scan time and improves clinical practice. However, Convolutional Neural Network (CNN)-based methods, while demonstrating great performance in MRI reconstruction, may face limitations due to their restricted receptive field (RF), hindering the capture of global features. This is particularly crucial for reconstruction, as aliasing artifacts are distributed globally. Recent advancements in Vision Transformers have further emphasized the significance of a large RF. In this study, we propose a novel global Fourier Convolution Block (FCB) with a whole-image RF and low computational complexity, obtained by transforming regular spatial-domain convolutions into the frequency domain. Visualizations of the effective RF and trained kernels demonstrate that FCB improves the RF of reconstruction models in practice. The proposed FCB was evaluated on four popular CNN architectures using brain and knee MRI datasets. Models with FCB achieved superior PSNR and SSIM compared to baseline models and recovered more details and texture. The code is publicly available at https://github.com/Haozhoong/FCB.
Citations: 0
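The core idea, that elementwise multiplication in the Fourier domain acts as a convolution whose support spans the whole image, can be sketched in a few lines of PyTorch. This is a toy layer illustrating the principle, not the authors' implementation (which is available at the repository linked above):

```python
import torch
import torch.nn as nn

class FourierConvBlock(nn.Module):
    """Toy frequency-domain convolution: a learnable complex filter applied
    to the full spectrum is equivalent to a circular convolution covering
    the entire spatial grid, i.e. a whole-image receptive field."""

    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # Learnable complex weights over the half-spectrum produced by rfft2.
        self.weight = nn.Parameter(
            0.02 * torch.randn(channels, height, width // 2 + 1, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width), real-valued feature maps.
        spec = torch.fft.rfft2(x, norm="ortho")   # to the frequency domain
        spec = spec * self.weight                 # global "convolution"
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

x = torch.randn(2, 8, 64, 64)
print(FourierConvBlock(8, 64, 64)(x).shape)  # torch.Size([2, 8, 64, 64])
```

Because the FFT costs O(N log N), the whole-image receptive field comes without the quadratic cost of a dense spatial kernel.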
Re-identification from histopathology images
IF 10.7 | CAS Tier 1, Medicine
Medical Image Analysis, Vol. 99, Article 103335 | Published: 2024-09-19 | DOI: 10.1016/j.media.2024.103335
Authors: Jonathan Ganz, Jonas Ammeling, Samir Jabari, Katharina Breininger, Marc Aubreville
Abstract: In numerous studies, deep learning algorithms have proven their potential for the analysis of histopathology images, for example, for revealing the subtypes of tumors or the primary origin of metastases. These models require large datasets for training, which must be anonymized to prevent possible patient identity leaks. This study demonstrates that even relatively simple deep learning algorithms can re-identify patients in large histopathology datasets with substantial accuracy. In addition, we compared a comprehensive set of state-of-the-art whole-slide image classifiers and feature extractors for the given task. We evaluated our algorithms on two TCIA datasets including lung squamous cell carcinoma (LSCC) and lung adenocarcinoma (LUAD). We also demonstrate the algorithm's performance on an in-house dataset of meningioma tissue. We predicted the source patient of a slide with F1 scores of up to 80.1% and 77.19% on the LSCC and LUAD datasets, respectively, and 77.09% on our meningioma dataset. Based on our findings, we formulated a risk assessment scheme to estimate the risk to the patient's privacy prior to publication.
Open access PDF: https://www.sciencedirect.com/science/article/pii/S1361841524002603/pdfft?md5=6efea46ba696d683bc55409496e68f7b&pid=1-s2.0-S1361841524002603-main.pdf
Citations: 0
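The abstract does not describe the matching procedure in detail; one simple scheme consistent with the setup is nearest-neighbor matching of slide-level feature embeddings. A hypothetical sketch (the embeddings and patient IDs are stand-ins):

```python
import numpy as np

def reidentify(query_embs, gallery_embs, gallery_patient_ids):
    """Predict each query slide's source patient as the patient of its most
    cosine-similar gallery embedding."""
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = q @ g.T                      # (n_query, n_gallery) cosine similarity
    return [gallery_patient_ids[i] for i in sims.argmax(axis=1)]

# Toy example: three gallery slides from two patients, one query slide
# that is a near-duplicate of gallery slide 1.
gallery = np.random.randn(3, 128)
query = gallery[1:2] + 0.01 * np.random.randn(1, 128)
print(reidentify(query, gallery, ["patient_A", "patient_B", "patient_A"]))
# -> ['patient_B']
```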
Maxillofacial bone movements-aware dual graph convolution approach for postoperative facial appearance prediction
IF 10.7 | CAS Tier 1, Medicine
Medical Image Analysis, Vol. 99, Article 103350 | Published: 2024-09-19 | DOI: 10.1016/j.media.2024.103350
Authors: Xinrui Huang, Dongming He, Zhenming Li, Xiaofan Zhang, Xudong Wang
Abstract: Postoperative facial appearance prediction is vital for surgeons to make orthognathic surgical plans and communicate with patients. Conventional biomechanical prediction methods require heavy computation and time-consuming manual operations, which hampers their clinical use. Deep learning based methods have shown the potential to improve computational efficiency and achieve comparable accuracy. However, existing deep learning based methods only learn facial features from facial point clouds and process regional points independently, which constrains their ability to perceive facial surface details and topology. In addition, they predict postoperative displacements for all facial points in one step, which is vulnerable to weakly supervised training and prone to producing distorted predictions. To alleviate these limitations, we propose a novel dual graph convolution based postoperative facial appearance prediction model which considers the surface geometry by learning on two graphs constructed from the facial mesh in the Euclidean and geodesic spaces, and transfers the bone movements to facial movements in the dual spaces. We further adopt a coarse-to-fine strategy which performs coarse predictions on facial meshes with fewer vertices and then adds more vertices to obtain more robust fine predictions. Experiments on real clinical data demonstrate that our method outperforms state-of-the-art deep learning based methods both qualitatively and quantitatively.
Citations: 0
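A dual graph convolution of the kind described can be sketched as two parallel neighborhood aggregations, one over a Euclidean-space graph and one over a geodesic-space graph built from the same mesh, followed by a fusion step. This is a hypothetical simplification (the layer names, fusion choice, and normalized-adjacency inputs are illustrative):

```python
import torch
import torch.nn as nn

class DualGraphConv(nn.Module):
    """Aggregate vertex features over two graphs built from the same facial
    mesh (Euclidean k-NN and geodesic connectivity), then fuse both views."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.fc_euc = nn.Linear(in_dim, out_dim)
        self.fc_geo = nn.Linear(in_dim, out_dim)
        self.fuse = nn.Linear(2 * out_dim, out_dim)

    def forward(self, x, adj_euc, adj_geo):
        # x: (n_vertices, in_dim); adj_*: (n_vertices, n_vertices), row-normalized.
        h_euc = torch.relu(self.fc_euc(adj_euc @ x))   # Euclidean-space view
        h_geo = torch.relu(self.fc_geo(adj_geo @ x))   # geodesic-space view
        return self.fuse(torch.cat([h_euc, h_geo], dim=-1))

n = 100
x = torch.randn(n, 16)
adj = torch.softmax(torch.randn(n, n), dim=-1)  # stand-in normalized adjacency
print(DualGraphConv(16, 32)(x, adj, adj).shape)  # torch.Size([100, 32])
```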
UM-Net: Rethinking ICGNet for polyp segmentation with uncertainty modeling
IF 10.7 | CAS Tier 1, Medicine
Medical Image Analysis, Vol. 99, Article 103347 | Published: 2024-09-19 | DOI: 10.1016/j.media.2024.103347
Authors: Xiuquan Du, Xuebin Xu, Jiajia Chen, Xuejun Zhang, Lei Li, Heng Liu, Shuo Li
Abstract: Automatic segmentation of polyps from colonoscopy images plays a critical role in the early diagnosis and treatment of colorectal cancer. Nevertheless, some bottlenecks still exist. In our previous work, we mainly focused on polyps with intra-class inconsistency and low contrast, using ICGNet to address them. Due to differences in equipment and the specific locations and properties of polyps, the color distribution of the collected images is inconsistent. ICGNet was designed primarily around reverse-contour guide information and local–global context information, ignoring this inconsistent color distribution, which leads to overfitting and makes it difficult to focus only on beneficial image content. In addition, a trustworthy segmentation model should not only produce high-precision results but also provide a measure of uncertainty to accompany its predictions so that physicians can make informed decisions. However, ICGNet only gives the segmentation result and lacks an uncertainty measure. To address these bottlenecks, we extend the original ICGNet to a comprehensive and effective network (UM-Net) with two main contributions whose substantial practical value is demonstrated experimentally. First, we employ a color transfer operation to weaken the relationship between color and polyps, making the model attend more to polyp shape. Second, we provide uncertainty estimates to represent the reliability of the segmentation results and use variance to rectify the uncertainty. Our improved method is evaluated on five polyp datasets and shows competitive results compared to other advanced methods in both learning ability and generalization capability. The source code is available at https://github.com/dxqllp/UM-Net.
Citations: 0
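The abstract does not specify the color transfer operator; a common lightweight choice is Reinhard-style statistics matching, which re-targets an image's per-channel mean and standard deviation to those of a reference image. A sketch under that assumption (done in RGB for brevity, although LAB space is the usual choice):

```python
import numpy as np

def color_transfer(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Match per-channel mean/std of `src` to `ref` (Reinhard-style transfer)."""
    src = src.astype(np.float32)
    ref = ref.astype(np.float32)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / s_sd * r_sd + r_mu
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy example with random stand-in images.
src = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
ref = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
print(color_transfer(src, ref).shape)  # (128, 128, 3)
```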
Fetal body organ T2* relaxometry at low field strength (FOREST)
IF 10.7 | CAS Tier 1, Medicine
Medical Image Analysis, Vol. 99, Article 103352 | Published: 2024-09-19 | DOI: 10.1016/j.media.2024.103352
Authors: Kelly Payette, Alena U. Uus, Jordina Aviles Verdera, Megan Hall, Alexia Egloff, Maria Deprez, Raphaël Tomi-Tricot, Joseph V. Hajnal, Mary A. Rutherford, Lisa Story, Jana Hutter
Abstract: Fetal Magnetic Resonance Imaging (MRI) at low field strengths is an exciting new field in both clinical and research settings. Clinical low-field (0.55T) scanners are beneficial for fetal imaging due to their reduced susceptibility-induced artifacts, increased T2* values, and wider bore (widening access for the increasingly obese pregnant population). However, the lack of standard automated image processing tools, such as segmentation and reconstruction, hampers wider clinical use. In this study, we present the Fetal body Organ T2* RElaxometry at low field STrength (FOREST) pipeline, which analyzes ten major fetal body organs. Dynamic multi-echo multi-gradient sequences were acquired and automatically reoriented to a standard plane, reconstructed into a high-resolution volume using deformable slice-to-volume reconstruction, and then automatically segmented into ten major fetal organs. We extensively validated FOREST using an inter-rater quality analysis. We then present fetal T2* body organ growth curves made from 100 control subjects across a wide gestational age range (17–40 gestational weeks) in order to investigate the relationship of T2* with gestational age. The T2* values of all organs except the stomach and spleen were found to have a relationship with gestational age (p < 0.05). FOREST is robust to fetal motion and can be used for both normal fetuses and fetuses with pathologies. Low-field fetal MRI can be used to perform advanced MRI analysis and is a viable option for clinical scanning.
Citations: 0
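T2* relaxometry rests on the mono-exponential decay model S(TE) = S0 * exp(-TE / T2*), fitted across the acquired echo times. A minimal log-linear least-squares fit of that model (the paper's actual fitting procedure may differ):

```python
import numpy as np

def fit_t2star(te_ms: np.ndarray, signal: np.ndarray):
    """Fit S(TE) = S0 * exp(-TE / T2*) by linear regression on log(signal).
    Returns (T2* in ms, S0)."""
    slope, intercept = np.polyfit(te_ms, np.log(signal), deg=1)
    return -1.0 / slope, np.exp(intercept)

te = np.array([5.0, 15.0, 30.0, 60.0])          # echo times in ms
signal = 1000.0 * np.exp(-te / 45.0)            # noiseless decay, T2* = 45 ms
t2star, s0 = fit_t2star(te, signal)
print(f"T2* = {t2star:.1f} ms, S0 = {s0:.0f}")  # T2* = 45.0 ms, S0 = 1000
```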
Semi-supervised ViT knowledge distillation network with style transfer normalization for colorectal liver metastases survival prediction
IF 10.7 | CAS Tier 1, Medicine
Medical Image Analysis, Vol. 99, Article 103346 | Published: 2024-09-16 | DOI: 10.1016/j.media.2024.103346
Authors: Mohamed El Amine Elforaici, Emmanuel Montagnon, Francisco Perdigón Romero, William Trung Le, Feryel Azzi, Dominique Trudel, Bich Nguyen, Simon Turcotte, An Tang, Samuel Kadoury
Abstract: Colorectal liver metastases (CLM) affect almost half of all colon cancer patients, and the response to systemic chemotherapy plays a crucial role in patient survival. While oncologists typically use tumor grading scores, such as tumor regression grade (TRG), to establish an accurate prognosis of patient outcomes, including overall survival (OS) and time-to-recurrence (TTR), these traditional methods have several limitations: they are subjective, time-consuming, and require extensive expertise, which limits their scalability and reliability. Additionally, existing machine learning approaches for prognosis prediction mostly rely on radiological imaging data, but histological images have recently been shown to be relevant for survival prediction because they fully capture the complex microenvironmental and cellular characteristics of the tumor. To address these limitations, we propose an end-to-end approach for automated prognosis prediction using histology slides stained with Hematoxylin and Eosin (H&E) and Hematoxylin Phloxine Saffron (HPS). We first employ a Generative Adversarial Network (GAN) for slide normalization to reduce staining variations and improve the overall quality of the images used as input to our prediction pipeline. We then propose a semi-supervised model to perform tissue classification from sparse annotations, producing segmentation and feature maps; specifically, we use an attention-based approach that weighs the importance of different slide regions in producing the final classification results. Finally, we exploit the features extracted for the metastatic nodules and surrounding tissue to train a prognosis model. In parallel, we train a vision Transformer model in a knowledge distillation framework to replicate and enhance the performance of the prognosis prediction. We evaluate our approach on an in-house clinical dataset of 258 CLM patients, achieving superior performance compared to other models with a c-index of 0.804 (0.014) for OS and 0.735 (0.016) for TTR, as well as on two public datasets. The proposed approach achieves an accuracy of 86.9% to 90.3% in predicting TRG dichotomization. For the 3-class TRG classification task, it yields an accuracy of 78.5% to 82.1%, outperforming the comparative methods. Our proposed pipeline can provide automated prognosis for pathologists and oncologists, and can greatly advance precision medicine in the management of CLM patients.
Citations: 0
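The abstract does not spell out the distillation objective; the standard soft-target formulation (Hinton et al.) blends a temperature-softened KL term against the teacher with the usual supervised loss. A generic sketch under that assumption:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.5):
    """Soft-target knowledge distillation: KL divergence between
    temperature-softened teacher and student distributions, blended with
    the ordinary cross-entropy loss on the hard labels."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean")
    kd = kd * temperature ** 2          # keep gradient scale comparable
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy example with 3-class logits (e.g., a 3-class TRG task).
student = torch.randn(8, 3)
teacher = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
print(distillation_loss(student, teacher, labels))
```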
SafeRPlan: Safe deep reinforcement learning for intraoperative planning of pedicle screw placement
IF 10.7 | CAS Tier 1, Medicine
Medical Image Analysis, Vol. 99, Article 103345 | Published: 2024-09-16 | DOI: 10.1016/j.media.2024.103345
Authors: Yunke Ao, Hooman Esfandiari, Fabio Carrillo, Christoph J. Laux, Yarden As, Ruixuan Li, Kaat Van Assche, Ayoob Davoodi, Nicola A. Cavalcanti, Mazda Farshad, Benjamin F. Grewe, Emmanuel Vander Poorten, Andreas Krause, Philipp Fürnstahl
Abstract: Spinal fusion surgery requires highly accurate implantation of pedicle screw implants, which must be conducted in critical proximity to vital structures with a limited view of the anatomy. Robotic surgery systems have been proposed to improve placement accuracy. Despite remarkable advances, current robotic systems still lack advanced mechanisms for continuously updating surgical plans during procedures, which hinders higher levels of robotic autonomy. These systems adhere to conventional rigid registration concepts, relying on the alignment of preoperative planning to the intraoperative anatomy. In this paper, we propose a safe deep reinforcement learning (DRL) planning approach (SafeRPlan) for robotic spine surgery that leverages intraoperative observation for continuous path planning of pedicle screw placement. The main contributions of our method are (1) the capability to ensure safe actions by introducing an uncertainty-aware distance-based safety filter; (2) the ability to compensate for incomplete intraoperative anatomical information by encoding a-priori knowledge of anatomical structures with neural networks pre-trained on preoperative images; and (3) the capability to generalize over unseen observation noise thanks to novel domain randomization techniques. Planning quality was assessed by quantitative comparison with baseline approaches and the gold standard (GS), and by qualitative evaluation by expert surgeons. In experiments with human model datasets, our approach achieved over 5% higher safety rates than baseline approaches, even under realistic observation noise. To the best of our knowledge, SafeRPlan is the first safety-aware DRL planning approach specifically designed for robotic spine surgery.
Open access PDF: https://www.sciencedirect.com/science/article/pii/S1361841524002706/pdfft?md5=74703339b4aa1e7a3fd37730a5391672&pid=1-s2.0-S1361841524002706-main.pdf
Citations: 0
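The uncertainty-aware distance-based safety filter can be illustrated with a simple rule: keep only actions whose predicted distance to critical anatomy, discounted by a multiple of the model's uncertainty, stays above a safety margin. A hypothetical sketch (the action names, thresholds, and pessimistic-bound rule are illustrative, not the paper's exact design):

```python
import numpy as np

def safety_filter(candidate_actions, dist_mean, dist_std,
                  margin_mm: float = 2.0, kappa: float = 2.0):
    """Keep actions whose pessimistic distance estimate (mean minus kappa
    standard deviations of predictive uncertainty) exceeds the margin."""
    lower_bound = dist_mean - kappa * dist_std      # pessimistic distance
    safe = lower_bound > margin_mm
    return [a for a, ok in zip(candidate_actions, safe) if ok]

actions = ["advance", "tilt_left", "tilt_right"]
dist_mean = np.array([6.0, 2.5, 4.0])               # mm, predicted distances
dist_std = np.array([0.5, 1.0, 0.4])                # mm, model uncertainty
print(safety_filter(actions, dist_mean, dist_std))  # ['advance', 'tilt_right']
```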
Will Transformers change gastrointestinal endoscopic image analysis? A comparative analysis between CNNs and Transformers, in terms of performance, robustness and generalization
IF 10.7 | CAS Tier 1, Medicine
Medical Image Analysis, Vol. 99, Article 103348 | Published: 2024-09-16 | DOI: 10.1016/j.media.2024.103348
Authors: Carolus H.J. Kusters, Tim J.M. Jaspers, Tim G.W. Boers, Martijn R. Jong, Jelmer B. Jukema, Kiki N. Fockens, Albert J. de Groof, Jacques J. Bergman, Fons van der Sommen, Peter H.N. De With
Abstract: Gastrointestinal endoscopic image analysis presents significant challenges, such as considerable variations in quality due to the challenging in-body imaging environment, the often-subtle nature of abnormalities with low interobserver agreement, and the need for real-time processing. These challenges place strong requirements on the performance, generalization, robustness and complexity of deep learning-based techniques in such safety-critical applications. While Convolutional Neural Networks (CNNs) have been the go-to architecture for endoscopic image analysis, recent successes of the Transformer architecture in computer vision raise the possibility of updating this conclusion. To this end, we evaluate and compare the clinically relevant performance, generalization and robustness of state-of-the-art CNNs and Transformers for neoplasia detection in Barrett's esophagus. We trained and validated several top-performing CNNs and Transformers on a total of 10,208 images (2,079 patients), and tested on a total of 7,118 images (998 patients) across multiple test sets, including a high-quality test set, two internal and two external generalization test sets, and a robustness test set. Furthermore, to expand the scope of the study, we conducted the performance and robustness comparisons for colonic polyp segmentation (Kvasir-SEG) and angiodysplasia detection (Giana). The results obtained for the featured models across a wide range of training set sizes demonstrate that Transformers achieve performance comparable to CNNs on various applications, show comparable or slightly improved generalization capabilities, and offer equally strong resilience and robustness against common image corruptions and perturbations. These findings confirm the viability of the Transformer architecture, which is particularly suited to the dynamic nature of endoscopic video analysis, characterized by fluctuating image quality, appearance and equipment configurations in the transition from hospital to hospital. The code is made publicly available at https://github.com/BONS-AI-VCA-AMC/Endoscopy-CNNs-vs-Transformers.
Open access PDF: https://www.sciencedirect.com/science/article/pii/S1361841524002731/pdfft?md5=6f5df02e55d444d8522ef7477d8446aa&pid=1-s2.0-S1361841524002731-main.pdf
Citations: 0
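Robustness testing of the kind described typically measures how far performance drops when standard corruptions are applied to the test images. A minimal sketch for one corruption type (the tiny stand-in model and noise level are illustrative; real benchmarks sweep many corruption types and severities):

```python
import torch

@torch.no_grad()
def accuracy_under_corruption(model, images, labels, noise_sigma: float = 0.1):
    """Compare clean vs. corrupted accuracy for additive Gaussian noise."""
    model.eval()
    clean_acc = (model(images).argmax(1) == labels).float().mean().item()
    noisy = (images + noise_sigma * torch.randn_like(images)).clamp(0, 1)
    noisy_acc = (model(noisy).argmax(1) == labels).float().mean().item()
    return clean_acc, noisy_acc

# Toy stand-in classifier and data, just to exercise the evaluation loop.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 2))
images = torch.rand(16, 3, 32, 32)
labels = torch.randint(0, 2, (16,))
print(accuracy_under_corruption(model, images, labels))
```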
A robust image segmentation and synthesis pipeline for histopathology
IF 10.7 | CAS Tier 1, Medicine
Medical Image Analysis, Vol. 99, Article 103344 | Published: 2024-09-11 | DOI: 10.1016/j.media.2024.103344
Authors: Muhammad Jehanzaib, Yasin Almalioglu, Kutsev Bengisu Ozyoruk, Drew F.K. Williamson, Talha Abdullah, Kayhan Basak, Derya Demir, G. Evren Keles, Kashif Zafar, Mehmet Turan
Abstract: Significant diagnostic variability between and within observers persists in pathology, despite the fact that digital slide images make it possible to measure and quantify features much more precisely than conventional methods. Automated and accurate segmentation of cancerous cell and tissue regions can streamline the diagnostic process, providing insights into cancer progression and helping experts decide on the most effective treatment. Here, we evaluate the performance of the proposed PathoSeg model, whose architecture comprises a modified HRNet encoder and a UNet++ decoder integrated with a CBAM block that uses an attention mechanism for improved segmentation capability. We demonstrate that PathoSeg outperforms current state-of-the-art (SOTA) networks in both quantitative and qualitative assessment of instance and semantic segmentation. Notably, we leverage synthetic data generated by PathopixGAN, which effectively addresses the data imbalance problem commonly encountered in histopathology datasets, further improving the performance of PathoSeg. PathopixGAN utilizes spatially adaptive normalization within a generative and discriminative mechanism to synthesize diverse histopathological environments dictated by semantic information passed through pixel-level annotated ground-truth semantic masks. We also contribute to the research community an in-house dataset that includes semantically segmented masks for breast carcinoma tubules (BCT), micro/macrovesicular steatosis of the liver (MSL), and prostate carcinoma glands (PCG). The first part of the dataset contains 14 whole-slide liver images from 13 patients with fat-cell segmentation masks, totaling 951 masks of size 512 × 512 pixels. The second part includes 17 whole-slide images from 13 patients with prostate carcinoma gland segmentation masks, amounting to 30,000 masks of size 512 × 512 pixels. The third part contains 51 whole slides from 36 patients, with breast carcinoma tubule masks totaling 30,000 masks of size 512 × 512 pixels. To ensure transparency and encourage further research, we will make this dataset publicly available for non-commercial and academic purposes. To facilitate reproducibility, we will also make our code and pre-trained models publicly available at https://github.com/DeepMIALab/PathoSeg.
Citations: 0
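The CBAM block integrated into PathoSeg is a published module (Woo et al., 2018): channel attention computed from pooled descriptors, followed by spatial attention computed from channel-pooled maps. A minimal sketch with the common default hyperparameters (reduction ratio 16, 7x7 spatial kernel); it is not the authors' exact integration:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention from a shared
    MLP over avg-/max-pooled descriptors, then spatial attention from a conv
    over channel-wise avg/max maps."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: shared MLP on global average- and max-pooled features.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: 7x7 conv over stacked channel-avg and channel-max maps.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

x = torch.randn(2, 32, 28, 28)
print(CBAM(32)(x).shape)  # torch.Size([2, 32, 28, 28])
```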