Medical Image Analysis — Latest Articles

Fetal body organ T2* relaxometry at low field strength (FOREST)
IF 10.7 | Q1 (Medicine)
Medical Image Analysis | Pub Date: 2024-09-19 | DOI: 10.1016/j.media.2024.103352
Abstract: Fetal Magnetic Resonance Imaging (MRI) at low field strengths is an exciting new field in both clinical and research settings. Clinical low-field (0.55T) scanners are beneficial for fetal imaging due to their reduced susceptibility-induced artifacts, increased T2* values, and wider bore (widening access for the increasingly obese pregnant population). However, the lack of standard automated image processing tools, such as segmentation and reconstruction, hampers wider clinical use. In this study, we present the Fetal body Organ T2* RElaxometry at low field STrength (FOREST) pipeline, which analyzes ten major fetal body organs. Dynamic multi-echo multi-gradient sequences were acquired, automatically reoriented to a standard plane, reconstructed into a high-resolution volume using deformable slice-to-volume reconstruction, and then automatically segmented into ten major fetal organs. We extensively validated FOREST using an inter-rater quality analysis. We then present fetal T2* body organ growth curves derived from 100 control subjects spanning a wide gestational age range (17–40 gestational weeks) in order to investigate the relationship of T2* with gestational age. The T2* values of all organs except the stomach and spleen were found to be related to gestational age (p < 0.05). FOREST is robust to fetal motion and can be used for both normal fetuses and fetuses with pathologies. Low-field fetal MRI can be used to perform advanced MRI analysis and is a viable option for clinical scanning.
Citations: 0
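At its core, T2* relaxometry like that in the FOREST pipeline fits a mono-exponential decay S(TE) = S0·exp(-TE/T2*) to multi-echo data. A minimal voxel-wise log-linear fit is sketched below; the echo times, signal values, and function names are illustrative, not taken from the paper.

```python
import numpy as np

def fit_t2star(signal, echo_times):
    """Voxel-wise mono-exponential T2* fit: S(TE) = S0 * exp(-TE / T2*).

    signal: array with echoes along the last axis; echo_times: (n_echoes,) in ms.
    Uses a log-linear least-squares fit and returns T2* (ms) and S0 maps.
    """
    te = np.asarray(echo_times, dtype=float)
    s = np.clip(np.asarray(signal, dtype=float), 1e-6, None)   # avoid log(0)
    y = np.log(s).reshape(-1, te.size)                          # (n_voxels, n_echoes)
    A = np.stack([np.ones_like(te), -te], axis=1)               # log S = log S0 - TE/T2*
    coef, *_ = np.linalg.lstsq(A, y.T, rcond=None)              # (2, n_voxels)
    log_s0, inv_t2s = coef
    t2star = np.where(inv_t2s > 0, 1.0 / np.maximum(inv_t2s, 1e-6), np.nan)
    shape = s.shape[:-1]
    return t2star.reshape(shape), np.exp(log_s0).reshape(shape)

# Toy example: one voxel with T2* ~ 100 ms sampled at four echo times (ms)
tes = [5.0, 20.0, 40.0, 80.0]
sig = 1000.0 * np.exp(-np.array(tes) / 100.0)
t2s, s0 = fit_t2star(sig, tes)   # t2s ~ 100, s0 ~ 1000
```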
Semi-supervised ViT knowledge distillation network with style transfer normalization for colorectal liver metastases survival prediction
IF 10.7 | Q1 (Medicine)
Medical Image Analysis | Pub Date: 2024-09-16 | DOI: 10.1016/j.media.2024.103346
Abstract: Colorectal liver metastases (CLM) affect almost half of all colon cancer patients, and the response to systemic chemotherapy plays a crucial role in patient survival. While oncologists typically use tumor grading scores, such as the tumor regression grade (TRG), to establish an accurate prognosis for patient outcomes, including overall survival (OS) and time-to-recurrence (TTR), these traditional methods have several limitations. They are subjective, time-consuming, and require extensive expertise, which limits their scalability and reliability. Additionally, existing machine learning approaches for prognosis prediction mostly rely on radiological imaging data, but histological images have recently been shown to be relevant for survival prediction, as they fully capture the complex microenvironmental and cellular characteristics of the tumor. To address these limitations, we propose an end-to-end approach for automated prognosis prediction using histology slides stained with Hematoxylin and Eosin (H&E) and Hematoxylin Phloxine Saffron (HPS). We first employ a Generative Adversarial Network (GAN) for slide normalization to reduce staining variations and improve the overall quality of the images used as input to our prediction pipeline. We propose a semi-supervised model to perform tissue classification from sparse annotations, producing segmentation and feature maps. Specifically, we use an attention-based approach that weighs the importance of different slide regions in producing the final classification results. Finally, we exploit the extracted features for the metastatic nodules and surrounding tissue to train a prognosis model. In parallel, we train a vision Transformer model in a knowledge distillation framework to replicate and enhance the performance of the prognosis prediction. We evaluate our approach on an in-house clinical dataset of 258 CLM patients, achieving superior performance compared to other comparative models, with a c-index of 0.804 (0.014) for OS and 0.735 (0.016) for TTR, as well as on two public datasets. The proposed approach achieves an accuracy of 86.9% to 90.3% in predicting TRG dichotomization. For the 3-class TRG classification task, the proposed approach yields an accuracy of 78.5% to 82.1%, outperforming the comparative methods. Our proposed pipeline can provide automated prognosis for pathologists and oncologists and can greatly promote precision-medicine progress in managing CLM patients.
Citations: 0
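The abstract trains a vision Transformer student in a knowledge distillation framework. The sketch below shows only the generic Hinton-style distillation objective (temperature-scaled KL term plus hard-label cross-entropy); the paper's actual survival-prediction loss is not specified here, so the function name, temperature, and weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    """Generic knowledge-distillation objective (Hinton et al. style).

    Combines a soft KL term between temperature-scaled teacher/student
    distributions with the usual hard-label cross-entropy. The paper's exact
    loss for survival prediction may differ; this is only the common recipe.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                      # rescale gradients (standard trick)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a 3-class TRG-like task
s = torch.randn(8, 3)
t = torch.randn(8, 3)
y = torch.randint(0, 3, (8,))
loss = distillation_loss(s, t, y)
```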
SafeRPlan: Safe deep reinforcement learning for intraoperative planning of pedicle screw placement
IF 10.7 | Q1 (Medicine)
Medical Image Analysis | Pub Date: 2024-09-16 | DOI: 10.1016/j.media.2024.103345
Abstract: Spinal fusion surgery requires highly accurate implantation of pedicle screw implants, which must be conducted in critical proximity to vital structures with a limited view of the anatomy. Robotic surgery systems have been proposed to improve placement accuracy. Despite remarkable advances, current robotic systems still lack advanced mechanisms for continuous updating of surgical plans during procedures, which hinders attaining higher levels of robotic autonomy. These systems adhere to conventional rigid registration concepts, relying on the alignment of preoperative planning to the intraoperative anatomy. In this paper, we propose a safe deep reinforcement learning (DRL) planning approach (SafeRPlan) for robotic spine surgery that leverages intraoperative observation for continuous path planning of pedicle screw placement. The main contributions of our method are (1) the capability to ensure safe actions by introducing an uncertainty-aware, distance-based safety filter; (2) the ability to compensate for incomplete intraoperative anatomical information by encoding a-priori knowledge of anatomical structures with neural networks pre-trained on pre-operative images; and (3) the capability to generalize over unseen observation noise thanks to novel domain randomization techniques. Planning quality was assessed by quantitative comparison with baseline approaches and the gold standard (GS), and by qualitative evaluation by expert surgeons. In experiments with human model datasets, our approach achieved over 5% higher safety rates than baseline approaches, even under realistic observation noise. To the best of our knowledge, SafeRPlan is the first safety-aware DRL planning approach specifically designed for robotic spine surgery.
Citations: 0
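Contribution (1) is an uncertainty-aware, distance-based safety filter. A conceptual sketch of such a filter is shown below, assuming a hypothetical `predict_distance` model that returns the mean and standard deviation of the distance from a candidate trajectory to the nearest critical structure; the threshold, margin, and fallback behavior are illustrative, not the authors' implementation.

```python
import numpy as np

def safety_filter(candidate_action, fallback_action, predict_distance,
                  margin_mm=2.0, k=2.0):
    """Uncertainty-aware distance-based safety filter (conceptual sketch only).

    predict_distance(action) is assumed to return the mean and standard
    deviation of the estimated distance (mm) from the planned screw
    trajectory to the nearest critical structure, e.g. from an ensemble
    or MC-dropout model. The action is accepted only if even a pessimistic
    (mean - k*std) estimate stays above the safety margin.
    """
    mean_d, std_d = predict_distance(candidate_action)
    if mean_d - k * std_d >= margin_mm:
        return candidate_action          # deemed safe
    return fallback_action               # revert to a conservative plan

# Toy usage with a dummy distance model
rng = np.random.default_rng(0)
dummy_model = lambda a: (float(rng.normal(3.0, 0.1)), 0.4)
action = safety_filter(np.zeros(3), np.array([0.0, 0.0, -1.0]), dummy_model)
```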
Will Transformers change gastrointestinal endoscopic image analysis? A comparative analysis between CNNs and Transformers, in terms of performance, robustness and generalization
IF 10.7 | Q1 (Medicine)
Medical Image Analysis | Pub Date: 2024-09-16 | DOI: 10.1016/j.media.2024.103348
Abstract: Gastrointestinal endoscopic image analysis presents significant challenges, such as considerable variations in quality due to the challenging in-body imaging environment, the often-subtle nature of abnormalities with low interobserver agreement, and the need for real-time processing. These challenges pose strong requirements on the performance, generalization, robustness and complexity of deep learning-based techniques in such safety-critical applications. While Convolutional Neural Networks (CNNs) have been the go-to architecture for endoscopic image analysis, recent successes of the Transformer architecture in computer vision raise the possibility that this conclusion needs updating. To this end, we evaluate and compare the clinically relevant performance, generalization and robustness of state-of-the-art CNNs and Transformers for neoplasia detection in Barrett's esophagus. We trained and validated several top-performing CNNs and Transformers on a total of 10,208 images (2,079 patients), and tested them on a total of 7,118 images (998 patients) across multiple test sets, including a high-quality test set, two internal and two external generalization test sets, and a robustness test set. Furthermore, to expand the scope of the study, we conducted the performance and robustness comparisons for colonic polyp segmentation (Kvasir-SEG) and angiodysplasia detection (Giana). The results obtained for the featured models across a wide range of training set sizes demonstrate that Transformers achieve performance comparable to CNNs on various applications, show comparable or slightly improved generalization capabilities, and offer equally strong resilience and robustness against common image corruptions and perturbations. These findings confirm the viability of the Transformer architecture, which is particularly suited to the dynamic nature of endoscopic video analysis, characterized by fluctuating image quality, appearance and equipment configurations in the transition from hospital to hospital. The code is made publicly available at: https://github.com/BONS-AI-VCA-AMC/Endoscopy-CNNs-vs-Transformers.
Citations: 0
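The robustness test set evaluates models under common image corruptions and perturbations. Below is a minimal sketch of such an evaluation loop in PyTorch, using a dummy classifier and additive Gaussian noise as a stand-in corruption; the models, data, and corruption suite in the paper are of course far more extensive.

```python
import torch
import torch.nn as nn

def accuracy_under_corruption(model, images, labels, corrupt):
    """Compare clean vs corrupted accuracy for a classifier (sketch).

    `corrupt` is any callable that perturbs an image batch, e.g. additive
    Gaussian noise mimicking a common-corruption protocol.
    """
    model.eval()
    with torch.no_grad():
        clean_acc = (model(images).argmax(1) == labels).float().mean().item()
        corr_acc = (model(corrupt(images)).argmax(1) == labels).float().mean().item()
    return clean_acc, corr_acc

# Toy usage with a dummy CNN and random "endoscopy" frames
model = nn.Sequential(nn.Conv2d(3, 8, 3, 2), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(8, 2))
x = torch.rand(16, 3, 64, 64)
y = torch.randint(0, 2, (16,))
gauss_noise = lambda im: (im + 0.1 * torch.randn_like(im)).clamp(0, 1)
print(accuracy_under_corruption(model, x, y, gauss_noise))
```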
A robust image segmentation and synthesis pipeline for histopathology
IF 10.7 | Q1 (Medicine)
Medical Image Analysis | Pub Date: 2024-09-11 | DOI: 10.1016/j.media.2024.103344
Abstract: Significant diagnostic variability between and within observers persists in pathology, despite the fact that digital slide images make it possible to measure and quantify features much more precisely than conventional methods. Automated and accurate segmentation of cancerous cell and tissue regions can streamline the diagnostic process, providing insights into cancer progression and helping experts decide on the most effective treatment. Here, we evaluate the performance of the proposed PathoSeg model, whose architecture comprises a modified HRNet encoder and a UNet++ decoder integrated with a CBAM block, using the attention mechanism for improved segmentation capability. We demonstrate that PathoSeg outperforms the current state-of-the-art (SOTA) networks in both quantitative and qualitative assessment of instance and semantic segmentation. Notably, we leverage synthetic data generated by PathopixGAN, which effectively addresses the data imbalance problem commonly encountered in histopathology datasets, further improving the performance of PathoSeg. PathopixGAN utilizes spatially adaptive normalization within a generative-discriminative mechanism to synthesize diverse histopathological environments, conditioned on semantic information passed through pixel-level annotated ground-truth semantic masks. Besides, we contribute to the research community by providing an in-house dataset that includes semantically segmented masks for breast carcinoma tubules (BCT), micro/macrovesicular steatosis of the liver (MSL), and prostate carcinoma glands (PCG). The first part of the dataset comprises 14 whole slide images from 13 patients' livers, with fat-cell segmentation masks totaling 951 masks of size 512 × 512 pixels. The second part includes 17 whole slide images from 13 patients, with prostate carcinoma gland segmentation masks amounting to 30,000 masks of size 512 × 512 pixels. The third part contains 51 whole slides from 36 patients, with breast carcinoma tubule masks totaling 30,000 masks of size 512 × 512 pixels. To ensure transparency and encourage further research, we will make this dataset publicly available for non-commercial and academic purposes. To facilitate reproducibility, we will also make our code and pre-trained models publicly available at https://github.com/DeepMIALab/PathoSeg.
Citations: 0
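PathoSeg integrates a CBAM block into its decoder to weight informative channels and spatial locations. The sketch below is a generic CBAM module (Woo et al., 2018) in PyTorch; the reduction ratio, kernel size, and placement within UNet++ are illustrative rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module (Woo et al., 2018), generic sketch.

    Channel attention from pooled descriptors, followed by spatial attention
    from channel-pooled maps; layer sizes here are illustrative only.
    """
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                 # channel attention
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        pooled = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(pooled))     # spatial attention

feat = torch.rand(2, 64, 32, 32)
out = CBAM(64)(feat)   # same shape as the input feature map
```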
Low-dose computed tomography perceptual image quality assessment
IF 10.7 | Q1 (Medicine)
Medical Image Analysis | Pub Date: 2024-09-06 | DOI: 10.1016/j.media.2024.103343
Abstract: In computed tomography (CT) imaging, optimizing the balance between radiation dose and image quality is crucial due to the potentially harmful effects of radiation on patients. Although subjective assessments by radiologists are considered the gold standard in medical imaging, these evaluations can be time-consuming and costly. Thus, objective methods, such as the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), are often employed as alternatives. However, these metrics, initially developed for natural images, may not fully encapsulate the radiologists' assessment process. Consequently, interest in developing deep learning-based image quality assessment (IQA) methods that align more closely with radiologists' perceptions is growing. A significant barrier to this development has been the absence of open-source datasets and benchmark models specific to CT IQA. Addressing these challenges, we organized the Low-dose Computed Tomography Perceptual Image Quality Assessment Challenge in conjunction with Medical Image Computing and Computer Assisted Intervention (MICCAI) 2023. This event introduced the first open-source CT IQA dataset, consisting of 1,000 CT images of varying quality, annotated with radiologists' assessment scores. As a benchmark, the challenge offers a comprehensive analysis of six submitted methods, providing valuable insight into their performance. This paper presents a summary of these methods and insights. The challenge underscores the potential for developing no-reference IQA methods that could exceed the capabilities of full-reference IQA methods, making a significant contribution to the research community with this novel dataset. The dataset is accessible at https://zenodo.org/records/7833096.
Citations: 0
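The full-reference baselines mentioned in the abstract, PSNR and SSIM, can be computed with scikit-image as sketched below on toy data; the arrays and data range are placeholders, not challenge data.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Two toy "CT slices": a reference and a noisy low-dose-like version.
rng = np.random.default_rng(0)
reference = rng.random((256, 256)).astype(np.float32)
low_dose = np.clip(reference + 0.05 * rng.standard_normal((256, 256)).astype(np.float32), 0, 1)

psnr = peak_signal_noise_ratio(reference, low_dose, data_range=1.0)
ssim = structural_similarity(reference, low_dose, data_range=1.0)
print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```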
Labeled-to-unlabeled distribution alignment for partially-supervised multi-organ medical image segmentation
IF 10.7 | Q1 (Medicine)
Medical Image Analysis | Pub Date: 2024-09-05 | DOI: 10.1016/j.media.2024.103333
Abstract: Partially-supervised multi-organ medical image segmentation aims to develop a unified semantic segmentation model by utilizing multiple partially-labeled datasets, with each dataset providing labels for a single class of organs. However, the limited availability of labeled foreground organs and the absence of supervision to distinguish unlabeled foreground organs from the background pose a significant challenge, leading to a distribution mismatch between labeled and unlabeled pixels. Although existing pseudo-labeling methods can be employed to learn from both labeled and unlabeled pixels, they are prone to performance degradation in this task, as they rely on the assumption that labeled and unlabeled pixels share the same distribution. In this paper, to address the problem of distribution mismatch, we propose a labeled-to-unlabeled distribution alignment (LTUDA) framework that aligns feature distributions and enhances discriminative capability. Specifically, we introduce a cross-set data augmentation strategy, which performs region-level mixing between labeled and unlabeled organs to reduce the distribution discrepancy and enrich the training set. Besides, we propose a prototype-based distribution alignment method that implicitly reduces intra-class variation and increases the separation between the unlabeled foreground and the background. This is achieved by encouraging consistency between the outputs of two prototype classifiers and a linear classifier. Extensive experimental results on the AbdomenCT-1K dataset and a union of four benchmark datasets (LiTS, MSD-Spleen, KiTS, and NIH82) demonstrate that our method outperforms state-of-the-art partially-supervised methods by a considerable margin and even surpasses fully-supervised methods. The source code is publicly available (LTUDA repository).
Citations: 0
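The cross-set augmentation performs region-level mixing between labeled and unlabeled images. The sketch below shows a CutMix-style rectangular mix as one plausible instantiation; the paper's actual region selection and handling of labels and pseudo-labels are more elaborate, and all names and sizes here are illustrative.

```python
import numpy as np

def cross_set_region_mix(labeled_img, unlabeled_img, mix_ratio=0.5, rng=None):
    """Region-level mixing between a labeled and an unlabeled scan (sketch).

    Copies a random rectangular region from the unlabeled image into the
    labeled one, in the spirit of the cross-set augmentation described in
    the abstract; the paper's exact scheme (region choice, label handling)
    is more involved.
    """
    rng = rng or np.random.default_rng()
    h, w = labeled_img.shape[:2]
    rh, rw = int(h * np.sqrt(1 - mix_ratio)), int(w * np.sqrt(1 - mix_ratio))
    y0, x0 = rng.integers(0, h - rh + 1), rng.integers(0, w - rw + 1)
    mixed = labeled_img.copy()
    mixed[y0:y0 + rh, x0:x0 + rw] = unlabeled_img[y0:y0 + rh, x0:x0 + rw]
    return mixed

lab = np.zeros((128, 128), dtype=np.float32)
unlab = np.ones((128, 128), dtype=np.float32)
mixed = cross_set_region_mix(lab, unlab)
```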
Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges
IF 10.7 | Q1 (Medicine)
Medical Image Analysis | Pub Date: 2024-09-05 | DOI: 10.1016/j.media.2024.103307
Abstract: Automatic analysis of colonoscopy images has been an active field of research, motivated by the importance of early detection of precancerous polyps. However, detecting polyps during a live examination can be challenging due to factors such as variation in skills and experience among endoscopists, lack of attentiveness, and fatigue, leading to a high polyp miss rate. Therefore, there is a need for an automated system that can flag missed polyps during the examination and improve patient care. Deep learning has emerged as a promising solution to this challenge, as it can assist endoscopists in detecting and classifying overlooked polyps and abnormalities in real time, improving diagnostic accuracy and enhancing treatment. In addition to an algorithm's accuracy, transparency and interpretability are crucial for explaining the whys and hows of its predictions. Further, conclusions based on incorrect decisions may be fatal, especially in medicine. Despite these pitfalls, most algorithms are developed on private data, with closed-source or proprietary software, and the methods lack reproducibility. Therefore, to promote the development of efficient and transparent methods, we organized the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image Segmentation (MedAI 2021)" competitions. The Medico 2020 challenge received submissions from 17 teams, while the MedAI 2021 challenge gathered submissions from another 17 distinct teams the following year. We present a comprehensive summary, analyze each contribution, highlight the strengths of the best-performing methods, and discuss the possibility of translating such methods into the clinic. Our analysis revealed that participants improved the Dice coefficient from 0.8607 in 2020 to 0.8993 in 2021, despite the addition of diverse and challenging frames (containing irregular, smaller, sessile, or flat polyps) that are frequently missed during routine clinical examination. For the instrument segmentation task, the best team obtained a mean Intersection over Union of 0.9364. For the transparency task, a multi-disciplinary team, including expert gastroenterologists, assessed each submission and evaluated the teams based on open-source practices, failure case analysis, ablation studies, and the usability and understandability of the evaluations, to gain a deeper understanding of the models' credibility for clinical deployment. The best team obtained a final transparency score of 21 out of 25. Through this comprehensive analysis of the challenges, we not only highlight the advancements in polyp and surgical instrument segmentation but also encourage subjective evaluation for building more transparent and understandable AI-based colonoscopy systems. Moreover, we discuss the need for multi-center and out-of-distribution testing to address the current limitations of the methods, to reduce the cancer burden and improve …
Citations: 0
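The headline metrics of the two challenges, the Dice coefficient and Intersection over Union, are computed for binary masks as sketched below; per-image averaging and the empty-mask conventions used in the official evaluation are omitted.

```python
import numpy as np

def dice_and_iou(pred_mask, gt_mask, eps=1e-7):
    """Dice coefficient and Intersection over Union for binary masks.

    Minimal version of the challenge metrics; the official evaluation also
    averages over images and defines behavior for empty masks.
    """
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

pred = np.zeros((64, 64)); pred[10:40, 10:40] = 1
gt = np.zeros((64, 64));   gt[15:45, 15:45] = 1
print(dice_and_iou(pred, gt))
```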
ATEC23 Challenge: Automated prediction of treatment effectiveness in ovarian cancer using histopathological images
IF 10.7 | Q1 (Medicine)
Medical Image Analysis | Pub Date: 2024-09-05 | DOI: 10.1016/j.media.2024.103342
Abstract: Ovarian cancer, predominantly epithelial ovarian cancer (EOC), is a global health concern due to its high mortality rate. Despite the progress made during the last two decades in the surgery and chemotherapy of ovarian cancer, more than 70% of advanced patients experience recurrent cancer and disease. Bevacizumab is a humanized monoclonal antibody that blocks VEGF signaling in cancer, inhibits angiogenesis, and causes tumor shrinkage; it has recently been approved by the FDA as a monotherapy for advanced ovarian cancer in combination with chemotherapy. Unfortunately, Bevacizumab may also induce harmful adverse effects, such as hypertension, bleeding, arterial thromboembolism, poor wound healing, and gastrointestinal perforation. Given the expensive cost and unwanted toxicities, there is an urgent need for predictive methods to identify who could benefit from Bevacizumab. Of the 18 approved requests from 5 countries, 6 teams developed fully automated systems using 284 whole-section WSIs for training and submitted their predictions on a test set of 180 tissue core images, with the corresponding ground-truth labels kept private. This paper summarizes the 5 qualified methods successfully submitted to the international challenge on automated prediction of treatment effectiveness in ovarian cancer using histopathologic images (ATEC23), held at the 26th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) in 2023, and evaluates them against 5 state-of-the-art deep learning approaches. The study further assesses the effectiveness of the presented prediction models as indicators for patient selection, using both Cox proportional hazards analysis and Kaplan-Meier survival analysis. A robust and cost-effective deep learning pipeline for digital histopathology tasks has become a necessity within the medical community. The challenge highlights the limitations of current MIL methods, particularly for prognosis-based classification tasks, and the importance of DCNNs such as Inception, whose nonlinear convolutional modules at various resolutions facilitate processing the data at multiple resolutions, a key feature required for pathology-related prediction tasks. This further suggests the use of feature reuse at various scales to improve models in future research. In particular, this paper releases the labels of the test set and outlines applications for future research in precision oncology to predict ovarian cancer treatment effectiveness and facilitate patient selection via histopathological images.
Citations: 0
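The challenge assesses prediction models as patient-selection indicators with Cox proportional hazards and Kaplan-Meier analyses. A minimal sketch using the lifelines package on toy data is shown below; the column names, cohort, and risk-score split are entirely hypothetical, not challenge data.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Toy cohort: a model-derived risk score plus follow-up time and event flag.
df = pd.DataFrame({
    "risk_score": [0.2, 0.8, 0.5, 0.9, 0.1, 0.7, 0.3, 0.6],
    "months":     [30, 8, 18, 6, 36, 12, 24, 15],
    "event":      [0, 1, 1, 1, 0, 1, 0, 1],
})

# Cox proportional hazards: hazard ratio associated with the risk score.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])

# Kaplan-Meier curve for the high-risk half of the toy cohort.
km = KaplanMeierFitter()
high = df["risk_score"] > df["risk_score"].median()
km.fit(df.loc[high, "months"], df.loc[high, "event"], label="high risk")
```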
Intelligent surgical planning for automatic reconstruction of orbital blowout fracture using a prior adversarial generative network
IF 10.7 | Q1 (Medicine)
Medical Image Analysis | Pub Date: 2024-09-04 | DOI: 10.1016/j.media.2024.103332
Abstract: Orbital blowout fracture (OBF) is a condition that can result in herniation of orbital soft tissue, enophthalmos, and even severe visual dysfunction. Given the complex and diverse types of orbital wall fractures, reconstructing the orbital wall presents a significant challenge in OBF repair surgery. Accurate surgical planning is crucial to addressing this issue; however, there is currently a lack of efficient and precise surgical planning methods. Therefore, we propose an intelligent surgical planning method for automatic OBF reconstruction based on a prior adversarial generative network (GAN). First, an automatic generation method for symmetric prior anatomical knowledge (SPAK) based on spatial transformation is proposed to guide the reconstruction of the fractured orbital wall. Second, a reconstruction network based on a SPAK-guided GAN is proposed to achieve accurate and automatic reconstruction of the fractured orbital wall. Building upon this, a new surgical planning workflow based on the proposed reconstruction network and the 3D Slicer software is developed to simplify the operational steps. Finally, the proposed surgical planning method is successfully applied in OBF repair surgery, verifying its reliability. Experimental results demonstrate that the proposed reconstruction network achieves relatively accurate automatic reconstruction of the orbital wall, with an average DSC of 92.35 ± 2.13% and a 95% Hausdorff distance of 0.59 ± 0.23 mm, markedly outperforming the compared state-of-the-art networks. Additionally, the proposed surgical planning workflow reduces the traditional planning time from an average of 25 min 17.8 s to just 1 min 35.1 s, greatly enhancing planning efficiency. The proposed surgical planning method has good prospects for application in OBF repair surgery.
Citations: 0
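SPAK builds a symmetry prior by exploiting the intact contralateral orbit. The sketch below only mirrors a volume across its left-right axis with NumPy; the spatial transformation (alignment of the mirrored volume to the original) described in the abstract is deliberately omitted, and the axis convention is an assumption.

```python
import numpy as np

def symmetric_prior(ct_volume, axis=2):
    """Mirror a head CT across its left-right axis to form a symmetry prior.

    The SPAK idea uses the intact contralateral orbit as prior anatomical
    knowledge for the fractured side; the actual method additionally aligns
    the mirrored volume to the original with a spatial transformation
    (registration), which this sketch skips — it only performs the flip.
    """
    return np.flip(ct_volume, axis=axis)

vol = np.random.rand(64, 96, 96).astype(np.float32)  # toy (slices, rows, cols) volume
prior = symmetric_prior(vol)                          # mirrored along the column axis
```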