International Journal of Imaging Systems and Technology: Latest Articles

Enhanced Lung Cancer Diagnosis and Staging With HRNeT: A Deep Learning Approach
IF 3.0 · CAS Zone 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-10-04 · DOI: 10.1002/ima.23193
N. Rathan, S. Lokesh
{"title":"Enhanced Lung Cancer Diagnosis and Staging With HRNeT: A Deep Learning Approach","authors":"N. Rathan,&nbsp;S. Lokesh","doi":"10.1002/ima.23193","DOIUrl":"https://doi.org/10.1002/ima.23193","url":null,"abstract":"<div>\u0000 \u0000 <p>The healthcare industry has been significantly impacted by the widespread adoption of advanced technologies such as deep learning (DL) and artificial intelligence (AI). Among various applications, computer-aided diagnosis has become a critical tool to enhance medical practice. In this research, we introduce a hybrid approach that combines a deep neural model, data collection, and classification methods for CT scans. This approach aims to detect and classify the severity of pulmonary disease and the stages of lung cancer. Our proposed lung cancer detector and stage classifier (LCDSC) demonstrate greater performance, achieving higher accuracy, sensitivity, specificity, recall, and precision. We employ an active contour model for lung cancer segmentation and high-resolution net (HRNet) for stage classification. This methodology is validated using the industry-standard benchmark image dataset lung image database consortium and image database resource initiative (LIDC-IDRI). The results show a remarkable accuracy of 98.4% in classifying lung cancer stages. Our approach presents a promising solution for early lung cancer diagnosis, potentially leading to improved patient outcomes.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142429272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MFH-Net: A Hybrid CNN-Transformer Network Based Multi-Scale Fusion for Medical Image Segmentation
IF 3.0 · CAS Zone 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-10-02 · DOI: 10.1002/ima.23192
Ying Wang, Meng Zhang, Jian'an Liang, Meiyan Liang
{"title":"MFH-Net: A Hybrid CNN-Transformer Network Based Multi-Scale Fusion for Medical Image Segmentation","authors":"Ying Wang,&nbsp;Meng Zhang,&nbsp;Jian'an Liang,&nbsp;Meiyan Liang","doi":"10.1002/ima.23192","DOIUrl":"https://doi.org/10.1002/ima.23192","url":null,"abstract":"<div>\u0000 \u0000 <p>In recent years, U-Net and its variants have gained widespread use in medical image segmentation. One key aspect of U-Net's design is the skip connection, facilitating the retention of detailed information and leading to finer segmentation results. However, existing research often concentrates on enhancing either the encoder or decoder, neglecting the semantic gap between them, and resulting in suboptimal model performance. In response, we introduce Multi-Scale Fusion module aimed at enhancing the original skip connections and addressing the semantic gap. Our approach fully incorporates the correlation between outputs from adjacent encoder layers and facilitates bidirectional information exchange across multiple layers. Additionally, we introduce Channel Relation Perception module to guide the fused multi-scale information for efficient connection with decoder features. These two modules collectively bridge the semantic gap by capturing spatial and channel dependencies in the features, contributing to accurate medical image segmentation. Building upon these innovations, we propose a novel network called MFH-Net. On three publicly available datasets, ISIC2016, ISIC2017, and Kvasir-SEG, we perform a comprehensive evaluation of the network. The experimental results show that MFH-Net exhibits higher segmentation accuracy in comparison with other competing methods. Importantly, the modules we have devised can be seamlessly incorporated into various networks, such as U-Net and its variants, offering a potential avenue for further improving model performance.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142428873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RetNet30: A Novel Stacked Convolution Neural Network Model for Automated Retinal Disease Diagnosis
IF 3.0 · CAS Zone 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-25 · DOI: 10.1002/ima.23187
Krishnakumar Subramaniam, Archana Naganathan
{"title":"RetNet30: A Novel Stacked Convolution Neural Network Model for Automated Retinal Disease Diagnosis","authors":"Krishnakumar Subramaniam,&nbsp;Archana Naganathan","doi":"10.1002/ima.23187","DOIUrl":"https://doi.org/10.1002/ima.23187","url":null,"abstract":"<div>\u0000 \u0000 <p>Automated diagnosis of retinal diseases holds significant promise in enhancing healthcare efficiency and patient outcomes. However, existing methods often lack the accuracy and efficiency required for timely disease detection. To address this gap, we introduce RetNet30, a novel stacked convolutional neural network (CNN) designed to revolutionize automated retinal disease diagnosis. RetNet30 combines a custom-built 30-layer CNN with a fine-tuned Inception V3 model, integrating these sub-models through logistic regression to achieve superior classification performance. Extensive evaluations on retinal image datasets such as DRIVE, STARE, CHASE_DB1, and HRF demonstrate significant improvements in accuracy, sensitivity, specificity, and area under the ROC curve (AUROC) when compared to conventional approaches. By leveraging advanced deep learning architectures, RetNet30 not only enhances diagnostic precision but also generalizes effectively across diverse datasets, establishing a new benchmark in retinal disease classification. This novel approach offers a highly efficient and reliable solution for early disease detection and patient management, addressing the limitations of manual examination methods. Through rigorous quantitative and qualitative assessments, our proposed method demonstrates its potential to significantly impact medical image analysis and improve healthcare outcomes. RetNet30 marks a major step forward in automated retinal disease diagnosis, showcasing the future of AI-driven advancements in ophthalmology.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 5","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142316971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cross-Layer Connection SegFormer Attention U-Net for Efficient TRUS Image Segmentation
IF 3.0 · CAS Zone 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-24 · DOI: 10.1002/ima.23178
Yongtao Shi, Wei Du, Chao Gao, Xinzhi Li
{"title":"Cross-Layer Connection SegFormer Attention U-Net for Efficient TRUS Image Segmentation","authors":"Yongtao Shi,&nbsp;Wei Du,&nbsp;Chao Gao,&nbsp;Xinzhi Li","doi":"10.1002/ima.23178","DOIUrl":"https://doi.org/10.1002/ima.23178","url":null,"abstract":"<div>\u0000 \u0000 <p>Accurately and rapidly segmenting the prostate in transrectal ultrasound (TRUS) images remains challenging due to the complex semantic information in ultrasound images. The paper discusses a cross-layer connection with SegFormer attention U-Net for efficient TRUS image segmentation. The SegFormer framework is enhanced by reducing model parameters and complexity without sacrificing accuracy. We introduce layer-skipping connections for precise positioning and combine local context with global dependency for superior feature recognition. The decoder is improved with Multi-layer Perceptual Convolutional Block Attention Module (MCBAM) for better upsampling and reduced information loss, leading to increased accuracy. The experimental results show that compared with classic or popular deep learning methods, this method has better segmentation performance, with the dice similarity coefficient (DSC) of 97.55% and the intersection over union (IoU) of 95.23%. This approach balances encoder efficiency, multi-layer information flow, and parameter reduction.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 5","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142316942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Revolutionizing Colon Histopathology Glandular Segmentation Using an Ensemble Network With Watershed Algorithm
IF 3.0 · CAS Zone 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-24 · DOI: 10.1002/ima.23179
Bijoyeta Roy, Mousumi Gupta, Bidyut Krishna Goswami
{"title":"Revolutionizing Colon Histopathology Glandular Segmentation Using an Ensemble Network With Watershed Algorithm","authors":"Bijoyeta Roy,&nbsp;Mousumi Gupta,&nbsp;Bidyut Krishna Goswami","doi":"10.1002/ima.23179","DOIUrl":"https://doi.org/10.1002/ima.23179","url":null,"abstract":"<div>\u0000 \u0000 <p>Colorectal adenocarcinoma, the most prevalent form of colon cancer, originates in the glandular structures of the intestines, presenting histopathological abnormalities in affected tissues. Accurate gland segmentation is crucial for identifying these potentially fatal abnormalities. While recent methodologies have shown success in segmenting glands in benign tissues, their efficacy diminishes when applied to malignant tissue segmentation. This study aims to develop a robust learning algorithm using a convolutional neural network (CNN) to segment glandular structures in colon histology images. The methodology employs a CNN based on the U-Net architecture, augmented by a weighted ensemble network that integrates DenseNet 169, Inception V3, and Efficientnet B3 as backbone models. Additionally, the segmented gland boundaries are refined using the watershed algorithm. Evaluation on the Warwick-QU dataset demonstrates promising results for the ensemble model, by achieving an F1 score of 0.928 and 0.913, object dice coefficient of 0.923 and 0.911, and Hausdorff distances of 38.97 and 33.76 on test sets A and B, respectively. These results are compared with outcomes from the GlaS challenge (MICCAI 2015) and existing research findings. Furthermore, our model is validated with a publicly available dataset named LC25000, and visual inspection reveals promising results, further validating the efficacy of our approach. The proposed ensemble methodology underscores the advantages of amalgamating diverse models, highlighting the potential of ensemble techniques to enhance segmentation tasks beyond individual model capabilities.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 5","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142316832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancement of Semantic Segmentation by Image-Level Fine-Tuning to Overcome Image Pattern Imbalance in HRCT of Diffuse Infiltrative Lung Diseases
IF 3.0 · CAS Zone 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-24 · DOI: 10.1002/ima.23188
Sungwon Ham, Beomhee Park, Jihye Yun, Sang Min Lee, Joon Beom Seo, Namkug Kim
{"title":"Enhancement of Semantic Segmentation by Image-Level Fine-Tuning to Overcome Image Pattern Imbalance in HRCT of Diffuse Infiltrative Lung Diseases","authors":"Sungwon Ham,&nbsp;Beomhee Park,&nbsp;Jihye Yun,&nbsp;Sang Min Lee,&nbsp;Joon Beom Seo,&nbsp;Namkug Kim","doi":"10.1002/ima.23188","DOIUrl":"https://doi.org/10.1002/ima.23188","url":null,"abstract":"<div>\u0000 \u0000 <p>Diagnosing diffuse infiltrative lung diseases (DILD) using high-resolution computed tomography (HRCT) is challenging, even for expert radiologists, due to the complex and variable image patterns. Moreover, the imbalances among the six key DILD-related patterns—normal, ground-glass opacity, reticular opacity, honeycombing, emphysema, and consolidation—further complicate accurate segmentation and diagnosis. This study presents an enhanced U-Net-based segmentation technique aimed at addressing these challenges. The primary contribution of our work is the fine-tuning of the U-Net model using image-level labels from 92 HRCT images that include various types of DILDs, such as cryptogenic organizing pneumonia, usual interstitial pneumonia, and nonspecific interstitial pneumonia. This approach helps to correct the imbalance among image patterns, improving the model's ability to accurately differentiate between them. By employing semantic lung segmentation and patch-level machine learning, the fine-tuned model demonstrated improved agreement with radiologists' evaluations compared to conventional methods. This suggests a significant enhancement in both segmentation accuracy and inter-observer consistency. In conclusion, the fine-tuned U-Net model offers a more reliable tool for HRCT image segmentation, making it a valuable imaging biomarker for guiding treatment decisions in patients with DILD. By addressing the issue of pattern imbalances, our model significantly improves the accuracy of DILD diagnosis, which is crucial for effective patient care.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 5","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142316940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CafeNet: A Novel Multi-Scale Context Aggregation and Multi-Level Foreground Enhancement Network for Polyp Segmentation
IF 3.0 · CAS Zone 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-24 · DOI: 10.1002/ima.23183
Zhanlin Ji, Xiaoyu Li, Zhiwu Wang, Haiyang Zhang, Na Yuan, Xueji Zhang, Ivan Ganchev
{"title":"CafeNet: A Novel Multi-Scale Context Aggregation and Multi-Level Foreground Enhancement Network for Polyp Segmentation","authors":"Zhanlin Ji,&nbsp;Xiaoyu Li,&nbsp;Zhiwu Wang,&nbsp;Haiyang Zhang,&nbsp;Na Yuan,&nbsp;Xueji Zhang,&nbsp;Ivan Ganchev","doi":"10.1002/ima.23183","DOIUrl":"https://doi.org/10.1002/ima.23183","url":null,"abstract":"<p>The detection of polyps plays a significant role in colonoscopy examinations, cancer diagnosis, and early patient treatment. However, due to the diversity in the size, color, and shape of polyps, as well as the presence of low image contrast with the surrounding mucosa and fuzzy boundaries, precise polyp segmentation remains a challenging task. Furthermore, this task requires excellent real-time performance to promptly and efficiently present predictive results to doctors during colonoscopy examinations. To address these challenges, a novel neural network, called CafeNet, is proposed in this paper for rapid and accurate polyp segmentation. CafeNet utilizes newly designed multi-scale context aggregation (MCA) modules to adapt to the extensive variations in polyp morphology, covering small to large polyps by fusing simplified global contextual information and local information at different scales. Additionally, the proposed network utilizes newly designed multi-level foreground enhancement (MFE) modules to compute and extract differential features between adjacent layers and uses the prediction output from the adjacent lower-layer decoder as a guidance map to enhance the polyp information extracted by the upper-layer encoder, thereby improving the contrast between polyps and the background. The polyp segmentation performance of the proposed CafeNet network is evaluated on five benchmark public datasets using six evaluation metrics. Experimental results indicate that CafeNet outperforms the state-of-the-art networks, while also exhibiting the least parameter count along with excellent real-time operational speed.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 5","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23183","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142316939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Automatic Measurement Method of the Tibial Deformity Angle on X-Ray Films Based on Deep Learning Keypoint Detection Network
IF 3.0 · CAS Zone 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-24 · DOI: 10.1002/ima.23190
Ning Zhao, Cheng Chang, Yuanyuan Liu, Xiao Li, Zicheng Song, Yue Guo, Jianwen Chen, Hao Sun
{"title":"An Automatic Measurement Method of the Tibial Deformity Angle on X-Ray Films Based on Deep Learning Keypoint Detection Network","authors":"Ning Zhao,&nbsp;Cheng Chang,&nbsp;Yuanyuan Liu,&nbsp;Xiao Li,&nbsp;Zicheng Song,&nbsp;Yue Guo,&nbsp;Jianwen Chen,&nbsp;Hao Sun","doi":"10.1002/ima.23190","DOIUrl":"https://doi.org/10.1002/ima.23190","url":null,"abstract":"<div>\u0000 \u0000 <p>In the clinical application of the parallel external fixator, medical practitioners are required to quantify deformity parameters to develop corrective strategies. However, manual measurement of deformity angles is a complex and time-consuming process that is susceptible to subjective factors, resulting in nonreproducible results. Accordingly, this study proposes an automatic measurement method based on deep learning, comprising three stages: tibial segment localization, tibial contour point detection, and deformity angle calculation. First, the Faster R-CNN object detection model, combined with ResNet50 and FPN as the backbone, was employed to achieve accurate localization of tibial segments under both occluded and nonoccluded conditions. Subsequently, a relative position constraint loss function was added, and ResNet101 was used as the backbone, resulting in an improved RTMPose keypoint detection model that achieved precise detection of tibial contour points. Ultimately, the bone axes of each tibial segment were determined based on the coordinates of the contour points, and the deformity angles were calculated. The enhanced keypoint detection model, Con_RTMPose, elevated the Percentage of Correct Keypoints (PCK) from 63.94% of the initial model to 87.17%, markedly augmenting keypoint localization precision. Compared to manual measurements conducted by medical professionals, the proposed methodology demonstrates an average error of 0.52°, a maximum error of 1.15°, and a standard deviation of 0.07, thereby satisfying the requisite accuracy standards for orthopedic assessments. The measurement time is approximately 12 s, whereas manual measurement requires about 15 min, greatly reducing the time required. Additionally, the stability of the models was verified through <i>K</i>-fold cross-validation experiments. The proposed method meets the accuracy requirements for orthopedic applications, provides objective and reproducible results, significantly reduces the workload of medical professionals, and greatly improves efficiency.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 5","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142316944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Simulation Design of a Triple Antenna Combination for PET-MRI Imaging Compatible With 3, 7, and 11.74 T MRI Scanners
IF 3.0 · CAS Zone 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-24 · DOI: 10.1002/ima.23191
Daniel Hernandez, Taewoo Nam, Eunwoo Lee, Geun Bae Ko, Jae Sung Lee, Kyoung-Nam Kim
{"title":"Simulation Design of a Triple Antenna Combination for PET-MRI Imaging Compatible With 3, 7, and 11.74 T MRI Scanner","authors":"Daniel Hernandez,&nbsp;Taewoo Nam,&nbsp;Eunwoo Lee,&nbsp;Geun Bae Ko,&nbsp;Jae Sung Lee,&nbsp;Kyoung-Nam Kim","doi":"10.1002/ima.23191","DOIUrl":"https://doi.org/10.1002/ima.23191","url":null,"abstract":"<p>The use of electromagnetism and the design of antennas in the field of medical imaging have played important roles in clinical practice. Specifically, magnetic resonance imaging (MRI) utilizes transmission and reception antennas, or coils, that are tuned to specific frequencies depending on the strength of the main magnet. Clinical scanners operating at 3 Teslas (T) function at a frequency of 127 MHz, while research scanners at 7 T operate at 300 MHz. An 11.74 T scanner for human imaging, which is currently under development, will operate at a frequency of 500 MHz. MRI allows for the high-definition scanning of biological tissues, making it a valuable tool for enhancing images acquired with positron emission tomography (PET). PET is an imaging modality used to evaluate the metabolism of organs or cancers. With recent advancements in the development of portable PET systems that can be integrated into any MRI scanner, we propose the design based on electromagnetic simulations of a triple-tuned array of dipole antennas to operate at 127, 300, and 500 MHz. This array can be attached to the PET inset and used in 3, 7, or 11.74 T scanners.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 5","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23191","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142316941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adapting Segment Anything Model for 3D Brain Tumor Segmentation With Missing Modalities
IF 3.0 · CAS Zone 4 · Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-09-24 · DOI: 10.1002/ima.23177
Xiaoliang Lei, Xiaosheng Yu, Maocheng Bai, Jingsi Zhang, Chengdong Wu
{"title":"Adapting Segment Anything Model for 3D Brain Tumor Segmentation With Missing Modalities","authors":"Xiaoliang Lei,&nbsp;Xiaosheng Yu,&nbsp;Maocheng Bai,&nbsp;Jingsi Zhang,&nbsp;Chengdong Wu","doi":"10.1002/ima.23177","DOIUrl":"https://doi.org/10.1002/ima.23177","url":null,"abstract":"<div>\u0000 \u0000 <p>The problem of missing or unavailable magnetic resonance imaging modalities challenges clinical diagnosis and medical image analysis technology. Although the development of deep learning and the proposal of large models have improved medical analytics, this problem still needs to be better resolved.The purpose of this study was to efficiently adapt the Segment Anything Model, a two-dimensional visual foundation model trained on natural images, to address the challenge of brain tumor segmentation with missing modalities. We designed a twin network structure that processes missing and intact magnetic resonance imaging (MRI) modalities separately using shared parameters. It involved comparing the features of two network branches to minimize differences between the feature maps derived from them. We added a multimodal adapter before the image encoder and a spatial–depth adapter before the mask decoder to fine-tune the Segment Anything Model for brain tumor segmentation. The proposed method was evaluated using datasets provided by the MICCAI BraTS2021 Challenge. In terms of accuracy and robustness, the proposed method is better than existing solutions. The proposed method can segment brain tumors well under the missing modality condition.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 5","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142316943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0