International Journal of Imaging Systems and Technology: Latest Articles

Machine Learning Assisted Differential Diagnosis of Pulmonary Nodules Based on 3D Images Reconstructed From CT Scans
IF 3.0 | CAS Quartile 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-02-20 DOI: 10.1002/ima.70054
Xiao-Yuan Wang, Qin Hong, Da-Wei Li, Tao Wu, Yue-Qiang Liu, Ruo-Can Qian
Abstract: Lung cancer is one of the most common and deadly diseases worldwide. Precise diagnosis at an early stage is particularly significant, as it supports better therapeutic decision-making and prognosis. Despite advances in computed tomography (CT) scanning for the detection of pulmonary nodules, accurately assessing the diverse range of pulmonary nodules remains a substantial challenge. Here, we present a machine learning approach for the accurate differentiation of pulmonary nodules, based on three-dimensional (3D) lung models reconstructed from two-dimensional (2D) CT scans. Inspired by the success of deep convolutional neural networks (DCNNs) in natural image recognition, we propose a DCNN-based technique for pulmonary nodule detection. First, an algorithm generates 3D lung models from raw 2D CT scans, providing a stereoscopic depiction of the lungs. A DCNN then extracts features from the images and classifies the pulmonary nodules. The developed model classifies pulmonary nodules with diverse characteristics at 86% accuracy, demonstrating strong performance. We believe this strategy will provide a useful tool for the early clinical diagnosis and management of lung cancer.

Citations: 0
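The listing does not include the authors' reconstruction code. As a minimal illustration of the general idea of building a 3D volume from 2D CT slices (the function name and the nearest-neighbour z-axis resampling are my assumptions, not the paper's method), a sketch might look like this:

```python
import numpy as np

def reconstruct_volume(slices, pixel_spacing_mm, slice_thickness_mm):
    """Stack 2D CT slices into a 3D volume and resample the z-axis with
    nearest-neighbour interpolation so the voxels become (roughly) isotropic.

    slices: list of 2D arrays (H, W), ordered along the scan axis.
    """
    volume = np.stack(slices, axis=0)               # (Z, H, W)
    scale = slice_thickness_mm / pixel_spacing_mm   # z step vs. in-plane step
    new_z = int(round(volume.shape[0] * scale))
    # nearest-neighbour resampling: pick the closest source slice per output slice
    src_idx = np.clip(
        np.round(np.linspace(0, volume.shape[0] - 1, new_z)).astype(int),
        0, volume.shape[0] - 1,
    )
    return volume[src_idx]
```

Real pipelines would use higher-order interpolation (e.g., trilinear) and honour the DICOM spacing tags; this only shows the stacking-and-resampling skeleton.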
Convolutional Block Attention Module and Parallel Branch Architectures for Cervical Cell Classification
IF 3.0 | CAS Quartile 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-02-18 DOI: 10.1002/ima.70048
Zafer Cömert, Ferat Efil, Muammer Türkoğlu
Abstract: Cervical cancer remains a significant global health concern, underscoring the importance of early detection for effective treatment and better patient outcomes. While traditional Pap smear tests remain an invaluable diagnostic tool, they are time-consuming and susceptible to human error. This study introduces a convolutional neural network (CNN) approach to improve the accuracy and efficiency of cervical cell classification. The proposed model incorporates the Convolutional Block Attention Module (CBAM) and parallel branch architectures, which enhance feature extraction, identifying the most relevant elements of an image for classification, by focusing on crucial spatial and channel information. Evaluated on the SIPaKMeD dataset, the model attains 92.82% accuracy, surpassing traditional CNN models. The attention mechanisms not only improve classification accuracy but also aid interpretability by emphasizing crucial regions within the images. This study highlights the potential of deep learning in medical image analysis, particularly cervical cancer screening, providing a tool to support pathologists in early detection and accurate diagnosis. Future work will explore additional attention mechanisms and extend the architecture to other medical imaging tasks, further enhancing its clinical utility and impact on patient outcomes.

Citations: 0
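CBAM's channel-then-spatial attention is a published mechanism; a compact numpy sketch of it (simplified: the original spatial branch applies a 7x7 convolution to concatenated mean/max maps, replaced here by an elementwise sum for brevity) could look like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(x, w1, w2):
    """Minimal CBAM: channel attention followed by spatial attention.

    x: feature map (C, H, W).
    w1 (C//r, C) and w2 (C, C//r): the shared MLP of the channel branch.
    """
    # --- channel attention: shared MLP over avg- and max-pooled descriptors
    avg = x.mean(axis=(1, 2))                       # (C,)
    mx = x.max(axis=(1, 2))                         # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # ReLU hidden layer
    ca = sigmoid(mlp(avg) + mlp(mx))                # (C,) channel weights
    x = x * ca[:, None, None]
    # --- spatial attention: channel-wise mean and max maps, summed
    # (the original CBAM convolves their concatenation with a 7x7 kernel)
    sa = sigmoid(x.mean(axis=0) + x.max(axis=0))    # (H, W) spatial weights
    return x * sa[None, :, :]
```

Both attention maps lie in (0, 1), so the refined features are always a damped version of the input, which is what lets the module emphasize informative channels and regions without changing the feature map's shape.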
Brain Tumors Classification in MRIs Based on Personalized Federated Distillation Learning With Similarity-Preserving
IF 3.0 | CAS Quartile 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-02-17 DOI: 10.1002/ima.70046
Bo Wu, Donghui Shi, Jose Aguilar
Abstract: Owing to legal restrictions and privacy preservation, it is impractical to consolidate medical data across multiple regions for model training, which makes data sharing difficult. Federated learning (FL) offers a solution, but traditional FL struggles with non-independent, identically distributed (Non-IID) data, where the data distribution across clients is heterogeneous. Personalized federated learning (PFL) can tackle the Non-IID issue but suffers from drawbacks such as lower accuracy or high memory usage, and knowledge-distillation-based PFL has limited model learning capability. In this study, we propose FedSPD, a federated learning framework that integrates similarity-preserving knowledge distillation to bridge the gap between global knowledge and local models. FedSPD reduces discrepancies by aligning feature representations through cosine similarity at the feature level, enabling local models to assimilate global knowledge while preserving personalized characteristics. This enhances performance in heterogeneous environments while mitigating privacy risks by sharing only averaged logits, in line with stringent medical data security requirements.

Extensive experiments were conducted on three datasets (MNIST, CIFAR-10, and brain tumor MRI), comparing FedSPD with nine state-of-the-art FL and PFL algorithms. On the general datasets under the IID setting, FedSPD performed comparably to existing methods. In Non-IID scenarios, where a Dirichlet distribution controlled the data distribution across clients to model non-uniform partitions, FedSPD showed accuracy improvements of up to 77.77% over traditional FL methods and up to 4.19% over PFL methods. On the brain tumor MRI dataset, FedSPD outperformed most algorithms under the IID condition and showed even greater advantages in Non-IID settings, with accuracy improvements of up to 78.41% over traditional FL methods and up to 10.55% over PFL methods. FedSPD also reduced computational overhead, shortening each training round by up to 67.25% compared with other PFL methods and reducing parameter size by up to 49.34%, improving scalability and efficiency. By integrating global and personalized features, FedSPD enhances model generalization across heterogeneous medical datasets and supports clinical decision-making, contributing to more accurate diagnoses and better patient prognosis. This scalable and privacy-preserving solution meets the practical demands of healthcare applications.

Citations: 0
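The two ingredients named in the abstract, feature-level cosine alignment and averaged-logit sharing, can be sketched as follows. This is a hypothetical illustration of those two operations, not FedSPD's actual code:

```python
import numpy as np

def cosine_alignment_loss(local_feats, global_feats, eps=1e-8):
    """Feature-level alignment: 1 - mean cosine similarity between each
    local (student) feature vector and its global (teacher) counterpart.
    local_feats, global_feats: (N, D) arrays. Loss is 0 when aligned.
    """
    ln = np.linalg.norm(local_feats, axis=1) + eps
    gn = np.linalg.norm(global_feats, axis=1) + eps
    cos = np.sum(local_feats * global_feats, axis=1) / (ln * gn)
    return float(np.mean(1.0 - cos))

def averaged_logits(client_logits):
    """Server-side aggregation: only the average of client logits is shared,
    never raw data or full model weights."""
    return np.mean(np.stack(client_logits, axis=0), axis=0)
```

Because cosine similarity ignores vector magnitude, the alignment term constrains the direction of local features toward the global representation while leaving room for client-specific scaling, which matches the personalization goal described above.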
MMCAF: A Survival Status Prediction Method Based on Cross-Attention Fusion of Multimodal Colorectal Cancer Data
IF 3.0 | CAS Quartile 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-02-14 DOI: 10.1002/ima.70051
Xueping Tan, Dinghui Wu, Hao Wang, Zihao Zhao, Yuxi Ge, Shudong Hu
Abstract: Artificial intelligence methods in computer-assisted diagnosis systems are critical for colorectal cancer survival analysis and prognosis. However, owing to the low prediction accuracy of single-modality studies and the complexity of multimodal fusion methods, progress on colorectal cancer has so far been limited. To address this, the authors propose a multimodal cross-attention fusion (MMCAF) technique for predicting colorectal cancer survival status. First, feature engineering builds a feature set for each modality and addresses the heterogeneity of multimodal data. Second, a three-modality fusion technique assigns weights to single-modality and multimodal features via channel and cross-attention mechanisms. Finally, the cross-entropy loss is minimized to classify survival status. Experimental results show that MMCAF predicts survival status with 97.73% accuracy and an area under the receiver operating characteristic curve (AUC) of 0.99. Compared with the best competing fusion approach (feature concatenation), prediction accuracy increases by about 6 percentage points and AUC by 7 percentage points, thoroughly demonstrating MMCAF's efficacy in predicting colorectal cancer survival.

Citations: 0
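Cross-attention between two modalities, the core of the fusion step above, can be sketched as a single head: queries come from one modality, keys and values from another, so modality A learns to weight modality B's features. This is a generic illustration, not MMCAF's published architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, wq, wk, wv):
    """Single-head cross-attention.

    q_feats: (Nq, D) features of the querying modality.
    kv_feats: (Nk, D) features of the attended modality.
    wq, wk, wv: (D, Dh) projection matrices.
    Returns (Nq, Dh): each query token as a weighted mix of the other
    modality's value vectors.
    """
    Q, K, V = q_feats @ wq, kv_feats @ wk, kv_feats @ wv
    scores = Q @ K.T / np.sqrt(K.shape[1])   # scaled dot-product (Nq, Nk)
    return softmax(scores, axis=-1) @ V
```

Running the head twice with the modality roles swapped, then concatenating, is one common way such fusion blocks combine complementary information from paired inputs.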
Dermatology 2.0: Deploying YOLOv11 for Accurate and Accessible Skin Disease Detection: A Web-Based Approach
IF 3.0 | CAS Quartile 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-02-14 DOI: 10.1002/ima.70050
Adnan Hameed, Said Khalid Shah, Sajid Ullah Khan, Sultan Alanazi, Shabbab Ali Algamdi
Abstract: Skin disorders are common and require timely diagnosis and treatment. Traditional diagnostics place heavy demands on clinicians' time and on the interpretation of results. To address this, we introduce YOLOv11, an enhanced deep learning model for skin disease detection and classification. The model integrates EfficientNetB0 as the backbone for feature extraction and ResNet50 in the head for robust classification and localization. Trained on a dataset of 10 common skin diseases to ensure robustness and accuracy, the model achieves a mean Average Precision (mAP) of 89.8%, a precision of 90%, and a recall of 88% on the test set. The model is deployed as a Streamlit web application that lets both clinicians and patients upload images for preliminary diagnostics, enabling assessment without an in-person visit and making skin disease diagnosis more accessible.

Citations: 0
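The reported detection metrics (mAP, precision, recall) all rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal IoU helper (not the paper's code) looks like this:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)  # clamp: no overlap -> 0
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A prediction typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (0.5 is the classic choice); mAP averages precision over recall levels and classes under that matching rule.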
LFBTS: Enhanced Multimodality MRI Fusion for Brain Tumor Segmentation With Limited Computational Resources
IF 3.0 | CAS Quartile 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-02-13 DOI: 10.1002/ima.70044
Yuanjing Hu, Aibin Huang
Abstract: Efficient and accurate segmentation of brain tumors in multimodal magnetic resonance imaging (MRI) is crucial for clinical diagnosis and treatment planning. Traditional methods tend to concentrate solely on feature extraction from individual modalities, overlooking the substantial potential of multimodal feature fusion for enhancing segmentation performance. In this paper, we present a method that strategically integrates salient features from different modalities while respecting the constraints of limited computational resources, ensuring both accuracy and efficiency. Two key modules, the attention-guided cross-modality fusion module (ACFM) and the hierarchical asymmetric convolution module (HACM), were designed to exploit the distinct modalities and the varying information focus across dimensions. The ACFM is based on a transformer framework and uses self-attention and cross-attention to capture local and global dependencies within and between MRI modalities, enabling the effective fusion of complementary features and thereby improving segmentation. The HACM reduces computational complexity with a pseudo-3D convolution approach that decomposes 3D convolutions into components along the transverse and sagittal axes; unlike traditional 2D convolutions, this preserves essential spatial information across dimensions and exploits the varying information density of the different spatial planes, balancing accuracy and efficiency.

In extensive experiments on the BraTS2021 dataset, the proposed modality-fusion-based network under limited resources (LFBTS) achieves Dice scores of 0.925, 0.911, and 0.886 for whole tumor (WT), tumor core (TC), and enhancing tumor (ET), respectively. These results outperform state-of-the-art (SOTA) models and models from the preceding two years, highlighting the potential of the approach for advancing brain tumor segmentation and improving clinical decision-making, particularly in resource-limited settings.

Citations: 0
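The parameter savings of a pseudo-3D decomposition can be checked by direct counting. Assuming a k x k x 1 in-plane convolution followed by a 1 x 1 x k axial convolution (the exact channel plan of the HACM is my assumption; the abstract only names the axis-wise split):

```python
def conv3d_params(c_in, c_out, k):
    """Weights of a full k x k x k 3D convolution (bias omitted)."""
    return c_in * c_out * k ** 3

def pseudo3d_params(c_in, c_out, k):
    """Weights after decomposing into a k x k x 1 in-plane convolution
    followed by a 1 x 1 x k convolution along the remaining axis
    (bias omitted; the second conv maps c_out -> c_out channels)."""
    return c_in * c_out * k * k + c_out * c_out * k
```

For a typical 32-channel layer with k = 3 the full kernel needs 27,648 weights while the decomposed pair needs 12,288, roughly a 2.25x reduction, which is the kind of saving that makes such networks viable on limited hardware.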
DML-GNN: ASD Diagnosis Based on Dual-Atlas Multi-Feature Learning Graph Neural Network
IF 3.0 | CAS Quartile 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-02-12 DOI: 10.1002/ima.70038
Shuaiqi Liu, Chaolei Sun, Jinkai Li, Shuihua Wang, Ling Zhao
Abstract: To better automate the diagnosis of autism spectrum disorder (ASD) and improve diagnostic accuracy, we construct a dual-atlas multi-feature learning graph neural network (DML-GNN) for ASD diagnosis, based on local feature information from brain atlases and global feature information from multi-modal data. First, DML-GNN uses a dual-atlas feature extraction module to capture the initial features of each subject. Second, it combines k-nearest-neighbor graphs, graph pooling, graph convolution (GCN), and graph channel attention (GCA) into a local feature learning module that extracts deep features for each subject, eliminates redundant features, and efficiently fuses multi-atlas features. Third, it builds a global feature learning module that combines the non-imaging information accompanying the fMRI data with a graph isomorphism network (GINConv) to construct comprehensive multi-graph features and learn node embeddings. Finally, a multi-layer perceptron (MLP) produces the diagnosis. Compared with recent ASD-diagnosis algorithms on the public Autism Brain Imaging Data Exchange I (ABIDE I) dataset, our method demonstrates superior performance, underscoring its potential as an effective tool.

Citations: 0
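The k-nearest-neighbor graph construction named in the local feature learning module is a standard building block; a generic numpy sketch (not the authors' implementation) is:

```python
import numpy as np

def knn_graph(feats, k):
    """Symmetric k-nearest-neighbour adjacency from (N, D) feature vectors,
    using Euclidean distance; self-loops excluded."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # a node is not its own neighbour
    adj = np.zeros_like(d)
    nn = np.argsort(d, axis=1)[:, :k]        # k closest nodes per row
    rows = np.repeat(np.arange(len(feats)), k)
    adj[rows, nn.ravel()] = 1.0
    return np.maximum(adj, adj.T)            # symmetrise for an undirected graph
```

The resulting adjacency matrix is what a GCN or GINConv layer consumes alongside the node features; symmetrising makes the graph undirected, which most message-passing layers expect.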
Harnessing AI for Comprehensive Reporting of Medical AI Research
IF 3.0 | CAS Quartile 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-02-11 DOI: 10.1002/ima.70047
Mohamed L. Seghier
Abstract: In this editorial, I succinctly discuss the potential of using AI to improve the reporting of medical AI research. Several guidelines and checklists already exist in the literature, but how they are interpreted and implemented varies across publishers, editors, reviewers, and authors. Here, I discuss harnessing generative AI tools to help authors comprehensively report their AI work and meet current guidelines, with the ultimate aim of improving transparency and replicability in medical AI research. The discussion addresses two key issues: (1) AI has a seductive allure that can affect how AI-generated evidence is scrutinized and disseminated, hence the need for comprehensive and transparent reporting; and (2) authors sometimes feel uncertain about what to report, given the many existing guidelines on reporting AI research and the lack of consensus in the field.

It has been argued that extraneous or irrelevant information with a seductive allure can improve the ratings of scientific explanations [1]. AI, with its overhyped knowledgeability, can convey biases and false information that readers might judge believable [2]. AI can write highly convincing text that can impress or deceive readers, even in the presence of errors and false information [3, 4]. Likewise, merely mentioning "AI" in the title of a research paper seems to increase its citation potential [5], which might incentivise scientists to use AI purely to boost citability, regardless of whether AI improved their work's quality. In this context, one might speculate that some publications that used AI with flawed methodologies or wrong conclusions have slipped through the cracks of peer review, many already indexed and citable [6]. Overall, emerging evidence suggests that AI has an intrinsic seductive allure that is shaping the medical research landscape and affecting how readers appraise research articles that employ AI. This is why improving the reporting and evaluation of AI work is of paramount importance; in this editorial, I underscore the potential role of generative AI for that purpose.

Consider this: readers might find a paper entitled "Association between condition X and biomarker Y demonstrated with deep learning" novel and worth reading. Now imagine the same finding evidenced with a traditional analysis method and entitled "Association between condition X and biomarker Y demonstrated with a correlation analysis" (though the authors of the latter are unlikely to consider correlation analysis worth mentioning in the title). Although both report the same finding, they may not enjoy the same buzz and citability, because AI-based methods and traditional analysis methods operate at different maturity levels.

Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70047

Citations: 0
Enhanced BoxInst for Weakly Supervised Liver Tumor Instance Segmentation in CT Images
IF 3.0 | CAS Quartile 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-02-08 DOI: 10.1002/ima.70043
Shanshan Li, Yuhan Zhang, Lingyan Zhang, Wei Chen
Abstract: Accurate liver tumor detection and segmentation are essential for disease diagnosis and treatment planning. While traditional methods rely on pixel-level mask annotations for fully supervised training, weakly supervised techniques are gaining attention for their reduced annotation requirements. In this study, we propose Enhanced BoxInst, an improved version of BoxInst that incorporates two key innovations: a position activation (PA) module and a progressive mask generation (PMG) module. The PA module uses a Spatial Awareness (SA) block to locate tumor regions accurately and encodes the location information into the segmentation branch through a Spatial Interaction Encoding (SIE) mechanism, achieving cross-spatial feature interaction and improving segmentation accuracy. The PMG module employs a feature decomposition scheme to refine tumor masks progressively from coarse to fine, accurately restoring the overall layout and boundary details of the tumor mask. Extensive experiments on the LiTS, AMU-Liver, and 3DIRCADb datasets show that Enhanced BoxInst outperforms existing methods in liver tumor instance segmentation, highlighting its potential for practical medical image analysis when only box annotations are available. The code is available at https://github.com/ssli23/Enhanced_BoxInst.

Citations: 0
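Box-supervised training in the original BoxInst relies on a projection term: the predicted soft mask, max-projected onto each image axis, should match the projections of the annotated box. A numpy sketch of that term follows (the enhanced variant's exact losses are not given in the abstract, so this illustrates only the base technique):

```python
import numpy as np

def projection_loss(pred_mask, box):
    """BoxInst-style projection term.

    pred_mask: (H, W) predicted soft mask in [0, 1].
    box: (x1, y1, x2, y2) ground-truth box in pixel indices.
    Returns the sum of dice losses between axis-wise max-projections
    of the prediction and of the box mask.
    """
    H, W = pred_mask.shape
    box_mask = np.zeros((H, W))
    x1, y1, x2, y2 = box
    box_mask[y1:y2, x1:x2] = 1.0

    def dice(p, g, eps=1e-6):
        return 1.0 - (2.0 * np.sum(p * g) + eps) / (
            np.sum(p * p) + np.sum(g * g) + eps)

    loss_x = dice(pred_mask.max(axis=0), box_mask.max(axis=0))  # column-wise
    loss_y = dice(pred_mask.max(axis=1), box_mask.max(axis=1))  # row-wise
    return loss_x + loss_y
```

The projections constrain only the mask's tight bounding extent, which is exactly what a box annotation certifies; the mask's interior shape is then shaped by auxiliary terms such as BoxInst's color-pairwise affinity loss, omitted here.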
Pancreatic Tumor Detection From CT Images Converted to Graphs Using Whale Optimization and Classification Algorithms With Transfer Learning
IF 3.0 | CAS Quartile 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-02-08 DOI: 10.1002/ima.70040
Yusuf Alaca, Ömer Faruk Akmeşe
Abstract: Pancreatic cancer is one of the most aggressive cancers, with a high mortality rate because it is often diagnosed at an advanced stage; early diagnosis can prolong patients' lives and improve treatment success. In this study, an innovative method is proposed to enhance the diagnosis of pancreatic cancer. Computed tomography (CT) images were converted into graphs using the Harris corner detection algorithm and analyzed with deep learning models via transfer learning. DenseNet121 and InceptionV3 models were trained on the graph-based data, and model parameters were optimized using the Whale Optimization Algorithm (WOA). Classification algorithms, namely k-nearest neighbors (k-NN), support vector machines (SVM), and random forests (RF), were then applied to the extracted features. The best results were achieved with k-NN on WOA-optimized features: 92.10% accuracy and an F1 score of 92.74%. The study showed that the graph-based transformation modeled spatial relationships more effectively, improving deep learning performance, and that WOA was markedly superior to the other methods for parameter optimization. The work aims to contribute to a reliable diagnostic system that can be integrated into clinical applications; in the future, larger and more diverse datasets and different graph-based methods could improve the generalizability and performance of the approach. The proposed model could serve as a decision support tool for physicians, particularly for early diagnosis, offering an opportunity to improve patients' quality of life.

Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70040

Citations: 0
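The Harris corner response that seeds the graph construction is a classical measure; a small numpy sketch (naive windowed sums for clarity, and a plain box window rather than the Gaussian weighting often used) is:

```python
import numpy as np

def harris_response(img, k=0.05, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor accumulated over a win x win box window."""
    iy, ix = np.gradient(img.astype(float))      # image gradients (rows, cols)
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box_sum(a):
        # naive O(H*W*win^2) windowed sum, clipped at the borders
        out = np.zeros_like(a)
        h, w = a.shape
        r = win // 2
        for y in range(h):
            for x in range(w):
                out[y, x] = a[max(0, y - r):y + r + 1,
                              max(0, x - r):x + r + 1].sum()
        return out

    sxx, syy, sxy = box_sum(ixx), box_sum(iyy), box_sum(ixy)
    det = sxx * syy - sxy * sxy
    return det - k * (sxx + syy) ** 2
```

Pixels where R is large and positive (both eigenvalues of M large) are corners; thresholded corner locations can then serve as graph nodes, with edges defined by spatial proximity, which appears to be the spirit of the paper's image-to-graph conversion.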