{"title":"TPA-Seg: Multi-Class Nucleus Segmentation Using Text Prompts and Cross-Attention","authors":"Yao-Ming Liang, Shi-Yu Lin, Zu-Xuan Wang, Ling-Feng Yang, Yi-Bo Jin, Yan-Hong Ji","doi":"10.1002/ima.70125","DOIUrl":"https://doi.org/10.1002/ima.70125","url":null,"abstract":"<div>\u0000 \u0000 <p>Precise semantic segmentation of nuclei in pathological images is a crucial step in pathological diagnosis and analysis. Given the limited scale and the high cost of annotation for current pathological datasets, appropriately incorporating textual prompts as prior knowledge is key to achieving high-accuracy multi-class segmentation. These text prompts can be derived from image information such as the morphology, size, location, and density of nuclei in medical images. The text prompts are processed by a text encoder to obtain textual features, while the images are processed by an image encoder to obtain multi-scale feature maps. These features are then fused through feature fusion blocks, allowing the features to interact and be perceived in a multi-scale multimodal manner. Finally, metric learning and weighted loss functions are introduced to prevent feature loss caused by a small number of categories or small target sizes in the image. Experimental results on multiple pathological image datasets demonstrate that our method is effective and outperforms existing models in the segmentation of pathological images. Furthermore, the study verifies the effectiveness of each module and evaluates the potential of different types of text prompts in improving performance. The insights and methods proposed may offer a novel solution for segmentation and classification tasks. The code can be viewed at https://github.com/kahhh743/TPA-Seg.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144281540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Deep Learning Model Based Cancerous Lung Nodules Severity Grading Framework Using CT Images","authors":"P. Mohan Kumar, V. E. Jayanthi","doi":"10.1002/ima.70134","DOIUrl":"https://doi.org/10.1002/ima.70134","url":null,"abstract":"<div>\u0000 \u0000 <p>Lung cancer remains one of the leading causes of cancer-related mortality, with early diagnosis being critical for improving patient survival rates. Existing deep learning models for lung nodule severity classification face significant challenges, including overfitting, computational inefficiency, and inaccurate segmentation of nodules from CT images. To overcome these limitations, this study proposes a novel deep learning framework integrating a Quadrangle Attention-based <i>U</i>-shaped Convolutional Transformer (QA-UCT) for segmentation and a Spatial Attention-based Multi-Scale Convolution Network (SMCN) for classification. CT images are enhanced using the Rotationally Invariant Block Matching-based Non-Local Means (RIB-NLM) filter to remove noise while preserving structural details. The QA-UCT model leverages transformer-based global attention mechanisms combined with convolutional layers to segment lung nodules with high precision. The SMCN classifier employs spatial attention mechanisms to categorize nodules as solid, part-solid, or non-solid based on severity. The proposed model was evaluated on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. This proposed model achieves a 98.73% dice score for segmentation and 99.56% classification accuracy, outperforming existing methods such as U-Net, VGG, and autoencoders. Improved precision and recall demonstrate superior performance in lung nodule grading. This study introduces a transformer-enhanced segmentation and spatial attention based classification framework that significantly improves lung nodule detection accuracy. The integration of QA-UCT and SMCN enhances both segmentation precision and classification reliability. Future research will explore adapting this framework for liver and kidney segmentation, as well as optimizing computational efficiency for real-time clinical deployment.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144264560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ChromSeg-P3GAN: A Benchmark Dataset and Pix2Pix Patch Generative Adversarial Network for Chromosome Segmentation","authors":"Remya Remani Sathyan, Hariharan Sreedharan, Hari Prasad, Gopakumar Chandrasekhara Menon","doi":"10.1002/ima.70133","DOIUrl":"https://doi.org/10.1002/ima.70133","url":null,"abstract":"<div>\u0000 \u0000 <p>Chromosome image analysis with automated karyotyping systems (AKS) is crucial for the diagnosis and prognosis of hematologic malignancies and genetic disorders. However, the partial or complete occlusion of nonrigid chromosome structures significantly limits the performance of AKS. To address these challenges, this paper extends the Pix2Pix generative adversarial network (GAN) model for the first time to segment overlapping and touching chromosomes. A new publicly available dataset of G-banded metaphase chromosome images has been prepared specifically for this study, marking the first use of GAN-based methods on such data, as previous research has been confined to FISH image datasets. A comprehensive comparative study of Pix2Pix GAN objective functions—including binary cross entropy (BCE) loss with and without logit, Tversky loss, Focal Tversky (FT) loss with different gamma values, and Dice loss—has been conducted. To address class imbalance and segmentation challenges, a custom loss function combining BCE with logit, Tversky loss, and L1 loss is introduced, which yields superior performance. Furthermore, a 5-fold cross-validation is performed to evaluate the stability and performance of the models. The top five models from the comparative study are tested on a completely unseen dataset, and their performance is visualized using a boxplot. The proposed model demonstrates the best segmentation performance, with Intersection over Union (IoU) of 0.9247, Dice coefficient of 0.9596, and recall of 0.9687. The results validate the robustness and effectiveness of the proposed approach for addressing overlapping and touching chromosome segmentation in AKS.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144244243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Convolution Block Extension of DCNN for Retinal Vascular Segmentation: Taxonomy and Discussion","authors":"Henda Boudegga, Yaroub Elloumi, Rostom Kachouri, Asma Ben Abdallah, Nesrine Abroug, Mohamed Hedi Bedoui","doi":"10.1002/ima.70118","DOIUrl":"https://doi.org/10.1002/ima.70118","url":null,"abstract":"<div>\u0000 \u0000 <p>The retinal vascular tree (RVT) segmentation is a main step for diagnosing several ocular diseases. Higher accurate segmentation remains crucial to ensure a reliable disease detection and hence clinical treatment. Numerous standard deep learning (DL) architectures have been employed to segment the RVT regardless of the image field However, due to the intricate morphologies of vascular trees comprising fine and complex structures, those DL architectures failed to achieve high accuracy in retinal vessel segmentation. Therefore, several promising solutions have been developed to overcome these limitations, where their main contributions rely on adapting the convolution processing of deep convolutional neural networks (DCNNs) blocks with respect to the retinal vessels characteristics. In this paper, we present a review of extended convolution blocks within DCNNs for RVT segmentation from fundus images. Our main contributions remain on (1) Identifying the different principles extension of convolution blocks; (2) Proposing a taxonomy of convolution block extension, and (3) Analyzing and discussing the strengths and weaknesses of each extension type with respect to segmentation quality and database characteristics. The presented study allows a valuable recommendation for future research in the field of RVT segmentation based on DCNN.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144244241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NMDAU-Net: A Novel Lightweight 3D Network for Precision Segmentation of Brain Gliomas in MRI","authors":"Dongjie Li, Xiangyu Meng, Yu Liang, Bei Jiang, Jiaxin Ren","doi":"10.1002/ima.70135","DOIUrl":"https://doi.org/10.1002/ima.70135","url":null,"abstract":"<div>\u0000 \u0000 <p>Brain MRI images are inherently three-dimensional, and traditional segmentation methods frequently fail to capture critical information. To address the complexities of 3D brain glioma MRI image segmentation, we introduced NMDAU-Net, a high-performance lightweight 3D segmentation network. This network builds upon the 3D U-Net architecture by integrating an enhanced 3D decomposable convolution block and dense attention modules (DAMs), significantly improving feature interaction and representation. Incorporating the avoid space pyramid pooling (ASPP) module as a transition structure between the encoder and decoder further augments feature extraction and enables the capture of richer semantic information. In addition, a weighted bidirectional feature pyramid module replaces the conventional skip connections in the 3D U-Net, facilitating the integration of multiscale features. Our model was evaluated on a dataset comprising more than 378 3D brain glioma MRI images and achieved a Dice score of 86.91%. The enhanced segmentation precision of NMDAU-Net offers crucial support for precise diagnosis and personalized treatment strategies and is promising for significantly improving treatment outcomes for glioma. This demonstrates its substantial potential for clinical application in enhancing patient prognosis and survival rates.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144244242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stochastic Differential Equation Approach as Uncertainty-Aware Feature Recalibration Module in Image Classification","authors":"Romen Samuel Wabina, Prut Saowaprut, Junwei Yang, Christine Wagas Pitos","doi":"10.1002/ima.70131","DOIUrl":"https://doi.org/10.1002/ima.70131","url":null,"abstract":"<p>Despite significant advancements in image classification, deep learning models struggle to accurately discern fine details in images, producing overly confident and imbalanced predictions for certain classes. These models typically employ feature recalibration techniques but do not account for the underlying uncertainty in predictions—particularly in complex sequential tasks like image classification. These uncertainties can significantly impact the reliability of subsequent analyses, potentially compromising accuracy across various applications. To address these limitations, we introduce the Stochastic Differential Equation Recalibration Module (SDERM), a novel approach designed to dynamically adjust the channel-wise feature responses in convolutional neural networks. It integrates a stochastic differential equation (SDE) framework into a feature recalibration module to capture the inherent uncertainties in the data and its model predictions. To the best of our knowledge, our study is the first to explore the integration of SDE-based feature recalibration modules in image classification. We build SDERM based on two interconnected networks—drift and diffusion network. The drift network serves as a deterministic component that approximates the predictive function of the model that systematically influences recalibrations of the predictions without considering the randomness. Concurrently, the diffusion network uses the Wiener process that captures the inherent uncertainties within the data and the network's predictions. We tested the classification accuracy of SDERM in ResNet50, ResNet101, and ResNet152 against other recalibration modules, including Squeeze-Excitation (SE), Convolutional Block Attention Module (CBAM), Gather and Excite (GE), and Position-Aware Recalibration Module (PARM), as well as the original Bottleneck architecture. Public image classification datasets were used, including CIFAR-10, SVHN, FashionMNIST, and HAM10000, and their classification accuracies were evaluated using the F1 score. The proposed ResNetSDE architecture achieved state-of-the-art F1 scores across four of five benchmark datasets. On Fashion-MNIST, ResNetSDE attained an F1 score of 0.937 (CI: 0.932–0.941), outperforming all baseline recalibration methods by margins of 0.9%–1.3%. For CIFAR-10 and CIFAR-100, ResNetSDE achieved 0.886 (CI: 0.879–0.892) and 0.962 (CI: 0.958–0.965), respectively, surpassing ResNet-GE and ResNet-CBAM by 3.5% and 1.3%, respectively. ResNetSDE dominated SVHN with an F1 of 0.956 (CI: 0.953–0.958), a significant improvement over ResNet-CBAM's 0.948 (CI: 0.945–0.951). While ResNet-CBAM led on the class-imbalanced HAM10000 (0.770, CI: 0.758–0.782), ResNetSDE remained competitive (0.768, CI: 0.749–0.786) since its consistent superiority—evidenced by narrow confidence intervals—validates its efficacy as a feature recalibration framework. 
Our experiments demonstrate that SDERM can outperform existing feature recalibration modules in image cl","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70131","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144220218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
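An illustrative sketch, under stated assumptions, of an SDE-style channel recalibration in the spirit of SDERM: per-channel descriptors are evolved with a drift network and a Wiener-noise diffusion network via Euler-Maruyama steps, then used to rescale the feature map. Layer sizes, step count, and the squeeze/scale structure are assumptions, not the authors' implementation.

```python
# Illustrative SDE-based channel recalibration (Euler-Maruyama over channel descriptors).
import torch
import torch.nn as nn

class SDERecalib(nn.Module):
    def __init__(self, channels: int, steps: int = 4, dt: float = 0.1):
        super().__init__()
        self.steps, self.dt = steps, dt
        self.drift = nn.Sequential(nn.Linear(channels, channels), nn.Tanh())        # deterministic term
        self.diffusion = nn.Sequential(nn.Linear(channels, channels), nn.Softplus())  # noise scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W). Squeeze to per-channel descriptors, evolve them with
        # Euler-Maruyama updates, then rescale the feature map channel-wise.
        z = x.mean(dim=(2, 3))                                  # (B, C)
        for _ in range(self.steps):
            dw = torch.randn_like(z) * self.dt ** 0.5           # Wiener increment
            z = z + self.drift(z) * self.dt + self.diffusion(z) * dw
        scale = torch.sigmoid(z).unsqueeze(-1).unsqueeze(-1)    # (B, C, 1, 1)
        return x * scale

feat = torch.randn(2, 256, 14, 14)
print(SDERecalib(256)(feat).shape)  # torch.Size([2, 256, 14, 14])
```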
{"title":"DETF-Net: A Network for Retinal Vessel Segmentation Utilizing Detailed Feature Enhancement and Dynamic Temporal Fusion","authors":"Shaoli Li, Tielin Liang, Dejian Li, Changhong Jiang, Bin Liu, Luyao He","doi":"10.1002/ima.70132","DOIUrl":"https://doi.org/10.1002/ima.70132","url":null,"abstract":"<div>\u0000 \u0000 <p>The segmentation of retinal vessel images is a pivotal step in diagnosing various ophthalmic and systemic diseases. Among deep learning techniques, UNet has been extensively utilized for its capability to deliver remarkable segmentation results. Nonetheless, significant challenges persist, particularly the loss of detail and spatial resolution caused by downsampling operations in convolutional and pooling layers. This drawback often results in subpar segmentation of small targets and intricate boundaries. Furthermore, achieving a balance between capturing global context and preserving local detail remains challenging, thereby limiting the segmentation performance on multi-scale targets. To tackle these challenges, this study proposes the Detail-Enhanced Temporal Fusion Network (DETF-Net), which introduces two essential modules: (1) the Detail Feature Enhancement Module (DFEM), designed to strengthen the representation of complex boundary features through the integration of median pooling, spatial attention, and mixed depthwise convolution; and (2) the Dynamic Temporal Fusion Module (DTFM), which combines Multi-scale Feature Extraction (MFE) and the Temporal Fusion Attention Mechanism (TFAM). The MFE module improves robustness across varying vessel sizes and shapes, while the TFAM dynamically adjusts feature importance and effectively captures subtle changes in vessel structure. The effectiveness of DETF-Net was evaluated on three benchmark datasets: DRIVE, CHASE_DB1, and STARE. The proposed network achieved high accuracy scores of 0.9811, 0.9875, and 0.9876, respectively, alongside specificity values of 0.9811, 0.9870, and 0.9875. Comparative experiments demonstrated that DETF-Net outperforms current state-of-the-art models, showcasing its superior segmentation performance. This research presents innovative approaches to address existing limitations in retinal vessel image segmentation, thereby advancing diagnostic accuracy for ophthalmic diseases.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144219896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Enhanced U-Net Model for Precise Oral Epithelial Layer Segmentation Using Patch-Based Training","authors":"Taibur Rahman, Lipi B. Mahanta, Anup Kumar Das, Gazi Naseem Ahmed","doi":"10.1002/ima.70136","DOIUrl":"https://doi.org/10.1002/ima.70136","url":null,"abstract":"<div>\u0000 \u0000 <p>The oral epithelial layer is crucial for detecting oral dysplasia and cancer from histopathology images. Accurate segmentation of the oral epithelial layer in biopsy slide images is essential for early detection and effective treatment planning of conditions like Oral Epithelial Dysplasia, where abnormal changes increase the risk of oral cancer. This study investigates using a Deep Learning model to precisely identify and segment areas of the Oral Epithelial Layer in biopsy images of the oral cavity, aiming to enhance early diagnosis and treatment strategies. The study is conducted with an indigenously collected and benchmarked dataset of 300 histopathology images of the oral cavity, representing 64 patients. We propose a Deep Learning-based modified U-Net model for segmenting oral cavity histopathology images. Various patch sizes and batch size combinations were tested and implemented for comparison. The performance of the optimal patch and batch size combination is further compared with relevant state-of-the-art models. The modified U-Net model utilizing the patch generation technique demonstrated superior performance in oral cavity epithelium segmentation, achieving an IoU of 98.06, precision of 99.66, recall of 99.13, and F1-score of 99.00. Our research underscores the efficacy of deep learning-based segmentation with the patch generation technique in improving oral health diagnostics, outperforming several state-of-the-art models in segmenting the epithelial layer. This research enhances segmentation, a key step in Computer-Aided Diagnosis systems, ensuring accurate analysis, efficient processing, and reliable medical image interpretation for improved patient outcomes.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144220273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multicentered Data Based Polyp Detection Using Colonoscopy Images Using DNN","authors":"Abdus Salam, Moajjem Hossain Chowdhury, M. Murugappan, Muhammad E. H. Chowdhury","doi":"10.1002/ima.70123","DOIUrl":"https://doi.org/10.1002/ima.70123","url":null,"abstract":"<div>\u0000 \u0000 <p>The diagnosis and screening of colon polyps are essential for the early detection of colorectal cancer. Polyps can be identified through colonoscopies before becoming cancerous, making accurate detection and prompt intervention critical for colorectal health. A comprehensive evaluation of deep learning models using colonoscopy images and comparisons with state-of-the-art models is presented in this study. A total of 7900 still and video sequence images from the PolypGen multicenter data set were used to train cutting-edge object detection models, including YOLOv5, YOLOv7, YOLOv8, and F-RCNN + ResNet101. In terms of accuracy, precision, recall, and mAP, the YOLOv8x model achieved the best performance with an F1 score of 0.9058, accuracy of 0.949, precision of 0.863, and [email protected]. The robustness of the model was further confirmed across varying patient demographics and conditions using the external Kvasir data set. To enhance interpretability, the EigenCam explainable AI (XAI) technique was used, offering visual insights into the model's decision-making process by highlighting the most influential regions in the input images.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144220272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ET-WOFS Metaheuristic Feature Selection Based Approach for Endometrial Cancer Classification and Detection","authors":"Ramneek Kaur Brar, Manoj Sharma","doi":"10.1002/ima.70126","DOIUrl":"https://doi.org/10.1002/ima.70126","url":null,"abstract":"<div>\u0000 \u0000 <p>Endometrial Cancer (EC), also referred to as <i>endometrial carcinoma</i>, stands as the most common category of carcinoma of the uterus in females, ranking as the sixth most common cancer worldwide among women. This study introduces a Machine Learning-Based Efficient Computer-Aided Diagnosis (ML-CAD) state-of-the-art model aimed at assisting healthcare professionals in investigating, estimating, and accurately classifying endometrial cancer through the meticulous analysis of H&E-stained histopathological images. In the initial phase of image processing, meticulous steps are taken to eliminate noise from histopathological images. Subsequently, the application of the Vahadane stain normalization technique ensures stain normalization across histopathological images. The segmentation of stain-normalized histopathological images is executed with precision using the k-NN clustering approach, thereby enhancing the classification capabilities of the proposed ML-CAD model. Shallow features and deep features are extracted for analysis. The integration of shallow and deep features is achieved through a middle-level fusion strategy, and the SMOTE-Edited Nearest Neighbor (SMOTE-ENN) pre-processing technique is applied to address the sample imbalance issue. The identification of optimal features from a heterogeneous feature dataset is conducted meticulously using the novel Extra Tree-Whale Optimization Feature Selector (ET-WOFS). For the subsequent classification of endometrial cancer, a repertoire of classifiers, including k-NN, Random Forest, and Support Vector Machine (SVM), is harnessed. The classifier that incorporates ET-WOFS features demonstrates exceptional classification outcomes. Compared with existing models, the outcomes demonstrate that a k-NN classifier utilizing ET-WOFS features showcases remarkable outcomes with a classification accuracy of 95.78%, precision of 96.77%, an impressively low false positive rate (FPR) of 1.40%, and also a minimal false negative rate (FNR) of 4.21%. Further validation of the model's prediction and classification performance is evaluated in terms of the AUC-ROC value and other metrices. These presented assessments affirm the model's efficacy in providing accurate and reliable diagnostic support for endometrial cancer.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144206683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}