International Journal of Imaging Systems and Technology: Latest Articles

A Novel Approach for Dental X-Ray Enhancement and Caries Detection
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-05-07 DOI: 10.1002/ima.70108
Sajid Ullah Khan, Sultan Alanazi, Fahdah Almarshad, Tallha Akram
Abstract: Manual radiological diagnosis is time-consuming, error-prone, and subjective, especially for complex cases. Although current artificial-intelligence models show promising results for identifying caries, they generally underperform for lack of well-preprocessed images. This work is two-fold. First, we propose a novel layer-division non-zero-elimination model to reduce Poisson noise and de-blur the acquired images. Second, we propose a more accurate and intuitive method for segmenting and classifying dental caries. A total of 17,840 radiographs, a mix of bitewing and periapical X-rays, were used for classification with ResNet-50 and segmentation with ResUNet. ResNet-50 uses skip connections within its residual blocks to mitigate vanishing gradients when detecting cavities, while ResUNet combines the encoder-decoder structure of U-Net with ResNet's residual blocks to improve segmentation of radiographs containing cavities. A stochastic gradient descent optimizer was employed during training to ensure convergence and improve accuracy. ResNet-50 outperformed earlier variants such as ResNet-18 and ResNet-34, achieving 87% accuracy on the classification task. Similarly, ResUNet surpassed state-of-the-art models such as CariesNet, DeepLab v3, and U-Net++, reaching 98% segmentation accuracy.
Citations: 0
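The residual (skip) connection that the abstract credits for ResNet-50's trainability can be sketched in a few lines of NumPy. This is a toy illustration of the mechanism, not the paper's implementation; the shapes and weight scales are ours.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy residual block: two linear transforms plus an identity
    shortcut, the mechanism ResNet uses to keep gradients flowing."""
    out = relu(x @ w1)
    out = out @ w2
    return relu(out + x)  # identity shortcut added before the activation

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1 = rng.standard_normal((8, 8)) * 0.01
w2 = rng.standard_normal((8, 8)) * 0.01
y = residual_block(x, w1, w2)
# With near-zero weights the block approximates ReLU(identity),
# which is why residual networks are easy to optimize from the start.
print(np.allclose(y, relu(x), atol=0.05))
```

The shortcut means each block only has to learn a small correction to the identity map, which is what lets 50-layer networks train where plain deep stacks stall.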
A Leakage-Resistant Spatially Weighted Active Contour for Brain Tumor Segmentation
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-05-07 DOI: 10.1002/ima.70110
Bijay Kumar Sa, Sanjay Agrawal, Rutuparna Panda
Abstract: Accurate delineation of a brain tumor in a magnetic resonance (MR) image is crucial for its prognosis. Active contour models (ACMs) are increasingly applied to brain tumor segmentation, owing to their flexibility in capturing intricate boundaries and their optimization-driven formulation. However, their accuracy is often limited by false convergence induced by intensity inhomogeneity and by leakage through weak-edged boundaries. In contrast to traditional ACMs that use fixed or adaptive scalar weights, we counter these limitations with spatially adaptive weights on the contour's regularization energy terms, which makes the ACM independent of weight initialization. Moreover, no explicit image-fitting term is required in the overall energy, since spatial weighting of the regularization terms can inhibit the contour's motion near boundary pixels. The model dynamically adjusts the weights along the contour based on the Hellinger distances of local intensity distributions from a reference, and it mitigates leakage with a special weighting factor that checks contour motion at points where the intensity statistics change. Despite the overhead of evaluating spatial weights locally along the contour, a parallel implementation maintains reasonable computational efficiency. Experiments on Cheng's brain MR dataset demonstrate accuracy and robustness across levels of inhomogeneity and boundary smoothness, and further tests on other medical images highlight the model's generality. It outperforms the compared state-of-the-art machine learning models and major ACMs.
Citations: 0
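The Hellinger distance the abstract uses to compare local intensity distributions against a reference has a standard closed form for discrete histograms. The sketch below shows that form; how the paper builds the local histograms along the contour is not specified in the abstract.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions,
    H(p, q) = sqrt(0.5 * sum((sqrt(p_i) - sqrt(q_i))^2)).
    Bounded in [0, 1]: 0 for identical histograms, 1 for
    histograms with disjoint support."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()  # normalize counts to probabilities
    q = q / q.sum()
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

print(hellinger([1, 0, 0], [1, 0, 0]))  # 0.0: identical histograms
print(hellinger([1, 0, 0], [0, 0, 1]))  # 1.0: disjoint support
```

Because the distance saturates at 1, it gives a naturally bounded signal for damping contour motion where the local statistics diverge from the reference.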
Innovative MRI Denoising Using Federated and Transfer Learning
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-05-07 DOI: 10.1002/ima.70106
Areeba Naseem Khan, Mohsin Bilal, Sajid Ullah Khan, Salabat Khan, Muhammad Sharif
Abstract: Magnetic resonance imaging (MRI) is crucial for medical diagnostics, providing the detailed images essential for accurate diagnoses. However, centralized image-processing systems pose significant data-privacy risks, particularly when patient data is shared across institutions. This study addresses the dual challenges of MRI denoising and data privacy by introducing a novel hybrid model within a federated learning (FL) framework. The approach combines transfer learning and FL to enhance denoising performance while keeping patient data secure and decentralized. Specifically, a VGG denoising autoencoder (VGG-DAE) integrates a pretrained VGG16 network with an autoencoder, trained across eight clients simulating diverse medical institutions. FL keeps data stored locally and aggregates model updates to refine a global model. Experimental results demonstrate the method's effectiveness, achieving a peak signal-to-noise ratio (PSNR) of 56.95 dB, significantly surpassing traditional denoising approaches, for which state-of-the-art PSNR is capped at around 30 dB. This work underscores the potential of FL for secure and efficient MRI denoising, improving noise reduction while preserving data privacy.
Citations: 0
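The aggregation step that the abstract describes ("FL keeps data stored locally and aggregates model updates") is, in its simplest form, a size-weighted average of client parameters (FedAvg). The sketch below shows only that arithmetic under our own naming; a real FL stack also handles communication rounds and security, which the sketch omits.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).
    client_weights: list of per-client parameter lists, one array
    per tensor; client_sizes: local dataset sizes used as weights."""
    total = float(sum(client_sizes))
    n_tensors = len(client_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(n_tensors)
    ]

# Two simulated clients, one parameter tensor each; client B has
# three times as much local data, so it dominates the average.
w_a = [np.array([1.0, 3.0])]
w_b = [np.array([3.0, 5.0])]
global_w = fedavg([w_a, w_b], client_sizes=[1, 3])
print(global_w[0])  # [2.5 4.5]
```

Because only these parameter averages leave each site, the raw MRI volumes never need to be centralized, which is the privacy argument the paper makes.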
Pneumonia Screening From Radiology Images Using Homomorphic Transformation Filter-Based FAWT and Customized VGG-16
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-05-07 DOI: 10.1002/ima.70093
Rajneesh Kumar Patel, Ankit Choudhary, Nancy Kumari, Hemraj Shobharam Lamkuche
Abstract: Pneumonia, attributable to pathogens and autoimmune disorders, accounts for approximately 450 million cases annually. Chest X-ray analysis remains the gold standard for pneumonia detection, and deep learning has revolutionized the study of high-dimensional data, including images, audio, and video. This research develops and validates a CAD system for distinguishing pneumonia from normal health states in X-ray images. The paper presents a novel methodology that integrates CLAHE with a Homomorphic Transformation Filter-based Flexible Analytical Wavelet Transform (HTF-FAWT) for image decomposition, systematically decomposing pre-processed input images into four distinct sub-band images across six hierarchical levels. Features are extracted with the VGG-16 deep learning architecture and then classified by a support vector machine that combines Morlet-wavelet, Mexican-hat-wavelet, and radial basis function kernels. Under tenfold cross-validation, the model exhibited remarkable classification performance, achieving 97.51% accuracy, 97.77% specificity, and 96.5% sensitivity in detecting pneumonia from chest X-rays. Feature maps and Grad-CAM analysis confirmed that the model attends to the regions critical for accurate prediction, providing visual validation of its efficacy. Statistical examinations validate the superior performance of the proposed framework, with the method outperforming state-of-the-art approaches. The proposed CAD system enhances pneumonia diagnosis with high accuracy (97.51%), Grad-CAM visualization, and automated interpretation, enabling faster and more reliable screening, easier clinical integration, and reduced reliance on manual assessment in radiology.
Citations: 0
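One of the three SVM kernels the abstract names is the Mexican-hat (Ricker) wavelet kernel. A common form, sketched below, applies the wavelet to each coordinate difference and takes the product; the paper's exact parameterization is not given in the abstract, so the scale `a` and the product form are our assumptions.

```python
import numpy as np

def mexican_hat_kernel(x, y, a=1.0):
    """Mexican-hat (Ricker) wavelet kernel for an SVM (a common
    form; the paper's exact definition may differ):
    K(x, y) = prod_i (1 - d_i^2) * exp(-d_i^2 / 2), d_i = (x_i - y_i) / a."""
    d = (np.asarray(x, dtype=float) - np.asarray(y, dtype=float)) / a
    return float(np.prod((1.0 - d ** 2) * np.exp(-(d ** 2) / 2.0)))

print(mexican_hat_kernel([0.0, 0.0], [0.0, 0.0]))  # 1.0 at zero distance
```

Like the RBF kernel, it peaks when the two feature vectors coincide, but its oscillating tail can encode band-pass similarity, which motivates combining it with Morlet and RBF kernels.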
DualBranch-FusionNet: A Hybrid CNN-Transformer Architecture for Cervical Cell Image Classification
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-05-07 DOI: 10.1002/ima.70101
Chuanyun Xu, Shuaiye Huang, Yang Zhang, Die Hu, Yisha Sun, Gang Li
Abstract: Cervical cancer screening relies on accurate cell classification. Approaches based on convolutional neural networks (CNNs) have proven effective at this task, but they face two main challenges: they may introduce bias into models because of variations in cell morphology and color, and they may struggle to capture broader contextual information because CNNs focus primarily on local pixel information. To address these issues, we present a novel hybrid model, DualBranch-FusionNet, which combines a CNN for local feature extraction with a Transformer for global contextual information to improve cervical cell classification accuracy. The method rests on three ideas. First, the CNN branch introduces Omni-dimensional Dynamic Convolution (ODConv) to adaptively extract detailed features across multiple dimensions and adds an Adaptive Channel Modulation (ACM) mechanism to dynamically emphasize critical feature channels. Second, the Transformer branch uses a Dynamic Query-Aware Sparse Attention (DQSA) mechanism to filter out less relevant key-value pairs over a larger receptive field, reducing interference from irrelevant information. Third, a Simple Fusion Module (SFM) fuses the two branches into a more comprehensive feature representation. The model was validated on two datasets, the Mendeley LBC and the Tianchi Cervical Cancer Risk Intelligent Diagnosis Challenge datasets, achieving accuracies of 99.07% and 99.12%, respectively.
Citations: 0
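The abstract's Adaptive Channel Modulation is described only as "dynamically emphasizing critical feature channels". A generic channel-attention gate in that spirit (pool each channel, score it, rescale) can be sketched as follows; the real module's layers and weights are not specified in the abstract, so everything below is illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_modulation(feat, w):
    """Toy channel-attention gate: global-average-pool each channel,
    score it with a learned matrix, squash to (0, 1), and rescale the
    feature map channel-wise."""
    # feat: (C, H, W) feature map; w: (C, C) scoring weights
    pooled = feat.mean(axis=(1, 2))      # (C,) channel descriptors
    gates = sigmoid(pooled @ w)          # (C,) per-channel weights in (0, 1)
    return feat * gates[:, None, None]   # broadcast gate over H and W

rng = np.random.default_rng(1)
feat = rng.standard_normal((3, 4, 4))
out = channel_modulation(feat, np.eye(3))
print(out.shape)  # (3, 4, 4)
```

Each channel is multiplied by a single scalar in (0, 1), so informative channels pass through nearly unchanged while weak ones are suppressed before fusion with the Transformer branch.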
DSA: Deep Self-Attention Medical Transformer Neuro-Technology for Brain Tumor Segmentation
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-05-07 DOI: 10.1002/ima.70109
Mariyam Siddiqah, Kashif Javed, Syed Omer Gilani, Muhammad Attique Khan, Shrooq Alsenan, Robertas Damaševičius, Yudong Zhang
Abstract: Transformer-based methods have shown remarkable results in medical image segmentation; the Swin Transformer in particular has proven an impressive approach for segmentation tasks, demonstrating its potential to advance the discipline. Extensive research on integrating the Swin Transformer architecture with U-Net models has made significant progress toward improving segmentation accuracy, and researchers continue to seek methods that improve accuracy on enhancing-tumor regions, whose heterogeneous and indistinct boundaries make them difficult to segment. We propose DSA, a modified version of Swin UNETR that is deeper and extracts global features more effectively through an enhanced self-attention mechanism in the later stages of the encoder. It outperformed prior methods on the enhancing-tumor class while remaining competitive on the other two classes, and with fine-tuning of some hyperparameters it achieved state-of-the-art performance for brain tumor segmentation: a mean Dice score of 0.889 and a mean Jaccard score of 0.806. A comparison with recent state-of-the-art techniques showed improved accuracy over the best-performing U-Net and Transformer architectures.
Citations: 0
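The two metrics the abstract reports, Dice and Jaccard, are both overlap ratios on binary masks and are easy to compute directly:

```python
import numpy as np

def dice_jaccard(pred, gt):
    """Dice and Jaccard overlap scores for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    jaccard = inter / np.logical_or(pred, gt).sum()
    return float(dice), float(jaccard)

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt   = np.array([[1, 0, 0],
                 [0, 1, 1]])
d, j = dice_jaccard(pred, gt)
print(round(d, 3), round(j, 3))  # 0.667 0.5
```

For a single mask pair the two are linked by J = D / (2 - D); plugging in the paper's mean Dice of 0.889 gives about 0.800, close to the reported mean Jaccard of 0.806 (means over many cases need not satisfy the identity exactly).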
MPKU-Net: A U-Shaped Medical Image Segmentation Network Based on MLP and KAN
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-05-05 DOI: 10.1002/ima.70105
Peng Chen, Huihui Wang, Qin Jin
Abstract: The U-Net architecture has been widely adopted for image segmentation across various domains, owing to its efficient and powerful performance in recent years. Its application and enhancement in medical image segmentation primarily involve convolutional neural networks (CNNs) and Transformers, but both have fundamental limitations: CNNs keep computational complexity low yet struggle to capture global features, while Transformers excel at capturing global features but demand substantial parameters and computation and fail to extract local features effectively. To address these challenges, we propose MPKU-Net, a U-shaped network that integrates a multilayer perceptron (MLP) with a Kolmogorov-Arnold Network (KAN) to extract both local and global characteristics in a coordinated manner. MPKU-Net features a flexible rolling-flip operation that, together with the MLP and KAN, forms the WE-MPK modules for thorough learning of global and local features. Extensive testing on the BUSI, CVC, and GlaS datasets shows that MPKU-Net consistently outperforms several widely used segmentation networks, including U-KAN, Rolling-UNet, and U-Net++, in both model parameters and segmentation accuracy, highlighting its effectiveness as a scalable solution for medical image segmentation. The code is available at https://github.com/cp668688/MPKU-Net.
Citations: 0
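The abstract names a "rolling flip" operation but does not define it. Reading it literally as a circular shift followed by a reversal of the token axis, a sketch looks like the following; this interpretation, the function name, and the axis choice are all our assumptions, not the paper's code.

```python
import numpy as np

def roll_flip(tokens, shift=1):
    """One literal reading of a 'rolling flip' token-mixing step:
    circularly shift the token axis, then flip it, so every position
    is paired with a different neighborhood on each application."""
    rolled = np.roll(tokens, shift, axis=0)  # circular shift
    return rolled[::-1]                      # reverse the token axis

x = np.arange(4)      # [0 1 2 3]
print(roll_flip(x))   # roll -> [3 0 1 2], flip -> [2 1 0 3]
```

Operations of this kind give an MLP access to shifted views of the sequence without any attention weights, which is consistent with the parameter-efficiency claim in the abstract.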
Enhancement of Chest X-Ray Images Classification With Fuzzy-Variable Neural Network Activation Function
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-05-05 DOI: 10.1002/ima.70094
Rayene Chelghoum, Ameur Ikhlef, Sabir Jacquir
Abstract: This study presents a novel Variable Single-Input Type-2 Fuzzy Rectifying Units activation function (VAR-SIT2-FRU), which assigns variable triangular membership functions to different input values and dynamically adjusts the membership-function width to optimize performance across tasks. The proposed activation function is designed to capture nonlinear relationships in data and to enhance the efficiency and reliability of deep learning models while reducing computational cost compared to traditional activation functions, making it appropriate for medical image analysis. The paper evaluates VAR-SIT2-FRU against five widely used activation functions and the classic SIT2-FRU activation function on AlexNet and ResNet-50 architectures, classifying COVID-19, normal, and pneumonia cases from chest X-ray images. All images are preprocessed, normalized, and augmented to prevent overfitting. The results show that VAR-SIT2-FRU is well suited to medical classification tasks, achieving higher classification accuracy and improved learning efficiency.
Citations: 0
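The triangular membership function underlying the activation described above has a standard fuzzy-set form: a peak of 1 at the center and linear falloff to 0 at the support edges. The sketch below shows that standard form, not the paper's exact VAR-SIT2-FRU code, whose width-adjustment rule is not given in the abstract.

```python
import numpy as np

def triangular_mf(x, a, b, c):
    """Standard triangular membership function with support [a, c]
    and peak 1 at b; the abstract's method varies these parameters
    per input, a rule not detailed here."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a)    # rising edge on [a, b]
    right = (c - x) / (c - b)   # falling edge on [b, c]
    return np.clip(np.minimum(left, right), 0.0, 1.0)

print(triangular_mf([-1.0, 0.0, 0.5, 1.0, 2.0], a=-1, b=0.5, c=2))
# peak of 1.0 at x = 0.5, falling to 0 at the support edges
```

Widening or narrowing [a, c] changes how sharply the unit responds around its center, which is the knob the paper's "variable" design turns per input.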
JILDYA-Net: An Efficient Lightweight Multi-Class Classification Architecture for Skin Lesions
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-05-05 DOI: 10.1002/ima.70102
Ayoub Laouarem, Chafia Kara-Mohamed, El-Bay Bourennane, Aboubekeur Hamdi-Cherif
Abstract: Skin lesion classification has become increasingly important yet challenging, given the time physicians spend manually analyzing very similar lesions. Traditional deep learning methods have long offered dependable automated support for lesion detection, improving patient care, but newer lightweight architectures bring distinct advantages, lower computational requirements and faster training, making them better suited to mobile devices, microcontrollers, and embedded systems. This paper proposes JILDYA-Net, a lightweight method designed for mobile applications and embedded systems that enables accurate, rapid, and consistent diagnosis of skin lesions from dermoscopic images. The approach improves the analysis of dermoscopic images through two main components. First, a novel convolutional attention component, attention-based structural feature enhancement, is introduced to enhance skin lesion features. Second, an FNet-based encoder enables faster processing and lower memory usage via Fourier transforms, which is especially beneficial for longer input lengths. In addition, an external attention module refines learned representations and emphasizes relevant features, accelerating convergence and improving model performance and stability during training, while augmentation techniques address class-imbalance sensitivity by generating additional data and reducing overfitting. The overall goal is optimal performance from a simple model that trains quickly. Evaluated by accuracy, sensitivity, specificity, and AUC on augmented and balanced versions of the HAM10000 and ISIC-2019 datasets, the approach performs competitively with state-of-the-art methods and demonstrates superior accuracy, sensitivity, and specificity relative to competing methods.
Citations: 0
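The FNet-style encoder the abstract leans on replaces self-attention with a parameter-free Fourier transform over the token and feature axes, keeping the real part. That single operation, shown below, is why the encoder is fast and memory-light on long inputs; the rest of JILDYA-Net's layers are not reproduced here.

```python
import numpy as np

def fnet_mixing(x):
    """FNet token mixing: 2D FFT over the sequence and hidden
    dimensions, keeping only the real part. No learned weights,
    so cost is O(n log n) in the sequence length."""
    return np.real(np.fft.fft2(x))

rng = np.random.default_rng(2)
x = rng.standard_normal((16, 8))   # (tokens, features)
out = fnet_mixing(x)
print(out.shape)  # (16, 8)
```

Because the mixing is a fixed linear map, all learning happens in the surrounding feed-forward layers, which is the trade that buys the speed and memory savings cited in the abstract.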
A Robust Generative Segmentation Method for Panoramic Dental Radiography Images
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-05-03 DOI: 10.1002/ima.70099
Ayşe Başağaoğlu Fındık, Gizem Dursun Demir, Ufuk Özkaya, Gültekin Özdemir
Abstract: Panoramic imaging is commonly used by dentists both in routine practice and in planning dental treatments. Segmenting tooth components and identifying the features that underpin treatment planning is challenging in panoramic images, owing to machine-dependent noise levels, low edge contrast, and overlapping anatomical structures. Furthermore, a robust method is required that can segment all tooth components across a variety of scenarios, including fillings, braces, implants, prosthetic dental crowns, and missing teeth. To address these issues, this study proposes a generative model for segmenting panoramic dental images. The proposed Generative Adversarial Network (GAN) model is trained to learn the spatial relationship between the original panoramic radiographs and the ground-truth images containing the boundaries of the image components. The model is evaluated on the UESB dataset, and its segmentation performance is compared with the U-Net model and SOTA methods evaluated on the same dataset. The GAN model achieved 0.8715 Jaccard, 0.9304 Dice, 0.9353 precision, and 0.9293 recall without any pre- or post-processing, outperforming the U-Net model and competing with other convolutional neural network models. Segmentation performance was further validated through an ablation study over loss functions. The quantitative and qualitative analyses substantiate that the model is robust and delivers superior segmentation performance, exemplifying the potential of GAN models as an effective methodology for computer-aided tooth segmentation, diagnosis, and treatment planning.
Citations: 0
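The adversarial objective behind a GAN segmenter can be written in a few lines on the discriminator's outputs. The sketch below shows the generic non-saturating losses; the paper's exact architecture and loss mix (its ablation studies several loss functions) are not specified in the abstract, so this is illustrative only.

```python
import numpy as np

def gan_losses(d_real, d_fake):
    """Non-saturating GAN losses given discriminator outputs in (0, 1).
    d_real: D's scores on real (image, mask) pairs; d_fake: D's scores
    on generated masks. The generator tries to push d_fake toward 1."""
    eps = 1e-12  # guard against log(0)
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return float(d_loss), float(g_loss)

# A confident, correct discriminator drives its own loss toward 0
# while leaving the generator with a large gradient signal.
d, g = gan_losses(d_real=np.array([0.99]), d_fake=np.array([0.01]))
print(d < 0.05, g > 4.0)  # True True
```

Training alternates the two losses: the discriminator learns to score real boundary maps above generated ones, and the generator learns to produce masks the discriminator accepts, which is what lets the model absorb spatial structure without hand-tuned pre- or post-processing.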