Image and Vision Computing: Latest Articles

Noise-robust re-identification with triple-consistency perception
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing, Pub Date: 2024-07-28, DOI: 10.1016/j.imavis.2024.105197
Abstract: Traditional re-identification (ReID) methods heavily rely on clean and accurately annotated training data, rendering them susceptible to label noise in real-world scenarios. Although some noise-robust learning methods have been proposed and achieve promising recognition performance, most of them are designed for image classification and are not well suited to ReID, which involves associating and matching objects rather than merely identifying them. To address this problem, we propose a Triple-consistency Perception based Noise-robust Re-identification Model (TcP-ReID), in which the model is guided to mine and focus on clean samples and reliable relationships among samples from different perspectives. Specifically, the self-consistency strategy guides the model to emphasize and prioritize clean samples, preventing overfitting to noisy labels during the initial stages of training. Rather than focusing solely on individual samples, the context-consistency loss exploits similarities between samples in the feature space, encouraging the predictions for each sample to align with those of its nearest neighbors. Moreover, to further enforce robustness, a Jensen-Shannon divergence based cross-view consistency loss is introduced to encourage consistency of samples across different views. Extensive experiments demonstrate the superiority of TcP-ReID over competing methods under both instance-dependent and instance-independent noise. For instance, on the Market1501 dataset, our method achieves 85.8% rank-1 accuracy and 56.3% mAP (improvements of 5.6% and 8.7%) under instance-independent noise with a noise ratio of 50%, and improvements of 5.7% and 1.4% under instance-dependent label noise.
Citations: 0
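The cross-view consistency term can be illustrated with a short sketch. The snippet below is a generic Jensen-Shannon consistency loss between the class predictions of two views of the same samples, written in PyTorch; it is an illustration under assumptions, not the authors' TcP-ReID implementation, whose exact weighting and view construction are not given here.

```python
import torch
import torch.nn.functional as F

def js_consistency_loss(logits_view1: torch.Tensor, logits_view2: torch.Tensor) -> torch.Tensor:
    """Symmetric Jensen-Shannon divergence between the class distributions of two views."""
    p = F.softmax(logits_view1, dim=1)
    q = F.softmax(logits_view2, dim=1)
    m = 0.5 * (p + q)
    # F.kl_div expects log-probabilities as its first argument: kl_div(log m, p) = KL(p || m)
    kl_pm = F.kl_div(m.log(), p, reduction="batchmean")
    kl_qm = F.kl_div(m.log(), q, reduction="batchmean")
    return 0.5 * (kl_pm + kl_qm)
```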
Visionary vigilance: Optimized YOLOV8 for fallen person detection with large-scale benchmark dataset
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing, Pub Date: 2024-07-26, DOI: 10.1016/j.imavis.2024.105195
Abstract: Falls pose a significant risk to elderly people, patients with conditions such as neurological disorders and cardiovascular diseases, and disabled children. This highlights the need for real-time intelligent fall detection (FD) systems that enable quick relief and assisted living. Existing attempts are often based on multimodal approaches that are computationally expensive due to multi-sensor integration. Computer vision (CV) based FD requires the deployment of state-of-the-art (SOTA) networks with progressive enhancements to capture falls effectively. However, CV-based systems often lack the ability to operate efficiently in real time, and attempts at visual intelligence are usually not integrated at feasible stages of the networks. More importantly, the lack of large-scale, well-annotated benchmark datasets limits FD in challenging and complex environments. To bridge these research gaps, we propose an enhanced version of YOLOV8 for FD. Our work addresses these limitations through three key contributions. First, a comprehensive large-scale dataset is introduced, comprising approximately 10,500 image samples with corresponding annotations. The dataset encompasses diverse environmental conditions and scenarios, facilitating the generalization ability of models. Second, progressive enhancements to the YOLOV8S model are proposed, integrating a focus module in the backbone to optimize feature extraction. Moreover, convolutional block attention modules (CBAMs) are integrated at feasible stages of the network to improve spatial and channel contexts for more accurate detection, especially in complex scenes. Finally, an extensive empirical evaluation showcases the superiority of the proposed network over 13 SOTA techniques, substantiated by meticulous benchmarking and qualitative validation across varied environments. The empirical findings and analysis of factors such as model performance, size, and processing time show that the proposed network delivers impressive results. The dataset with annotations, results, and the progressive code modifications will be made available to the research community at https://github.com/habib1402/Fall-Detection-DiverseFall10500.
Citations: 0
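For reference, the sketch below is a compact, generic CBAM (channel attention followed by spatial attention) in PyTorch. Where and how the authors insert it into the YOLOV8S backbone, and their focus-module design, are not reproduced here; this only illustrates the standard CBAM mechanism.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Generic Convolutional Block Attention Module: channel attention, then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(                       # shared MLP for channel attention
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention from average- and max-pooled descriptors
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention from channel-wise average and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```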
Generative feature-driven image replay for continual learning
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing, Pub Date: 2024-07-25, DOI: 10.1016/j.imavis.2024.105187
Abstract: Neural networks are prone to catastrophic forgetting when trained incrementally on different tasks. Popular incremental learning methods mitigate such forgetting by retaining a subset of previously seen samples and replaying them during training on subsequent tasks. However, this is not always possible, e.g., due to data protection regulations. In such restricted scenarios, one can employ generative models to replay either artificial images or hidden features to a classifier. In this work, we propose Genifer (GENeratIve FEature-driven image Replay), where a generative model is trained to replay images that must induce the same hidden features as real samples when they are passed through the classifier. Our technique therefore incorporates the benefits of both image and feature replay: (1) unlike conventional image replay, our generative model explicitly learns the distribution of features that are relevant for classification; (2) in contrast to feature replay, our entire classifier remains trainable; and (3) we can leverage image-space augmentations, which increase distillation performance while also mitigating overfitting during the training of the generative model. We show that Genifer substantially outperforms the previous state of the art for various settings on the CIFAR-100 and CUB-200 datasets. The code is available at https://github.com/kevthan/feature-driven-image-replay.
Citations: 0
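A minimal sketch of the feature-driven replay objective described above: the generator is trained so that its synthetic images, when passed through the classifier's feature extractor, reproduce the hidden features of real samples. The function and argument names are hypothetical, and Genifer's full training procedure (distillation, image-space augmentations) is not shown.

```python
import torch
import torch.nn.functional as F

def feature_replay_loss(generator, feature_extractor, z, real_features):
    """Match classifier features of generated images to features of real samples.

    generator: maps latent codes z to images.
    feature_extractor: the classifier's hidden-feature head (kept differentiable).
    real_features: target hidden features computed from real samples.
    """
    fake_images = generator(z)                       # synthesize replay images
    fake_features = feature_extractor(fake_images)   # hidden features induced by the classifier
    return F.mse_loss(fake_features, real_features)  # drive generated features toward real ones
```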
Semantics feature sampling for point-based 3D object detection
IF 4.7, CAS Tier 3, Computer Science
Image and Vision Computing, Pub Date: 2024-07-23, DOI: 10.1016/j.imavis.2024.105180
Authors: Jing-Dong Huang, Ji-Xiang Du, Hong-Bo Zhang, Huai-Jin Liu
Abstract: 3D object detection is currently a research hotspot in computer vision. In this paper, we observe that the commonly used set abstraction module retains excessive irrelevant background information during downsampling, which impacts detection precision. To address this, we propose a mixed sampling method. During point feature extraction, we integrate semantic features into the sampling process, guiding the set abstraction module to sample foreground points. To leverage the high-quality 3D proposals generated in the first stage, we develop a virtual point pooling module for acquiring the features of these proposals, which facilitates the capture of more comprehensive and resilient ROI features. Experimental results on the KITTI test set show a 3.51% higher Average Precision (AP) compared to the PointRCNN baseline, particularly for the moderately challenging car class, highlighting the effectiveness of our approach.
Citations: 0
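The semantics-guided sampling idea can be illustrated roughly as follows: per-point foreground scores from a semantic head bias point selection toward foreground objects. This is a sketch with assumed tensor shapes; the paper's mixed sampling inside the set abstraction module and its virtual point pooling are not reproduced.

```python
import torch

def semantic_topk_sampling(points: torch.Tensor, fg_scores: torch.Tensor, k: int):
    """Keep the k points most likely to belong to the foreground.

    points: (N, 3 + C) point coordinates plus features.
    fg_scores: (N,) foreground probabilities predicted by a semantic segmentation head.
    """
    idx = torch.topk(fg_scores, k=k, dim=0).indices   # indices of the k most foreground-like points
    return points[idx], idx
```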
QSMT-net: A query-sensitive proposal and multi-temporal-span matching network for video grounding
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing, Pub Date: 2024-07-23, DOI: 10.1016/j.imavis.2024.105188
Abstract: The video grounding task aims to retrieve moments from videos corresponding to a given textual query. This task poses significant challenges because of the need to comprehend the semantic content of both videos and sentences and to manage the matching relationship between modalities. Existing approaches struggle to meet this challenge effectively, as they often lack diversity in constructing proposals to fit segments from varied scenes and disregard the multi-temporal-scale matching relationship between queries and proposals. In this paper, we propose the Query-Sensitive Proposal and Multi-Temporal-Span Matching Network (QSMT-Net), a framework designed to generate more distinctive proposals and to enhance the matching between queries and candidate proposals over varying temporal spans. First, we strengthen the connection between modalities by instituting fine-grained interactions between video clips and textual words; then, through a learnable pooling mechanism, we dynamically construct candidate proposals tailored to specific queries, implementing a query-sensitive proposal generation strategy. Second, we enhance the model's ability to differentiate adjacent candidate proposals through the multi-temporal-span matching network, which facilitates selecting the most accurate proposal under various time scales. In experiments on three widely used benchmarks, Charades-STA, TACoS, and ActivityNet Captions, our approach demonstrates significant improvements over state-of-the-art methods, indicating promising advancements in video grounding.
Citations: 0
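A speculative sketch of the query-sensitive pooling idea: clip features inside a candidate span are pooled with attention weights conditioned on the sentence embedding, so each proposal representation depends on the query. Module names and dimensions are assumptions; QSMT-net's actual proposal construction and multi-temporal-span matching are more involved.

```python
import torch
import torch.nn as nn

class QueryConditionedPooling(nn.Module):
    """Pool clip features of one candidate span with query-dependent attention weights."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)   # scores a (clip, query) pair

    def forward(self, clip_feats: torch.Tensor, query_feat: torch.Tensor) -> torch.Tensor:
        # clip_feats: (T, D) features of clips inside the span; query_feat: (D,) sentence embedding
        q = query_feat.unsqueeze(0).expand_as(clip_feats)                                # (T, D)
        weights = torch.softmax(self.score(torch.cat([clip_feats, q], dim=-1)), dim=0)   # (T, 1)
        return (weights * clip_feats).sum(dim=0)   # query-sensitive proposal feature, (D,)
```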
Multi-label recognition in open driving scenarios based on bipartite-driven superimposed dynamic graph
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing, Pub Date: 2024-07-22, DOI: 10.1016/j.imavis.2024.105189
Abstract: The multi-label image recognition task is widely prevalent in real-world scenarios, and overcoming the issue of overlapping and densely packed objects in complex scenes is crucial. For instance, in traffic scenarios there are overlaps and close proximity among pedestrians, various types of vehicles, and signage. However, a primary obstacle in leveraging label relationships to enhance image classification lies in effectively integrating label semantic topology information with the image data itself. In this paper, we propose a novel framework, the Bipartite-driven Superimposed Dynamic Graph Convolutional Network (Bi-SDNet), augmented with a Mapping Alignment Module (MAM) and a Semantic Decoupling Module (SDM). Our approach first decomposes input features into representations capable of discerning category label semantics at multiple scales, facilitated by the MAM and SDM modules. Furthermore, through the meticulously designed superimposed dynamic graph, we capture content-aware category relationships for each image, effectively modeling the relationships between these representations for the final recognition task. We conducted extensive experiments on publicly available benchmark datasets and the traffic scene dataset WZ-traffic. The model achieved 87.5% mean average precision (mAP) on the MS-COCO dataset and 91% mAP on the WZ-traffic dataset. Our research introduces novel techniques and significant advances in this field, furnishing powerful tools for enhancing model performance.
Citations: 0
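For orientation, the sketch below shows a basic graph-convolution label head of the kind used in graph-based multi-label recognition: label embeddings are propagated over an adjacency matrix and then act as per-class classifiers on the image feature. Bi-SDNet's superimposed dynamic graph, MAM, and SDM are not reproduced; all names here are illustrative.

```python
import torch
import torch.nn as nn

class GCNLabelHead(nn.Module):
    """One-step graph convolution over label embeddings used as per-class classifiers."""
    def __init__(self, num_labels: int, label_dim: int, feat_dim: int, adjacency: torch.Tensor):
        super().__init__()
        self.register_buffer("adj", adjacency)                 # (L, L) label relation graph
        self.label_emb = nn.Parameter(torch.randn(num_labels, label_dim))
        self.gc = nn.Linear(label_dim, feat_dim)                # graph-convolution weight

    def forward(self, image_feat: torch.Tensor) -> torch.Tensor:
        # Propagate label embeddings over the graph: A @ E @ W, then use them as classifiers
        classifiers = torch.relu(self.gc(self.adj @ self.label_emb))   # (L, feat_dim)
        return image_feat @ classifiers.t()                            # (B, L) multi-label logits
```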
LDConv: Linear deformable convolution for improving convolutional neural networks
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing, Pub Date: 2024-07-20, DOI: 10.1016/j.imavis.2024.105190
Abstract: Neural networks based on convolutional operations have achieved remarkable results in deep learning, but standard convolutional operations have two inherent flaws. On the one hand, the convolution operation is confined to a local window, so it cannot capture information from other locations, and its sampling shape is fixed. On the other hand, the convolutional kernel is fixed to a k × k square, and the number of parameters grows quadratically with kernel size. Although Deformable Convolution (Deformable Conv) addresses the fixed-sampling problem of standard convolution, its parameter count also grows quadratically, and it does not explore the effect of different initial sample shapes on network performance. In response, this work explores Linear Deformable Convolution (LDConv), which gives the convolution kernel an arbitrary number of parameters and arbitrary sampled shapes, providing richer options for the trade-off between network overhead and performance. In LDConv, a novel coordinate generation algorithm is defined to generate different initial sampled positions for convolutional kernels of arbitrary size. To adapt to changing targets, offsets are introduced to adjust the shape of the samples at each position. LDConv corrects the parameter-count growth of standard convolution and Deformable Conv to linear growth. Compared to Deformable Conv, LDConv provides richer choices and is equivalent to deformable convolution when its number of parameters is set to the square of k. This paper also explores the effect on neural networks of using LDConv with the same size but different initial sampling shapes. LDConv performs efficient feature extraction via irregular convolutional operations and brings more exploration options for convolutional sampling shapes. Object detection experiments on the representative datasets COCO2017, VOC 7 + 12, and VisDrone-DET2021 fully demonstrate the advantages of LDConv. LDConv is a plug-and-play convolutional operation that can replace standard convolution to improve network performance. The code for the relevant tasks can be found at https://github.com/CV-ZhangXin/LDConv.
Citations: 0
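The core of LDConv's linear parameter growth is that an arbitrary number N of sampling points is laid out on a compact grid rather than forced into a k × k square. The snippet below sketches one plausible initial-coordinate generator under that assumption; the learned per-position offsets and bilinear resampling of the full layer are omitted, and the authors' reference implementation lives in the repository linked above.

```python
import math
import torch

def initial_sample_coords(num_points: int) -> torch.Tensor:
    """Return (num_points, 2) (row, col) coordinates for an arbitrary number of sample points.

    Points are laid out row by row on a ceil(sqrt(N))-wide grid, so the parameter count
    grows linearly with num_points instead of quadratically with a kernel side length.
    """
    cols = math.ceil(math.sqrt(num_points))
    coords = [(i // cols, i % cols) for i in range(num_points)]
    return torch.tensor(coords, dtype=torch.float32)

# Example: 5 sample points -> rows [[0,0],[0,1],[0,2],[1,0],[1,1]]
print(initial_sample_coords(5))
```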
Exploring holistic discriminative representation for micro-expression recognition via contrastive learning
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing, Pub Date: 2024-07-20, DOI: 10.1016/j.imavis.2024.105186
Abstract: Recently, deep learning-based micro-expression recognition (MER) has been remarkably successful in the affective computing and computer vision communities. However, the most challenging issue hindering MER performance is the low intensity of micro-expressions. Instead of forcefully transforming the input from micro-expressions to exaggerated micro-expressions with a fixed video motion magnification factor, our approach introduces a sophisticated pretext task with an intensity-agnostic strategy to holistically enhance the discriminative capacity of each micro-expression sample through contrastive transfer learning. This strategy enables us to progressively transfer knowledge and leverage the rich facial expression information from macro-expression samples. In addition, we reconsider that the core of the MER task is to refine and incorporate instance-level and class-level discriminative features from the initially indistinguishable information. As a result, we jointly merge the two views to learn a holistic-level representation. Simultaneously, to ensure a strong association and guidance between the instance-level and class-level views, we maintain their consistency through an alignment loss. The results show that the proposed method significantly improves MER performance on the CASME II, SAMM, SMIC, and CAS(ME)³ datasets.
Citations: 0
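As a generic illustration of the contrastive component, the sketch below is a standard supervised contrastive loss over L2-normalized embeddings, which pulls together samples of the same expression class. It is not the paper's exact pretext task or alignment loss; the temperature and tensor shapes are assumptions.

```python
import torch

def supervised_contrastive_loss(features: torch.Tensor, labels: torch.Tensor, tau: float = 0.07):
    """features: (B, D) L2-normalized embeddings; labels: (B,) expression class ids.

    Assumes most samples have at least one other sample of the same class in the batch.
    """
    sim = features @ features.t() / tau                                   # (B, B) scaled similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))                       # never contrast a sample with itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)            # log-softmax over candidates
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask       # same-class pairs
    # mean log-likelihood of positive pairs per anchor (clamped to avoid division by zero)
    loss = -(log_prob * pos).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return loss.mean()
```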
Multiscale segmentation net for segregating heterogeneous brain tumors: Gliomas on multimodal MR images
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing, Pub Date: 2024-07-18, DOI: 10.1016/j.imavis.2024.105191
Abstract: In this research, 3D volumetric segmentation of heterogeneous brain tumors such as gliomas (anaplastic astrocytoma and glioblastoma multiforme, GBM) is performed to extract the enhancing tumor (ET), whole tumor (WT), and tumor core (TC) regions using T1, T2, and FLAIR images. A deep learning-based encoder-decoder architecture named "MS-SegNet", built on 3D multi-scale convolutional layers, is proposed. The architecture employs a multi-scale feature extraction (MS-FE) block in which 3 × 3 × 3 filters extract confined information such as tumor boundaries and the edges of the necrotic part, while 5 × 5 × 5 filters capture broader features such as the shape, size, and location of the tumor region with edema. Local and global features from different MR modalities are extracted to segment the thin and meshed boundaries between anatomical sub-regions such as peritumoral edema, enhancing tumor, and necrotic tumor core. With the MS-FE block, the number of learnable parameters is reduced to 10 million, much less than architectures such as 3D U-Net, which uses 27 million parameters, leading to lower computational cost. A customized loss function combining dice loss and focal loss is also proposed to address the class imbalance problem, alongside metrics such as accuracy and Intersection over Union (IoU), i.e., the overlap between the ground-truth mask and the prediction. To evaluate the efficacy of the proposed method, four metrics, Dice Coefficient (DSC), sensitivity, specificity, and Hausdorff95 distance (H95), are employed to analyze overall performance. The proposed MS-SegNet architecture achieved DSC of 0.81, 0.91, and 0.83 on BraTS 2020 and 0.86, 0.92, and 0.84 on BraTS 2021 for ET, WT, and TC, respectively. The developed model was also tested on a real-time dataset collected from the Post Graduate Institute of Medical Education & Research (PGIMER), Chandigarh, achieving DSC of 0.79, 0.76, and 0.68 for ET, WT, and TC, respectively. These findings show that deep learning models with enhanced feature extraction capabilities can be trained to attain high accuracy in segmenting heterogeneous brain tumors and hold promising results. In the future, other tumor datasets will be explored for brain tumor detection and treatment planning to check the effectiveness of the model in real-world healthcare environments.
Citations: 0
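The combined dice and focal objective mentioned in the abstract can be sketched for the binary case as below (PyTorch assumed). The relative weighting, the paper's multi-class formulation, and its accuracy/IoU terms are not reproduced; alpha and gamma are placeholder values.

```python
import torch
import torch.nn.functional as F

def dice_focal_loss(logits, targets, alpha=0.5, gamma=2.0, eps=1e-6):
    """Binary dice + focal loss for volumetric masks.

    logits, targets: (B, 1, D, H, W); targets are float 0/1 masks.
    """
    probs = torch.sigmoid(logits)
    # Soft dice loss over each volume
    inter = (probs * targets).sum(dim=(2, 3, 4))
    dice = 1 - (2 * inter + eps) / (probs.sum(dim=(2, 3, 4)) + targets.sum(dim=(2, 3, 4)) + eps)
    # Focal loss: down-weight easy voxels to fight class imbalance
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    pt = torch.exp(-bce)                                   # probability of the correct class
    focal = ((1 - pt) ** gamma * bce).mean(dim=(2, 3, 4))
    return (alpha * dice + (1 - alpha) * focal).mean()
```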
Enhancing cervical cancer diagnosis: Integrated attention-transformer system with weakly supervised learning
IF 4.2, CAS Tier 3, Computer Science
Image and Vision Computing, Pub Date: 2024-07-18, DOI: 10.1016/j.imavis.2024.105193
Abstract: Cervical cancer screening through cytopathological images poses a significant challenge due to the intricate nature of cancer cells, often resulting in high misdiagnosis rates. This study presents the Integrated Attention-Transformer System (IATS), a framework designed to enhance the precision and efficiency of cervical cancer cell image analysis beyond the capabilities of existing deep learning models. Instead of relying solely on convolutional neural networks (CNNs), IATS leverages transformers to holistically capture both global and local features within the images. It employs a multi-pronged approach: a Vision Transformer (ViT) module captures the overall spatial context and interactions between cells, providing a crucial understanding of potential cancer patterns; a token-to-token module zooms in on individual cells, meticulously examining subtle malignant features that might be missed by CNNs; and SeNet integration with ResNet101 and DenseNet169 refines feature extraction by dynamically weighing the importance of the features captured by these architectures, prioritizing the most informative ones for accurate cancer cell identification. Weighted voting then combines the insights from each module, leading to robust and accurate identification and minimizing the risk of misdiagnosis. The proposed framework achieves an accuracy of 98.44% on the Mendeley dataset and 95.88% on the SIPaKMeD dataset, outperforming 25 deep learning models, including convolutional neural network (CNN) and Vision Transformer (ViT) models. These results represent a 2.5% accuracy improvement over the best-performing CNN model on the Mendeley dataset. This advancement holds the potential to improve cervical cancer screening by substantially reducing misdiagnosis rates and improving patient outcomes. While this study focuses on model performance, future work will explore its computational efficiency and real-world clinical integration to ensure a broader impact on patient care.
Citations: 0
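The weighted-voting fusion step can be illustrated with a short sketch: class-probability outputs of the individual modules are averaged with per-module weights before taking the argmax. The weights in the usage example are placeholders, not the values used for IATS.

```python
import torch

def weighted_soft_vote(prob_list, weights):
    """prob_list: list of (B, C) class-probability tensors, one per module; weights: list of floats."""
    total = sum(weights)
    fused = sum(w * p for w, p in zip(weights, prob_list)) / total   # weighted average of probabilities
    return fused.argmax(dim=1)                                       # final predicted class per sample

# Usage with three hypothetical module outputs and placeholder weights
p_vit, p_t2t, p_cnn = (torch.softmax(torch.randn(4, 5), dim=1) for _ in range(3))
preds = weighted_soft_vote([p_vit, p_t2t, p_cnn], weights=[0.4, 0.3, 0.3])
```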