Pattern Recognition Letters: Latest Articles

Multi-task convolution neural network-based lifting scheme for image compression
IF 3.9 | CAS Tier 3 | Computer Science
Pattern Recognition Letters Pub Date : 2025-05-20 DOI: 10.1016/j.patrec.2025.05.001
Tassnim Dardouri, Mounir Kaaniche, Amel Benazza-Benyahia, Gabriel Dauphin
Abstract: Lifting schemes have attracted much interest in different image processing tasks, and more specifically in the image compression field. In this context, the optimization of the lifting operators (i.e. the prediction and update operators) plays a crucial role in the design of efficient lifting-based image coding systems. In this respect, this paper further investigates the exploitation of neural networks in a standard non-separable lifting scheme structure. More precisely, unlike previous works, where separate neural network models are employed for all the prediction and update steps involved in a lifting scheme-based decomposition, our design consists of building a new multi-task convolutional neural network model that takes into account the similarities between two prediction stages. Simulations carried out on three popular image datasets show the benefits of the proposed learning-based image coding approach. (Pattern Recognition Letters, Vol. 195, pp. 66-72)
Citations: 0
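As a hedged illustration of the mechanism in the abstract above, here is a minimal PyTorch sketch of one lifting step whose two prediction stages share a multi-task CNN trunk with separate heads. The layer sizes, the row-wise polyphase split, and the fixed update coefficient are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiTaskPredictor(nn.Module):
    """Shared trunk with two heads, standing in for the two prediction
    stages the paper says are similar enough to share one model.
    All layer sizes are illustrative assumptions."""
    def __init__(self, ch=16):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.head_h = nn.Conv2d(ch, 1, 3, padding=1)  # first prediction head
        self.head_v = nn.Conv2d(ch, 1, 3, padding=1)  # second prediction head

    def forward(self, x):
        f = self.trunk(x)
        return self.head_h(f), self.head_v(f)

def lifting_step(img, model):
    # Polyphase split along rows: even rows are kept, odd rows are predicted.
    even, odd = img[:, :, 0::2, :], img[:, :, 1::2, :]
    pred, _ = model(even)
    detail = odd - pred            # prediction step
    approx = even + 0.5 * detail   # simple fixed update step (assumption)
    return approx, detail

model = MultiTaskPredictor()
x = torch.randn(1, 1, 64, 64)
a, d = lifting_step(x, model)
print(a.shape, d.shape)  # torch.Size([1, 1, 32, 64]) twice
```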
Integrating clinical knowledge and imaging for medical report generation
IF 3.9 | CAS Tier 3 | Computer Science
Pattern Recognition Letters Pub Date : 2025-05-17 DOI: 10.1016/j.patrec.2025.04.036
Meng Zhao, Juncai Liu, Hongyu Shen, Bin Yan, Mingtao Pei, Yi Wang
Abstract: Medical report generation is an important cross-modal task in the field of medicine, aiming to automatically generate professional and accurate reports for given medical images. Integrating clinical knowledge into this task can enhance the semantic accuracy of medical image feature descriptions and improve the interpretability and robustness of the model. In this paper, we propose to integrate clinical knowledge and image content to generate medical reports. The clinical knowledge is represented by a clinical knowledge graph, where anatomical structures and observations are nodes and their relationships are edges. We design a graph generation module to dynamically generate relevant knowledge graphs specific to each image, which diversifies the knowledge graph structures and expands the coverage of clinical knowledge, providing a more extensive range of clinical knowledge during report generation. Furthermore, we design a graph attention module that optimizes feature representation within the clinical knowledge graph by incorporating message passing between nodes and edges, fostering a more comprehensive understanding of the relationships and significance of clinical information. Experimental evaluations conducted on the IU X-Ray and MIMIC CXR datasets demonstrate the superiority of the proposed method in generating medical reports. The results highlight the potential of leveraging clinical knowledge to enhance the precision and clinical relevance of the generated reports. (Pattern Recognition Letters, Vol. 195, pp. 59-65)
Citations: 0
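To make the graph attention idea concrete: a minimal sketch, assuming a dense adjacency over clinical nodes, of attention-weighted message passing. The dimensions and toy chain graph are assumptions; the paper's module also propagates information along edges, which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttention(nn.Module):
    """Single-head graph attention over a dense adjacency mask,
    a generic stand-in for the paper's graph attention module."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, nodes, adj):
        # nodes: (N, dim); adj: (N, N) with 1 where an edge exists
        scores = self.q(nodes) @ self.k(nodes).t() / nodes.size(-1) ** 0.5
        scores = scores.masked_fill(adj == 0, float('-inf'))
        attn = F.softmax(scores, dim=-1)   # attend only to neighbors
        return attn @ self.v(nodes)

# Toy graph: 4 clinical nodes (anatomy/observation), chain-connected
# with self-loops so every row of the adjacency has at least one edge.
nodes = torch.randn(4, 32)
adj = torch.eye(4) + torch.diag(torch.ones(3), 1) + torch.diag(torch.ones(3), -1)
out = GraphAttention(32)(nodes, adj)
print(out.shape)  # torch.Size([4, 32])
```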
A Transformer based on Voxel Spatial-Channel Attention for 3D object detection
IF 3.9 | CAS Tier 3 | Computer Science
Pattern Recognition Letters Pub Date : 2025-05-17 DOI: 10.1016/j.patrec.2025.04.034
Jun Lu, Guangyu Ji, Chengtao Cai, Kaibin Qin
Abstract: Existing voxel-based object detection methods primarily use convolution or sparse convolution for feature extraction, followed by classification and regression based on voxel features. However, the coarse representation of point clouds by voxels can limit the ability to capture small-object features, and the precision of 3D bounding-box regression may also be compromised, ultimately impacting detection accuracy. To address this issue, we propose a novel voxel-based architecture, the Voxel Spatial-Channel Transformer (VoxSCT), to detect 3D objects from point clouds through point-to-point translation. VoxSCT is built on a Voxel-based Spatial-Channel Attention (VSCA) module. The global and local channel attention modules of VSCA enhance the model's sensitivity to local feature variations within a voxel, enabling it to distinguish different objects within the same voxel. Additionally, the global and local spatial attention modules of VSCA identify relationships between parts of an object scattered across multiple voxels, allowing the network to better represent entire objects. By integrating various geometric features, VoxSCT enhances the representation of small objects. Finally, it reassigns voxel features to the original points through a cross-attention module, using the original points for classification and regression and thereby improving the precision of 3D bounding boxes. VoxSCT combines the accuracy of point-based models with the efficiency of voxel-based models, making it a promising alternative for voxel-based backbones. It achieves mAP scores of 78.22% and 70.56% on LEVEL 1 and LEVEL 2 of the vehicle category in the Waymo validation 3D detection benchmark. (Pattern Recognition Letters, Vol. 195, pp. 37-43)
Citations: 0
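A minimal reading of the spatial-channel attention idea, assuming point features already grouped per voxel: a channel gate computed from the voxel's mean feature plus a per-point spatial gate. This is an illustrative sketch, not the VSCA module's actual global/local design.

```python
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    """Channel gate + spatial gate applied to per-voxel point features,
    an illustrative reading of VSCA, not the authors' exact module."""
    def __init__(self, dim):
        super().__init__()
        self.channel = nn.Sequential(nn.Linear(dim, dim // 4), nn.ReLU(),
                                     nn.Linear(dim // 4, dim), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, feats):
        # feats: (V, P, dim) = V voxels, P points per voxel
        c = self.channel(feats.mean(dim=1, keepdim=True))  # (V, 1, dim) channel weights
        s = self.spatial(feats)                            # (V, P, 1) per-point weights
        return feats * c * s

feats = torch.randn(8, 16, 64)  # 8 voxels, 16 points each
print(SpatialChannelAttention(64)(feats).shape)  # torch.Size([8, 16, 64])
```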
Enhancing document dewarping evaluation: A new metric with improved accuracy and efficiency
IF 3.9 | CAS Tier 3 | Computer Science
Pattern Recognition Letters Pub Date : 2025-05-16 DOI: 10.1016/j.patrec.2025.04.038
Jiaxin Zhang, Peirong Zhang, Dezhi Peng, Haowei Xu, Lianwen Jin
Abstract: Recently, the task of document image dewarping has garnered significant attention. With the development of a series of advanced models, performance on various benchmark datasets has improved considerably, as evidenced by increasingly better quantitative outcomes. However, several recent studies have unveiled that the commonly used evaluation metrics may not consistently represent dewarping performance, leading to discrepancies between evaluation results and human perceptual judgments. While some alternative metrics have recently been proposed, their efficacy has not been fully validated, and we find that their performance remains suboptimal. To address these issues, we propose a new metric, termed DocAligner Distortion (DD), to mitigate the deficiencies observed in existing metrics. We conduct comprehensive comparisons and analyses between DD and the prevailing metrics used in document image dewarping. Results demonstrate that DD significantly outperforms its predecessors in both accuracy and efficiency. Code is available at https://github.com/ZZZHANG-jx/DocAligner-Distortion. (Pattern Recognition Letters, Vol. 195, pp. 51-58)
Citations: 0
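The abstract does not spell out how DocAligner Distortion is computed; the authors' repository is the authoritative source. As a heavily hedged sketch of the general family of alignment-based distortion scores such a metric belongs to, the NumPy snippet below scores a dense alignment field by how far its local Jacobian deviates from the identity map. The field shape, the scoring rule, and the function name are all assumptions, not the DD formula.

```python
import numpy as np

def displacement_distortion(flow):
    """Illustrative distortion score from a dense alignment field
    (H, W, 2) mapping dewarped pixels to ground-truth positions.
    NOT the DD formula; see the authors' repo for the real metric."""
    dx = np.gradient(flow[..., 0])  # [d/drow, d/dcol] of x-coordinates
    dy = np.gradient(flow[..., 1])  # [d/drow, d/dcol] of y-coordinates
    # Deviation of the local Jacobian from the identity map:
    jac = np.stack([dx[1] - 1.0, dx[0], dy[1], dy[0] - 1.0], axis=-1)
    return float(np.sqrt((jac ** 2).sum(-1)).mean())

# A perfectly dewarped page gives ~0; residual warp raises the score.
flow = np.stack(np.meshgrid(np.arange(64), np.arange(48)), -1).astype(float)
flow += np.random.randn(48, 64, 2) * 0.5  # mild synthetic warp residue
print(displacement_distortion(flow))
```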
TSI-GCN: Translation and scaling invariant GCN for 3D point cloud analysis
IF 3.9 | CAS Tier 3 | Computer Science
Pattern Recognition Letters Pub Date : 2025-05-15 DOI: 10.1016/j.patrec.2025.04.037
Zijin Du, Jiye Liang, Kaixuan Yao, Feilong Cao
Abstract: The point cloud is a crucial data format for 3D vision, but its irregularity makes it challenging to comprehend the associated geometric information. Although previous research has attempted to improve deep learning on point clouds and achieved promising results, it often overlooks robust shape descriptors of 3D targets, leaving models susceptible to translation and scaling transformations. This paper proposes a novel framework for point cloud analysis that achieves feature extraction with translation and scaling invariance. It mainly comprises a local adaptive kernel, a translation and scaling invariant convolution (TSIConv), and graph attention pooling. The key component is TSIConv, which extracts shape information with translation and scaling invariance and then performs convolution with local adaptive kernels to capture features of various shape structures. Following the convolution layer, graph attention pooling coarsens the point cloud, achieving multi-scale analysis and reducing computational overhead. The proposed framework, consisting of two networks, completes point cloud classification and part segmentation tasks in an end-to-end manner. Property analysis and experiments demonstrate that our model strictly guarantees translation and scaling invariance while achieving performance comparable to previous methods. (Pattern Recognition Letters, Vol. 195, pp. 30-36)
Citations: 0
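The invariance claim can be checked numerically. Below is a minimal sketch, assuming neighbor coordinates are already gathered per point: centering on the centroid removes translation, and dividing by the mean radius removes scaling. The real TSIConv pairs such a descriptor with learned local adaptive kernels, which this sketch omits.

```python
import torch

def tsi_descriptor(neighbors):
    """Translation/scaling-invariant encoding of a local neighborhood:
    center on the centroid, normalize by the mean radius. Illustrates
    the invariance TSIConv needs; the actual kernel design is richer."""
    # neighbors: (N, K, 3) = K neighbor coordinates per center point
    centered = neighbors - neighbors.mean(dim=1, keepdim=True)
    scale = centered.norm(dim=-1).mean(dim=1, keepdim=True).clamp_min(1e-8)
    return centered / scale.unsqueeze(-1)

pts = torch.randn(4, 16, 3)
shifted_scaled = pts * 3.7 + torch.tensor([5.0, -2.0, 1.0])
d1, d2 = tsi_descriptor(pts), tsi_descriptor(shifted_scaled)
print(torch.allclose(d1, d2, atol=1e-5))  # True: invariant to the transform
```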
Image segmentation via two-step deep variational priors
IF 3.9 | CAS Tier 3 | Computer Science
Pattern Recognition Letters Pub Date : 2025-05-14 DOI: 10.1016/j.patrec.2025.04.030
Lu Tan, Xue-Cheng Tai, Ling Li, Wan-Quan Liu, Raymond H. Chan, Dan-Feng Hong
Abstract: This paper proposes an iterative deep variational approach for image segmentation in a fusion manner: it is not only able to realize selective segmentation, but can also alleviate the issue of parameter/initialization dependency. Moreover, it includes a refinement process designed to handle challenging scenarios, such as images containing obscured, damaged, or absent objects, or those with complex backgrounds. The proposed approach consists of two main procedures, selective segmentation and shape transformation. The first procedure works as a stem in a totally unsupervised way: a convolutional neural network (CNN) based architecture is incorporated into the selective weighting constrained variational segmentation model. The second procedure further refines the outputs and can be realized in two ways: one direction is to establish a joint model with the semantic shape constraint; the other is to separate the shape descriptor from the joint model so that it works as an individual unit. In the proposed approach, the minimization problem is transformed from iteratively minimizing over each variable to automatically minimizing the loss function by learning the generator network parameters. This also yields a good inductive bias associated with classic variational methods. Extensive experiments demonstrate its significant advantages. (Pattern Recognition Letters, Vol. 195, pp. 44-50)
Citations: 0
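The abstract's key move, minimizing a variational energy over generator parameters rather than over per-pixel variables, can be sketched in a few lines. The tiny CNN, the Chan-Vese-style region energy, and the TV weight below are all assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

# Minimal sketch: a small CNN outputs a soft mask, and its parameters are
# trained to minimize a Chan-Vese-style region energy plus a length (TV)
# term, instead of optimizing the mask pixels directly.
net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
img = torch.rand(1, 1, 64, 64)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(100):
    m = net(img)
    c1 = (m * img).sum() / m.sum().clamp_min(1e-8)               # inside mean
    c2 = ((1 - m) * img).sum() / (1 - m).sum().clamp_min(1e-8)   # outside mean
    region = (m * (img - c1) ** 2 + (1 - m) * (img - c2) ** 2).mean()
    tv = (m[..., 1:, :] - m[..., :-1, :]).abs().mean() + \
         (m[..., :, 1:] - m[..., :, :-1]).abs().mean()
    loss = region + 0.1 * tv
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```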
Conditional Stable Diffusion for Distortion Correction and Image Rectification
IF 3.9 | CAS Tier 3 | Computer Science
Pattern Recognition Letters Pub Date : 2025-05-13 DOI: 10.1016/j.patrec.2025.04.033
Pooja Kumari, Sukhendu Das
Abstract: Image rectification and distortion correction are fundamental tasks in image processing and computer vision, with applications ranging from document processing to medical imaging. This study presents a novel Conditional Stable Diffusion framework designed to tackle the challenges posed by diverse types of image distortions. Unlike traditional methods, our approach introduces an adaptive diffusion process that customizes its behavior based on the specific characteristics of the input image. By introducing controlled noise in a bidirectional manner, the model learns to interpret and refine various distortion patterns and progressively refines the image into a more uniform distribution. To complement the diffusion process, we incorporate a Guided Rectification Network (GRN) that generates reliable conditions from the input image, effectively reducing ambiguity between the distorted and target outputs. The use of stable diffusion is justified by its versatility in handling diverse types and degrees of distortion. Our method effectively handles a wide range of distortions, including projective and complex lens-based distortions such as barrel and pincushion, by dynamically adapting to each distortion type. Whether stemming from lens abnormalities, perspective discrepancies, or other factors, the method consistently adapts to the specific characteristics of the distortion, yielding superior outcomes. Experimental results across benchmark datasets demonstrate that our method consistently outperforms existing state-of-the-art approaches. To our knowledge, this work is the first to use a diffusion method to simultaneously address various distortion types (barrel, pincushion, lens, etc.) for multi-distortion image rectification. The Conditional Stable Diffusion framework thus offers a promising advancement for robust and versatile image distortion correction. (Pattern Recognition Letters, Vol. 194, pp. 62-70)
Citations: 0
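A hedged sketch of one conditional-diffusion training step, with the condition (standing in for the GRN output) injected by channel concatenation. The toy denoiser, noise schedule, and conditioning route are assumptions; the paper's U-Net and GRN designs are not reproduced here.

```python
import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    """Toy epsilon-predictor conditioned on a guidance image via channel
    concatenation, a stand-in for the paper's conditioned network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x_t, cond):
        return self.net(torch.cat([x_t, cond], dim=1))

T = 100
betas = torch.linspace(1e-4, 0.02, T)          # linear DDPM-style schedule
abar = torch.cumprod(1 - betas, dim=0)

model = CondDenoiser()
x0 = torch.randn(1, 1, 32, 32)     # target rectified image (toy)
cond = torch.randn(1, 1, 32, 32)   # condition from a GRN-like module (toy)
t = torch.randint(0, T, (1,))
noise = torch.randn_like(x0)
# Forward process: noise x0 to step t, then train to predict that noise.
x_t = abar[t].sqrt().view(-1, 1, 1, 1) * x0 \
    + (1 - abar[t]).sqrt().view(-1, 1, 1, 1) * noise
loss = ((model(x_t, cond) - noise) ** 2).mean()  # standard eps-prediction loss
print(loss.item())
```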
Multi-low resource languages in palm leaf manuscript recognition: Syllable-based augmentation and error analysis
IF 3.9 | CAS Tier 3 | Computer Science
Pattern Recognition Letters Pub Date : 2025-05-13 DOI: 10.1016/j.patrec.2025.04.031
Nimol Thuon, Jun Du, Panhapin Theang, Ranysakol Thuon
Abstract: Recognizing text from palm leaf manuscripts in low-resource, non-Latin languages like Balinese, Khmer, and Sundanese poses significant challenges due to limited annotated data and complex script structures. Unlike modern languages, these ancient scripts exhibit unique linguistic complexities that hinder effective recognition and digital preservation. Building on the success of syllable-analysis augmentation for the Khmer script, we propose PALM-SADA, a framework for multi-script recognition that integrates visual and linguistic processing using a hybrid CNN-Transformer architecture. The framework introduces syllable-analysis augmentation consisting of two main components: (1) monosyllabic synthesis, which generates single-syllable words by combining glyphs from isolated-glyph datasets using predefined grammar forms; and (2) polysyllabic synthesis, which creates longer, grammatically correct text sequences by combining monosyllabic words and isolated glyphs. To ensure linguistic integrity, the grammar forms and vocabulary lists of complete words were meticulously designed and validated, preserving the linguistic characteristics of the augmented data. For recognition, PALM-SADA employs a hybrid CNN-Transformer network that enhances both feature extraction and transcription accuracy: CNN layers capture local features, Transformer layers model global dependencies, and a Transformer-based decoder further refines transcriptions by leveraging contextual relationships within the text. Experiments on the ICFHR 2018 contest datasets demonstrate that PALM-SADA significantly outperforms existing methods. (Pattern Recognition Letters, Vol. 195, pp. 8-15)
Citations: 0
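The two augmentation components can be sketched as glyph-sequence synthesis driven by grammar templates. The glyph inventory and grammar forms below are invented placeholders, not the paper's validated lists; a real pipeline would composite the corresponding glyph images into training samples.

```python
import random

# Toy synthesis in the spirit of PALM-SADA: combine isolated glyph ids
# according to predefined grammar forms. All names below are placeholders.
GLYPHS = {"C": ["ka", "kha", "ta"],   # consonants
          "V": ["aa", "ii", "ou"],    # dependent vowels
          "S": ["coeng_ta"]}          # subscript consonants
GRAMMAR = [("C",), ("C", "V"), ("C", "S", "V")]  # valid syllable shapes

def synth_syllable(rng):
    """Monosyllabic synthesis: fill one grammar form with glyphs."""
    form = rng.choice(GRAMMAR)
    return [rng.choice(GLYPHS[slot]) for slot in form]

def synth_word(rng, max_syllables=3):
    """Polysyllabic synthesis: chain monosyllables into a longer label."""
    n = rng.randint(1, max_syllables)
    return [g for _ in range(n) for g in synth_syllable(rng)]

rng = random.Random(0)
for _ in range(3):
    print(synth_word(rng))  # glyph-id sequence to be rendered as an image
```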
CAMFND: Cross-modal adaptive-aware learning for multimodal fake news detection
IF 3.9 | CAS Tier 3 | Computer Science
Pattern Recognition Letters Pub Date : 2025-05-12 DOI: 10.1016/j.patrec.2025.02.035
Ying Guo, Yuan Li, Kexin Zhen, Bingxin Li, Jie Liu
Abstract: Recently, there has been growing focus on automatic multimodal fake news detection. A fundamental challenge of this task lies in the inherent semantic ambiguity across different content modalities: decisions stemming from distinct unimodal sources may exhibit discrepancies, potentially conflicting with the collective insights derived from multimodal data fusion. To address this issue, we propose CAMFND, a cross-modal adaptive-aware learning framework for multimodal fake news detection that aims to reduce semantic ambiguities among modalities. CAMFND consists of (1) a cross-modal alignment module that transforms heterogeneous unimodal features into a shared semantic space, (2) a cross-modal adaptive-interactive module that captures semantic correlation and consistency, computed by a multi-modal gated fusion unit, and (3) a cross-modal adaptive-selective module that decides semantic meaning or bias, guided by a multi-modal semantic matching score. CAMFND enhances fake news detection by intelligently and dynamically combining unimodal features and identifying correlations across modalities: it leverages unimodal features in scenarios with low cross-modal ambiguity, while relying on cross-modal correlations in cases of high cross-modal uncertainty. Experimental results show that CAMFND significantly surpasses prior methods and sets new benchmarks on both the English Twitter and Chinese Weibo datasets. (Pattern Recognition Letters, Vol. 195, pp. 1-7)
Citations: 0
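A minimal sketch of ambiguity-aware gated fusion in the spirit of the abstract: a cosine matching score between aligned text and image features weights the fused cross-modal feature against a unimodal average. The dimensions and the specific gating rule are assumptions, not CAMFND's actual modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFusion(nn.Module):
    """Sketch of ambiguity-aware fusion: a matching score between aligned
    text/image features decides how much to trust the fused cross-modal
    feature versus the unimodal average. Dims are assumptions."""
    def __init__(self, dim=128):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)
        self.cls = nn.Linear(dim, 2)  # real vs. fake

    def forward(self, txt, img):
        match = F.cosine_similarity(txt, img, dim=-1)  # (B,) matching score
        w = (match.clamp(-1, 1) + 1) / 2               # map to [0, 1]
        fused = torch.tanh(self.fuse(torch.cat([txt, img], dim=-1)))
        uni = (txt + img) / 2
        # High agreement -> trust cross-modal fusion; low -> fall back to unimodal.
        feat = w.unsqueeze(-1) * fused + (1 - w).unsqueeze(-1) * uni
        return self.cls(feat)

logits = GatedFusion()(torch.randn(4, 128), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 2])
```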
Negotiation games with structured post-hoc intents
IF 3.9 | CAS Tier 3 | Computer Science
Pattern Recognition Letters Pub Date : 2025-05-12 DOI: 10.1016/j.patrec.2025.04.029
David Warren, Mark Dras, Malcolm Ryan
Abstract: An important class of negotiation games that use human language does not have predefined 'moves': it is up to the agents in the game to define, via natural language, moves that lead them towards their goal. In the context of other games, however, a notion of intents (structured moves from a predefined set) has been found useful. In this paper, we show that it is possible to define and learn post-hoc intents in a practical way for AI agents in a negotiation game, using a text-to-text Transformer model; we show that this improves agent performance and further allows the definition of a wider range of agents for training. (Pattern Recognition Letters, Vol. 195, pp. 23-29)
Citations: 0
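Framing post-hoc intent labeling as a text-to-text task is straightforward to sketch with an off-the-shelf seq2seq model. The checkpoint, task prefix, and intent format below are placeholders, not the paper's setup; a model would need fine-tuning on intent-annotated dialogues before it emits useful labels.

```python
# Sketch of "post-hoc intents" as text-to-text: a seq2seq model maps a
# free-form negotiation utterance to a structured intent string.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

utterance = "I'll give you both hats if I can keep the ball."
prompt = f"label intent: {utterance}"   # hypothetical task prefix
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))
# A fine-tuned model would emit e.g. "propose(hats=2->them, ball=1->me)".
```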