Pattern Recognition Letters: Latest Articles

RAM: Interpreting real-world image super-resolution in the industry environment
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-04-03 DOI: 10.1016/j.patrec.2025.03.034
Ze-Yu Mi, Yu-Bin Yang
Abstract: Industrial image super-resolution (SR) plays a crucial role in various industrial applications by generating high-resolution images that enhance image quality, clarity, and texture. The interpretability of industrial SR models is becoming increasingly important, enabling designers and quality inspectors to perform detailed image analysis and make more informed decisions. However, existing interpretability methods struggle to adapt to the complex degradation and diverse image patterns in industrial SR, making it challenging to provide reliable and accurate interpretations. To address this challenge, we propose a novel approach, Real Attribution Maps (RAM), designed for precise interpretation of industrial SR. RAM introduces two key components: the multi-path downsampling (MPD) function and the multi-progressive degradation (MPG) function. The MPD generates multiple attribution paths by applying a range of downsampling strategies, while the MPG incorporates random degradation kernels to better simulate real-world conditions, ensuring more accurate feature attribution. The final attribution map is derived by averaging the results from all paths. Extensive experiments conducted on industrial datasets, including IndSR, Wafer Maps, and Pelvis, validate the effectiveness of RAM. Our results show substantial improvements in several interpretation evaluation metrics and enhanced visual explanations that eliminate irrelevant interference. This work provides a powerful and versatile tool for explaining industrial SR models, offering significant advances in the interpretability of complex industrial images.
Volume 192, Pages 86–92.
Citations: 0
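The RAM entry above describes attributions computed along multiple degraded, downsampled paths and then averaged. As a rough illustration of that averaging idea only, the sketch below runs a gradient-based attribution of a stand-in SR model over several randomly blurred and rescaled copies of the input; the model, blur kernel, and attribution rule are hypothetical stand-ins, not the paper's MPD/MPG functions.

```python
import torch
import torch.nn.functional as F

def multi_path_attribution(model, lr_img, n_paths=8, scales=(2, 3, 4)):
    """Average gradient attributions over several randomly degraded paths.

    A stand-in for RAM's averaging idea: each path blurs the input with a
    random kernel, rescales it, and attributes the SR output back to the
    original input via gradients. Not the paper's exact MPD/MPG functions.
    """
    maps = []
    for _ in range(n_paths):
        x = lr_img.clone().requires_grad_(True)
        # random 5x5 blur kernel standing in for an unknown degradation
        k = torch.rand(1, 1, 5, 5)
        k = (k / k.sum()).repeat(x.shape[1], 1, 1, 1)
        degraded = F.conv2d(x, k, padding=2, groups=x.shape[1])
        scale = scales[torch.randint(len(scales), (1,)).item()]
        degraded = F.interpolate(degraded, scale_factor=1.0 / scale, mode="bicubic")
        sr = model(degraded)                    # super-resolved output
        sr.sum().backward()                     # attribute the whole output
        maps.append(x.grad.abs().sum(dim=1))    # per-pixel attribution for this path
    return torch.stack(maps).mean(dim=0)        # final map = average over paths

# usage with a trivial stand-in "SR model":
# amap = multi_path_attribution(torch.nn.Upsample(scale_factor=2), torch.rand(1, 3, 64, 64))
```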
Dark channel map and union training strategy for object detection in foggy scenes
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-04-02 DOI: 10.1016/j.patrec.2025.03.024
Zhanqiang Huo, Sen Li, Sensen Meng, Yingxu Qiao, Shan Zhao, Luyao Liu
Abstract: Most existing object detection methods in real-world hazy scenarios fail to handle the heterogeneous haze and treat clear images and hazy images as adversarial, while ignoring the latent information in clear images that is beneficial for detection, resulting in sub-optimal performance. To alleviate the above problems, we propose a new dark channel map-guided detection paradigm (DG-Net) in an end-to-end manner and provide an interpretable idea for object detection in hazy scenes from an entirely new perspective. Specifically, we design a unique dark channel map-guided feature fusion (DGFF) module to handle the adverse impact of the heterogeneous haze, which enables the model to adaptively focus on potential regions that may contain detection objects, assign higher weights to these regions, and thus improve the network's ability to learn and represent the features of hazy images. To more effectively utilize the latent features of clear images, we propose a new simple but effective union training strategy (UTS) that considers the clear images as a complement to the hazy images, which enables the DGFF module to work better. In addition, we introduce Focal loss and self-calibrated convolutions to enhance the performance of DG-Net. Extensive experiments show that DG-Net outperforms the state-of-the-art detection methods quantitatively and qualitatively, especially on real-world hazy datasets.
Volume 192, Pages 79–85.
Citations: 0
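The dark channel map that guides DG-Net is a standard construct: a per-pixel minimum over the color channels followed by a local minimum filter. The sketch below computes such a map and uses it as a soft spatial weight on a feature tensor; the weighting rule is a hypothetical illustration, not the paper's DGFF module.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Dark channel of an HxWx3 image in [0, 1]: local min over space and channels."""
    per_pixel_min = img.min(axis=2)
    return minimum_filter(per_pixel_min, size=patch)

def haze_aware_weight(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Illustrative guidance weight: hazier regions (bright dark channel) get
    lower weight, clearer regions get higher weight. Not the paper's DGFF."""
    dc = dark_channel(img, patch)
    return 1.0 - (dc - dc.min()) / (dc.max() - dc.min() + 1e-8)

# usage: weight a feature map that shares the image's spatial size
img = np.random.rand(240, 320, 3).astype(np.float32)
features = np.random.rand(240, 320, 64).astype(np.float32)
weighted = features * haze_aware_weight(img)[..., None]
```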
An online adaptive augmentation strategy for cervical cytopathology image recognition
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-04-01 DOI: 10.1016/j.patrec.2025.03.023
Hongmei Tang, Xinyi Liu, Shenghua Cheng, Xiuli Liu
Abstract: Cervical cancer has become one of the most common malignant tumors among women, posing a significant threat to women's health worldwide. Efficient computer-aided screening techniques are extremely significant for popularizing cervical cancer screening and reducing the mortality rate. However, in practical applications, variations in staining styles and image quality of multi-center digital pathological smears pose challenges for current deep learning-based cervical cell recognition methods. In this paper, we propose an online adaptive data augmentation strategy: a simple, search-free, and adaptive method designed to improve model generalization. We transform the process of selecting the optimal augmentation strategy into the determination of probabilities assigned to each operation. The core of the method lies in directly optimizing the probability distribution of the entire augmentation space based on the model's performance on the validation set. Extensive experiments on multi-center cervical cytopathology image datasets demonstrate that our method outperforms state-of-the-art automatic augmentation methods in model generalization evaluation. We achieve an average accuracy of 87.80% on five external test sets, with only a 6.85% difference from the internal test accuracy. Our work contributes to enhancing the generalization of cervical cell recognition methods in multi-center scenarios.
Volume 192, Pages 93–98.
Citations: 0
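The abstract above casts augmentation selection as learning a probability for each operation from validation performance. One plausible, simplified realization is sketched below: keep a score per operation, sample operations by softmax probability, and nudge scores toward operations used when validation accuracy improved. The update rule is an assumption for illustration, not the authors' optimization procedure.

```python
import math
import random

class AdaptiveAugmentPolicy:
    """Toy adaptive policy over augmentation ops (hypothetical update rule)."""

    def __init__(self, ops, lr=0.5):
        self.ops = list(ops)
        self.scores = {op: 0.0 for op in ops}
        self.lr = lr

    def probabilities(self):
        exps = {op: math.exp(s) for op, s in self.scores.items()}
        z = sum(exps.values())
        return {op: e / z for op, e in exps.items()}

    def sample(self):
        probs = self.probabilities()
        return random.choices(self.ops, weights=[probs[o] for o in self.ops])[0]

    def update(self, used_ops, val_delta):
        # reward ops used in an epoch whose validation accuracy improved
        for op in used_ops:
            self.scores[op] += self.lr * val_delta

# usage sketch
policy = AdaptiveAugmentPolicy(["flip", "rotate", "color_jitter", "blur"])
op = policy.sample()                            # pick an op for the next batch
policy.update(used_ops=[op], val_delta=+0.012)  # after evaluating on the validation set
```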
Generating visual-adaptive audio representation for audio recognition
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-03-28 DOI: 10.1016/j.patrec.2025.03.020
Jongsu Youn, Dae Ung Jo, Seungmo Seo, Sukhyun Kim, Jongwon Choi
Abstract: We propose "Visual-adaptive Audio Spectrogram Generation" (VASG), an innovative audio feature generation method that preserves the Mel-spectrogram's structure while enhancing its discriminability. VASG maintains the spatio-temporal information of the Mel-spectrogram without degrading the performance of existing audio recognition and improves intra-class discriminability by incorporating the relational knowledge of images. VASG incorporates images only during the training phase; once trained, VASG can be utilized as a converter that takes an input Mel-spectrogram and outputs an enhanced Mel-spectrogram, improving the discriminability of audio spectrograms without requiring further training during application. To effectively increase the discriminability of the encoded audio feature, we introduce a novel audio-visual correlation learning loss, named "Batch-wise Correlation Transfer" loss, that aligns the inter-correlation between the audio and visual modalities. When applying the pre-trained VASG to convert environmental sound classification benchmarks, we observed performance improvements in various audio classification models. Using the enhanced Mel-spectrograms produced by VASG, as opposed to the original Mel-spectrogram input, led to performance gains in recent state-of-the-art models, with accuracy increases of up to 4.27%.
Volume 192, Pages 65–71.
Citations: 0
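The "Batch-wise Correlation Transfer" loss is described only as aligning the inter-correlation between audio and visual features. One plausible reading, sketched below, matches the batch-level cosine-similarity matrices of the two modalities; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def batch_correlation(z: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine-similarity matrix of a batch of embeddings (B x D)."""
    z = F.normalize(z, dim=1)
    return z @ z.t()

def correlation_transfer_loss(audio_z: torch.Tensor, visual_z: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for the BCT loss: make the audio batch relate to
    itself the way the visual batch does (visual side kept fixed)."""
    return F.mse_loss(batch_correlation(audio_z), batch_correlation(visual_z).detach())

# usage: audio_z and visual_z come from the two encoders for the same batch
audio_z, visual_z = torch.randn(32, 128), torch.randn(32, 128)
loss = correlation_transfer_loss(audio_z, visual_z)
```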
Robust camera-independent color chart localization using YOLO
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-03-28 DOI: 10.1016/j.patrec.2025.03.022
Luca Cogo, Marco Buzzelli, Simone Bianco, Raimondo Schettini
Abstract: Accurate color information plays a critical role in numerous computer vision tasks, with the Macbeth ColorChecker being a widely used reference target due to its colorimetrically characterized color patches. However, automating the precise extraction of color information in complex scenes remains a challenge. In this paper, we propose a novel method for the automatic detection and accurate extraction of color information from Macbeth ColorCheckers in challenging environments. Our approach involves two distinct phases: (i) a chart localization step using a deep learning model to identify the presence of the ColorChecker, and (ii) a consensus-based pose estimation and color extraction phase that ensures precise localization and description of individual color patches. We rigorously evaluate our method using the widely adopted NUS and ColorChecker datasets. Comparative results against state-of-the-art methods show that our method outperforms the best solution in the state of the art, achieving about 5% improvement on the ColorChecker dataset and about 17% on the NUS dataset. Furthermore, the design of our approach enables it to handle the presence of multiple ColorCheckers in complex scenes. Code will be made available after publication at: https://github.com/LucaCogo/ColorChartLocalization.
Volume 192, Pages 51–58.
Citations: 0
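Once the chart is localized, per-patch color readout amounts to sampling a known grid through a perspective transform. The sketch below shows that readout for the 4x6 Macbeth layout given four detected corner points; it illustrates only the extraction step, not the paper's YOLO-based localization or consensus pose estimation.

```python
import cv2
import numpy as np

def extract_patch_colors(img, corners, rows=4, cols=6, shrink=0.6):
    """Median color of each patch of a Macbeth chart.

    img: HxWx3 image; corners: 4x2 float array (TL, TR, BR, BL) of the chart.
    The grid geometry is idealized; a real chart also has borders between patches.
    """
    w, h = cols * 100, rows * 100
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(corners), dst)
    flat = cv2.warpPerspective(img, H, (w, h))        # rectified chart view
    colors = np.zeros((rows, cols, 3))
    for r in range(rows):
        for c in range(cols):
            cell = flat[r * 100:(r + 1) * 100, c * 100:(c + 1) * 100]
            m = int(100 * (1 - shrink) / 2)           # sample only the central region
            colors[r, c] = np.median(cell[m:100 - m, m:100 - m].reshape(-1, 3), axis=0)
    return colors
```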
Geometrical preservation and correlation learning for multi-source unsupervised domain adaptation
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-03-27 DOI: 10.1016/j.patrec.2025.03.018
Huiling Fu, Yuwu Lu
Abstract: Multi-source unsupervised domain adaptation (MUDA) aims to improve the performance of the model on the target domain by utilizing useful information from several source domains with distinct distributions. However, due to the diverse information in each domain, how to extract and transfer useful information from source domains is essential for MUDA. Most existing MUDA methods simply minimized the distribution incongruity among multiple domains, without fully considering the unique information within each domain and the relationships between different domains. In response to these challenges, we propose a novel MUDA approach named geometrical preservation correlation learning (GPCL). Specifically, GPCL integrates graph regularization and correlation learning within the nonnegative matrix factorization (NMF) structure, leveraging the inherent geometry of the data distribution to acquire discriminative features while maintaining both the local and global geometrical structures of the original data. Meanwhile, GPCL extracts the maximum correlation information from each source domain and target domain to further narrow their domain discrepancy and ensure positive knowledge transfer. Integrated experimental results across multiple benchmarks verify that GPCL performs better than several existing MUDA approaches, showcasing the efficiency of our method in MUDA. For example, on the Office-Home dataset, GPCL outperforms the SOTA by an average of 1.58%. On the ImageCLEF-DA dataset, GPCL achieves the best results across multiple sub-tasks and the average performance, outperforming the single-source SOTA by 2.3%, 2%, and 1.26%, respectively.
Volume 192, Pages 72–78.
Citations: 0
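Graph regularization inside an NMF factorization, the building block GPCL starts from, is commonly solved with multiplicative updates (as in Cai et al.'s GNMF). The sketch below implements that standard form; GPCL's correlation-learning terms and the multi-source setup are not included.

```python
import numpy as np

def graph_regularized_nmf(X, W, k, lam=1.0, iters=200, eps=1e-9):
    """Graph-regularized NMF with multiplicative updates.

    X: (m x n) nonnegative data (features x samples), W: (n x n) sample affinity graph.
    Returns U (m x k) basis and V (n x k) graph-smoothed sample representation.
    """
    m, n = X.shape
    rng = np.random.default_rng(0)
    U, V = rng.random((m, k)), rng.random((n, k))
    D = np.diag(W.sum(axis=1))                       # degree matrix of the graph
    for _ in range(iters):
        U *= (X @ V) / (U @ V.T @ V + eps)
        V *= (X.T @ U + lam * W @ V) / (V @ U.T @ U + lam * D @ V + eps)
    return U, V

# usage with toy data: 50 features, 30 samples, a random symmetric affinity graph
X = np.abs(np.random.rand(50, 30))
A = np.random.rand(30, 30); W = (A + A.T) / 2
U, V = graph_regularized_nmf(X, W, k=5)
```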
FoodMem: Near real-time and precise food video segmentation
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-03-25 DOI: 10.1016/j.patrec.2025.03.014
Ahmad AlMughrabi, Adrián Galán, Ricardo Marques, Petia Radeva
Abstract: Food segmentation, including in videos, is vital for addressing real-world health, agriculture, and food biotechnology issues. Current limitations lead to inaccurate nutritional analysis, inefficient crop management, and suboptimal food processing, impacting food security and public health. Improving segmentation techniques can enhance dietary assessments, agricultural productivity, and the food production process. This study introduces the development of a robust framework for high-quality, near-real-time segmentation and tracking of food items in videos, using minimal hardware resources. We present FoodMem, a novel framework designed to segment food items from video sequences of 360-degree unbounded scenes. FoodMem can consistently generate masks of food portions in a video sequence, overcoming the limitations of existing semantic segmentation models, such as flickering and prohibitive inference speeds in video processing contexts. To address these issues, FoodMem leverages a two-phase solution: a transformer segmentation phase to create initial segmentation masks and a memory-based tracking phase to monitor food masks in complex scenes. Our framework outperforms current state-of-the-art food segmentation models, yielding superior performance across various conditions, such as camera angles, lighting, reflections, scene complexity, and food diversity. This results in reduced segmentation noise, elimination of artifacts, and completion of missing segments. We also introduce a new annotated food dataset encompassing challenging scenarios absent in previous benchmarks. Extensive experiments conducted on the MetaFood3D, Nutrition5k, and Vegetables & Fruits datasets demonstrate that FoodMem enhances the state-of-the-art by 2.5% mean average precision in food video segmentation and is 58× faster on average. The source code is available at: [3].
Volume 192, Pages 59–64.
Citations: 0
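FoodMem is described as a segmentation phase followed by a memory-based tracking phase. The skeleton below sketches only that control flow, with a dummy thresholding segmenter and a trivial memory that reuses the last mask; both components are hypothetical placeholders, not the paper's models.

```python
import numpy as np

class DummySegmenter:
    """Stand-in for the expensive segmentation phase, run on keyframes only."""
    def __call__(self, frame: np.ndarray) -> np.ndarray:
        return (frame.mean(axis=2) > 0.5).astype(np.uint8)   # toy threshold mask

class MaskMemory:
    """Stand-in for the memory-based tracking phase: reuse the last known mask."""
    def __init__(self):
        self.last_mask = None
    def update(self, mask: np.ndarray) -> None:
        self.last_mask = mask
    def propagate(self, frame: np.ndarray) -> np.ndarray:
        return self.last_mask

def segment_video(frames, keyframe_every=10):
    segmenter, memory, masks = DummySegmenter(), MaskMemory(), []
    for i, frame in enumerate(frames):
        if i % keyframe_every == 0:          # expensive phase on keyframes
            mask = segmenter(frame)
            memory.update(mask)
        else:                                 # cheap propagation in between
            mask = memory.propagate(frame)
        masks.append(mask)
    return masks

# usage: masks = segment_video(np.random.rand(30, 120, 160, 3))
```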
Group commonality graph: Multimodal pedestrian trajectory prediction via deep group features
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-03-24 DOI: 10.1016/j.patrec.2025.03.019
Di Zhou, Ying Gao, Hui Li, Xiaoya Liu, Qinghua Lin
Abstract: Pedestrian trajectory prediction is a challenging task in domains such as autonomous driving and robot motion planning. Existing methods often focus on aggregating nearby individuals into a single group, while neglecting individual differences and the risks of unreliable interactions. Therefore, we propose a novel framework termed group commonality graph, which comprises a group feature capture network and a spatial–temporal graph sparse connected network. The former network can group and pool pedestrians based on their characteristics, capturing and integrating deep features of the group to generate the final prediction. The latter network learns pedestrian motion patterns and simulates their interactive relationships. The framework not only addresses the limitations of overly simplistic aggregation methods but also ensures reliable interactions with sparse directionality. Additionally, to evaluate the effectiveness of our model, we introduce a new evaluation metric termed collision prediction error, which incorporates map environment information to assess the comprehensiveness of multimodal prediction results. Experimental results on public pedestrian trajectory prediction benchmarks demonstrate that our method outperforms the state-of-the-art methods.
Volume 192, Pages 36–42.
Citations: 0
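The proposed collision prediction error is described as folding map information into trajectory evaluation. Purely as an illustration, the sketch below combines the standard average displacement error with a penalty for predicted points landing on occupied cells of a grid map; the paper's actual metric definition may differ.

```python
import numpy as np

def ade(pred, gt):
    """Average displacement error between two (T x 2) trajectories, in map units."""
    return float(np.linalg.norm(pred - gt, axis=1).mean())

def collision_rate(pred, occupancy, cell=0.5):
    """Fraction of predicted points landing on occupied cells of a binary grid map."""
    idx = np.clip((pred / cell).astype(int), 0, np.array(occupancy.shape) - 1)
    return float(occupancy[idx[:, 0], idx[:, 1]].mean())

def collision_aware_error(pred, gt, occupancy, alpha=1.0):
    # hypothetical combination: displacement error plus a map-collision penalty
    return ade(pred, gt) + alpha * collision_rate(pred, occupancy)

# usage with toy data: a 100x100 map with an obstacle block in the middle
occ = np.zeros((100, 100), dtype=np.uint8); occ[40:60, 40:60] = 1
pred = np.cumsum(np.random.rand(12, 2), axis=0)
gt = np.cumsum(np.random.rand(12, 2), axis=0)
score = collision_aware_error(pred, gt, occ)
```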
FSMT: Few-shot object detection via Multi-Task Decoupled
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-03-21 DOI: 10.1016/j.patrec.2025.03.016
Jiahui Qin, Yang Xu, Yifan Fu, Zebin Wu, Zhihui Wei
Abstract: With the advancement of object detection technology, few-shot object detection (FSOD) has become a research hotspot. Existing methods face two major challenges: base models have limited generalization to unseen categories, especially with limited few-shot data, where the shared feature representation fails to meet the distinct needs of classification and regression tasks; FSOD is susceptible to overfitting during training. To address these issues, this paper proposes a Multi-Task Decoupled Method (MTDM), which enhances the model's generalization to new categories by separating the feature extraction processes for different tasks. Additionally, a dynamic adjustment strategy is adopted, which adaptively modifies the IOU threshold and loss function parameters based on variations in the training data, reducing the risk of overfitting and maximizing the utilization of limited data resources. Experimental results show that the proposed hybrid model performs well on multiple few-shot datasets, effectively overcoming the challenges posed by limited annotated data.
Volume 192, Pages 8–14.
Citations: 0
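Decoupling the classification and regression branches, as MTDM does, is typically realized with parallel heads over a shared backbone feature. A minimal sketch of such a decoupled head is shown below; the layer sizes are assumptions, and the dynamic IoU/loss scheduling from the abstract is not modeled.

```python
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    """Separate feature paths for classification and box regression."""
    def __init__(self, in_ch=256, num_classes=20, num_anchors=3):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.cls_branch, self.reg_branch = branch(), branch()
        self.cls_out = nn.Conv2d(in_ch, num_anchors * num_classes, 1)
        self.reg_out = nn.Conv2d(in_ch, num_anchors * 4, 1)

    def forward(self, feat):
        # each task refines the shared feature along its own path
        return self.cls_out(self.cls_branch(feat)), self.reg_out(self.reg_branch(feat))

# usage: cls_logits, box_deltas = DecoupledHead()(torch.randn(1, 256, 32, 32))
```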
Integrating large language models with explainable fuzzy inference systems for trusty steel defect detection
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-03-20 DOI: 10.1016/j.patrec.2025.03.017
Kening Zhang, Yung Po Tsang, Carman K.M. Lee, C.H. Wu
Abstract: In industrial applications, the complexity of machine learning models often makes their decision-making processes difficult to interpret and lack transparency, particularly in the steel manufacturing sector. Understanding these processes is crucial for ensuring quality control, regulatory compliance, and gaining the trust of stakeholders. To address this issue, this paper proposes LE-FIS, a large language models (LLMs)-based Explainable Fuzzy Inference System to interpret black-box models for steel defect detection. The method introduces a locally trained, globally predicted deep detection approach (LTGP), which segments the image into small parts for local training and then tests on the entire image for steel defect detection. Then, LE-FIS is designed to explain the LTGP by automatically generating rules and membership functions, with a genetic algorithm (GA) used to optimize parameters. Furthermore, state-of-the-art LLMs are employed to interpret the results of LE-FIS, and evaluation metrics are established for comparison and analysis. Experimental results demonstrate that LTGP performs well in defect detection tasks, and LE-FIS supported by LLMs provides a trustworthy and interpretable model for steel defect detection, which enhances transparency and reliability in industrial environments.
Volume 192, Pages 29–35.
Citations: 0
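A fuzzy inference system of the kind LE-FIS generates rests on membership functions and weighted rules. The toy Mamdani-style example below scores a "defect severity" from two scalar features using triangular memberships; the features, rules, and membership parameters are invented for illustration and are not the system produced by LE-FIS.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def defect_severity(contrast, area):
    """Toy Mamdani-style inference: two inputs, three rules, weighted-average defuzzification."""
    low_c, high_c = tri(contrast, -0.5, 0.0, 0.5), tri(contrast, 0.3, 1.0, 1.7)
    small_a, large_a = tri(area, -0.4, 0.0, 0.4), tri(area, 0.2, 1.0, 1.8)
    rules = [
        (min(high_c, large_a), 0.9),   # strong contrast, large area -> severe
        (min(high_c, small_a), 0.5),   # strong contrast, small area -> moderate
        (min(low_c, small_a), 0.1),    # weak contrast, small area   -> minor
    ]
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(defect_severity(contrast=0.8, area=0.7))  # high score, i.e. roughly "severe"
```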