Latest Articles in Pattern Recognition Letters

TransBranch: A transformer branch architecture for fine-grained recognition
IF 3.9 | CAS Q3 | Computer Science
Pattern Recognition Letters, Pub Date: 2025-06-19, DOI: 10.1016/j.patrec.2025.05.017
Authors: Cheng Pang, Dingzhou Xie, Yingjie Song, Rushi Lan
Abstract: In this paper, we present a novel architecture, termed TransBranch, for the challenging task of fine-grained visual categorization. Distinguished from traditional models based on cross-layer feature fusion, the proposed architecture enhances classification accuracy by strategically integrating image features in a delicate way: features at different levels are generated in parallel and then assembled via a designed content-aware cross-level fusion mechanism, by which the multi-level features compensate for each other and highlight the discriminative cues for visually similar subcategories. To this end, we devise an adaptive weighting mechanism that dynamically adjusts the weights of features at different levels based on the difficulty of distinguishing subcategories and the semantics of the image contents. This mechanism identifies discriminative features in cluttered backgrounds and guides the model to focus on rare categories, improving recognition while alleviating the long-tail distribution issue. Furthermore, a multi-scale patch embedding strategy is devised to ensure the completeness of semantic image contents during feature learning. Experimental results show that the proposed model outperforms current transformer-based architectures across benchmark datasets for fine-grained visual categorization, especially in distinguishing categories with extremely similar features.
Pattern Recognition Letters, Volume 196, Pages 274-280
Citations: 0
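
The abstract describes a content-aware, adaptive weighting over features drawn from several network levels. As a rough illustration of that idea (not the paper's implementation; the module name, gating design, and tensor shapes are assumptions), one could pool each level, predict per-level weights with a small gating network, and mix the levels accordingly:

```python
import torch
import torch.nn as nn

class CrossLevelFusion(nn.Module):
    """Illustrative content-aware fusion: per-level weights are predicted
    from pooled level descriptors and used to mix the levels."""
    def __init__(self, dim, num_levels):
        super().__init__()
        # Small gating network: pooled descriptors -> one weight per level.
        self.gate = nn.Sequential(
            nn.Linear(dim * num_levels, dim),
            nn.ReLU(inplace=True),
            nn.Linear(dim, num_levels),
        )

    def forward(self, levels):
        # levels: list of (B, N, dim) token maps from different depths,
        # assumed already projected to a common dimension.
        pooled = torch.cat([f.mean(dim=1) for f in levels], dim=-1)  # (B, L*dim)
        weights = torch.softmax(self.gate(pooled), dim=-1)           # (B, L)
        return sum(w[:, None, None] * f
                   for w, f in zip(weights.unbind(dim=-1), levels))  # (B, N, dim)

fusion = CrossLevelFusion(dim=256, num_levels=3)
feats = [torch.randn(2, 196, 256) for _ in range(3)]
print(fusion(feats).shape)  # torch.Size([2, 196, 256])
```
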
GSCL-RVT: Generalized supervised contrastive learning with global–local feature fusion for micro-expression recognition
IF 3.9 | CAS Q3 | Computer Science
Pattern Recognition Letters, Pub Date: 2025-06-17, DOI: 10.1016/j.patrec.2025.05.027
Authors: Fan Song, Junhua Li, Zhengxiu Li, Ming Li
Abstract: Micro-expressions (MEs) are instantaneous facial expressions that appear quickly after an emotionally evocative event and are difficult to suppress; they can reveal one's genuine feelings and emotions. With their spontaneous and transient nature, MEs provide a unique perspective for sentiment analysis. However, their subtle and transient nature, coupled with the scarcity and lack of diversity of existing datasets, poses great challenges for discriminative feature learning and model generalization. To address these issues, this paper proposes a novel micro-expression recognition (MER) framework. The framework integrates a feature fusion network that blends residual blocks with a vision transformer (RVT), capturing local details and integrating global contextual information across multiple levels. Furthermore, a generalized supervised contrastive learning (GSCL) strategy is introduced, wherein traditional one-hot labels are transformed into mixed labels. The strategy then compares the similarity between the mixed labels and anchors, aiming to minimize the cross-entropy between the label similarity and the latent similarity, thereby optimizing the semantic spatial metrics between different MEs and enhancing the model's feature learning capabilities. In addition, we propose a data augmentation method based on region substitution, exploiting the local features of samples belonging to the same category. It works synergistically with the generalized supervised contrastive learning framework to address the limited availability of ME data. Lastly, we conduct a series of experiments under both Single Database Evaluation (SDE) and Composite Database Evaluation (CDE) protocols, obtaining either optimal or near-optimal results, and we provide interpretable analyses to demonstrate the superiority and effectiveness of the proposed methodology.
Pattern Recognition Letters, Volume 196, Pages 169-176
Citations: 0
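
The generalized supervised contrastive objective is described as matching feature similarity to the similarity of mixed (soft) labels. Below is a hedged sketch of one way to realize that, assuming mixup-style soft labels and a temperature-scaled cosine similarity (both assumptions, not the paper's code):

```python
import torch
import torch.nn.functional as F

def generalized_supcon_loss(features, soft_labels, temperature=0.1):
    """Align each anchor's feature-similarity distribution with the
    distribution induced by mixed (soft) label similarity.

    features:    (B, D) embeddings
    soft_labels: (B, C) mixed label vectors (rows sum to 1, e.g. from mixup)
    """
    z = F.normalize(features, dim=-1)
    feat_logits = z @ z.t() / temperature              # (B, B) feature similarities
    label_sim = soft_labels @ soft_labels.t()          # (B, B) label similarities

    # Exclude self-pairs from both distributions.
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    feat_logits = feat_logits.masked_fill(mask, float('-inf'))
    label_sim = label_sim.masked_fill(mask, 0.0)

    target = label_sim / label_sim.sum(dim=1, keepdim=True).clamp_min(1e-8)
    log_p = F.log_softmax(feat_logits, dim=1).masked_fill(mask, 0.0)
    return -(target * log_p).sum(dim=1).mean()         # cross-entropy per anchor

loss = generalized_supcon_loss(torch.randn(8, 128), torch.softmax(torch.randn(8, 5), dim=-1))
```
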
Generalized Gumbel-Softmax gradient estimator for generic discrete random variables
IF 3.9 | CAS Q3 | Computer Science
Pattern Recognition Letters, Pub Date: 2025-06-13, DOI: 10.1016/j.patrec.2025.05.024
Authors: Weonyoung Joo, Dongjun Kim, Seungjae Shin, Il-Chul Moon
Abstract: Estimating the gradients of stochastic nodes in stochastic computational graphs is one of the crucial research questions in the deep generative modeling community, as it enables gradient-descent optimization of neural network parameters. Stochastic gradient estimators for discrete random variables, such as the Gumbel-Softmax reparameterization trick for Bernoulli and categorical distributions, are widely explored. Meanwhile, other discrete distributions, such as the Poisson, geometric, binomial, multinomial, and negative binomial, have not been explored. This paper proposes a generalized version of the Gumbel-Softmax stochastic gradient estimator. The proposed method is able to reparameterize generic discrete distributions, not restricted to the Bernoulli and the categorical, and it enables learning on large-scale stochastic computational graphs with discrete random nodes. Our experiments consist of (1) synthetic examples and applications to variational autoencoders, which show the efficacy of our method; and (2) topic models, which demonstrate the value of the proposed estimator in practice.
Pattern Recognition Letters, Volume 196, Pages 148-155
Citations: 0
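
For reference, the standard Gumbel-Softmax relaxation of a categorical distribution is shown below, followed by one plausible way to extend it to another discrete distribution by truncating its support to a finite set; the truncation trick is an illustrative assumption, not necessarily the construction used in the paper:

```python
import torch

def gumbel_softmax_sample(log_probs, tau=0.5):
    """Standard Gumbel-Softmax relaxation of a categorical distribution."""
    gumbel = -torch.log(-torch.log(torch.rand_like(log_probs) + 1e-20) + 1e-20)
    return torch.softmax((log_probs + gumbel) / tau, dim=-1)

# Illustration only: relax a generic discrete distribution (here a Poisson)
# by truncating its support to {0, ..., K} and renormalizing the pmf.
rate = torch.tensor(3.0, requires_grad=True)
K = 20
k = torch.arange(K + 1, dtype=torch.float32)
log_pmf = k * torch.log(rate) - rate - torch.lgamma(k + 1)   # log Poisson pmf
log_pmf = log_pmf - torch.logsumexp(log_pmf, dim=0)          # renormalize after truncation

y = gumbel_softmax_sample(log_pmf, tau=0.5)   # relaxed one-hot over {0..K}
sample = (y * k).sum()                        # differentiable surrogate for the count
sample.backward()                             # gradient flows back to `rate`
print(float(sample), float(rate.grad))
```
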
Polarization-based image dehazing network with pseudo-3D convolution
IF 3.9 | CAS Q3 | Computer Science
Pattern Recognition Letters, Pub Date: 2025-06-13, DOI: 10.1016/j.patrec.2025.05.023
Authors: Xin Wang, Wei Fu, Haichao Yu
Abstract: In this study, we present a pseudo-3D convolutional feature fusion attention network specifically designed for polarization-based image dehazing. Within this network, we introduce a novel feature attention module based on the pseudo-3D convolution structure, integrating spatial feature attention and polarization feature attention mechanisms. Through a differentiated weight assignment model, this module allocates varying attention to haze at different locations and thicknesses, and adopts distinct processing for hazy images captured at different polarization-angle channels. In addition, we introduce a basic block structure that combines local residual learning, an attention module, and an octave convolution residual module. This integration allows the network to disregard information from thin hazy regions and low-frequency details and focus on critical information, significantly enhancing performance. Experimental results demonstrate the state-of-the-art performance of our method on both synthetic and real-world hazy images.
Pattern Recognition Letters, Volume 196, Pages 156-161
Citations: 0
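
A pseudo-3D convolution typically factorizes a 3x3x3 kernel into a 2D spatial convolution and a 1D convolution along the remaining axis. A minimal sketch follows, assuming the third axis indexes polarization-angle channels; the paper's actual module, including its attention branches, is not reproduced here:

```python
import torch
import torch.nn as nn

class Pseudo3DBlock(nn.Module):
    """Factorized 3D convolution: (1,3,3) spatial conv followed by a (3,1,1)
    conv along the third axis (assumed here to be polarization angles)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.polar = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (B, C, P, H, W) with P polarization-angle slices (e.g. 0/45/90/135 deg)
        return self.act(self.polar(self.act(self.spatial(x))))

x = torch.randn(1, 8, 4, 64, 64)           # 4 polarization-angle slices
print(Pseudo3DBlock(8, 16)(x).shape)        # torch.Size([1, 16, 4, 64, 64])
```

The factorization keeps parameter count and compute well below a full 3D convolution while still mixing information across polarization channels.
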
An empirical evaluation of rewiring approaches in graph neural networks
IF 3.9 | CAS Q3 | Computer Science
Pattern Recognition Letters, Pub Date: 2025-06-12, DOI: 10.1016/j.patrec.2025.05.021
Authors: Alessio Micheli, Domenico Tortorella
Abstract: Graph neural networks compute node representations by performing multiple message-passing steps that consist of local aggregations of node features. Building deep models that can leverage longer-range interactions between nodes is hindered by the issues of over-smoothing and over-squashing. In particular, the latter is attributed to the graph topology that guides message-passing, causing a node representation to become insensitive to information contained at distant nodes. Many graph rewiring methods have been proposed to remedy or mitigate this problem. However, properly evaluating the benefits of these methods is made difficult by the coupling of over-squashing with other issues strictly related to model training, such as vanishing gradients. Therefore, we propose an evaluation setting based on message-passing models that do not require training to compute node and graph representations. We perform a systematic experimental comparison on real-world node and graph classification tasks, showing that rewiring the underlying graph rarely confers a practical benefit for message-passing.
Pattern Recognition Letters, Volume 196, Pages 134-141
Citations: 0
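
One simple way to instantiate message passing that requires no training — the general idea behind the proposed evaluation setting, though not necessarily the authors' exact models — is to propagate raw features through a fixed, normalized adjacency several times and feed the result to a simple readout:

```python
import numpy as np

def training_free_embeddings(adj, features, k=4):
    """Training-free message passing (illustrative): apply the symmetrically
    normalized adjacency with self-loops to the node features k times, with
    no learned weights, and use the result as node embeddings."""
    a = adj + np.eye(adj.shape[0])                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    a_norm = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    h = features
    for _ in range(k):
        h = a_norm @ h                                # k rounds of aggregation
    return h

# The embeddings can then go to a simple readout (e.g. logistic regression),
# so rewiring methods are compared without training-related confounds.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.random.randn(3, 5)
print(training_free_embeddings(adj, x).shape)   # (3, 5)
```
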
Lifespan age synthesis on human faces with decorrelation constraints and geometry guidance
IF 3.9 | CAS Q3 | Computer Science
Pattern Recognition Letters, Pub Date: 2025-06-12, DOI: 10.1016/j.patrec.2025.05.020
Authors: Jiu-Cheng Xie, Lingqing Zhang, Hao Gao, Chi-Man Pun
Abstract: It is challenging to use a single portrait as the reference and synthesize matching facial appearances throughout the lifetime. Previous attempts at this task are plagued, to varying degrees, by the loss of identity information and by unnatural, fragmented changes in age-related patterns. To alleviate these problems, we propose a new method for lifespan age synthesis with decorrelation constraints and geometry guidance. In particular, orthogonality is imposed on two branches of features extracted from the source face so that they encode different kinds of facial information. Additionally, we develop a hybrid learning strategy based on joint supervision of landmarks and age labels, which guides the model to learn facial shape and texture transformations simultaneously. Qualitative and quantitative evaluations demonstrate that our approach outperforms state-of-the-art competitors. The source code is available at https://github.com/zlq1z2l3q/GGDC.
Pattern Recognition Letters, Volume 196, Pages 126-133
Citations: 0
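
The decorrelation constraint can be illustrated with a small orthogonality penalty between the two feature branches (e.g., identity-related and age-related codes); the exact formulation used in the paper may differ:

```python
import torch
import torch.nn.functional as F

def decorrelation_loss(feat_a, feat_b):
    """Orthogonality (decorrelation) penalty between two feature branches:
    the squared cosine similarity between the branch outputs of each sample,
    which goes to zero when the branches are orthogonal."""
    a = F.normalize(feat_a, dim=-1)
    b = F.normalize(feat_b, dim=-1)
    return ((a * b).sum(dim=-1) ** 2).mean()

identity_code = torch.randn(8, 128, requires_grad=True)   # illustrative branch outputs
age_code = torch.randn(8, 128, requires_grad=True)
loss = decorrelation_loss(identity_code, age_code)
loss.backward()
```
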
GVI: Guideable Visual Interpretation on medical tomographic images to improve the performance of deep network
IF 3.9 | CAS Q3 | Computer Science
Pattern Recognition Letters, Pub Date: 2025-06-11, DOI: 10.1016/j.patrec.2025.05.019
Authors: Hui Liu, Fan Wei, Lixin Yan, Sushan Wang, Chongfu Jia, Lina Zhang, Jiansheng Peng, Yi Xu
Abstract: In medical image analysis, the demand for interpretable deep neural networks is rapidly growing. However, a major challenge is that most existing interpretation methods are applied after training, leading to a lack of integration with the model's learning process. As a result, these methods often fail to highlight regions within complex medical images that are critical for decision-making, such as abnormal tissues or lesions, which are essential for accurate diagnoses and treatment planning. This paper introduces Guided Visual Interpretation (GVI), a framework designed to enhance both the performance and interpretability of deep networks. Building on a deep network trained with image-level labels, GVI incorporates a small amount of pixel-level annotation combined with attention mechanisms. These mechanisms facilitate visual interpretation through forward propagation, directing the model's focus to the most relevant regions. By aligning the network's decision-making with human cognitive processes, GVI improves interpretability. In our study, an attention layer was added after the convolutional layers of a pre-trained classification network. GVI is trained using a mixed supervision approach that integrates pixel-level annotations with a large amount of image-level data. Experimental results on both private and public datasets show that GVI generates visual explanations consistent with human decision-making principles and achieves superior classification accuracy compared to traditional methods. These findings highlight GVI's potential to improve interpretability and diagnostic performance in critical fields such as medical imaging.
Pattern Recognition Letters, Volume 196, Pages 162-168
Citations: 0
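
A hedged sketch of the setup described in the abstract: a spatial attention layer placed after a pre-trained convolutional backbone, trained with image-level cross-entropy for all samples plus pixel-level supervision of the attention map for the small annotated subset. All names, shapes, and the pooling scheme are illustrative assumptions, not the paper's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionHead(nn.Module):
    """Attention layer on top of a pre-trained conv feature map; the attention
    map is both used for pooling and supervised on annotated samples."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)   # per-pixel attention logit
        self.cls = nn.Linear(channels, num_classes)

    def forward(self, feat):
        # feat: (B, C, H, W) output of the backbone's last conv block
        attn_logit = self.attn(feat)                         # (B, 1, H, W)
        attn = torch.sigmoid(attn_logit)
        pooled = (feat * attn).flatten(2).sum(-1) / attn.flatten(2).sum(-1).clamp_min(1e-6)
        return self.cls(pooled), attn_logit

def mixed_supervision_loss(logits, labels, attn_logit, mask=None, lam=1.0):
    """Image-level CE for every sample; pixel-level BCE on the attention map
    only when a ground-truth mask is available."""
    loss = F.cross_entropy(logits, labels)
    if mask is not None:   # only the small annotated subset contributes here
        loss = loss + lam * F.binary_cross_entropy_with_logits(
            attn_logit, F.interpolate(mask, size=attn_logit.shape[-2:]))
    return loss
```
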
Mask-based anomaly segmentation in complex driving scenes
IF 3.9 | CAS Q3 | Computer Science
Pattern Recognition Letters, Pub Date: 2025-06-10, DOI: 10.1016/j.patrec.2025.05.013
Authors: Pan Wang, Chengzhi Lyu, Lei Zhang, Hong He, Fang Dai
Abstract: Road anomaly segmentation plays a pivotal role in advancing the safety of autonomous driving by facilitating the detection of unknown objects in complex traffic environments. Nevertheless, traditional semantic segmentation models, limited to predefined categories, often struggle to accurately identify anomalous objects. In this study, we propose AnomaskDrive, a mask-based anomaly segmentation approach that integrates a comprehensive mask-based attention mechanism and a mask refinement strategy within an RbA framework, enhancing the detection of anomalous objects in complex scenes. The proposed mask-based attention mechanism effectively distinguishes between foreground and background regions, thereby enhancing the segmentation of anomalies in cluttered road environments. Additionally, the mask refinement strategy minimizes false positives and elevates overall segmentation accuracy, demonstrating the robustness and effectiveness of our method. Benchmark evaluations on the Road Anomaly and Fishyscapes Lost & Found datasets demonstrate that AnomaskDrive outperforms existing methods, achieving AUC/AP/FPR@95 scores of 98.56%/90.87%/4.68% and 97.75%/74.54%/4.25%, respectively, underscoring its competitive advantage in anomaly segmentation.
Pattern Recognition Letters, Volume 196, Pages 142-147
Citations: 0
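
In mask-based (RbA-style) anomaly segmentation, per-query class logits and mask probabilities can be combined into per-pixel class scores, and a pixel confidently claimed by no known class receives a high anomaly score. The following is a simplified, hedged sketch of that scoring idea; the exact scoring function and refinement steps used by AnomaskDrive may differ:

```python
import torch

def mask_based_anomaly_score(mask_logits, class_logits):
    """Simplified mask-based anomaly scoring sketch.

    mask_logits:  (Q, H, W)  per-query mask logits
    class_logits: (Q, C)     per-query class logits over known classes
    """
    mask_prob = mask_logits.sigmoid()                        # (Q, H, W)
    # Per-pixel class scores: each query spreads its class evidence
    # over the pixels it covers.
    pixel_class = torch.einsum('qc,qhw->chw', class_logits, mask_prob)
    # Sum of bounded per-class confidences; negate so pixels rejected by
    # every known class score highest.
    return -pixel_class.tanh().sum(dim=0)                    # (H, W)

scores = mask_based_anomaly_score(torch.randn(100, 64, 128), torch.randn(100, 19))
print(scores.shape)   # torch.Size([64, 128])
```
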
Panoramic brain network analyzer: A residual graph network with attention mechanism for autism spectrum disorder diagnosis
IF 3.9 | CAS Q3 | Computer Science
Pattern Recognition Letters, Pub Date: 2025-06-10, DOI: 10.1016/j.patrec.2025.05.015
Authors: Jihe Chen, Song Zeng, Jiahao Yang, Zhibin Du
Abstract: Autism Spectrum Disorder (ASD) is a prevalent neurodevelopmental disorder characterized by deficits in reciprocal social communication and the presence of restricted, repetitive patterns of behavior. It is generally acknowledged that resting-state functional magnetic resonance imaging (fMRI), used to detect brain functional connectivity (FC), is one of the most effective means of predicting ASD. However, many challenges remain, e.g., vanishing gradients in deep GCN networks and the difficulty of localizing potential biomarkers for diagnosis. To address these issues, we propose a new ASD diagnostic model, called the Panoramic Brain Network Analyzer (PBNA). The main advantage of the new model is that it introduces residual techniques and various attention mechanisms to deepen the GCN architecture, enabling it to learn higher-level information. Additionally, we improve on current graph pooling methods by incorporating softmax and straight-through estimators to alleviate dimensionality explosion. Empirical results on the ABIDE CC200, CC400 and AAL datasets demonstrate the superiority of PBNA, and this evidence supports PBNA as a more accurate and efficient clinical diagnostic tool. More precisely, using a five-fold cross-validation strategy, the accuracy (ACC) of PBNA on the three datasets reaches 75.77%, 74.11% and 74.65%, respectively, surpassing most state-of-the-art diagnostic methods.
Pattern Recognition Letters, Volume 196, Pages 109-116
Citations: 0
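
The residual deepening of the GCN can be illustrated with a single residual graph-convolution layer; the attention mechanisms and the softmax/straight-through pooling of PBNA are not reproduced here, and all shapes are illustrative:

```python
import torch
import torch.nn as nn

class ResidualGCNLayer(nn.Module):
    """One residual graph-convolution layer: the skip connection keeps
    gradients flowing when many layers are stacked."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.act = nn.ReLU(inplace=True)

    def forward(self, h, a_norm):
        # h: (N, dim) node features; a_norm: (N, N) normalized connectivity matrix
        return h + self.act(a_norm @ self.lin(h))

n, dim = 200, 64                                # e.g. 200 ROIs in a CC200 parcellation
h = torch.randn(n, dim)
a = torch.softmax(torch.randn(n, n), dim=-1)    # placeholder normalized FC matrix
layers = [ResidualGCNLayer(dim) for _ in range(8)]   # deeper than a plain GCN tolerates
for layer in layers:
    h = layer(h, a)
print(h.shape)   # torch.Size([200, 64])
```
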
Est3D2Real-estimated 3D-to-real data embeddings for real time sign language recognizer
IF 3.9 | CAS Q3 | Computer Science
Pattern Recognition Letters, Pub Date: 2025-06-09, DOI: 10.1016/j.patrec.2025.05.012
Authors: Kishore P.V.V., Anil Kumar D.
Abstract: Human pose estimation predicts 3D skeletal joints from 2D video data. These estimated 3D joints are sensitive to anomalies in the video data, posing a threat to applications such as real-time sign language recognition. The challenge lies in the failure of the estimation model to output pose vectors during the signing process, which significantly impacts downstream classification tasks. To address this issue, we propose a lightweight estimated-3D-to-real data embedding network (Est3D2Real). The network is designed to learn the relationship between the outputs of a pose estimation framework and a 3D motion capture system. Est3D2Real is a four-layer fully connected network, consisting of one input layer, two hidden layers, and one output layer. It employs the mean squared error (MSE) loss to minimize the distance between the two modalities. The trained Est3D2Real model ensures minimal joint loss in real-time downstream classification tasks. Validation is performed on a 100-gloss 3D sign language dataset captured with both motion capture and MediaPipe frameworks. Downstream sign classifiers built on top of the trained Est3D2Real model show an improvement of approximately 28%. The code, with small datasets, is available at https://github.com/pvvkishore/Est3D2Real_SL_MediaPipe_2_Motion_Capture.
Pattern Recognition Letters, Volume 196, Pages 86-92
Citations: 0
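
The abstract specifies the embedding network quite concretely: a small fully connected model (input, two hidden layers, output) trained with an MSE loss to map estimated joints to motion-capture joints. Below is a sketch under those constraints, with the joint count (33 MediaPipe pose landmarks) and hidden width as assumptions:

```python
import torch
import torch.nn as nn

class Est3D2Real(nn.Module):
    """Fully connected embedding network mapping estimated 3D joints to
    motion-capture joints; widths and joint count are illustrative."""
    def __init__(self, num_joints=33, hidden=256):
        super().__init__()
        d = num_joints * 3                      # flattened (x, y, z) per joint
        self.net = nn.Sequential(
            nn.Linear(d, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, d),
        )

    def forward(self, est_joints):
        # est_joints: (B, num_joints, 3) pose-estimator output
        b = est_joints.shape[0]
        return self.net(est_joints.reshape(b, -1)).reshape(b, -1, 3)

model = Est3D2Real()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
est = torch.randn(16, 33, 3)       # MediaPipe-style estimates (illustrative data)
mocap = torch.randn(16, 33, 3)     # paired motion-capture targets (illustrative data)
loss = nn.functional.mse_loss(model(est), mocap)
opt.zero_grad()
loss.backward()
opt.step()
```
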