Pattern Recognition Letters: Latest Articles

Segment Anything Model for detecting salient objects with accurate prompting and Ladder Directional Perception
Yuze Sun, Hongwei Zhao, Jianhang Zhou
Pattern Recognition Letters · IF 3.9 · CAS Tier 3, Computer Science
Pub Date: 2025-06-21 · DOI: 10.1016/j.patrec.2025.06.002 · Volume 196, pp. 184-190
Abstract: Salient object detection (SOD) focuses on finding, mining, and locating the most salient objects in an image. In recent years, with the introduction of SAM, image segmentation models have gradually become more unified; however, applying SAM to SOD still requires further exploration. Because SOD relies on the extraction of multi-scale information, we propose the Cross-resolution Modeling Adapter, which lets SAM perceive and adapt to multi-scale features by encoding the global information of features at different scales while achieving unified modeling of cross-resolution semantics. To aid the fusion of multi-scale features, we introduce the Ladder Directional Perception Fusion Module, which not only broadens the available feature space but also perceives and encodes long-term and short-term dependencies in a stepped manner. Extensive experiments demonstrate the effectiveness of the proposed method.
Citations: 0
Knowledge-sharing hierarchical memory fusion network for scribble-supervised video salient object detection
Tao Jiang, Feng Hou, Yi Wang, Guangzhu Chen, Ruili Wang
Pattern Recognition Letters · Pub Date: 2025-06-21 · DOI: 10.1016/j.patrec.2025.06.003 · Volume 196, pp. 177-183
Abstract: Scribble annotations offer a practical alternative to pixel-wise labels in video salient object detection (V-SOD). However, their sparse foreground coverage and ambiguous boundaries introduce background interference and error propagation, degrading segmentation accuracy across frames. To address this, we propose the Knowledge-sharing Hierarchical Memory Fusion Network (KHMF-Net) for scribble-supervised V-SOD. Its core is a Hierarchical Memory Bank (HMB) that stores initial scribbles, historical high-confidence regions, and historical full salient maps, enabling long-term spatiotemporal context modeling that suppresses error propagation. An Adaptive Memory Fusion (AMF) module dynamically integrates multi-confidence features, providing reliable guidance during salient mask expansion. To address background interference, an Interactive Equalized Matching (IEM) module with reference-wise softmax ensures balanced contributions from reference-frame pixels, and a dual-attention knowledge-sharing mechanism further enhances IEM by transferring high-performance attention features from a Teacher to a Student module. Experiments show that KHMF-Net's hierarchical memory architecture and effective background-target discrimination achieve state-of-the-art performance on three scribble-annotated datasets, even exceeding some fully supervised approaches. The module and predicted maps are publicly available at https://github.com/TOMMYWHY/KHMF-Net.
Citations: 0
GSCL-RVT: Generalized supervised contrastive learning with global–local feature fusion for micro-expression recognition
Fan Song, Junhua Li, Zhengxiu Li, Ming Li
Pattern Recognition Letters · Pub Date: 2025-06-17 · DOI: 10.1016/j.patrec.2025.05.027 · Volume 196, pp. 169-176
Abstract: Micro-expressions (MEs) are instantaneous facial expressions that appear quickly after an emotionally evocative event, are difficult to suppress, and can reveal one's genuine feelings and emotions. Their spontaneous and transient nature makes them a unique signal for sentiment analysis, but this subtlety, together with the scarcity and limited diversity of existing datasets, also makes discriminative feature learning and model generalization challenging. This paper proposes a micro-expression recognition (MER) framework whose feature fusion network blends residual blocks with a vision transformer (RVT), capturing local details while integrating global contextual information across multiple levels. A generalized supervised contrastive learning (GSCL) strategy transforms traditional one-hot labels into mixed labels, then compares the similarity between mixed labels and anchors, minimizing the cross-entropy between label similarity and latent-feature similarity; this optimizes the semantic-space metrics between different MEs and strengthens the model's feature learning. In addition, we propose a region-substitution data augmentation method based on the local features of same-category samples, which works synergistically with GSCL to address the limited availability of ME data. Experiments under both Single Database Evaluation (SDE) and Composite Database Evaluation (CDE) protocols obtain optimal or near-optimal results, and interpretable analyses demonstrate the superiority and effectiveness of the proposed method.
Citations: 0
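The abstract does not fully specify how GSCL builds its mixed labels; a minimal mixup-style sketch of blending one-hot labels and scoring a mixed label's similarity against anchors might look like the following (illustrative only, with hypothetical class labels, not the paper's exact formulation):

```python
import numpy as np

def mix_labels(y_a, y_b, lam):
    """Blend two one-hot labels into a soft mixed label (mixup-style)."""
    return lam * y_a + (1.0 - lam) * y_b

def label_similarity(y_mixed, y_anchor):
    """Cosine similarity between a mixed label and an anchor's label."""
    num = float(y_mixed @ y_anchor)
    den = float(np.linalg.norm(y_mixed) * np.linalg.norm(y_anchor))
    return num / den

y_pos = np.array([1.0, 0.0, 0.0])  # hypothetical "positive" ME class
y_neg = np.array([0.0, 1.0, 0.0])  # hypothetical "negative" ME class
y_mix = mix_labels(y_pos, y_neg, lam=0.7)
# y_mix == [0.7, 0.3, 0.0]; it is more similar to the y_pos anchor
# than to the y_neg anchor, so the contrastive target is graded
# rather than binary.
```

In the paper's strategy, such graded label similarities are matched against feature similarities via a cross-entropy objective; the sketch only shows the label side.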
Generalized Gumbel-Softmax gradient estimator for generic discrete random variables
Weonyoung Joo, Dongjun Kim, Seungjae Shin, Il-Chul Moon
Pattern Recognition Letters · Pub Date: 2025-06-13 · DOI: 10.1016/j.patrec.2025.05.024 · Volume 196, pp. 148-155
Abstract: Estimating the gradients of stochastic nodes in stochastic computational graphs is a crucial research question in the deep generative modeling community, as it enables gradient-descent optimization of neural network parameters. Stochastic gradient estimators for discrete random variables, such as the Gumbel-Softmax reparameterization trick for Bernoulli and categorical distributions, are widely explored, while other discrete distributions (Poisson, geometric, binomial, multinomial, negative binomial, etc.) have not been. This paper proposes a generalized version of the Gumbel-Softmax stochastic gradient estimator that can reparameterize generic discrete distributions, not just the Bernoulli and the categorical, enabling learning on large-scale stochastic computational graphs with discrete random nodes. Our experiments consist of (1) synthetic examples and applications to variational autoencoders, which show the efficacy of our method, and (2) topic models, which demonstrate its value in practice.
Citations: 0
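For context, the standard Gumbel-Softmax trick that this estimator generalizes draws a differentiable soft sample from a categorical distribution by perturbing the logits with Gumbel noise and applying a temperature-scaled softmax. A minimal NumPy sketch of that categorical base case (not the paper's generalized estimator):

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Soft sample on the probability simplex via the Gumbel-Softmax trick.

    logits: unnormalized log-probabilities of a categorical distribution.
    tau:    temperature; as tau -> 0 the sample approaches one-hot.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    u = rng.uniform(1e-12, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))            # Gumbel(0, 1) noise
    z = (np.asarray(logits) + g) / tau
    z = z - z.max()                    # numerical stability
    e = np.exp(z)
    return e / e.sum()

sample = gumbel_softmax_sample(np.log([0.1, 0.2, 0.7]), tau=0.5)
# `sample` lies on the simplex: nonnegative entries summing to 1. In an
# autodiff framework the same expression is differentiable in the logits,
# which is what makes gradient descent through the stochastic node possible.
```

The paper's contribution is extending this reparameterization beyond the categorical case to distributions such as the Poisson and binomial, which the sketch above does not cover.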
Polarization-based image dehazing network with pseudo-3D convolution
Xin Wang, Wei Fu, Haichao Yu
Pattern Recognition Letters · Pub Date: 2025-06-13 · DOI: 10.1016/j.patrec.2025.05.023 · Volume 196, pp. 156-161
Abstract: We present a pseudo-3D convolutional feature-fusion attention network designed for polarization-based image dehazing. Within this network, we introduce a novel feature attention module built on a pseudo-3D convolution structure, integrating spatial feature attention and polarization feature attention mechanisms: through a differentiated weight-assignment model, the module allocates varying attention to haze at different locations and thicknesses, and applies different processing to hazy images captured at different polarization-angle channels. We also introduce a basic block combining local residual learning, the attention module, and an octave convolution residual module; this lets the network de-emphasize thin-haze regions and low-frequency details and focus on critical information, significantly enhancing performance. Experimental results demonstrate the state-of-the-art performance of our method on both synthetic and real-world hazy images.
Citations: 0
An empirical evaluation of rewiring approaches in graph neural networks
Alessio Micheli, Domenico Tortorella
Pattern Recognition Letters · Pub Date: 2025-06-12 · DOI: 10.1016/j.patrec.2025.05.021 · Volume 196, pp. 134-141
Abstract: Graph neural networks compute node representations through multiple message-passing steps, each a local aggregation of node features. Building deep models that leverage longer-range interactions between nodes is hindered by over-smoothing and over-squashing. The latter, in particular, is attributed to the graph topology that guides message-passing, causing a node representation to become insensitive to information at distant nodes. Many graph rewiring methods have been proposed to remedy or mitigate this problem, but properly evaluating their benefits is made difficult by the coupling of over-squashing with other issues strictly related to model training, such as vanishing gradients. We therefore propose an evaluation setting based on message-passing models that require no training to compute node and graph representations. A systematic experimental comparison on real-world node and graph classification tasks shows that rewiring the underlying graph rarely confers a practical benefit for message-passing.
Citations: 0
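To make the setting concrete: one training-free message-passing step is just a neighborhood average, and a rewiring method edits the adjacency structure before such steps are applied. A toy NumPy sketch (illustrative; the paper's models and the rewiring methods it evaluates are more involved):

```python
import numpy as np

def mp_step(adj, x):
    """One training-free message-passing step: each node replaces its
    features with the mean of its neighbors' features."""
    deg = adj.sum(axis=1, keepdims=True)
    deg = np.where(deg == 0, 1.0, deg)  # isolated nodes keep zeros
    return (adj @ x) / deg

# Path graph 0-1-2-3: node 0's signal needs three steps to reach node 3.
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0

x = np.array([[1.0], [0.0], [0.0], [0.0]])
h = mp_step(adj, x)
assert h[3, 0] == 0.0  # nothing from node 0 has arrived yet

# "Rewiring": add a shortcut edge 0-3; the signal now arrives in one step.
adj[0, 3] = adj[3, 0] = 1.0
h = mp_step(adj, x)
assert h[3, 0] > 0.0
```

The paper's point is that, once training-related confounders such as vanishing gradients are removed by using untrained models like this, such topology edits rarely help in practice.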
Lifespan age synthesis on human faces with decorrelation constraints and geometry guidance
Jiu-Cheng Xie, Lingqing Zhang, Hao Gao, Chi-Man Pun
Pattern Recognition Letters · Pub Date: 2025-06-12 · DOI: 10.1016/j.patrec.2025.05.020 · Volume 196, pp. 126-133
Abstract: Using a single portrait as a reference to synthesize matching facial appearances throughout the lifetime is challenging: previous attempts suffer, to varying degrees, from loss of identity information and from unnatural, fragmented changes in age-related patterns. To alleviate these problems, we propose a new method for lifespan age synthesis with decorrelation constraints and geometry guidance. In particular, orthogonality is imposed on two branches of features extracted from the source face so that they encode different kinds of facial information. Additionally, we develop a hybrid learning strategy, jointly supervised by landmarks and age labels, that guides the model to learn facial shape and texture transformations simultaneously. Qualitative and quantitative evaluations demonstrate that our approach outperforms state-of-the-art competitors. Source code is available at https://github.com/zlq1z2l3q/GGDC.
Citations: 0
GVI: Guideable Visual Interpretation on medical tomographic images to improve the performance of deep network
Hui Liu, Fan Wei, Lixin Yan, Sushan Wang, Chongfu Jia, Lina Zhang, Jiansheng Peng, Yi Xu
Pattern Recognition Letters · Pub Date: 2025-06-11 · DOI: 10.1016/j.patrec.2025.05.019 · Volume 196, pp. 162-168
Abstract: In medical image analysis, the demand for interpretable deep neural networks is growing rapidly. A major challenge is that most existing interpretation methods are applied after training and are therefore not integrated with the model's learning process; as a result, they often fail to highlight the regions of complex medical images critical for decision-making, such as abnormal tissues or lesions, which are essential for accurate diagnosis and treatment planning. This paper introduces Guided Visual Interpretation (GVI), a framework designed to enhance both the performance and the interpretability of deep networks. Building on a deep network trained with image-level labels, GVI incorporates a small amount of pixel-level annotation combined with attention mechanisms that produce visual interpretations through forward propagation, directing the model's focus to the most relevant regions and aligning its decision-making with human cognition. In our study, an attention layer was added after the convolutional layers of a pre-trained classification network, and GVI was trained with mixed supervision integrating pixel-level annotations with a large amount of image-level data. Experimental results on both private and public datasets show that GVI generates visual explanations consistent with human decision-making principles and achieves superior classification accuracy compared to traditional methods. These findings highlight GVI's potential to improve interpretability and diagnostic performance in critical fields like medical imaging.
Citations: 0
Mask-based anomaly segmentation in complex driving scenes
Pan Wang, Chengzhi Lyu, Lei Zhang, Hong He, Fang Dai
Pattern Recognition Letters · Pub Date: 2025-06-10 · DOI: 10.1016/j.patrec.2025.05.013 · Volume 196, pp. 142-147
Abstract: Road anomaly segmentation advances the safety of autonomous driving by enabling the detection of unknown objects in complex traffic environments. Traditional semantic segmentation models, however, are limited to predefined categories and often struggle to identify anomalous objects. We propose AnomaskDrive, a mask-based anomaly segmentation approach that integrates a comprehensive mask-based attention mechanism and a mask refinement strategy within an RbA framework. The attention mechanism effectively distinguishes foreground from background regions, improving the segmentation of anomalies in cluttered road environments, while the refinement strategy minimizes false positives and raises overall segmentation accuracy. On the Road Anomaly and Fishyscapes Lost & Found benchmarks, AnomaskDrive outperforms existing methods, achieving AUC/AP/FPR@95 scores of 98.56%/90.87%/4.68% and 97.75%/74.54%/4.25%, respectively, underscoring its competitive advantage in anomaly segmentation.
Citations: 0
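The FPR@95 metric reported above is the false-positive rate at the score threshold where the true-positive rate first reaches 95%. A small sketch of how it can be computed from per-pixel anomaly scores (generic metric code, not tied to AnomaskDrive; ties in scores are handled only approximately):

```python
import numpy as np

def fpr_at_95_tpr(scores, labels):
    """False-positive rate at the lowest threshold whose TPR >= 0.95.

    scores: higher = more anomalous; labels: 1 = anomaly, 0 = normal.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-scores)            # descending by score
    labels = labels[order]
    tps = np.cumsum(labels)                # true positives per threshold
    fps = np.cumsum(1 - labels)            # false positives per threshold
    tpr = tps / max(int(labels.sum()), 1)
    fpr = fps / max(int((1 - labels).sum()), 1)
    idx = np.searchsorted(tpr, 0.95)       # first threshold with TPR >= 0.95
    return float(fpr[min(idx, len(fpr) - 1)])

# Perfectly separated scores give FPR@95 of 0.
print(fpr_at_95_tpr([0.9, 0.8, 0.7, 0.6, 0.3, 0.2], [1, 1, 1, 1, 0, 0]))
```

Lower is better: a low FPR@95 means few normal pixels are flagged even when the detector is forced to recover almost all anomalies.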
Panoramic brain network analyzer: A residual graph network with attention mechanism for autism spectrum disorder diagnosis
Jihe Chen, Song Zeng, Jiahao Yang, Zhibin Du
Pattern Recognition Letters · Pub Date: 2025-06-10 · DOI: 10.1016/j.patrec.2025.05.015 · Volume 196, pp. 109-116
Abstract: Autism Spectrum Disorder (ASD) is a prevalent neurodevelopmental disorder characterized by deficits in reciprocal social communication and by restricted, repetitive patterns of behavior. Resting-state functional magnetic resonance imaging (fMRI), used to detect brain functional connectivity (FC), is generally acknowledged as one of the most effective ways of predicting ASD, but challenges remain, such as vanishing gradients in deep GCN networks and the difficulty of localizing potential diagnostic biomarkers. To address these issues, we propose a new ASD diagnostic model, the Panoramic Brain Network Analyzer (PBNA). Its main advantage is the use of residual techniques and various attention mechanisms to deepen the GCN architecture, enabling it to learn higher-level information. We also modify current graph pooling methods, incorporating softmax and straight-through estimation to alleviate dimensionality explosion. Under a five-fold cross-validation strategy, PBNA reaches accuracies of 75.77%, 74.11%, and 74.65% on the ABIDE CC200, CC400, and AAL datasets respectively, surpassing most state-of-the-art diagnostic methods and supporting PBNA as a more accurate and efficient clinical diagnostic tool.
Citations: 0
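The five-fold cross-validation protocol behind the accuracy figures above splits the subjects into five parts, trains on four, tests on the held-out fifth, and averages the five test accuracies. A generic index-splitting sketch (not PBNA-specific):

```python
import numpy as np

def five_fold_indices(n, seed=0):
    """Yield (train_idx, test_idx) pairs for 5-fold cross-validation
    over n samples, after a seeded shuffle."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    parts = np.array_split(idx, 5)
    for k in range(5):
        test = parts[k]
        train = np.concatenate([parts[j] for j in range(5) if j != k])
        yield train, test

folds = list(five_fold_indices(23))
# Every sample appears in exactly one test fold, and train/test never overlap.
```

Averaging a model's test accuracy over the five folds gives the kind of ACC figure reported for CC200, CC400, and AAL.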