Latest Articles in Pattern Recognition

Associative graph convolution network for point cloud analysis
IF 7.5 | CAS Q1 | Computer Science
Pattern Recognition Pub Date: 2024-11-06 DOI: 10.1016/j.patcog.2024.111152
Xi Yang, Xingyilang Yin, Nannan Wang, Xinbo Gao
Since point clouds are the raw output of most 3D sensors, effective point cloud analysis is in high demand in autonomous driving and robotic manipulation. Directly processing point clouds is challenging, however, because they are disordered and unstructured geometric data. Numerous graph convolutional neural networks have recently been proposed to impose graph structure on point clouds, yet they remain far from perfect. In particular, DGCNN learns local geometry in semantic space and recomputes the graph from nearest neighbors in feature space at each layer, but it discards all information from the previous graph after each update, neglecting the relations between successive dynamic updates. To this end, we propose an associative graph convolutional neural network (AGCN), which mainly consists of associative graph convolution (AGConv) and two kinds of residual connections. AGConv additionally considers information from the previous graph when computing the edge function on the current local neighborhood in each layer, so it can precisely and continuously capture local geometric features of point clouds. The residual connections further explore the semantic relations between layers for effective learning on point clouds. Extensive experiments on several benchmark datasets show that our network achieves competitive classification and segmentation results.
Pattern Recognition, Volume 159, Article 111152.
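The association step described in the abstract — reusing the previous layer's features when computing edge functions on the current kNN graph — can be sketched in plain Python. The concatenated difference features and channel-wise max aggregation below follow the DGCNN convention; the exact AGConv edge function is an assumption here, not the paper's formulation:

```python
def knn(points, k):
    """Index lists of the k nearest neighbors (Euclidean) for each point."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = []
    for i, p in enumerate(points):
        order = sorted((j for j in range(len(points)) if j != i),
                       key=lambda j: dist2(p, points[j]))
        idx.append(order[:k])
    return idx

def associative_edge_conv(feats, prev_feats, k):
    """One associative graph convolution step: the edge feature for (i, j)
    concatenates the current difference (f_j - f_i) with the same difference
    taken in the previous layer's feature space, then max-aggregates over
    the neighborhood (learned weights omitted for brevity)."""
    graph = knn(feats, k)  # graph is recomputed in the current feature space
    out = []
    for i, nbrs in enumerate(graph):
        edge_feats = []
        for j in nbrs:
            cur = [fj - fi for fj, fi in zip(feats[j], feats[i])]
            prev = [pj - pi for pj, pi in zip(prev_feats[j], prev_feats[i])]
            edge_feats.append(cur + prev)  # association with the previous graph
        # channel-wise max aggregation over the neighborhood
        out.append([max(e[c] for e in edge_feats)
                    for c in range(len(edge_feats[0]))])
    return out
```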
Citations: 0
Riding feeling recognition based on multi-head self-attention LSTM for driverless automobile
IF 7.5 | CAS Q1 | Computer Science
Pattern Recognition Pub Date: 2024-11-05 DOI: 10.1016/j.patcog.2024.111135
Xianzhi Tang, Yongjia Xie, Xinlong Li, Bo Wang
With the emergence of driverless technology, passenger ride comfort has become a pressing concern. In recent years, driving-fatigue detection and braking-sensation evaluation based on EEG signals have received growing attention, and analyzing ride comfort from EEG signals is likewise an intuitive approach. Finding an effective method or model to evaluate passenger comfort, however, remains a challenge. In this paper, we propose a long short-term memory (LSTM) network based on a multi-head self-attention mechanism for passenger comfort detection. Applying multi-head attention during feature extraction yields more efficient classification. The results show that the LSTM network with multi-head self-attention is efficient in decision making and achieves higher classification accuracy. In conclusion, the proposed classifier performs excellently in EEG classification of different emotional states and has broad prospects in brain-computer interaction.
Pattern Recognition, Volume 159, Article 111135.
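The abstract does not specify the network's exact wiring, but the multi-head scaled dot-product attention primitive it builds on is standard. A stdlib-only sketch, with the learned per-head projection matrices deliberately omitted (an assumption for brevity):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query over a sequence."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim_v = len(values[0])
    return [sum(w * v[c] for w, v in zip(weights, values))
            for c in range(dim_v)]

def multi_head_attention(query, keys, values, n_heads):
    """Split the feature dimension into n_heads chunks, attend per head,
    and concatenate the per-head outputs."""
    d = len(query)
    assert d % n_heads == 0
    step = d // n_heads
    out = []
    for h in range(n_heads):
        s, e = h * step, (h + 1) * step
        out += attention(query[s:e],
                         [k[s:e] for k in keys],
                         [v[s:e] for v in values])
    return out
```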
Citations: 0
Joint utilization of positive and negative pseudo-labels in semi-supervised facial expression recognition
IF 7.5 | CAS Q1 | Computer Science
Pattern Recognition Pub Date: 2024-11-05 DOI: 10.1016/j.patcog.2024.111147
Jinwei Lv, Yanli Ren, Guorui Feng
Facial expression recognition has attracted significant attention owing to the abundance of unlabeled expressions, and semi-supervised learning aims to exploit unlabeled samples fully. Recent approaches mainly combine an adaptive margin with pseudo-labels to extract hard samples and boost performance. However, the instability of pseudo-labels and the utilization of the remaining unlabeled samples are critical open challenges. We introduce a stable-positive-single and negative-multiple pseudo-label (SPS-NM) method to address both. All unlabeled samples are categorized into three groups by adaptive confidence margins. When the maximum confidence score is sufficiently high and stable, the unlabeled sample receives a positive pseudo-label. Conversely, when the confidence scores are low enough, the sample receives multiple negative pseudo-labels, with the quality and number of classes in the negative pseudo-labels balanced by top-k selection. The remaining unlabeled samples are ambiguous and fail to match pseudo-labels, but they are still used to extract valuable features through contrastive learning. We conduct comparative experiments and an ablation study on the RAF-DB, AffectNet and SFEW datasets to demonstrate that SPS-NM improves on prior work and achieves state-of-the-art performance in facial expression recognition.
Pattern Recognition, Volume 159, Article 111147.
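The three-way split described above — a single positive pseudo-label for confident samples, top-k negative pseudo-labels for clearly unconfident ones, and contrastive learning for the rest — can be sketched as follows. The fixed margins here are hypothetical (SPS-NM adapts them), and the stability check across training steps is omitted:

```python
def assign_pseudo_labels(probs, pos_margin, neg_margin, k):
    """Categorize one unlabeled sample by its class-probability vector.
    Returns ('positive', class), ('negative', [k least likely classes]),
    or ('ambiguous', None) for samples left to contrastive learning."""
    ranked = sorted(range(len(probs)), key=lambda c: probs[c], reverse=True)
    top = ranked[0]
    if probs[top] >= pos_margin:
        return ('positive', top)          # confident: single positive label
    if probs[top] <= neg_margin:
        # unconfident: the k lowest-scoring classes are "surely not" labels
        return ('negative', ranked[-k:])
    return ('ambiguous', None)
```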
Citations: 0
Self-supervised multimodal change detection based on difference contrast learning for remote sensing imagery
IF 7.5 | CAS Q1 | Computer Science
Pattern Recognition Pub Date: 2024-11-05 DOI: 10.1016/j.patcog.2024.111148
Xuan Hou, Yunpeng Bai, Yefan Xie, Yunfeng Zhang, Lei Fu, Ying Li, Changjing Shang, Qiang Shen
Most existing change detection (CD) methods target homogeneous images. In real-world scenarios such as disaster management, however, where CD is urgent and the pre-change and post-change images typically come from different modalities, multimodal change detection (MCD) faces significant challenges. One is that bi-temporal image pairs sourced from distinct sensors introduce an image domain gap. Another arises because pixel-level annotation of multimodal bi-temporal image pairs requires collaboration among domain experts specialized in different imaging fields, so annotated samples are scarce. To address these challenges, this paper proposes a novel self-supervised difference contrast learning framework (Self-DCF). The framework trains networks without labeled samples by automatically exploiting the feature information inherent in bi-temporal imagery so that the two temporal branches supervise each other mutually, while a Unified Mapping Unit reduces the domain gap between different modal images. The efficiency and robustness of Self-DCF are validated on five popular datasets, where it outperforms state-of-the-art algorithms.
Pattern Recognition, Volume 159, Article 111148.
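The abstract does not give Self-DCF's loss, but mutual supervision between branches is commonly implemented with a contrastive objective such as InfoNCE; the sketch below assumes that choice (the cosine similarity and temperature `tau` are illustrative defaults, not the paper's):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss for one anchor embedding: pull the positive (e.g. the
    co-registered patch from the other modality) close, push negatives away.
    Computed with the log-sum-exp trick for numerical stability."""
    logits = [cosine(anchor, positive) / tau] + \
             [cosine(anchor, n) / tau for n in negatives]
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(x - m) for x in logits))
    return -(logits[0] - log_denom)
```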
Citations: 0
Incremental feature selection: Parallel approach with local neighborhood rough sets and composite entropy
IF 7.5 | CAS Q1 | Computer Science
Pattern Recognition Pub Date: 2024-11-05 DOI: 10.1016/j.patcog.2024.111141
Weihua Xu, Weirui Ye
Rough set theory is a powerful mathematical framework for managing uncertainty and is widely used in feature selection. Traditional rough-set-based feature selection algorithms, however, face significant challenges when processing large-scale incremental data and adapting to dynamic real-world scenarios in which both data volume and feature sets change continuously. To overcome these limitations, this study proposes an algorithm that integrates local neighborhood rough sets with composite entropy to measure uncertainty in information systems more accurately. By incorporating the decision distribution, composite entropy sharpens the quantification of uncertainty and thereby improves feature selection. To further improve performance on large-scale incremental data, matrix operations replace traditional set-based computations, allowing the algorithm to exploit modern hardware for accelerated processing, and parallel computing further increases computational speed. An incremental version of the algorithm is also introduced to adapt to dynamic data environments, increasing its flexibility and practicality. Comprehensive experimental evaluations demonstrate that the proposed algorithm significantly surpasses existing methods in both effectiveness and efficiency.
Pattern Recognition, Volume 159, Article 111141.
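Composite entropy as defined in the paper combines more ingredients, but the two building blocks named here — delta-neighborhoods and the decision (label) distribution inside them — can be illustrated minimally; the per-sample averaging below is an assumption, not the paper's exact measure:

```python
import math

def neighborhood(data, i, delta):
    """Indices of samples within Euclidean distance delta of sample i."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [j for j in range(len(data)) if dist(data[i], data[j]) <= delta]

def neighborhood_decision_entropy(data, labels, delta):
    """Average entropy of the decision (label) distribution inside each
    sample's delta-neighborhood: 0 when every neighborhood is pure, larger
    when the selected features mix decision classes."""
    total = 0.0
    for i in range(len(data)):
        nbrs = neighborhood(data, i, delta)
        counts = {}
        for j in nbrs:
            counts[labels[j]] = counts.get(labels[j], 0) + 1
        n = len(nbrs)
        total -= sum((c / n) * math.log(c / n) for c in counts.values())
    return total / len(data)
```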
Citations: 0
HTCSigNet: A Hybrid Transformer and Convolution Signature Network for offline signature verification
IF 7.5 | CAS Q1 | Computer Science
Pattern Recognition Pub Date: 2024-11-05 DOI: 10.1016/j.patcog.2024.111146
Lidong Zheng, Da Wu, Shengjie Xu, Yuchen Zheng
For offline handwritten signature verification (OHSV), traditional convolutional neural networks (CNNs) and transformers individually struggle to capture both global and local signature features, and single-depth models often suffer from overfitting and poor generalization. To overcome these difficulties, this paper proposes a novel Hybrid Transformer and Convolution Signature Network (HTCSigNet) that captures multi-scale features from signatures. HTCSigNet consists of two parts: a transformer-based block and a CNN-based block that extract global and local features, respectively. The CNN-based block comprises a Space-to-Depth Convolution (SPD-Conv) module, which improves feature learning by focusing precisely on signature strokes; a Spatial and Channel Reconstruction Convolution (SCConv) module, which enhances generalization by attending to distinctive micro-deformation features while de-emphasizing common ones; and a convolution module that extracts the shape and morphology of specific strokes along with other local features. The transformer-based block uses a Vision Transformer (ViT) to extract overall shape, layout, general direction, and other global features. After the feature learning stage, writer-dependent (WD) and writer-independent (WI) verification systems are constructed to evaluate HTCSigNet. Extensive experiments on four public signature datasets, GPDSsynthetic, CEDAR, UTSig, and BHSig260 (Bengali and Hindi), demonstrate that HTCSigNet learns representations that discriminate genuine from skilled forged signatures and achieves state-of-the-art or competitive performance compared with advanced verification systems. Furthermore, HTCSigNet transfers easily to datasets in other languages for OHSV tasks.
Pattern Recognition, Volume 159, Article 111146.
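Of the modules listed, SPD-Conv's space-to-depth step is the most mechanical: each block×block spatial patch is folded into the channel dimension, so downsampling loses no stroke pixels to striding. A sketch of just that rearrangement (the follow-up non-strided convolution is omitted):

```python
def space_to_depth(img, block=2):
    """Rearrange block x block spatial patches into the channel dimension:
    an H x W x C image (nested lists) becomes (H/block) x (W/block) x
    (block*block*C), preserving every input value."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(0, h, block):
        row = []
        for j in range(0, w, block):
            cell = []
            for di in range(block):
                for dj in range(block):
                    cell.extend(img[i + di][j + dj])  # fold patch into channels
            row.append(cell)
        out.append(row)
    return out
```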
Citations: 0
EGO-LM: An efficient, generic, and out-of-the-box language model for handwritten text recognition
IF 7.5 | CAS Q1 | Computer Science
Pattern Recognition Pub Date: 2024-11-04 DOI: 10.1016/j.patcog.2024.111130
Hongliang Li, Dezhi Peng, Lianwen Jin
The language model (LM) plays a crucial role in post-processing for handwritten text recognition (HTR) by capturing linguistic patterns. Traditional rule-based LMs are inefficient, however, and recent end-to-end LMs require customized training for each HTR model. To address these limitations, we propose an Efficient, Generic, and Out-of-the-box Language Model (EGO-LM) for HTR. To unlock the out-of-the-box capability of an end-to-end LM, we introduce a vision-limited proxy task that focuses on visual-pattern-agnostic linguistic dependencies during training, enhancing the robustness and generality of the LM. The enhanced capability also lets EGO-LM iteratively refine its output for a further accuracy boost without additional tuning. Moreover, we introduce a Diverse-Corpus Online Handwriting dataset (DCOH-120K) with more corpus types and more samples than existing datasets, including 83,142 Chinese and 39,398 English text lines. Extensive experiments demonstrate that EGO-LM attains state-of-the-art performance while achieving up to 613× acceleration. The DCOH-120K dataset is publicly available.
Pattern Recognition, Volume 159, Article 111130.
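EGO-LM itself is a trained neural model, but the role an LM plays in HTR post-processing — rescoring recognizer hypotheses by linguistic plausibility — can be illustrated with a classical character-bigram model. This stand-in is purely illustrative and is not the paper's method:

```python
import math

def char_bigram_model(corpus):
    """Train an add-one-smoothed character bigram model; returns a
    log-probability scoring function for new strings."""
    counts, totals, vocab = {}, {}, set()
    for text in corpus:
        padded = '^' + text + '$'  # start/end markers
        vocab.update(padded)
        for a, b in zip(padded, padded[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
            totals[a] = totals.get(a, 0) + 1
    v = len(vocab)
    def logprob(text):
        padded = '^' + text + '$'
        return sum(math.log((counts.get((a, b), 0) + 1) /
                            (totals.get(a, 0) + v))
                   for a, b in zip(padded, padded[1:]))
    return logprob

def rescore(candidates, logprob):
    """Pick the recognizer hypothesis the language model finds most fluent."""
    return max(candidates, key=logprob)
```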
Citations: 0
SAR target augmentation and recognition via cross-domain reconstruction
IF 7.5 | CAS Q1 | Computer Science
Pattern Recognition Pub Date: 2024-11-04 DOI: 10.1016/j.patcog.2024.111117
Ganggang Dong, Yafei Song
Deep learning-based target recognition methods have achieved strong performance in prior work, in which large amounts of labeled training data are collected to train a deep architecture for inference. For radar sensors, data can be collected easily, yet label information is difficult to obtain. To solve this problem, a cross-domain re-imaging target augmentation method is proposed in this paper. The original image is first transformed into the frequency domain, and the frequency components are then filtered by a randomly generated mask whose size and shape are determined at random. The filtered result is finally used for re-imaging, reconstructing a variant of the original target. A series of new samples can thus be generated freely, improving both the size and the diversity of the dataset. The proposed augmentation can be applied online or offline, making it adaptable to various downstream tasks. Multiple comparative studies highlight the superiority of the proposed method over standard and recent techniques in generating images that aid downstream tasks.
Pattern Recognition, Volume 159, Article 111117.
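The transform-filter-reimage loop described above can be sketched in 1D with a plain DFT; the paper operates on 2D SAR images with masks of random size and shape, so the Bernoulli per-bin mask here is a simplifying assumption. A keep ratio of 1 reconstructs the original signal exactly:

```python
import cmath
import random

def dft(x):
    """Discrete Fourier transform of a real sequence (O(n^2), for clarity)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n))
            for f in range(n)]

def idft(spec):
    """Inverse DFT, returning the real part of each reconstructed sample."""
    n = len(spec)
    return [sum(spec[f] * cmath.exp(2j * cmath.pi * f * t / n)
                for f in range(n)).real / n
            for t in range(n)]

def augment(signal, keep_ratio, rng):
    """Cross-domain re-imaging in 1D: transform to the frequency domain,
    zero out a random subset of frequency bins, transform back."""
    spec = dft(signal)
    mask = [1 if rng.random() < keep_ratio else 0 for _ in spec]
    return idft([s * m for s, m in zip(spec, mask)])
```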
Citations: 0
Joint Intra-view and Inter-view Enhanced Tensor Low-rank Induced Affinity Graph Learning
IF 7.5 | CAS Q1 | Computer Science
Pattern Recognition Pub Date: 2024-11-04 DOI: 10.1016/j.patcog.2024.111140
Weijun Sun, Chaoye Li, Qiaoyun Li, Xiaozhao Fang, Jiakai He, Lei Liu
Graph-based and tensor-based multi-view clustering have gained popularity in recent years because they explore the relationships between samples. However, current multi-view graph clustering algorithms still have several shortcomings. (1) Most previous methods focus only on inter-view correlation, ignoring intra-view correlation. (2) They usually approximate tensor rank with the Tensor Nuclear Norm (TNN), which penalizes all singular values equally and therefore approximates the true rank poorly. To solve these problems in a unified way, we propose a new tensor-based multi-view graph clustering method that introduces Enhanced Tensor Rank (ETR) minimization over both intra-view and inter-view structure while learning the affinity graph of each view. Compared with 10 state-of-the-art methods on 8 real datasets, the experimental results demonstrate the superiority of our method.
Pattern Recognition, Volume 159, Article 111140.
Citations: 0
PIM-Net: Progressive Inconsistency Mining Network for image manipulation localization
IF 7.5 | CAS Q1 | Computer Science
Pattern Recognition Pub Date: 2024-11-03 DOI: 10.1016/j.patcog.2024.111136
Ningning Bai, Xiaofeng Wang, Ruidong Han, Jianpeng Hou, Yihang Wang, Shanmin Pang
The need for content authenticity and reliability of digital images has spurred research on image manipulation localization (IML). Most current deep learning-based methods extract global or local tampering features to identify forged regions, but these features usually carry semantic information and produce inaccurate detections for non-object or semantically incomplete tampered regions. In this study, we propose a novel Progressive Inconsistency Mining Network (PIM-Net) for effective IML. PIM-Net consists of two core modules: the Inconsistency Mining Module (ICMM) and the Progressive Fusion Refinement module (PFR). ICMM models the inconsistency between authentic and forged regions at two levels, pixel correlation inconsistency and region attribute incongruity, while avoiding interference from semantic information. PFR then progressively aggregates and refines the extracted inconsistency features, yielding finer and purer localization responses. Extensive qualitative and quantitative experiments on five benchmarks demonstrate PIM-Net's superiority over current state-of-the-art IML methods. Code: https://github.com/ningnbai/PIM-Net.
Pattern Recognition, Volume 159, Article 111136.
Citations: 0