Pattern Recognition: Latest Articles

PRSN: Prototype resynthesis network with cross-image semantic alignment for few-shot image classification
IF 7.5 · CAS Tier 1 · Computer Science
Pattern Recognition, Volume 159, Article 111122. Pub Date: 2024-11-07. DOI: 10.1016/j.patcog.2024.111122
Mengping Dong, Fei Li, Zhenbo Li, Xue Liu
Abstract: Few-shot image classification aims to learn novel classes from a limited number of labeled samples per class. Recent research mainly focuses on reconstructing a query image from a support set. However, most methods overlook the nearest semantic base parts of support samples, leading to higher intra-class semantic variation. To address this issue, we propose a novel prototype resynthesis network (PRSN) for few-shot image classification that includes global-level and local-level branches. First, the prototype is compounded from semantically similar base parts to enhance the representation. Then, the query set is used to reconstruct the prototypes, further reducing intra-class variations. Additionally, we design a cross-image semantic alignment that enforces global-level and local-level semantic consistency between different query images of the same class. Our empirical results demonstrate that PRSN achieves remarkable performance across a range of widely recognized benchmarks; for instance, it outperforms the second-best method by 0.69% under the 5-way 1-shot setting with a ResNet-12 backbone on the miniImageNet dataset.
Citations: 0
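The listing carries no code; as a rough sketch of the prototype-compounding step the abstract describes, a class prototype can be re-expressed as a similarity-weighted combination of base-class part features. Every name below, including the base-part memory bank and the blending weight, is hypothetical rather than taken from the paper:

```python
import torch
import torch.nn.functional as F

def resynthesize_prototype(support_feats, base_parts, temperature=0.1):
    """Compound a class prototype from semantically similar base parts (sketch).

    support_feats: (n_shot, d) features of one class's support samples.
    base_parts:    (m, d) hypothetical memory bank of base-class part features.
    """
    proto = support_feats.mean(dim=0)                                   # naive mean prototype, (d,)
    sims = F.cosine_similarity(proto.unsqueeze(0), base_parts, dim=1)   # (m,) similarity to each base part
    weights = torch.softmax(sims / temperature, dim=0)                  # attend to the nearest base parts
    resynthesized = weights @ base_parts                                # (d,) weighted recombination
    return 0.5 * proto + 0.5 * resynthesized                            # assumed 50/50 blend

# Toy usage with random tensors.
proto = resynthesize_prototype(torch.randn(1, 64), torch.randn(100, 64))
```

Pulling the prototype toward shared base parts is what would reduce intra-class variation: two noisy one-shot prototypes of the same class land closer together after both are re-expressed over the same part vocabulary.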
Forget to Learn (F2L): Circumventing plasticity–stability trade-off in continuous unsupervised domain adaptation
IF 7.5 · CAS Tier 1 · Computer Science
Pattern Recognition, Volume 159, Article 111139. Pub Date: 2024-11-06. DOI: 10.1016/j.patcog.2024.111139
Mohamed Abubakr Hassan, Chi-Guhn Lee
Abstract: In continuous unsupervised domain adaptation (CUDA), deep learning models struggle with the stability–plasticity trade-off, where the model must forget old knowledge to acquire new knowledge. This paper introduces "Forget to Learn" (F2L), a novel framework that circumvents this trade-off. In contrast to state-of-the-art methods that aim to balance the two conflicting objectives, stability and plasticity, F2L uses active forgetting and knowledge distillation to remove the conflict's root causes. In F2L, dual encoders are trained: the first encoder, the "Specialist", is designed to actively forget, thereby boosting adaptability (i.e., plasticity) and generating high-accuracy pseudo-labels on new domains. These pseudo-labels are then used to transfer and accumulate the Specialist's knowledge to the second encoder, the "Generalist", through conflict-free knowledge distillation. Empirical and ablation studies confirm F2L's superiority on different datasets and against different state-of-the-art methods. Furthermore, F2L minimizes the need for hyperparameter tuning, improves computational and sample efficiency, and excels on problems with long domain sequences; these are key advantages for practical systems constrained by hardware limitations.
Citations: 0
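As a minimal illustration of the Specialist-to-Generalist transfer described above, the sketch below shows one standard knowledge-distillation step driven by the Specialist's pseudo-labels. The dual-encoder setup comes from the abstract; the concrete loss and temperature are assumptions:

```python
import torch
import torch.nn.functional as F

def distill_step(specialist, generalist, x, temperature=2.0):
    """One distillation step in the spirit of F2L (illustrative sketch).

    The 'specialist' adapts freely to the current domain and supplies
    pseudo-labels; the 'generalist' accumulates that knowledge via distillation.
    """
    with torch.no_grad():
        teacher_logits = specialist(x)            # plastic encoder, allowed to forget
        pseudo = teacher_logits.argmax(dim=1)     # pseudo-labels for the new domain
    student_logits = generalist(x)                # stable, knowledge-accumulating encoder
    soft_loss = F.kl_div(                         # match softened teacher distribution
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    hard_loss = F.cross_entropy(student_logits, pseudo)
    return soft_loss + hard_loss
```

Because only the Generalist is trained with this loss, forgetting in the Specialist never overwrites accumulated knowledge, which is the "conflict-free" property the abstract claims.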
Highly realistic synthetic dataset for pixel-level DensePose estimation via diffusion model
IF 7.5 · CAS Tier 1 · Computer Science
Pattern Recognition, Volume 159, Article 111137. Pub Date: 2024-11-06. DOI: 10.1016/j.patcog.2024.111137
Jiaxiao Wen, Tao Chu, Qiong Liu
Abstract: Generating training data with pixel-level annotations for DensePose is labor-intensive, resulting in sparse labeling in real-world datasets. Prior solutions have relied on specialized data generation systems to synthesize datasets. However, these synthetic datasets often lack realism and depend on expensive resources such as human body models and texture mappings. In this paper, we address these challenges with a novel data generation method based on the diffusion model that produces highly realistic data without the need for expensive resources. Specifically, our method comprises annotation generation and image generation. Using graphic renderers and SMPL models, we produce synthetic annotations solely from human poses and shapes. Guided by these annotations, we then employ simple yet effective textual prompts to generate a wide range of realistic images with the diffusion model. Experiments on the DensePose-COCO dataset demonstrate the superiority of our method over existing methods. Code and benchmarks will be released.
Citations: 0
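The paper's exact generation pipeline is not reproduced in this listing; the sketch below only shows the general annotation-conditioned pattern using the Hugging Face diffusers library, with an off-the-shelf pose-conditioned ControlNet standing in for the paper's annotation guidance. The input file and prompt are placeholders:

```python
# pip install diffusers transformers accelerate
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from PIL import Image

# A rendered annotation map (e.g. produced from SMPL poses by a graphics renderer).
condition = Image.open("rendered_pose_map.png")   # placeholder file name

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# A simple textual prompt steers the diffusion model toward realistic imagery,
# while the condition image pins the generated person to the annotated pose.
image = pipe(
    "a photo of a person walking on a city street, photorealistic",
    image=condition,
    num_inference_steps=30,
).images[0]
image.save("synthetic_sample.png")
```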
Associative graph convolution network for point cloud analysis
IF 7.5 · CAS Tier 1 · Computer Science
Pattern Recognition, Volume 159, Article 111152. Pub Date: 2024-11-06. DOI: 10.1016/j.patcog.2024.111152
Xi Yang, Xingyilang Yin, Nannan Wang, Xinbo Gao
Abstract: Since point clouds are the raw output of most 3D sensors, their effective analysis is in huge demand in autonomous driving and robotic manipulation. However, directly processing point clouds is challenging because they are disordered and unstructured geometric data. Recently, numerous graph convolutional neural networks have been proposed to introduce graph structure to point clouds, yet they remain far from perfect. In particular, DGCNN learns the local geometry of points in semantic space and recomputes the graph from nearest neighbors in feature space at each layer. However, it discards all information from the previous graph after each update, neglecting the relations between successive dynamic updates. To this end, we propose an associative graph convolutional neural network (AGCN) that mainly consists of associative graph convolution (AGConv) and two kinds of residual connections. AGConv additionally considers information from the previous graph when computing the edge function on current local neighborhoods in each layer, so it can precisely and continuously capture local geometric features of point clouds. Residual connections further explore the semantic relations between layers for effective learning on point clouds. Extensive experiments on several benchmark datasets show that our network achieves competitive classification and segmentation results.
Citations: 0
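As an illustrative sketch of the "associative" idea, the DGCNN-style edge convolution below additionally concatenates each point's feature from the previous graph into the edge function. The layer shapes and the exact form of the edge function are assumptions, not the paper's definition:

```python
import torch
import torch.nn as nn

def knn(x, k):
    """x: (B, N, d). Return indices (B, N, k) of k nearest neighbors in feature space."""
    dist = torch.cdist(x, x)                                   # (B, N, N) pairwise distances
    return dist.topk(k + 1, largest=False).indices[:, :, 1:]   # drop self-match

class AssociativeEdgeConv(nn.Module):
    """EdgeConv variant that also conditions on features carried over from the
    previous graph, so information is not discarded at each graph update."""
    def __init__(self, d_in, d_prev, d_out, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * d_in + d_prev, d_out), nn.ReLU())

    def forward(self, x, prev):                  # x: (B, N, d_in), prev: (B, N, d_prev)
        B, N, d = x.shape
        idx = knn(x, self.k)                                       # (B, N, k)
        neigh = torch.gather(
            x.unsqueeze(1).expand(B, N, N, d), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, d)
        )                                                          # neighbor features (B, N, k, d)
        center = x.unsqueeze(2).expand_as(neigh)                   # center features, broadcast over k
        prev_c = prev.unsqueeze(2).expand(B, N, self.k, prev.shape[-1])
        edge = torch.cat([center, neigh - center, prev_c], dim=-1) # edge function input
        return self.mlp(edge).max(dim=2).values                    # max-pool over neighbors, (B, N, d_out)

out = AssociativeEdgeConv(64, 64, 128)(torch.randn(2, 1024, 64), torch.randn(2, 1024, 64))
```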
Riding feeling recognition based on multi-head self-attention LSTM for driverless automobile
IF 7.5 · CAS Tier 1 · Computer Science
Pattern Recognition, Volume 159, Article 111135. Pub Date: 2024-11-05. DOI: 10.1016/j.patcog.2024.111135
Xianzhi Tang, Yongjia Xie, Xinlong Li, Bo Wang
Abstract: With the emergence of driverless technology, passenger ride comfort has become an issue of concern. In recent years, driving-fatigue detection and braking-sensation evaluation based on EEG signals have received growing attention, and analyzing ride comfort from EEG signals is likewise an intuitive approach. However, finding an effective method or model to evaluate passenger comfort remains a challenge. In this paper, we propose a long short-term memory (LSTM) network based on a multi-head self-attention mechanism for passenger comfort detection. Applying multi-head attention in the feature extraction process yields more efficient classification. The results show that the LSTM with multi-head self-attention makes decisions efficiently and achieves higher classification accuracy. In conclusion, the proposed multi-head-attention-based classifier performs excellently in EEG classification of different emotional states and has broad prospects in brain-computer interaction.
Citations: 0
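A minimal PyTorch sketch of the architecture family the abstract describes, an LSTM encoder followed by multi-head self-attention over the time axis, is shown below. All layer sizes, channel counts, and the number of comfort classes are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class MHSALSTM(nn.Module):
    """LSTM + multi-head self-attention for EEG-based comfort classification (sketch)."""
    def __init__(self, n_channels=32, hidden=128, heads=8, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, channels)
        h, _ = self.lstm(x)                # temporal features, (batch, time, hidden)
        a, _ = self.attn(h, h, h)          # self-attention re-weights informative time steps
        return self.head(a.mean(dim=1))    # pool over time, predict comfort class

logits = MHSALSTM()(torch.randn(4, 256, 32))   # 4 trials, 256 time samples, 32 electrodes
```

The design point is that the LSTM captures local temporal dynamics while the attention heads let distant, comfort-relevant EEG segments influence the final decision directly.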
Joint utilization of positive and negative pseudo-labels in semi-supervised facial expression recognition
IF 7.5 · CAS Tier 1 · Computer Science
Pattern Recognition, Volume 159, Article 111147. Pub Date: 2024-11-05. DOI: 10.1016/j.patcog.2024.111147
Jinwei Lv, Yanli Ren, Guorui Feng
Abstract: Facial expression recognition has attracted significant attention due to the abundance of unlabeled expressions, and semi-supervised learning aims to exploit unlabeled samples fully. Recent approaches primarily combine an adaptive margin with pseudo-labels to extract hard samples and boost performance. However, the instability of pseudo-labels and the utilization of the remaining unlabeled samples remain critical challenges. We introduce a stable-positive-single and negative-multiple pseudo-label (SPS-NM) method to address these two challenges. All unlabeled samples are properly categorized into three groups by adaptive confidence margins. When a sample's maximum confidence score is high and stable enough, it receives a positive pseudo-label. Conversely, when a sample's confidence scores are low enough, multiple negative pseudo-labels are attached to it, and the quality and quantity of the classes in those negative pseudo-labels are balanced by top-k selection. The remaining unlabeled samples are ambiguous and match no pseudo-label, but they can still be used to extract valuable features through contrastive learning. Comparative experiments and an ablation study on the RAF-DB, AffectNet, and SFEW datasets demonstrate that SPS-NM achieves improvement and state-of-the-art performance in facial expression recognition.
Citations: 0
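The three-way grouping described above can be sketched as follows. Note that the paper uses adaptive per-class confidence margins; the fixed thresholds, stability check, and helper names here are hypothetical simplifications:

```python
import torch

def assign_pseudo_labels(probs, prev_preds, tau_pos=0.95, k=3):
    """Split unlabeled samples into three groups, SPS-NM style (sketch).

    probs:      (B, C) current softmax confidences.
    prev_preds: (B,)   previous-epoch predictions, used as a crude stability check.
    """
    conf, preds = probs.max(dim=1)
    positive = (conf >= tau_pos) & (preds == prev_preds)       # stable AND confident: positive pseudo-label
    # Negative pseudo-labels: the k classes the model is most sure the sample is NOT.
    neg_labels = probs.topk(k, dim=1, largest=False).indices   # (B, k) lowest-confidence classes
    negative = (conf < 0.5) & ~positive                        # low-confidence group gets negatives
    ambiguous = ~positive & ~negative                          # leftover samples: contrastive learning only
    return positive, negative, neg_labels, ambiguous

pos, neg, neg_lbls, amb = assign_pseudo_labels(
    torch.softmax(torch.randn(16, 7), dim=1), torch.randint(0, 7, (16,))
)
```

A negative pseudo-label is trained by pushing the predicted probability of those k classes toward zero, which is a weaker but much more reliable signal than a possibly wrong positive label.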
Self-supervised multimodal change detection based on difference contrast learning for remote sensing imagery
IF 7.5 · CAS Tier 1 · Computer Science
Pattern Recognition, Volume 159, Article 111148. Pub Date: 2024-11-05. DOI: 10.1016/j.patcog.2024.111148
Xuan Hou, Yunpeng Bai, Yefan Xie, Yunfeng Zhang, Lei Fu, Ying Li, Changjing Shang, Qiang Shen
Abstract: Most existing change detection (CD) methods target homogeneous images. However, in real-world scenarios such as disaster management, where CD is urgent and the pre-change and post-change images are typically of different modalities, multimodal change detection (MCD) faces significant challenges. One challenge is that bi-temporal image pairs sourced from distinct sensors introduce an image-domain gap. Another arises because pixel-level annotation of multimodal bi-temporal image pairs requires collaboration among domain experts specialized in different image fields, so annotated samples are scarce. To address these challenges, this paper proposes a novel self-supervised difference contrast learning framework (Self-DCF). The framework trains networks without labeled samples by automatically exploiting the feature information inherent in bi-temporal imagery so that the two temporal views mutually supervise each other. Additionally, a Unified Mapping Unit reduces the domain gap between images of different modalities. The efficiency and robustness of Self-DCF are validated on five popular datasets, on which it outperforms state-of-the-art algorithms.
Citations: 0
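As a sketch of bi-temporal features "mutually supervising each other", the following shows a standard symmetric InfoNCE loss in which matching positions across the two dates form positive pairs and all other positions act as negatives. This illustrates the general contrastive mechanism only, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def mutual_contrast_loss(f_t1, f_t2, temperature=0.07):
    """Symmetric InfoNCE between bi-temporal features (illustrative sketch).

    f_t1, f_t2: (B, d) features of pre- and post-change patches after the
    modality gap has been reduced (e.g. by a shared mapping unit).
    """
    z1 = F.normalize(f_t1, dim=1)
    z2 = F.normalize(f_t2, dim=1)
    logits = z1 @ z2.t() / temperature                     # (B, B) cross-temporal similarities
    targets = torch.arange(z1.size(0), device=z1.device)   # i-th t1 patch pairs with i-th t2 patch
    # Symmetric: supervise t1 with t2 and t2 with t1.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = mutual_contrast_loss(torch.randn(32, 128), torch.randn(32, 128))
```

Under such an objective, unchanged locations pull their two temporal views together while changed locations resist alignment, and that residual difference is the change signal.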
Incremental feature selection: Parallel approach with local neighborhood rough sets and composite entropy
IF 7.5 · CAS Tier 1 · Computer Science
Pattern Recognition, Volume 159, Article 111141. Pub Date: 2024-11-05. DOI: 10.1016/j.patcog.2024.111141
Weihua Xu, Weirui Ye
Abstract: Rough set theory is a powerful mathematical framework for managing uncertainty and is widely used in feature selection. However, traditional rough-set-based feature selection algorithms face significant challenges, especially when processing large-scale incremental data and adapting to dynamic real-world scenarios in which both the data volume and the feature set change continuously. To overcome these limitations, this study proposes an algorithm that integrates local neighborhood rough sets with composite entropy to measure uncertainty in information systems more accurately. By incorporating the decision distribution, composite entropy improves the precision of uncertainty quantification and thereby the effectiveness of feature selection. To further improve performance on large-scale incremental data, matrix operations replace traditional set-based computations, allowing the algorithm to exploit modern hardware for accelerated processing, and parallel computing is integrated to raise computational speed further. An incremental version of the algorithm is also introduced to better adapt to dynamic data environments, increasing its flexibility and practicality. Comprehensive experimental evaluations demonstrate that the proposed algorithm significantly surpasses existing methods in both effectiveness and efficiency.
Citations: 0
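The matrix-operation style the abstract advocates can be illustrated with a vectorized neighborhood relation and lower approximation in NumPy. The neighborhood radius and the use of plain Euclidean distance are assumptions for the sketch:

```python
import numpy as np

def neighborhood_relation(X, delta=0.2):
    """Build the neighborhood relation as one matrix operation (sketch).

    X: (n, m) feature matrix, assumed min-max normalized.
    Returns a boolean (n, n) matrix R where R[i, j] is True iff sample j
    lies within the delta-neighborhood of sample i.
    """
    diff = X[:, None, :] - X[None, :, :]     # (n, n, m) all pairwise differences at once
    dist = np.linalg.norm(diff, axis=2)      # (n, n) Euclidean distances
    return dist <= delta

def lower_approximation(R, y):
    """Neighborhood lower approximation: samples whose entire neighborhood
    shares their decision class (pure neighborhoods)."""
    same_class = y[:, None] == y[None, :]    # (n, n) class agreement matrix
    return np.all(~R | same_class, axis=1)   # neighborhood of i is a subset of i's class

X = np.random.rand(100, 5)
y = np.random.randint(0, 3, size=100)
pure = lower_approximation(neighborhood_relation(X), y)
```

Expressed this way, the whole relation is one broadcasted array computation instead of n nested set scans, which is exactly what lets vectorized libraries and parallel backends accelerate it.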
HTCSigNet: A Hybrid Transformer and Convolution Signature Network for offline signature verification
IF 7.5 · CAS Tier 1 · Computer Science
Pattern Recognition, Volume 159, Article 111146. Pub Date: 2024-11-05. DOI: 10.1016/j.patcog.2024.111146
Lidong Zheng, Da Wu, Shengjie Xu, Yuchen Zheng
Abstract: In offline handwritten signature verification (OHSV), traditional convolutional neural networks (CNNs) and transformers each struggle to capture both global and local features of signatures, and single deep models often suffer from overfitting and poor generalization. To overcome these difficulties, this paper proposes a novel Hybrid Transformer and Convolution Signature Network (HTCSigNet) that captures multi-scale features from signatures. Specifically, HTCSigNet consists of two parts: a transformer-based block and a CNN-based block, which extract global and local features respectively. The CNN-based block comprises a space-to-depth convolution (SPD-Conv) module, which improves feature learning by focusing precisely on signature strokes; a spatial and channel reconstruction convolution (SCConv) module, which enhances generalization by emphasizing distinctive micro-deformation features while reducing attention to common ones; and a convolution module that extracts the shape and morphology of specific strokes along with other local features. The transformer-based block uses a Vision Transformer (ViT) to extract overall shape, layout, general direction, and other global features. After the feature learning stage, writer-dependent (WD) and writer-independent (WI) verification systems are constructed to evaluate the proposed network. Extensive experiments on four public signature datasets, GPDSsynthetic, CEDAR, UTSig, and BHSig260 (Bengali and Hindi), demonstrate that HTCSigNet learns discriminative representations between genuine and skilled forged signatures and achieves state-of-the-art or competitive performance compared with advanced verification systems. Furthermore, HTCSigNet transfers easily to signature datasets in other languages.
Citations: 0
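Of the modules listed, SPD-Conv is the easiest to sketch: downsampling is done by a lossless space-to-depth rearrangement followed by a stride-1 convolution, so thin signature strokes are never skipped over by a stride. A minimal PyTorch version follows; channel sizes and input resolution are assumed, not the paper's:

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth convolution (sketch of the SPD-Conv idea).

    Instead of a strided conv that discards pixels, rearrange each 2x2 block
    of pixels into channels (lossless), then apply a stride-1 convolution.
    """
    def __init__(self, c_in, c_out, scale=2):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(scale)   # (C, H, W) -> (C*s^2, H/s, W/s)
        self.conv = nn.Conv2d(c_in * scale ** 2, c_out, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.unshuffle(x))

out = SPDConv(1, 64)(torch.randn(8, 1, 128, 256))   # grayscale signature crops
print(out.shape)                                    # torch.Size([8, 64, 64, 128])
```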
EGO-LM: An efficient, generic, and out-of-the-box language model for handwritten text recognition
IF 7.5 · CAS Tier 1 · Computer Science
Pattern Recognition, Volume 159, Article 111130. Pub Date: 2024-11-04. DOI: 10.1016/j.patcog.2024.111130
Hongliang Li, Dezhi Peng, Lianwen Jin
Abstract: The language model (LM) plays a crucial role in post-processing for handwritten text recognition (HTR) by capturing linguistic patterns. However, traditional rule-based LMs are inefficient, and recent end-to-end LMs require customized training for each HTR model. To address these limitations, we propose an Efficient, Generic, and Out-of-the-box Language Model (EGO-LM) for HTR. To unlock the out-of-the-box capability of an end-to-end LM, we introduce a vision-limited proxy task that focuses on visual-pattern-agnostic linguistic dependencies during training, enhancing the robustness and generality of the LM. The enhanced capability also enables EGO-LM to iteratively refine its own output for a further accuracy boost without additional tuning. Moreover, we introduce a Diverse-Corpus Online Handwriting dataset (DCOH-120K) with more diverse corpus types and more samples than existing datasets, including 83,142 Chinese and 39,398 English text lines. Extensive experiments demonstrate that EGO-LM attains state-of-the-art performance while achieving up to 613× acceleration. The DCOH-120K dataset is available at .
Citations: 0
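The "vision-limited proxy task" can be pictured as training the LM on clean text corrupted with visually plausible substitutions, rather than on any one recognizer's real error patterns, so the LM learns linguistic dependencies that transfer to unseen HTR models. The confusion table and corruption rate below are invented for illustration:

```python
import random

# Hypothetical visually confusable character pairs; a real table would be larger
# and derived from glyph similarity rather than hand-picked.
CONFUSABLE = {"o": "0", "l": "1", "e": "c", "a": "o", "rn": "m"}

def vision_limited_corrupt(text, p=0.15, seed=None):
    """Corrupt clean text with visually plausible substitutions (sketch).

    Training an LM to map corrupted -> clean text teaches it to fix errors
    from linguistic context alone, independent of any specific recognizer.
    """
    rng = random.Random(seed)
    out, i = [], 0
    while i < len(text):
        two, one = text[i:i + 2], text[i]
        if two in CONFUSABLE and rng.random() < p:      # multi-char confusions first
            out.append(CONFUSABLE[two]); i += 2
        elif one in CONFUSABLE and rng.random() < p:
            out.append(CONFUSABLE[one]); i += 1
        else:
            out.append(one); i += 1
    return "".join(out)

pair = ("the morning train", vision_limited_corrupt("the morning train", seed=0))
```

Because the corruption never encodes one recognizer's quirks, the trained LM can post-process any HTR model's output out of the box, and re-feeding its own correction supports the iterative refinement the abstract mentions.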