Journal of Visual Communication and Image Representation: Latest Publications

A memory access number constraint-based string prediction technique for high throughput SCC implemented in AVS3
IF 2.6 | CAS Zone 4 | Computer Science
Journal of Visual Communication and Image Representation | Pub Date: 2024-11-03 | DOI: 10.1016/j.jvcir.2024.104338
Liping Zhao, Zuge Yan, Keli Hu, Sheng Feng, Jiangda Wang, Xueyan Cao, Tao Lin
Abstract: String prediction (SP) is a highly efficient screen content coding (SCC) tool that has been adopted in international and Chinese video coding standards. SP offers a flexible and efficient way to predict repetitive matching patterns. However, SP also suffers from low throughput of decoded display-output pixels per memory access (which is synchronized with the decoder clock), because decoding an SP coding unit for display requires a high number of memory accesses. Even in state-of-the-art (SOTA) SP, the worst case requires two memory accesses, across two memory access units, to decode each 4-pixel basic string unit, resulting in a throughput as low as two pixels per memory access (PPMA). To solve this problem, we are the first to propose a technique called memory access number constraint-based string prediction (MANC-SP) to achieve high-throughput SCC. First, a novel MANC-SP framework is proposed; a well-designed memory access number constraint rule is established on the basis of statistical data; and a constrained RDO-based string searching method is presented. Compared with the existing SOTA SP, experimental results demonstrate that MANC-SP improves throughput from 2 to 2.67 PPMA, a 33.33% improvement, while maintaining a negligible impact on coding efficiency and complexity. (Volume 105, Article 104338)
Citations: 0
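As a rough illustration of the throughput figures quoted in the abstract, the following Python sketch computes pixels per memory access (PPMA) for the baseline worst case (two accesses per 4-pixel string unit) and for an assumed constrained worst case that reproduces the reported 2.67 PPMA; the actual constraint rule in the paper may differ.

```python
def pixels_per_memory_access(pixels: int, accesses: float) -> float:
    """Worst-case decoder throughput in pixels per memory access (PPMA)."""
    return pixels / accesses

# Figures quoted in the abstract: SOTA SP needs up to 2 memory accesses
# to decode one 4-pixel basic string unit.
print(pixels_per_memory_access(4, 2))   # 2.0 PPMA (baseline worst case)

# MANC-SP caps the allowed accesses; a worst case of 3 accesses per
# 8 decoded pixels (an illustrative reading, not the paper's exact rule)
# reproduces the reported 2.67 PPMA.
print(pixels_per_memory_access(8, 3))   # ~2.67 PPMA after the constraint
```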
Faster-slow network fused with enhanced fine-grained features for action recognition
IF 2.6 | CAS Zone 4 | Computer Science
Journal of Visual Communication and Image Representation | Pub Date: 2024-10-30 | DOI: 10.1016/j.jvcir.2024.104328
Xuegang Wu, Jiawei Zhu, Liu Yang
Abstract: Two-stream methods, which visually separate human actions and backgrounds into temporal and spatial streams, have shown promising results on action recognition datasets. However, prior research emphasizes motion modeling but overlooks the strong correlation between motion features and spatial information, restricting the model's ability to recognize behaviors involving occlusions or rapid changes. We therefore introduce Faster-slow, an improved framework for frame-level motion features. It introduces a Behavioural Feature Enhancement (BFE) module built on a novel two-stream network with different temporal resolutions. BFE consists of two components: MM, which incorporates motion-aware attention to capture dependencies between adjacent frames, and STC, which enhances spatio-temporal and channel information to generate optimized features. Overall, BFE facilitates the extraction of finer-grained motion information while ensuring a stable fusion of information across both streams. We evaluate Faster-slow on the Atomic Visual Actions dataset and on the Faster-AVA dataset constructed in this paper, with promising experimental results. (Volume 105, Article 104328)
Citations: 0
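The abstract names a motion-aware attention component (MM) inside the BFE module without giving its design. The sketch below is one plausible PyTorch reading, assuming adjacent-frame differences are pooled into channel attention weights; it is illustrative only and not the authors' implementation.

```python
import torch
import torch.nn as nn

class MotionAwareAttention(nn.Module):
    """Illustrative motion-aware attention block: frame-to-frame differences
    are squeezed into channel weights that re-scale the clip features."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, H, W) clip features
        b, c, t, h, w = x.shape
        motion = x[:, :, 1:] - x[:, :, :-1]            # adjacent-frame differences
        motion = motion.abs().mean(dim=(2, 3, 4))      # (B, C) global motion energy
        weights = self.fc(motion).view(b, c, 1, 1, 1)  # channel attention weights
        return x * weights                             # motion-weighted features

clip = torch.randn(2, 64, 8, 14, 14)                   # 8-frame feature maps
print(MotionAwareAttention(64)(clip).shape)            # torch.Size([2, 64, 8, 14, 14])
```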
Lightweight macro-pixel quality enhancement network for light field images compressed by versatile video coding
IF 2.6 | CAS Zone 4 | Computer Science
Journal of Visual Communication and Image Representation | Pub Date: 2024-10-30 | DOI: 10.1016/j.jvcir.2024.104329
Hongyue Huang, Chen Cui, Chuanmin Jia, Xinfeng Zhang, Siwei Ma
Abstract: Previous research demonstrated that filtering Macro-Pixels (MPs) in a decoded Light Field Image (LFI) sequence can effectively enhance the quality of the corresponding Sub-Aperture Images (SAIs). In this paper, we propose a deep-learning-based quality enhancement model following the MP-wise processing approach, tailored to LFIs encoded by the Versatile Video Coding (VVC) standard. The proposed Res2Net Quality Enhancement Convolutional Neural Network (R2NQE-CNN) architecture is both lightweight and powerful: Res2Net modules are employed to perform LFI filtering for the first time, implemented with a novel, improved 3D-feature-processing structure. The proposed method uses only 205K model parameters and achieves significant Y-BD-rate reductions over VVC of up to 32%, a relative improvement of up to 33% over the state-of-the-art method, which has more than three times as many parameters as our model. (Volume 105, Article 104329)
Citations: 0
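For readers unfamiliar with MP-wise processing, the sketch below shows the standard rearrangement of sub-aperture images into a macro-pixel image, the layout on which such filters operate; the R2NQE-CNN itself is not reproduced here.

```python
import torch

def sais_to_macro_pixel(sai: torch.Tensor) -> torch.Tensor:
    """Rearrange a stack of sub-aperture images into a macro-pixel image.

    sai: (U, V, H, W), i.e. U x V angular views of H x W pixels each.
    returns: (H * U, W * V), where every macro-pixel groups the U x V angular
    samples sharing one spatial position. Illustrative of the MP layout only.
    """
    u, v, h, w = sai.shape
    mp = sai.permute(2, 0, 3, 1)          # (H, U, W, V)
    return mp.reshape(h * u, w * v)

# Example: a 5x5-view light field with 32x48 sub-aperture images
lf = torch.randn(5, 5, 32, 48)
print(sais_to_macro_pixel(lf).shape)      # torch.Size([160, 240])
```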
TrMLGAN: Transmission MultiLoss Generative Adversarial Network framework for image dehazing
IF 2.6 | CAS Zone 4 | Computer Science
Journal of Visual Communication and Image Representation | Pub Date: 2024-10-28 | DOI: 10.1016/j.jvcir.2024.104324
Pulkit Dwivedi, Soumendu Chakraborty
Abstract: Hazy environments significantly degrade image quality, leading to poor contrast and reduced visibility. Existing dehazing methods often struggle to predict the transmission map, which is crucial for accurate dehazing. This study introduces the Transmission MultiLoss Generative Adversarial Network (TrMLGAN), a novel framework designed to enhance transmission map estimation for improved dehazing. The transmission map is initially computed using a dark-channel-prior-based approach and then refined by the TrMLGAN framework, which leverages Generative Adversarial Networks (GANs). By integrating multiple loss functions (adversarial, pixel-wise similarity, perceptual similarity, and SSIM losses), our method addresses several aspects of image quality, enabling robust dehazing performance without direct dependence on ground-truth images. Evaluations using PSNR, SSIM, FADE, NIQE, BRISQUE, and SSEQ metrics show that TrMLGAN significantly outperforms state-of-the-art methods across the D-HAZY, HSTS, SOTS Outdoor, and NH-HAZE datasets, validating its potential for real-world applications. (Volume 105, Article 104324)
Citations: 0
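The multi-loss idea can be sketched as a weighted sum of the terms listed in the abstract. The snippet below combines an adversarial, a pixel-wise, and an SSIM term with assumed weights; the perceptual (VGG-feature) term is omitted to keep the example self-contained, so this is an illustration rather than the TrMLGAN objective itself.

```python
import torch
import torch.nn.functional as F

def ssim(x: torch.Tensor, y: torch.Tensor, win: int = 11,
         c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """Mean SSIM with a uniform window (a common simplification of the
    Gaussian-window formulation)."""
    pad = win // 2
    mu_x = F.avg_pool2d(x, win, 1, pad)
    mu_y = F.avg_pool2d(y, win, 1, pad)
    var_x = F.avg_pool2d(x * x, win, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def generator_loss(disc_fake_logits, dehazed, clear_gt,
                   w_adv=1.0, w_pix=10.0, w_ssim=1.0):
    """Weighted multi-loss objective in the spirit of TrMLGAN (weights assumed)."""
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))  # fool the critic
    pix = F.l1_loss(dehazed, clear_gt)                         # pixel-wise similarity
    struct = 1.0 - ssim(dehazed, clear_gt)                     # structural term
    return w_adv * adv + w_pix * pix + w_ssim * struct

# Random tensors stand in for generator / discriminator outputs
dehazed, clear = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(generator_loss(torch.randn(1, 1), dehazed, clear).item())
```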
Video Question Answering: A survey of the state-of-the-art
IF 2.6 | CAS Zone 4 | Computer Science
Journal of Visual Communication and Image Representation | Pub Date: 2024-10-28 | DOI: 10.1016/j.jvcir.2024.104320
Jeshmol P.J., Binsu C. Kovoor
Abstract: Video Question Answering (VideoQA) is an emerging topic at the intersection of Artificial Intelligence, Computer Vision, and Natural Language Processing. It involves developing systems capable of understanding, analyzing, and responding to questions about the content of videos. This survey presents an in-depth overview of the current landscape, shedding light on the challenges, methodologies, datasets, and innovative approaches in the domain. The key components of the VideoQA framework include video feature extraction, question processing, reasoning, and response generation. The survey underscores the importance of datasets in shaping VideoQA research and the diversity of question types, from factual inquiries to spatial and temporal reasoning. It highlights ongoing research directions and future prospects for VideoQA and, finally, provides a roadmap for future exploration at the intersection of multiple disciplines, with the ultimate objective of pushing the boundaries of knowledge and innovation. (Volume 105, Article 104320)
Citations: 0
Consistent prototype contrastive learning for weakly supervised person search
IF 2.6 | CAS Zone 4 | Computer Science
Journal of Visual Communication and Image Representation | Pub Date: 2024-10-28 | DOI: 10.1016/j.jvcir.2024.104321
Huadong Lin, Xiaohan Yu, Pengcheng Zhang, Xiao Bai, Jin Zheng
Abstract: Weakly supervised person search addresses detection and re-identification simultaneously without relying on person identity labels. Prototype-based contrastive learning is commonly used for unsupervised person re-identification. We argue that prototypes suffer from spatial, temporal, and label inconsistencies, which result in inaccurate representations. In this paper, we propose a novel Consistent Prototype Contrastive Learning (CPCL) framework to address prototype inconsistency. For spatial inconsistency, a greedy update strategy introduces ground-truth proposals into the training process and updates the memory bank only with ground-truth features. To improve temporal consistency, CPCL employs a local window strategy that computes each prototype within a specific temporal window. To tackle label inconsistency, CPCL adopts a prototype nearest-neighbor consistency method that leverages the intrinsic information of the prototypes to rectify the pseudo-labels. Experimentally, the proposed method shows remarkable performance improvements on both the CUHK-SYSU and PRW datasets, achieving mAPs of 90.2% and 29.3%, respectively, and state-of-the-art performance on CUHK-SYSU. The code will be available on the project website: https://github.com/JackFlying/cpcl. (Volume 105, Article 104321)
Citations: 0
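A minimal sketch of prototype-based contrastive learning with the greedy (ground-truth-only) memory update described in the abstract is given below; the momentum value, temperature, and loss form are assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

class PrototypeMemory:
    """Prototype memory bank: InfoNCE-style loss against cluster prototypes,
    refreshed only by ground-truth proposal features (greedy update)."""
    def __init__(self, num_clusters: int, dim: int,
                 momentum: float = 0.2, temperature: float = 0.05):
        self.protos = F.normalize(torch.randn(num_clusters, dim), dim=1)
        self.m, self.t = momentum, temperature

    def contrastive_loss(self, feats: torch.Tensor, pseudo_labels: torch.Tensor):
        feats = F.normalize(feats, dim=1)
        logits = feats @ self.protos.t() / self.t        # (N, num_clusters)
        return F.cross_entropy(logits, pseudo_labels)

    @torch.no_grad()
    def greedy_update(self, gt_feats: torch.Tensor, pseudo_labels: torch.Tensor):
        # only ground-truth box features update the prototypes
        gt_feats = F.normalize(gt_feats, dim=1)
        for f, y in zip(gt_feats, pseudo_labels):
            self.protos[y] = F.normalize(
                (1 - self.m) * self.protos[y] + self.m * f, dim=0)

mem = PrototypeMemory(num_clusters=100, dim=256)
feats = torch.randn(8, 256, requires_grad=True)
labels = torch.randint(0, 100, (8,))
print(mem.contrastive_loss(feats, labels).item())
mem.greedy_update(feats.detach(), labels)
```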
MT-Net: Single image dehazing based on meta learning, knowledge transfer and contrastive learning
IF 2.6 | CAS Zone 4 | Computer Science
Journal of Visual Communication and Image Representation | Pub Date: 2024-10-28 | DOI: 10.1016/j.jvcir.2024.104325
Jianlei Liu, Bingqing Yang, Shilong Wang, Maoli Wang
Abstract: Single image dehazing is becoming increasingly important, as its results affect the efficiency of subsequent computer vision tasks. While many methods have been proposed, existing dehazing approaches often show limited adaptability to different types of images and lack future learnability. In light of this, we propose a dehazing network based on meta-learning, knowledge transfer, and contrastive learning, abbreviated MT-Net. We combine knowledge transfer with meta-learning to tackle these challenges and enhance the network's generalization performance. We refine the knowledge-transfer structure by introducing a two-phase approach that facilitates learning under the guidance of teacher networks and learning-committee networks. We also optimize the negative examples used in contrastive learning to reduce the contrast space. Extensive experiments on synthetic and real datasets demonstrate the remarkable performance of our method in both quantitative and qualitative comparisons. The code has been released on https://github.com/71717171fan/MT-Net. (Volume 105, Article 104325)
Citations: 0
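The teacher-guided knowledge-transfer idea can be illustrated with a simple distillation objective: the student's dehazed output is pulled toward both the ground truth and a frozen teacher's prediction. The weighting below is an assumption, and the contrastive and meta-learning components are not modeled.

```python
import torch
import torch.nn.functional as F

def transfer_loss(student_out: torch.Tensor, teacher_out: torch.Tensor,
                  clear_gt: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Illustrative knowledge-transfer objective: supervised term plus a
    distillation term toward the frozen teacher (weight alpha assumed)."""
    supervised = F.l1_loss(student_out, clear_gt)
    distill = F.l1_loss(student_out, teacher_out.detach())
    return supervised + alpha * distill

student = torch.rand(1, 3, 64, 64, requires_grad=True)
teacher, clear = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(transfer_loss(student, teacher, clear).item())
```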
Human gait recognition using joint spatiotemporal modulation in deep convolutional neural networks
IF 2.6 | CAS Zone 4 | Computer Science
Journal of Visual Communication and Image Representation | Pub Date: 2024-10-28 | DOI: 10.1016/j.jvcir.2024.104322
Mohammad Iman Junaid, Allam Jaya Prakash, Samit Ari
Abstract: Gait, a person's distinctive walking pattern, offers a promising biometric modality for surveillance applications. Unlike fingerprints or iris scans, gait can be captured from a distance without the subject's direct cooperation or awareness, making it ideal for surveillance and security applications. Traditional convolutional neural networks (CNNs) often struggle with the inherent variations within video data, limiting their effectiveness in gait recognition. The proposed technique introduces a joint spatial–temporal modulation network designed to overcome this limitation. By extracting discriminative feature representations across frame levels, the network effectively leverages both spatial and temporal variations within video sequences. The architecture integrates attention-based CNNs for spatial feature extraction and a Bidirectional Long Short-Term Memory (Bi-LSTM) network with a temporal attention module to analyze temporal dynamics. Attention in the spatial and temporal blocks helps the network focus on the most relevant segments of the video, improving efficiency when processing complex gait videos. We evaluated the proposed network on two major datasets, CASIA-B and OU-MVLP. On CASIA-B, it achieves average rank-1 accuracies of 98.20% for normal walking, 94.50% for walking with a bag, and 80.40% for the clothing scenario; on OU-MVLP it achieves 89.10%. These results show the proposed method's ability to generalize to large-scale data and consistently outperform current state-of-the-art gait recognition techniques. (Volume 105, Article 104322)
Citations: 0
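The pipeline described in the abstract (per-frame CNN features, a Bi-LSTM, and temporal attention pooling) can be sketched as follows; layer sizes, the attention form, and the silhouette input format are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class GaitSpatioTemporalNet(nn.Module):
    """Sketch: CNN spatial encoder -> Bi-LSTM -> temporal attention pooling."""
    def __init__(self, num_classes: int, feat_dim: int = 128, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-frame spatial encoder
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)           # temporal attention scores
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, 1, H, W) gait silhouette sequence
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(b, t, -1)   # (B, T, feat_dim)
        h, _ = self.bilstm(f)                          # (B, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)         # (B, T, 1) attention weights
        pooled = (w * h).sum(dim=1)                    # attention-weighted pooling
        return self.fc(pooled)

model = GaitSpatioTemporalNet(num_classes=124)
logits = model(torch.randn(2, 16, 1, 64, 44))          # CASIA-B-like silhouettes
print(logits.shape)                                    # torch.Size([2, 124])
```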
Crowd counting network based on attention feature fusion and multi-column feature enhancement
IF 2.6 | CAS Zone 4 | Computer Science
Journal of Visual Communication and Image Representation | Pub Date: 2024-10-28 | DOI: 10.1016/j.jvcir.2024.104323
Qian Liu, Yixiong Zhong, Jiongtao Fang
Abstract: Density map estimation is commonly used for crowd counting. Used alone, however, it may make some individuals difficult to recognize because of target occlusion, scale variation, complex backgrounds, and heterogeneous distribution. To alleviate these problems, we propose a two-stage crowd counting network based on attention feature fusion and multi-column feature enhancement (AFF-MFE-TNet). In the first stage, AFF-MFE-TNet transforms the input image into a probability map. In the second stage, a multi-column feature enhancement module enlarges the receptive fields to enhance features, a dual attention feature fusion module adaptively fuses features of different scales through attention mechanisms, and a triple counting loss is introduced so that AFF-MFE-TNet can better fit the ground-truth probability maps and density maps, improving counting performance. Experimental results show that AFF-MFE-TNet effectively improves crowd counting accuracy compared with the state of the art. (Volume 105, Article 104323)
Citations: 0
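The multi-column feature enhancement idea (enlarging receptive fields via parallel branches) can be sketched with dilated convolutions as below; the branch count, dilation rates, and fusion are assumptions, not the AFF-MFE-TNet design.

```python
import torch
import torch.nn as nn

class MultiColumnFeatureEnhancement(nn.Module):
    """Sketch: parallel dilated-convolution branches enlarge the receptive
    field; their outputs are fused back to the input width as a residual."""
    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)                      # growing receptive fields
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return x + self.fuse(feats)                 # residual enhancement

block = MultiColumnFeatureEnhancement(64)
print(block(torch.randn(1, 64, 96, 128)).shape)     # torch.Size([1, 64, 96, 128])
```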
MVP-HOT: A Moderate Visual Prompt for Hyperspectral Object Tracking
IF 2.6 | CAS Zone 4 | Computer Science
Journal of Visual Communication and Image Representation | Pub Date: 2024-10-26 | DOI: 10.1016/j.jvcir.2024.104326
Lin Zhao, Shaoxiong Xie, Jia Li, Ping Tan, Wenjin Hu
Abstract: The growing attention to hyperspectral object tracking (HOT) can be attributed to the extended spectral information available in hyperspectral images (HSIs), especially in complex scenarios, which makes it a promising alternative to traditional RGB-based tracking. However, the scarcity of large hyperspectral datasets makes it challenging to train robust hyperspectral trackers with deep learning methods. Prompt learning, a paradigm that emerged from large language models, adapts or fine-tunes a pre-trained model to a specific downstream task by providing task-specific inputs. Inspired by the recent success of prompt learning in language and vision tasks, we propose a novel and efficient prompt learning method for HOT, termed Moderate Visual Prompt for HOT (MVP-HOT). Specifically, MVP-HOT freezes the parameters of the pre-trained model and employs HSIs as visual prompts to leverage the knowledge of the underlying RGB model. We further develop a moderate and effective strategy to incrementally adapt the HSI prompt information. The proposed method uses only a few (1.7M) learnable parameters and, as demonstrated by extensive experiments, achieves state-of-the-art performance on three hyperspectral datasets. (Volume 105, Article 104326)
Citations: 0
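The frozen-backbone-plus-learnable-prompt idea can be sketched as follows, assuming a ResNet-18 stand-in for the pre-trained RGB model and a small convolutional prompt network over the spectral bands; both are illustrative choices, not the MVP-HOT architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class HSIPromptTracker(nn.Module):
    """Sketch: a frozen RGB backbone is reused; only a small prompt network
    that folds the extra spectral bands into the RGB input is trained."""
    def __init__(self, num_bands: int = 16):
        super().__init__()
        self.backbone = resnet18(weights=None)
        for p in self.backbone.parameters():        # freeze the RGB model
            p.requires_grad = False
        self.prompt = nn.Sequential(                # learnable spectral prompt
            nn.Conv2d(num_bands, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 3, 3, padding=1),
        )

    def forward(self, hsi: torch.Tensor) -> torch.Tensor:
        # hsi: (B, num_bands, H, W); the first three bands act as a base image
        rgb = hsi[:, :3]
        return self.backbone(rgb + self.prompt(hsi))  # prompt added as residual

model = HSIPromptTracker()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable)   # only the prompt network's parameters are trainable
```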