Journal of Visual Communication and Image Representation: Latest Articles

Cell tracking-by-detection using elliptical bounding boxes
IF 2.6 | CAS Quartile 4 | Computer Science
Journal of Visual Communication and Image Representation Pub Date : 2025-03-14 DOI: 10.1016/j.jvcir.2025.104425
Lucas N. Kirsten, Cláudio R. Jung
{"title":"Cell tracking-by-detection using elliptical bounding boxes","authors":"Lucas N. Kirsten,&nbsp;Cláudio R. Jung","doi":"10.1016/j.jvcir.2025.104425","DOIUrl":"10.1016/j.jvcir.2025.104425","url":null,"abstract":"<div><div>Cell detection and tracking are crucial for bio-analysis. Current approaches rely on the tracking-by-model evolution paradigm, where end-to-end deep learning models are trained for cell detection and tracking. However, such methods require extensive amounts of annotated data, which is time-consuming and often requires specialized annotators. The proposed method involves approximating cell shapes as oriented ellipses and utilizing generic-purpose-oriented object detectors for cell detection to alleviate the requirement of annotated data. A global data association algorithm is then employed to explore temporal cell similarity using probability distance metrics, considering that the ellipses relate to two-dimensional Gaussian distributions. The results of this study suggest that the proposed tracking-by-detection paradigm is a viable alternative for cell tracking. The method achieves competitive results and reduces the dependency on extensive annotated data, addressing a common challenge in current cell detection and tracking approaches. Our code is publicly available at <span><span>https://github.com/LucasKirsten/Deep-Cell-Tracking-EBB</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"108 ","pages":"Article 104425"},"PeriodicalIF":2.6,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143631697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
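The key computational step here is comparing detections across frames as probability distributions. A minimal sketch of that idea, assuming a standard ellipse-to-Gaussian mapping and the Bhattacharyya distance as the probability metric (the paper may use a different distance or parameterization):

```python
import numpy as np

def ellipse_to_gaussian(cx, cy, a, b, theta):
    """Map an oriented ellipse (center, semi-axes a >= b, angle in radians)
    to a 2D Gaussian: mean = center, covariance from axes and orientation."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    D = np.diag([a**2, b**2])          # axis lengths become variances
    mu = np.array([cx, cy])
    cov = R @ D @ R.T
    return mu, cov

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two 2D Gaussians; a small distance
    suggests the two detections belong to the same cell track."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu2 - mu1
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

# Example: two overlapping elliptical detections in consecutive frames.
mu1, cov1 = ellipse_to_gaussian(10, 10, 6, 3, 0.30)
mu2, cov2 = ellipse_to_gaussian(12, 11, 6, 3, 0.35)
print(bhattacharyya_distance(mu1, cov1, mu2, cov2))
```

Pairwise distances like this would feed the global data association step as matching costs.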
Transformer-based weakly supervised 3D human pose estimation
IF 2.6 | CAS Quartile 4 | Computer Science
Journal of Visual Communication and Image Representation Pub Date : 2025-03-14 DOI: 10.1016/j.jvcir.2025.104432
Xiao-guang Wu, Hu-jie Xie, Xiao-chen Niu, Chen Wang, Ze-lei Wang, Shi-wen Zhang, Yu-ze Shan
{"title":"Transformer-based weakly supervised 3D human pose estimation","authors":"Xiao-guang Wu ,&nbsp;Hu-jie Xie ,&nbsp;Xiao-chen Niu ,&nbsp;Chen Wang ,&nbsp;Ze-lei Wang ,&nbsp;Shi-wen Zhang ,&nbsp;Yu-ze Shan","doi":"10.1016/j.jvcir.2025.104432","DOIUrl":"10.1016/j.jvcir.2025.104432","url":null,"abstract":"<div><div>Deep learning-based 3D human pose estimation methods typically require large amounts of 3D pose annotations. However, due to limitations in data quality and the scarcity of 3D labeled data, researchers have adopted weak supervision methods to reduce the demand for annotated data. Compared to traditional approaches, Transformers have recently achieved remarkable success in 3D human pose estimation. Leveraging their powerful modeling and generalization capabilities, Transformers effectively capture patterns and features in the data, even under limited data conditions, mitigating the issue of data scarcity. Nonetheless, the Transformer architecture struggles to capture long-term dependencies and spatio-temporal correlations between joints when processing spatio-temporal features, which limits its ability to model temporal and spatial relationships comprehensively. To address these challenges and better utilize limited labeled data under weak supervision, we proposed an improved Transformer-based model. By grouping joints according to body parts, we enhanced the spatio-temporal correlations between joints. Additionally, the integration of LSTM captures long-term dependencies, improving temporal sequence modeling and enabling the generation of accurate 3D poses from limited data. These structural improvements, combined with weak supervision strategies, enhance the model’s performance while reducing the reliance on extensive 3D annotations. Furthermore, a multi-hypothesis strategy and temporal smoothness consistency constraints were employed to regulate variations between adjacent time steps. Comparisons on the Human3.6M and HumanEva datasets validate the effectiveness of our approach.</div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"109 ","pages":"Article 104432"},"PeriodicalIF":2.6,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143654646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
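As a rough illustration of the part-grouping and LSTM ideas, the sketch below lifts 2D joint sequences to 3D. The joint grouping, dimensions, and wiring are assumptions made for the example, not the paper's architecture, which additionally uses a Transformer, a multi-hypothesis strategy, and smoothness constraints:

```python
import torch
import torch.nn as nn

# Hypothetical grouping of 17 COCO-style joints into five body parts;
# the paper's exact grouping is not specified here.
PART_GROUPS = {
    "torso":     [0, 1, 2, 3, 4],
    "left_arm":  [5, 7, 9],
    "right_arm": [6, 8, 10],
    "left_leg":  [11, 13, 15],
    "right_leg": [12, 14, 16],
}

class PartLSTMLifter(nn.Module):
    """Lift 2D joint sequences to 3D: per-part embeddings strengthen
    intra-part spatial correlation, then an LSTM models long-term
    temporal dependencies. A minimal sketch, not the full model."""
    def __init__(self, d_model=64, n_joints=17):
        super().__init__()
        self.embeds = nn.ModuleDict({
            name: nn.Linear(2 * len(idx), d_model)
            for name, idx in PART_GROUPS.items()
        })
        self.lstm = nn.LSTM(d_model * len(PART_GROUPS), d_model * 2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(d_model * 4, n_joints * 3)

    def forward(self, x):                  # x: (B, T, 17, 2) 2D poses
        parts = [self.embeds[name](x[:, :, idx, :].flatten(2))
                 for name, idx in PART_GROUPS.items()]
        h, _ = self.lstm(torch.cat(parts, dim=-1))  # temporal modeling
        out = self.head(h)                          # (B, T, 17 * 3)
        return out.reshape(x.shape[0], x.shape[1], -1, 3)

poses2d = torch.randn(2, 27, 17, 2)        # batch of 27-frame clips
print(PartLSTMLifter()(poses2d).shape)     # torch.Size([2, 27, 17, 3])
```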
Joint reference frame synthesis and post filter enhancement for Versatile Video Coding
IF 2.6 | CAS Quartile 4 | Computer Science
Journal of Visual Communication and Image Representation Pub Date : 2025-03-13 DOI: 10.1016/j.jvcir.2025.104433
Weijie Bao, Yuantong Zhang, Jianghao Jia, Zhenzhong Chen, Shan Liu
{"title":"Joint reference frame synthesis and post filter enhancement for Versatile Video Coding","authors":"Weijie Bao ,&nbsp;Yuantong Zhang ,&nbsp;Jianghao Jia ,&nbsp;Zhenzhong Chen ,&nbsp;Shan Liu","doi":"10.1016/j.jvcir.2025.104433","DOIUrl":"10.1016/j.jvcir.2025.104433","url":null,"abstract":"<div><div>This paper presents the joint reference frame synthesis (RFS) and post-processing filter enhancement (PFE) for Versatile Video Coding (VVC), aiming to explore the combination of different neural network-based video coding (NNVC) tools to better utilize the hierarchical bi-directional coding structure of VVC. Both RFS and PFE utilize the Space–Time Enhancement Network (STENet), which receives two input frames with artifacts and produces two enhanced frames with suppressed artifacts, along with an intermediate synthesized frame. STENet comprises two pipelines, the synthesis pipeline and the enhancement pipeline, tailored for different purposes. During RFS, two reconstructed frames are sent into STENet’s synthesis pipeline to synthesize a virtual reference frame, similar to the current to-be-coded frame. The synthesized frame serves as an additional reference frame inserted into the reference picture list (RPL). During PFE, two reconstructed frames are fed into STENet’s enhancement pipeline to alleviate their artifacts and distortions, resulting in enhanced frames with reduced artifacts and distortions. To reduce inference complexity, we propose joint inference of RFS and PFE (JISE), achieved through a single execution of STENet. Integrated into the VVC reference software VTM-15.0, RFS, PFE, and JISE are coordinated within a novel Space–Time Enhancement Window (STEW) under Random Access (RA) configuration. The proposed method could achieve –7.34%/–17.21%/–16.65% BD-rate (PSNR) on average for three components under RA configuration.</div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"108 ","pages":"Article 104433"},"PeriodicalIF":2.6,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143631730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
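A toy sketch of the JISE idea, i.e., one network execution serving both tools: two reconstructed frames go in, two enhanced frames (for PFE) and one synthesized virtual reference frame (for RFS) come out. The network body is a placeholder, not STENet:

```python
import torch
import torch.nn as nn

class TinySTENet(nn.Module):
    """Toy stand-in for STENet: from two reconstructed frames it returns
    (enhanced frame 0, synthesized middle frame, enhanced frame 1), so a
    single forward pass serves both RFS and PFE, the JISE idea.
    Purely illustrative; the actual architecture is not reproduced."""
    def __init__(self, ch=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 9, 3, padding=1))   # 3 outputs x 3 channels

    def forward(self, f0, f1):                # f0, f1: (B, 3, H, W)
        out = self.backbone(torch.cat([f0, f1], dim=1))
        enh0, synth, enh1 = out.chunk(3, dim=1)
        return f0 + enh0, synth, f1 + enh1    # residual enhancement

f0, f1 = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
enh0, virtual_ref, enh1 = TinySTENet()(f0, f1)

# RFS: the synthesized frame joins the reference picture list (RPL) as
# an extra candidate; PFE: the enhanced frames replace the outputs.
reference_picture_list = [enh0, enh1, virtual_ref]
```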
Two-tiered Spatio-temporal Feature Extraction for Micro-expression Classification
IF 2.6 | CAS Quartile 4 | Computer Science
Journal of Visual Communication and Image Representation Pub Date : 2025-03-12 DOI: 10.1016/j.jvcir.2025.104436
Ankita Jain, Dhananjoy Bhakta, Prasenjit Dey
{"title":"Two-tiered Spatio-temporal Feature Extraction for Micro-expression Classification","authors":"Ankita Jain ,&nbsp;Dhananjoy Bhakta ,&nbsp;Prasenjit Dey","doi":"10.1016/j.jvcir.2025.104436","DOIUrl":"10.1016/j.jvcir.2025.104436","url":null,"abstract":"<div><div>This paper proposed a framework called DAuLiLSTM (<strong>DAu</strong>Vi + <strong>LiLSTM</strong>) for Micro-expression (ME) classification. It extracts spatio-temporal (ST) features through two novel components: dynamic image of augmented video (DAuVi) and Lightnet with LSTM (LiLSTM). The first component presents a unique strategy to generate multiple dynamic images of each original ME video that contain the relevant ST features. It proposes an algorithm that works as a sliding window and ensures the incorporation of the apex frame in each dynamic image. The second component further processes those images to extract additional ST features. The LiLSTM consists of two deep networks: Lightnet and LSTM. The Lightnet extracts the spatial information and LSTM learns the temporal sequences. A combination of both components extracts ST features sequentially twice and ensures that the model captures all ST features. We found that our model outperforms 14 state-of-the-art techniques in accuracy and F1-score on three ME datasets.</div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"109 ","pages":"Article 104436"},"PeriodicalIF":2.6,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143637408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
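The DAuVi component can be illustrated with the widely used approximate rank pooling formulation of dynamic images (Bilen et al., 2016); the window size, stride, and apex-inclusion rule below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def dynamic_image(frames):
    """Approximate rank pooling (Bilen et al., 2016): collapse a window
    of frames into one 'dynamic image' with weights alpha_t = 2t - T - 1."""
    T = len(frames)
    alphas = 2 * np.arange(1, T + 1) - T - 1
    return np.tensordot(alphas, np.asarray(frames, dtype=np.float64), axes=1)

def sliding_dynamic_images(frames, apex_idx, win=5, stride=2):
    """Generate multiple dynamic images from one micro-expression clip,
    keeping only windows that contain the apex frame, as DAuVi requires."""
    out = []
    for start in range(0, len(frames) - win + 1, stride):
        if start <= apex_idx < start + win:  # apex must fall in the window
            out.append(dynamic_image(frames[start:start + win]))
    return out

clip = [np.random.rand(64, 64) for _ in range(20)]  # toy grayscale clip
dis = sliding_dynamic_images(clip, apex_idx=9)
print(len(dis), dis[0].shape)
```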
A robust and adaptive framework with space–time memory networks for Visual Object Tracking
IF 2.6 | CAS Quartile 4 | Computer Science
Journal of Visual Communication and Image Representation Pub Date : 2025-03-11 DOI: 10.1016/j.jvcir.2025.104431
Yu Zheng, Yong Liu, Xun Che
{"title":"A robust and adaptive framework with space–time memory networks for Visual Object Tracking","authors":"Yu Zheng,&nbsp;Yong Liu,&nbsp;Xun Che","doi":"10.1016/j.jvcir.2025.104431","DOIUrl":"10.1016/j.jvcir.2025.104431","url":null,"abstract":"<div><div>These trackers based on the space–time memory network locate the target object in the search image employing contextual information from multiple memory frames and their corresponding foreground–background features. It is conceivable that these trackers are susceptible to the memory frame quality as well as the accuracy of the corresponding foreground labels. In the previous works, experienced methods are employed to obtain memory frames from historical frames, which hinders the improvement of generalization and performance. To address the above limitations, we propose a robust and adaptive extraction strategy for memory frames to ensure that the most representative historical frames are selected into the set of memory frames to increase the accuracy of localization and reduce failures due to error accumulation. Specifically, we propose an extraction network to evaluate historical frames, where historical frames with the highest score are labeled as the memory frame and conversely dropped. Qualitative and quantitative analyses were implemented on multiple datasets (OTB100, LaSOT and GOT-10K), and the proposed method obtains significant gain in performance over the previous works, especially for challenging scenarios. while bringing only a negligible inference speed degradation, otherwise, the proposed method obtains competitive results compared to other state-of-the-art (SOTA) methods.</div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"108 ","pages":"Article 104431"},"PeriodicalIF":2.6,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143619607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
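A minimal sketch of score-based memory selection, with a placeholder scoring head standing in for the paper's extraction network:

```python
import torch
import torch.nn as nn

class FrameScorer(nn.Module):
    """Toy stand-in for the extraction network: map a frame's feature
    map to a scalar quality score. The architecture is illustrative."""
    def __init__(self, ch=256):
        super().__init__()
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(ch, 1))

    def forward(self, feats):                # feats: (N, ch, H, W)
        return self.head(feats).squeeze(-1)  # (N,) scores

def select_memory_frames(history_feats, scorer, k=3):
    """Adaptive memory selection: keep the k highest-scoring historical
    frames instead of relying on fixed-interval heuristics."""
    scores = scorer(history_feats)
    topk = torch.topk(scores, k=min(k, len(scores))).indices
    return history_feats[topk], topk

feats = torch.randn(10, 256, 8, 8)           # features of 10 past frames
memory, idx = select_memory_frames(feats, FrameScorer())
print(memory.shape, idx)
```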
Knowledge-guided quantization-aware training for EEG-based emotion recognition
IF 2.6 | CAS Quartile 4 | Computer Science
Journal of Visual Communication and Image Representation Pub Date : 2025-03-05 DOI: 10.1016/j.jvcir.2025.104415
Sheng-hua Zhong, Jiahao Shi, Yi Wang
{"title":"Knowledge-guided quantization-aware training for EEG-based emotion recognition","authors":"Sheng-hua Zhong ,&nbsp;Jiahao Shi ,&nbsp;Yi Wang","doi":"10.1016/j.jvcir.2025.104415","DOIUrl":"10.1016/j.jvcir.2025.104415","url":null,"abstract":"<div><div>Emotion recognition is of paramount importance in various domains. In recent years, the use of models that employ electroencephalogram data as input has seen substantial achievements. However, the increasing complexity of these EEG models presents substantial challenges that hinder their deployment in resource-limited environments. This situation emphasizes the critical need for effective model compression. However, extreme compression often leads to significant degradation in model performance. To address this issue, we propose a novel Knowledge-Guided Quantization-Aware Training method for EEG-based emotion recognition task. This method integrates knowledge from emotional neuroscience into the quantization process, emphasizing the importance of the prefrontal cortex part in the EEG sample selection process to construct the calibration set and successfully enhance the performance of Quantization-Aware Training techniques. Experimental results demonstrate that our proposed framework achieves quantization to 8 bits, which leads to surpassing SOTAs in EEG-based emotion recognition. The source code is made available at: <span><span>https://github.com/Stewen24/KGCC</span><svg><path></path></svg></span> .</div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"108 ","pages":"Article 104415"},"PeriodicalIF":2.6,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143578396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
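A sketch of knowledge-guided calibration-set construction; the channel indices and energy-based scoring rule are assumptions for illustration, not the paper's exact criterion:

```python
import numpy as np

# Hypothetical indices of prefrontal-cortex channels in a 62-channel
# EEG montage (e.g., Fp1, Fpz, Fp2, AF3, AF4); the paper's exact
# channel set and scoring rule are not reproduced here.
PREFRONTAL_CH = [0, 1, 2, 3, 4]

def prefrontal_score(sample):
    """Score one EEG sample (channels x time) by the energy of its
    prefrontal channels, reflecting the neuroscience prior that
    prefrontal activity is especially informative for emotion."""
    return float(np.mean(sample[PREFRONTAL_CH] ** 2))

def build_calibration_set(samples, n_calib=64):
    """Knowledge-guided calibration: pick the samples with the strongest
    prefrontal activity to calibrate quantization ranges, instead of
    sampling uniformly at random."""
    scores = np.array([prefrontal_score(s) for s in samples])
    keep = np.argsort(scores)[::-1][:n_calib]
    return [samples[i] for i in keep]

data = [np.random.randn(62, 200) for _ in range(500)]  # toy EEG epochs
calib = build_calibration_set(data)
print(len(calib), calib[0].shape)
```

The selected set would then drive observer calibration in a standard quantization-aware training loop.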
DUWS Net: Wavelet-based dual U-shaped spatial-frequency fusion transformer network for medical image segmentation
IF 2.6 | CAS Quartile 4 | Computer Science
Journal of Visual Communication and Image Representation Pub Date : 2025-03-01 DOI: 10.1016/j.jvcir.2025.104428
Liang Zhu, Kuan Shen, Guangwen Wang, Yujie Hao, Lijun Zheng, Yanping Lu
{"title":"DUWS Net: Wavelet-based dual U-shaped spatial-frequency fusion transformer network for medical image segmentation","authors":"Liang Zhu ,&nbsp;Kuan Shen ,&nbsp;Guangwen Wang ,&nbsp;Yujie Hao ,&nbsp;Lijun Zheng ,&nbsp;Yanping Lu","doi":"10.1016/j.jvcir.2025.104428","DOIUrl":"10.1016/j.jvcir.2025.104428","url":null,"abstract":"<div><div>Medical image segmentation is crucial for disease monitoring, diagnosis, and treatment planning. However, due to the complexity of medical images and their rich frequency information, networks face challenges in segmenting regions of interest using single-domain information. This study proposes a wavelet-transform-based dual U-Net fusion Transformer network for medical image segmentation, aiming to address the shortcomings of current methods. The network supplements spatial information through an external U-Net encoder-decoder structure, enabling deeper extraction of spatial features from the images. The internal U-shaped structure utilizes wavelet transform to capture low-frequency and high-frequency components of feature maps, performing linear self-attention interactions between these frequencies. This allows the network to learn global structures from low frequencies while capturing detailed features from high frequencies. Finally, spatial and frequency domain features are fused through alternating weighting based on spatial and channel dimensions. Experimental results show that the proposed method outperforms traditional single-domain segmentation models.</div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"108 ","pages":"Article 104428"},"PeriodicalIF":2.6,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143550990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
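The frequency split at the heart of the internal U-shape can be illustrated with a one-level 2D Haar transform; the paper's wavelet choice and the subsequent linear self-attention between bands are not shown:

```python
import torch

def haar_dwt2d(x):
    """One-level 2D Haar wavelet transform of a feature map (B, C, H, W)
    with even H, W: returns the low-frequency band LL and the three
    high-frequency bands (LH, HL, HH), each at half resolution."""
    a = x[..., 0::2, 0::2]   # top-left pixel of each 2x2 block
    b = x[..., 0::2, 1::2]   # top-right
    c = x[..., 1::2, 0::2]   # bottom-left
    d = x[..., 1::2, 1::2]   # bottom-right
    ll = (a + b + c + d) / 2     # global structure (low frequency)
    lh = (a - b + c - d) / 2     # horizontal detail
    hl = (a + b - c - d) / 2     # vertical detail
    hh = (a - b - c + d) / 2     # diagonal detail
    return ll, (lh, hl, hh)

x = torch.randn(1, 32, 64, 64)
ll, highs = haar_dwt2d(x)
print(ll.shape, highs[0].shape)   # both (1, 32, 32, 32)
```

The LL band would carry the global context and the detail bands the boundary information that the attention stage then exchanges.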
Delicate image segmentation based on cosine kernel graph cut
IF 2.6 | CAS Quartile 4 | Computer Science
Journal of Visual Communication and Image Representation Pub Date : 2025-03-01 DOI: 10.1016/j.jvcir.2025.104430
Mehrnaz Niazi, Kambiz Rahbar, Fatemeh Taheri, Mansour Sheikhan, Maryam Khademi
{"title":"Delicate image segmentation based on cosine kernel graph cut","authors":"Mehrnaz Niazi ,&nbsp;Kambiz Rahbar ,&nbsp;Fatemeh Taheri ,&nbsp;Mansour Sheikhan ,&nbsp;Maryam Khademi","doi":"10.1016/j.jvcir.2025.104430","DOIUrl":"10.1016/j.jvcir.2025.104430","url":null,"abstract":"<div><div>The kernel graph cut approach is effective but highly dependent on the choice of kernel used to map data into a new feature space. This study introduces an enhanced kernel-based graph cut method specifically designed for segmenting complex images. The proposed method extends the RBF kernel by incorporating a unique mapping function that includes two components from the MacLaurin cosine kernel series, known for its ability to decorrelate regions and compress energy. This enhanced feature space enables the objective function to include a data fidelity term, which preserves the standard deviation of each region’s data in the segmented image, along with a regularization term that maintains smooth boundaries. The proposed method retains the computational efficiency typical of graph-based techniques while enhancing segmentation accuracy for intricate images. Experimental evaluations on widely-used datasets with complex shapes and fine boundaries demonstrate the effectiveness of this kernel-based approach compared to existing methods.</div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"108 ","pages":"Article 104430"},"PeriodicalIF":2.6,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143534755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
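One plausible reading of the kernel construction, sketched below: an RBF term combined with the first two MacLaurin terms of a cosine of the pairwise distance. This is an illustrative guess at the form, not the paper's exact kernel:

```python
import numpy as np

def rbf_cosine_kernel(x, y, sigma=1.0, omega=1.0):
    """Illustrative kernel in the spirit of the paper: an RBF term
    combined with the first two MacLaurin terms of cos(omega * d),
    i.e. 1 - (omega * d)**2 / 2, where d = ||x - y||. The paper's
    exact construction may differ; this only sketches the idea."""
    d2 = np.sum((np.asarray(x) - np.asarray(y)) ** 2)
    rbf = np.exp(-d2 / (2 * sigma**2))
    cos_series = 1.0 - (omega**2 * d2) / 2.0  # cos MacLaurin, two terms
    return rbf * cos_series

# Similarity between two pixel feature vectors, as would enter the
# graph-cut affinity computation.
print(rbf_cosine_kernel([0.20, 0.40], [0.25, 0.38]))
```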
Applying usability assessment method for surveillance video anomaly detection with multiple distortion
IF 2.6 | CAS Quartile 4 | Computer Science
Journal of Visual Communication and Image Representation Pub Date : 2025-02-28 DOI: 10.1016/j.jvcir.2025.104417
Nengxin Li, Xichen Yang, Tianhai Chen, Tianshu Wang, Genlin Ji
{"title":"Applying usability assessment method for surveillance video anomaly detection with multiple distortion","authors":"Nengxin Li ,&nbsp;Xichen Yang ,&nbsp;Tianhai Chen ,&nbsp;Tianshu Wang ,&nbsp;Genlin Ji","doi":"10.1016/j.jvcir.2025.104417","DOIUrl":"10.1016/j.jvcir.2025.104417","url":null,"abstract":"<div><div>With the extensive deployment of surveillance cameras, video anomaly detection (VAD) is commonly employed to various practical scenarios such as subway stations, parks, and roads. However, the surveillance camera can be easily influenced by weather and hardware degradation during data collection, resulting in information loss. Insufficient information will lead to a decrease in accuracy and credibility for anomaly detection. Accurately measuring the impact of information loss on anomaly detection can be helpful in practical application, and provide reliable application scheme of surveillance data. Therefore, we construct a dataset which contains surveillance data with multiple distortions. Based on the dataset, sufficient reliable data can be provided to measure the impact of data quality for anomaly detection methods. On the basis of the impact of data quality on anomaly detection, thresholds have been designed for data screening to improve the performance of anomaly detection. Finally, an image usability assessment (IUA) method was proposed to accurately screen surveillance data via the designed thresholds. Experimental results demonstrate that the constructed dataset was reasonable and reliable. The proposed IUA method can accurately screen the data to improve the performance of VAD methods, and meet the requirements of practical application scenarios on surveillance data. The dataset has been open-sourced at <span><span>https://github.com/dart-into/MultipleDistortionDataset</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"108 ","pages":"Article 104417"},"PeriodicalIF":2.6,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143534756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
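The screening step reduces to thresholding a usability score before anomaly detection. A minimal sketch with a toy scoring function standing in for the IUA model, and a threshold chosen only for the example:

```python
import numpy as np

def screen_for_vad(frames, usability_fn, threshold=0.6):
    """Usability-based screening: run anomaly detection only on frames
    whose usability score clears the threshold, so low-quality
    (distorted) frames do not silently degrade VAD accuracy."""
    kept, rejected = [], []
    for i, f in enumerate(frames):
        (kept if usability_fn(f) >= threshold else rejected).append(i)
    return kept, rejected

# Toy usability proxy: penalize very dark or low-contrast frames.
def toy_usability(frame):
    return min(1.0, frame.std() / 0.25) * min(1.0, frame.mean() / 0.3)

frames = [np.random.rand(120, 160) * s for s in (1.0, 0.2, 0.8)]
print(screen_for_vad(frames, toy_usability))   # e.g. ([0, 2], [1])
```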
Self-supervised monocular depth estimation with large kernel attention and dynamic scene perception
IF 2.6 | CAS Quartile 4 | Computer Science
Journal of Visual Communication and Image Representation Pub Date : 2025-02-27 DOI: 10.1016/j.jvcir.2025.104413
Xuezhi Xiang, Yao Wang, Xiaoheng Li, Lei Zhang, Xiantong Zhen
{"title":"Self-supervised monocular depth estimation with large kernel attention and dynamic scene perception","authors":"Xuezhi Xiang ,&nbsp;Yao Wang ,&nbsp;Xiaoheng Li ,&nbsp;Lei Zhang ,&nbsp;Xiantong Zhen","doi":"10.1016/j.jvcir.2025.104413","DOIUrl":"10.1016/j.jvcir.2025.104413","url":null,"abstract":"<div><div>Self-supervised monocular depth estimation has emerged as a promising approach since it does not rely on labeled training data. Most methods combine convolution and Transformer to model long-distance dependencies to estimate depth accurately. However, Transformer treats 2D image features as 1D sequences, and positional encoding somewhat mitigates the loss of spatial information between different feature blocks, tending to overlook channel features, which limit the performance of depth estimation. In this paper, we propose a self-supervised monocular depth estimation network to get finer details. Specifically, we propose a decoder based on large kernel attention, which can model long-distance dependencies without compromising the two-dimension structure of features while maintaining feature channel adaptivity. In addition, we introduce a dynamic scene perception (DSP) module, which dynamically adjusts the receptive fields to capture more accurate depth discontinuities context information, thereby enhancing the network’s ability to process complex scenes. Besides, we introduce an up-sampling module to accurately recover the fine details in the depth map. Our method achieves highly competitive results on the KITTI dataset (AbsRel = 0.095, SqRel = 0.613, RMSElog = 0.169, <span><math><mi>δ</mi></math></span>1 = 0.907), and shows great generalization performance on the challenging indoor dataset NYUv2 dataset.</div></div>","PeriodicalId":54755,"journal":{"name":"Journal of Visual Communication and Image Representation","volume":"108 ","pages":"Article 104413"},"PeriodicalIF":2.6,"publicationDate":"2025-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143520306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
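The decoder's attention block plausibly follows the large kernel attention (LKA) design of Guo et al.'s Visual Attention Network, which the sketch below reproduces; treating it as the paper's exact module is an assumption:

```python
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """Large kernel attention in the style of Guo et al. (VAN): a 5x5
    depth-wise conv, a 7x7 depth-wise dilated conv (dilation 3), and a
    1x1 conv together approximate a 21x21 kernel; the result gates the
    input, keeping the 2D layout and per-channel adaptivity that plain
    Transformers lose. Decoder wiring is not shown."""
    def __init__(self, ch):
        super().__init__()
        self.dw = nn.Conv2d(ch, ch, 5, padding=2, groups=ch)
        self.dw_dilated = nn.Conv2d(ch, ch, 7, padding=9, groups=ch,
                                    dilation=3)
        self.pw = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return x * attn                        # attention as a gate

feat = torch.randn(1, 64, 24, 80)              # a decoder feature map
print(LargeKernelAttention(64)(feat).shape)    # same shape as input
```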