Multimedia Systems: Latest Publications

A channel-gained single-model network with variable rate for multispectral image compression in UAV air-to-ground remote sensing
IF 3.9 · Zone 3 · Computer Science
Multimedia Systems · Pub Date: 2024-07-02 · DOI: 10.1007/s00530-024-01398-6
Wei Wang, Daiyin Zhu, Kedi Hu
Abstract: Unmanned aerial vehicle (UAV) air-to-ground remote sensing offers long flight duration, real-time image transmission, wide applicability, and low cost. Image compression is an essential step for preserving the integrity of image features during transmission and storage while improving efficiency. Deep-learning-based image compressors have advanced rapidly, but covering enough bit rates to trace the rate-distortion curve imposes a severe computational burden, especially for multispectral images: architectures keep growing more complex, and each rate point requires separate training with rate-distortion optimization. This paper proposes a channel-gained single-model network with variable rate for multispectral image compression. First, a channel-gain module maps the channel content of the image to a vector of amplitude factors, scaling the latent representation so that a single model yields image representations at different bit rates. Second, after spatial-spectral feature extraction, a plug-and-play dynamic-response attention module distinguishes the content correlation of features and weights important areas dynamically without adding extra parameters. In addition, a hyperprior autoencoder makes full use of side information for entropy estimation, contributing to a more accurate entropy model. Experiments show that the proposed method greatly reduces computational cost while maintaining good compression performance, surpassing JPEG2000 and several deep-learning baselines in PSNR, MS-SSIM, and MSA.
Citations: 0
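The channel-gain idea, scaling the latent tensor channel-wise with a learned per-rate gain vector and its inverse, is what lets one model serve many bit rates. A minimal PyTorch sketch of that mechanism follows; the 192-channel width, the four rate levels, and the module layout are illustrative assumptions, not the paper's published code:

```python
import torch
import torch.nn as nn

class GainUnit(nn.Module):
    """Channel-wise gain vectors for variable-rate compression (a sketch;
    channel count and number of rate levels are illustrative assumptions)."""
    def __init__(self, num_channels=192, num_levels=4):
        super().__init__()
        # one learnable gain vector per target bit rate, plus inverse gains
        self.gain = nn.Parameter(torch.ones(num_levels, num_channels))
        self.inv_gain = nn.Parameter(torch.ones(num_levels, num_channels))

    def scale(self, y, level):
        # amplitude scaling of the latent representation: one model, many rates
        return y * self.gain[level].view(1, -1, 1, 1)

    def unscale(self, y_hat, level):
        return y_hat * self.inv_gain[level].view(1, -1, 1, 1)

y = torch.randn(1, 192, 16, 16)                       # latent from a shared encoder
unit = GainUnit()
y_scaled = unit.scale(y, level=2)                     # pick a rate level at test time
y_rec = unit.unscale(torch.round(y_scaled), level=2)  # inverse gain after quantization
```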
PointDMIG: a dynamic motion-informed graph neural network for 3D action recognition
IF 3.9 · Zone 3 · Computer Science
Multimedia Systems · Pub Date: 2024-06-27 · DOI: 10.1007/s00530-024-01395-9
Yao Du, Zhenjie Hou, Xing Li, Jiuzhen Liang, Kaijun You, Xinwen Zhou
Abstract: Point clouds contain rich spatial information, providing effective supplementary cues for action recognition. Existing action-recognition algorithms based on point cloud sequences typically employ complex spatio-temporal local encoding to capture spatio-temporal features, which loses spatial information and cannot establish long-term spatial correlation. This paper proposes PointDMIG, a network that models long-term spatio-temporal correlation in point cloud sequences while retaining spatial structure information. Specifically, graph-based static point cloud techniques first construct topological structures for the input point cloud sequence and encode them as static human-appearance feature vectors, introducing inherent frame-level parallelism that avoids the loss of spatial information. The static technique is then extended by integrating the motion of points between adjacent frames into the topological graph structure, capturing the long-term spatio-temporal evolution of static human appearance while preserving its spatial structure. Moreover, to enhance the semantic representation of the point cloud sequences, PointDMIG reconstructs the downsampled point set during feature extraction, further enriching the spatio-temporal information of body movements. Experimental results on NTU RGB+D 60 and MSR Action 3D show that PointDMIG significantly improves the accuracy of point-cloud-based 3D human action recognition, and an extended gesture-recognition experiment on the SHREC 2017 dataset yields competitive results.
Citations: 0
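The two graph ingredients the abstract describes, an intra-frame topology plus inter-frame motion edges, can be sketched in plain PyTorch. The k-NN construction, k=16, and the nearest-neighbor matching rule below are assumptions; PointDMIG's exact topology is specified only in the paper:

```python
import torch

def knn_graph(points, k=16):
    """Edges of a k-NN graph over one frame's points (N, 3).
    Returns (2, N*k) source/target index pairs."""
    dist = torch.cdist(points, points)                    # (N, N) pairwise distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]  # drop the self-loop
    src = torch.arange(points.size(0)).repeat_interleave(k)
    return torch.stack([src, idx.reshape(-1)])

def motion_edges(frame_t, frame_t1):
    """Connect each point at time t to its nearest neighbor at t+1,
    injecting inter-frame motion into the static topology."""
    nn_idx = torch.cdist(frame_t, frame_t1).argmin(dim=1)
    src = torch.arange(frame_t.size(0))
    return torch.stack([src, nn_idx])

f0, f1 = torch.randn(1024, 3), torch.randn(1024, 3)
static_e = knn_graph(f0)         # spatial structure within a frame
motion_e = motion_edges(f0, f1)  # temporal links between adjacent frames
```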
Multi-branch feature fusion and refinement network for salient object detection
IF 3.9 · Zone 3 · Computer Science
Multimedia Systems · Pub Date: 2024-06-26 · DOI: 10.1007/s00530-024-01356-2
Jinyu Yang, Yanjiao Shi, Jin Zhang, Qianqian Guo, Qing Zhang, Liu Cui
Abstract: With the development of convolutional neural networks (CNNs), salient object detection has made great progress. Most methods aggregate multi-level feature maps through complex structures in order to filter noise and obtain rich information, yet they treat features of all levels uniformly rather than differentiating between them. Based on these considerations, this paper proposes a multi-branch feature fusion and refinement network (MFFRNet), a framework that treats low-level and high-level features differently and effectively fuses multi-level information for more accurate results. It comprises a detail optimization module (DOM) designed for the rich detail information in low-level features, a pyramid feature extraction module (PFEM) designed for the rich semantic information in high-level features, and a feature optimization module (FOM) that refines the fused multi-level features. Extensive experiments on six benchmark datasets show that the approach outperforms state-of-the-art methods.
Citations: 0
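To make the "treat levels differently" idea concrete, here is a minimal, hypothetical fusion cell: a detail path for low-level features, a global semantic path for high-level ones, and gated fusion. The DOM, PFEM, and FOM internals are not reproduced in this listing, so every layer choice below is illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuseRefine(nn.Module):
    """Sketch of level-differentiated fusion; not the paper's modules."""
    def __init__(self, c_low=64, c_high=256, c_out=64):
        super().__init__()
        self.detail = nn.Conv2d(c_low, c_out, 3, padding=1)   # edge/detail path
        self.semantic = nn.Sequential(                        # global context path
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(c_high, c_out, 1))
        self.refine = nn.Conv2d(c_out, c_out, 3, padding=1)

    def forward(self, low, high):
        d = self.detail(low)             # keep spatial detail from shallow layers
        s = self.semantic(high)          # global semantic vector from deep layers
        fused = d * torch.sigmoid(s)     # semantics gate the details
        return self.refine(F.relu(fused))

m = FuseRefine()
out = m(torch.randn(1, 64, 88, 88), torch.randn(1, 256, 22, 22))
```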
Progressive secret image sharing based on Boolean operations and polynomial interpolations
IF 3.9 · Zone 3 · Computer Science
Multimedia Systems · Pub Date: 2024-06-26 · DOI: 10.1007/s00530-024-01393-x
Hao Chen, Lizhi Xiong, Ching-Nung Yang
Abstract: With expanding network bandwidth and the rise of social networks, sharing images on open networks has become a trend, and the resulting privacy-leakage incidents have aroused widespread concern; privacy-preserving image sharing is therefore desirable. Progressive Secret Image Sharing (PSIS), a multilevel privacy-protection technology for images, offers a promising solution. However, the progressivity of most PSIS schemes depends on preprocessing the secret image, which increases computation costs; block-based PSIS can leak information when processing highly confidential images; and many existing schemes rely on a single sharing operation, which makes them inflexible and limits their application scenarios. This paper proposes a PSIS based on Boolean operations and polynomial interpolations (PSIS-BP). The scheme divides the bit-planes of each pixel into two parts: one part is shared by Boolean operations, the other by polynomial interpolation. Different assignment strategies produce different progressive reconstruction levels and expand the scheme's application scenarios. Theoretical analyses and experimental results demonstrate that the proposed scheme is secure, low-cost, and flexible.
Citations: 0
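The two sharing primitives the scheme combines are standard and easy to demonstrate on a single pixel: XOR shares recover a value only when all shares are present, while Shamir-style polynomial shares recover it from any k of n, which is what gives progressive behavior. A minimal sketch (the 4-bit split, the prime 17, and the share counts are illustrative; the paper's assignment strategies differ):

```python
import random

P = 17  # prime larger than any 4-bit value; illustrative choice

def xor_shares(value, n):
    """All-or-nothing Boolean sharing: the n shares XOR back to the value."""
    shares = [random.randrange(256) for _ in range(n - 1)]
    last = value
    for s in shares:
        last ^= s
    return shares + [last]

def poly_shares(value, k, xs):
    """(k, n) Shamir sharing of a small value modulo the prime P."""
    coeffs = [value] + [random.randrange(P) for _ in range(k - 1)]
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P for x in xs]

def poly_recover(xs, ys):
    """Lagrange interpolation at 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num = den = 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

pixel = 0b10110110
high, low = pixel >> 4, pixel & 0x0F     # split the pixel's bit-planes in two
b = xor_shares(high, 3)                  # Boolean part: needs every share
p = poly_shares(low, 2, xs=[1, 2, 3])    # polynomial part: any 2 of 3 suffice
assert high == b[0] ^ b[1] ^ b[2]
assert low == poly_recover([1, 2], p[:2])
```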
DMFNet: deep matrix factorization network for image compressed sensing
IF 3.9 · Zone 3 · Computer Science
Multimedia Systems · Pub Date: 2024-06-26 · DOI: 10.1007/s00530-024-01380-2
Hengyou Wang, Haocheng Li, Xiang Jiang
Abstract: Owing to its outstanding performance in image processing, deep learning (DL) has been successfully applied to compressed sensing (CS) reconstruction. However, most existing DL-based reconstruction methods capture local features mainly through stacked convolutional layers while ignoring global structural information. This paper proposes a deep matrix factorization network (DMFNet) that takes advantage of both detailed textures and global structural information for better CS reconstruction. DMFNet consists of a sampling-initialization module and a DMF reconstruction module. In the sampling-initialization module, a saliency detector evaluates the salience of different regions and generates a corresponding feature map; a block ratio allocation (BRA) strategy then adaptively assigns CS ratios based on that map, and a block-by-block initial reconstruction is computed from a derived closed-form formula. In the DMF reconstruction module, global structural information is captured through low-rank matrix factorization, and the variable updates are performed by networks built on deep unfolding networks (DUNs) and the U-Net rather than by conventional hand-derived update formulas. Extensive experiments demonstrate that DMFNet achieves better reconstruction quality and noise robustness than state-of-the-art methods on several benchmark datasets.
Citations: 0
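The block ratio allocation idea, giving salient blocks a larger slice of a fixed measurement budget, fits in a few lines. A sketch under stated assumptions (the base ratio, clipping bounds, and block grid are illustrative, not the paper's values):

```python
import numpy as np

def allocate_ratios(saliency, base_ratio=0.1, r_min=0.01, r_max=0.5):
    """Adaptive block-ratio allocation: blocks with higher saliency receive
    a larger share of the fixed measurement budget."""
    w = saliency / saliency.sum()            # normalized per-block salience
    ratios = base_ratio * w * saliency.size  # rescale to preserve the mean ratio
    return np.clip(ratios, r_min, r_max)     # keep every block measurable

blocks = np.random.rand(8, 8)    # e.g. one saliency score per image block
ratios = allocate_ratios(blocks)
print(ratios.mean())             # close to base_ratio unless clipping binds
```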
Attention-guided LiDAR segmentation and odometry using image-to-point cloud saliency transfer
IF 3.9 · Zone 3 · Computer Science
Multimedia Systems · Pub Date: 2024-06-24 · DOI: 10.1007/s00530-024-01389-7
Guanqun Ding, Nevrez İmamoğlu, Ali Caglayan, Masahiro Murakawa, Ryosuke Nakamura
Abstract: LiDAR odometry estimation and 3D semantic segmentation are crucial for autonomous driving and have advanced remarkably in recent years. They remain challenging, however: semantic categories are imbalanced in 3D segmentation, and dynamic objects disturb odometry estimation, which makes representative, salient landmarks important reference points for robust feature learning. To address these challenges, this paper proposes a saliency-guided approach that leverages attention information to improve LiDAR odometry estimation and semantic segmentation. Unlike in the image domain, few studies have addressed point cloud saliency, owing to the lack of annotated training data. To alleviate this, a universal framework first transfers saliency-distribution knowledge from color images to point clouds, producing a pseudo-saliency dataset (FordSaliency) for point clouds. Point-cloud-based backbones then learn the saliency distribution from these pseudo-labels, followed by the proposed SalLiDAR module, a saliency-guided 3D semantic segmentation model that integrates saliency information to improve segmentation performance. Finally, SalLONet, a self-supervised saliency-guided LiDAR odometry network, uses SalLiDAR's semantic and saliency predictions to achieve better odometry estimation. Extensive experiments on benchmark datasets show that SalLiDAR and SalLONet achieve state-of-the-art performance against existing methods, highlighting the effectiveness of image-to-LiDAR saliency knowledge transfer. Source code will be available at https://github.com/nevrez/SalLONet.
Citations: 0
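Transferring saliency from an image to a point cloud amounts to projecting each LiDAR point into the camera and sampling the 2D saliency map as a pseudo-label. A simplified NumPy sketch, assuming known camera intrinsics K and LiDAR-to-camera extrinsics T (the paper's FordSaliency pipeline is more elaborate):

```python
import numpy as np

def transfer_saliency(points, K, T, saliency_map):
    """Per-point pseudo-saliency labels via camera projection.
    points: (N, 3) LiDAR points; K: (3, 3) intrinsics; T: (4, 4) extrinsics."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])   # homogeneous (N, 4)
    cam = (T @ pts_h.T).T                                    # camera coordinates
    front = cam[:, 2] > 0                                    # in front of camera
    uvw = (K @ cam[front, :3].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)              # pixel coordinates
    h, w = saliency_map.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    labels = np.zeros(len(points), dtype=np.float32)
    idx = np.flatnonzero(front)[valid]
    labels[idx] = saliency_map[uv[valid, 1], uv[valid, 0]]   # sample the 2D map
    return labels

K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])
T = np.eye(4)                                  # placeholder calibration
pts = np.random.randn(100, 3) * 5.0
sal = np.random.rand(480, 640).astype(np.float32)
labels = transfer_saliency(pts, K, T, sal)     # pseudo-saliency per 3D point
```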
Generalized Welsch penalty for edge-aware image decomposition
IF 3.9 · Zone 3 · Computer Science
Multimedia Systems · Pub Date: 2024-06-21 · DOI: 10.1007/s00530-024-01382-0
Yang Yang, Shunli Ji, Xinyu Wang, Lanling Zeng, Yongzhao Zhan
Abstract: Edge-aware image decomposition is an essential topic in multimedia signal processing. This paper proposes a novel non-convex penalty function, named the generalized Welsch function, and shows that it generalizes most existing penalty functions for edge-aware regularization, thereby better facilitating edge-awareness. The penalty is embedded in a novel optimization model for edge-aware image decomposition, and an efficient algorithm based on additive quadratic minimization and Fourier-domain optimization is proposed to solve the resulting non-convex problem. The method is evaluated on a variety of tasks, including image smoothing, detail enhancement, HDR tone mapping, and JPEG compression-artifact removal; experiment results show that it outperforms state-of-the-art image decomposition methods. It is also highly efficient, rendering real-time processing of 720P color images on a modern GPU.
Citations: 0
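For orientation, the classical Welsch penalty that the paper's function generalizes is commonly written as follows; the generalized form itself is defined in the paper and not reproduced in this listing:

```latex
% Classical Welsch penalty with scale parameter \gamma (the paper
% proposes a generalization of this form):
\psi_{\gamma}(x) \;=\; \frac{\gamma^{2}}{2}\left(1 - \exp\!\left(-\frac{x^{2}}{\gamma^{2}}\right)\right)
% \psi_{\gamma} is bounded as |x| \to \infty, so ever-larger intensity
% jumps are not penalized ever harder -- the property that makes such
% penalties edge-aware.
```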
PSC diffusion: patch-based simplified conditional diffusion model for low-light image enhancement
IF 3.9 · Zone 3 · Computer Science
Multimedia Systems · Pub Date: 2024-06-21 · DOI: 10.1007/s00530-024-01391-z
Fei Wan, Bingxin Xu, Weiguo Pan, Hongzhe Liu
Abstract: Low-light image enhancement is pivotal for augmenting the utility and recognition of visuals captured under inadequate lighting. Previous methods based on Generative Adversarial Networks (GANs) suffer from mode collapse and pay little attention to the inherent characteristics of low-light images. Given the outstanding performance of diffusion models in image generation, this paper proposes the Patch-based Simplified Conditional Diffusion Model (PSC Diffusion) for low-light image enhancement. Specifically, because extremely low-light images have small pixel values and thus risk vanishing gradients, a simplified U-Net architecture with SimpleGate and parameter-free attention (SimPF) blocks is designed to predict noise. This architecture uses a parameter-free attention mechanism and fewer convolutional layers to reduce multiplications across feature maps, cutting parameters by 12-51% relative to the U-Nets used in several prominent diffusion models and accelerating sampling. In addition, a patch-based diffusion strategy integrated with global structure-aware regularization preserves intricate image details during diffusion and effectively enhances the overall quality of the enhanced images. Experiments show that the method achieves richer image details and better perceptual quality, with sampling more than 35% faster than comparable diffusion-model-based methods.
Citations: 0
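SimpleGate and SimAM-style parameter-free attention are both published building blocks (from NAFNet and SimAM, respectively), so a SimPF block plausibly composes something like the sketch below; how PSC Diffusion actually wires them into its U-Net is the paper's design, and this pairing is only an assumption:

```python
import torch
import torch.nn as nn

class SimpleGate(nn.Module):
    """SimpleGate from NAFNet: split channels in half and multiply,
    a nonlinearity with no parameters and no activation function."""
    def forward(self, x):
        a, b = x.chunk(2, dim=1)
        return a * b

class SimAM(nn.Module):
    """Parameter-free attention: weight each position by an energy-based
    saliency term; lam is the method's regularization constant."""
    def __init__(self, lam=1e-4):
        super().__init__()
        self.lam = lam

    def forward(self, x):
        n = x.shape[2] * x.shape[3] - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)) ** 2   # squared deviation
        v = d.sum(dim=(2, 3), keepdim=True) / n           # channel-wise energy
        e_inv = d / (4 * (v + self.lam)) + 0.5
        return x * torch.sigmoid(e_inv)                   # reweight positions

x = torch.randn(1, 64, 32, 32)
y = SimAM()(SimpleGate()(x))   # the gate halves channels: y has 32 channels
```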
Hard semantic mask strategy for automatic facial action unit recognition with teacher–student model
IF 3.9 · Zone 3 · Computer Science
Multimedia Systems · Pub Date: 2024-06-20 · DOI: 10.1007/s00530-024-01385-x
Zichen Liang, Haiying Xia, Yumei Tan, Shuxiang Song
Abstract: The Facial Action Coding System (FACS) is a widely used technique in affective computing that defines a series of facial action units (AUs) corresponding to localized regions of the face. Fine-grained feature information from these critical regions is crucial for accurate AU recognition. However, the conventional random masking used in Masked Image Modeling (MIM) overlooks the inherent symmetry of faces and the complex interrelationships among facial muscles, losing critical local details and degrading AU recognition performance. To address these limitations, this paper proposes a teacher-student MIM framework called Hard Semantic Masking Strategy Teacher-Student (HSMS-TS). A hard semantic mask strategy is first introduced in the teacher model to guide the student network toward learning fine-grained AU-related representations. The student network then uses the attention maps of the pretrained teacher to generate a more challenging masking pattern from a predefined template, increasing learning difficulty and helping the student acquire better AU-related representations. Experimental results on two publicly available datasets, BP4D and DISFA, show the effectiveness of the proposed method. Code will be publicly available at http://github.com/lzichen/HSMS-TS.
Citations: 0
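Attention-guided hard masking can be read as "mask what the teacher attends to most", which is what makes the reconstruction objective hard. A sketch under that assumption (HSMS-TS additionally draws the pattern from a predefined template, which is omitted here):

```python
import torch

def hard_mask(attn_scores, mask_ratio=0.5):
    """Mask the patches the teacher attends to MOST, forcing the student
    to reconstruct AU-critical regions. attn_scores: (B, num_patches)."""
    n = attn_scores.size(1)
    k = int(n * mask_ratio)
    idx = attn_scores.topk(k, dim=1).indices                 # most-attended patches
    mask = torch.zeros_like(attn_scores, dtype=torch.bool)
    mask.scatter_(1, idx, True)                              # True = masked out
    return mask

attn = torch.rand(2, 196)   # e.g. teacher attention over a 14x14 patch grid
mask = hard_mask(attn)      # feed only the visible patches to the student
```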
A two-stage-UNet network based on group normalization for single image deraining
IF 3.9 · Zone 3 · Computer Science
Multimedia Systems · Pub Date: 2024-06-20 · DOI: 10.1007/s00530-024-01362-4
Weina Zhou, Hao Han
Abstract: Rain streaks can seriously degrade the optical quality of images and hinder image processing in many scenes. Deep learning methods achieve state-of-the-art performance in single-image rain removal, yet most deraining models handle only local relationships and do not sufficiently consider long-range contextual information, which leaves residual rain streaks and poorly recovered texture details. A Two-Stage-UNet network based on Group Normalization, named TSUGN, is therefore proposed: it decomposes the deraining task into smaller, easier subtasks to capture more contextual information. To balance spatial details against high-level contextual information, group normalization is added to the Group Normalization Feature Residual Block (GNFRB). A Scale-Feature Fusion Module (SFFM) learns features at different scales, taking full account of multi-scale feature information. In addition, a new feature compensation method addresses model bias by combining the parameter-free 3-D attention module SimAM with the GNFRB. Comprehensive experiments demonstrate the proposed network's computational efficiency, end-to-end trainability, and ease of implementation, showing great potential for image restoration tasks.
Citations: 0
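A group-normalized residual block in the spirit of the GNFRB is easy to sketch; the group count, activation, and layer layout below are assumptions, since the paper's exact block is not reproduced in this listing. GroupNorm's statistics are independent of batch size, which is why it suits restoration models trained with small batches:

```python
import torch
import torch.nn as nn

class GNResBlock(nn.Module):
    """Group-normalized residual block (illustrative layout, not the
    paper's GNFRB verbatim)."""
    def __init__(self, channels=64, groups=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.GroupNorm(groups, channels),   # batch-size-independent statistics
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GELU(),
            nn.GroupNorm(groups, channels),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # residual path learns the rain correction

x = torch.randn(1, 64, 48, 48)
y = GNResBlock()(x)
```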