Latest Articles in the Journal of Visual Communication and Image Representation

DA4NeRF: Depth-aware Augmentation technique for Neural Radiance Fields
IF 2.6 | Q4 | Computer Science
Journal of Visual Communication and Image Representation, Volume 107, Article 104365. Pub Date: 2024-12-09. DOI: 10.1016/j.jvcir.2024.104365
Hamed Razavi Khosroshahi, Jaime Sancho, Gun Bang, Gauthier Lafruit, Eduardo Juarez, Mehrdad Teratani
Abstract: Neural Radiance Fields (NeRF) demonstrate impressive capabilities in rendering novel views of a scene by learning an implicit volumetric representation from posed RGB images, without any depth information. View synthesis is the computational process of generating novel images of a scene from different viewpoints based on a set of existing images. One major problem for neural-network-based view synthesis frameworks is the large number of images required in the training set, and data augmentation for view synthesis has not yet been addressed. NeRF models require comprehensive multi-view coverage of a scene to accurately estimate radiance and density at any point; without sufficient coverage from different viewing directions, they cannot effectively interpolate or extrapolate unseen parts of the scene. This paper introduces a new pipeline that tackles this data-augmentation problem using depth data: MPEG's Depth Estimation Reference Software and Reference View Synthesizer are used to add novel, non-existent views to the training sets needed by the NeRF framework. Experimental results show that the approach improves the quality of NeRF-rendered images, with an average gain of 6.4 dB in Peak Signal-to-Noise Ratio (PSNR) and a maximum gain of 11 dB. The approach not only allows sparsely captured multiview content to be used within the NeRF framework, but also makes NeRF more accurate and useful for creating high-quality virtual views.
Citations: 0
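
The reported gains are given in PSNR. For reference, a minimal sketch of how PSNR between a rendered view and a ground-truth view is typically computed (this is the standard definition, not code from the paper):

```python
import numpy as np

def psnr(rendered: np.ndarray, reference: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between two images of identical shape."""
    rendered = rendered.astype(np.float64)
    reference = reference.astype(np.float64)
    mse = np.mean((rendered - reference) ** 2)  # mean squared error over all pixels/channels
    if mse == 0:
        return float("inf")                     # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A 6.4 dB average gain means the augmented model's PSNR, averaged over test
# views, is 6.4 higher than the baseline model's.
```
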
Semantic-aware representations for unsupervised Camouflaged Object Detection
IF 2.6 | Q4 | Computer Science
Journal of Visual Communication and Image Representation, Volume 107, Article 104366. Pub Date: 2024-12-09. DOI: 10.1016/j.jvcir.2024.104366
Zelin Lu, Xing Zhao, Liang Xie, Haoran Liang, Ronghua Liang
Abstract: Unsupervised image segmentation algorithms face challenges due to the lack of human annotations. They typically employ representations derived from self-supervised models to generate pseudo-labels that supervise model training, so performance largely depends on the quality of the generated pseudo-labels. This study designs an unsupervised framework for Camouflaged Object Detection (COD) that does not generate pseudo-labels at all. Instead, semantic-aware representations, trained in a self-supervised manner on large-scale unlabeled datasets, guide the training process; these representations not only capture rich contextual semantic information but also help refine the blurred boundaries of camouflaged objects. The framework integrates the semantic-aware representations with task-specific features, enabling the model to perform Unsupervised Camouflaged Object Detection (UCOD) with enhanced contextual understanding. In addition, an innovative multi-scale token loss function maintains the structural integrity of objects at various scales in the model's predictions through mutual supervision between different features and scales. Extensive experimental validation demonstrates that the model significantly enhances UCOD performance, closely approaching the capabilities of state-of-the-art weakly supervised COD models.
Citations: 0
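
The abstract does not give the exact form of the multi-scale token loss. As a rough illustration only of the general idea of mutual supervision across scales, one could penalize disagreement between two prediction maps at several resolutions, with each map acting as a detached target for the other; the loss form and scales below are assumptions, not the paper's:

```python
import torch
import torch.nn.functional as F

def mutual_multiscale_loss(pred_a: torch.Tensor, pred_b: torch.Tensor,
                           scales=(1.0, 0.5, 0.25)) -> torch.Tensor:
    """Illustrative mutual-consistency loss between two (B, 1, H, W) prediction
    maps, compared at several scales; each map supervises the other with its
    gradients detached (a common 'mutual supervision' pattern)."""
    loss = pred_a.new_zeros(())
    for s in scales:
        a = pred_a if s == 1.0 else F.interpolate(pred_a, scale_factor=s,
                                                  mode="bilinear", align_corners=False)
        b = pred_b if s == 1.0 else F.interpolate(pred_b, scale_factor=s,
                                                  mode="bilinear", align_corners=False)
        loss = loss + F.l1_loss(a, b.detach()) + F.l1_loss(b, a.detach())
    return loss / len(scales)
```
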
DRGNet: Dual-Relation Graph Network for point cloud analysis
IF 2.6 | Q4 | Computer Science
Journal of Visual Communication and Image Representation, Volume 107, Article 104353. Pub Date: 2024-12-05. DOI: 10.1016/j.jvcir.2024.104353
Ce Zhou, Qiang Ling
Abstract: Point cloud analysis has recently attracted increasing attention, but it remains a challenging task because point clouds are irregular, sparse, and unordered. This paper proposes Dual Relation Convolution (DRConv), which uses both geometric relations and feature-level relations to aggregate discriminative features effectively: the geometric relations exploit explicit geometric structure to establish spatial connections within local regions, while the implicit feature-level relations capture neighboring points with the same semantic properties. Based on DRConv, a Dual-Relation Graph Network (DRGNet) is constructed for point cloud analysis. To capture long-range contextual information, DRGNet searches for neighboring points in both 3D geometric space and feature space, effectively aggregating local and distant information. A Channel Attention Block (CAB) further emphasizes important feature channels, captures global information, and improves point cloud segmentation performance. Extensive experiments on object classification, shape part segmentation, normal estimation, and semantic segmentation tasks demonstrate superior performance.
Citations: 0
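
The paper's exact DRConv formulation is not reproduced here; the sketch below only illustrates the underlying idea of gathering neighborhoods in both geometric space and feature space (assumed brute-force kNN, PyTorch):

```python
import torch

def knn_indices(points: torch.Tensor, k: int) -> torch.Tensor:
    """Brute-force kNN inside one point set. points: (B, N, C) -> (B, N, k) indices."""
    dist = torch.cdist(points, points)                        # (B, N, N) pairwise distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]   # drop the self-match

def gather_neighbors(feats: torch.Tensor, idx: torch.Tensor) -> torch.Tensor:
    """feats: (B, N, C), idx: (B, N, k) -> (B, N, k, C) neighbor features."""
    B, N, C = feats.shape
    k = idx.shape[-1]
    batch = torch.arange(B, device=feats.device).view(B, 1, 1).expand(B, N, k)
    return feats[batch, idx]

# Dual neighborhoods: one built from 3D coordinates, one from the feature space.
xyz = torch.rand(2, 1024, 3)      # (B, N, 3) point coordinates
feat = torch.rand(2, 1024, 64)    # (B, N, C) per-point features
geo_neighbors = gather_neighbors(feat, knn_indices(xyz, k=16))   # geometric relations
sem_neighbors = gather_neighbors(feat, knn_indices(feat, k=16))  # feature-level relations
# A DRConv-style layer would then aggregate both neighborhoods (e.g., MLP + max-pool).
```
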
Virtualized three-dimensional reference tables for efficient data embedding
IF 2.6 | Q4 | Computer Science
Journal of Visual Communication and Image Representation, Volume 107, Article 104351. Pub Date: 2024-11-30. DOI: 10.1016/j.jvcir.2024.104351
Wien Hong, Guan-Zhong Su, Wei-Ling Lin, Tung-Shou Chen
Abstract: Data-embedding methods based on a three-dimensional reference table (3DRT) modify pixels to embed digits of various bases according to the table. Current 3DRT-based methods, however, are constrained to specific bases and require a physical 3DRT for both embedding and extraction. This paper introduces a novel approach that constructs the 3DRT from groups of anisotropic cubes to minimize embedding distortion and virtualizes it by representing it as a two-coefficient equation, eliminating the need for a physical 3DRT during embedding and extraction. The virtualization significantly reduces computational complexity, enabling embedding and extraction through straightforward calculations, and also decreases the storage space required for the 3DRT. Experimental results demonstrate high image quality and embedding capacity: at embedding rates of 2 and 3 bits per pixel, the method achieves quality scores of 46.99 dB and 40.91 dB, respectively, across 200 test images.
Citations: 0
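
The paper's two-coefficient equation is not spelled out in the abstract. The sketch below only shows the generic mechanics of a reference-table scheme, where a modular function of a pixel group determines the embedded digit; the coefficients, base, and search range here are illustrative placeholders, not the authors' values:

```python
def reference_value(p1: int, p2: int, p3: int,
                    c1: int = 3, c2: int = 9, base: int = 27) -> int:
    """Digit a 3-pixel group currently carries under a modular reference function.
    c1, c2 and base are placeholders, not the paper's coefficients."""
    return (p1 + c1 * p2 + c2 * p3) % base

def embed_digit(pixels, digit, base=27):
    """Adjust one pixel group minimally so its reference value equals `digit`.
    Clipping to the valid pixel range is omitted for brevity."""
    best, best_cost = None, None
    for d1 in range(-2, 3):                    # small perturbations of each pixel
        for d2 in range(-2, 3):
            for d3 in range(-2, 3):
                cand = (pixels[0] + d1, pixels[1] + d2, pixels[2] + d3)
                if reference_value(*cand, base=base) == digit:
                    cost = d1 * d1 + d2 * d2 + d3 * d3   # squared distortion
                    if best_cost is None or cost < best_cost:
                        best, best_cost = cand, cost
    return best

# Extraction is simply reference_value() on the (possibly modified) pixel group.
print(embed_digit((120, 45, 200), digit=13))
```
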
A multi-exposure image fusion using adaptive color dissimilarity and dynamic equalization techniques
IF 2.6 | Q4 | Computer Science
Journal of Visual Communication and Image Representation, Volume 107, Article 104350. Pub Date: 2024-11-28. DOI: 10.1016/j.jvcir.2024.104350
Jishnu C.R., Vishnukumar S.
Abstract: In image processing, Multi-Exposure Image Fusion (MEF) is a crucial technique for producing high-dynamic-range (HDR) representations by fusing sequences of low-dynamic-range images. Conventional fusion methods often suffer from detail loss, edge artifacts, and color inconsistencies, and the quality of the fused output degrades further when the inputs are extremely exposed and limited in number. A few works have addressed fusion of limited and impaired static input images, but the fusion of dynamic image sets under such conditions has not been explored. This paper proposes an effective MEF approach that operates on as few as two extremely exposed inputs for both static and dynamic scenes. Input images are first categorized as under-exposed or over-exposed based on their lighting levels, and tailored exposure-correction strategies are applied. Through iterative refinement and selection of the optimally exposed variants, an improved intermediate stack is constructed, which is then fused with a pyramidal fusion technique. The method derives weight maps for the pyramidal fusion from adaptive well-exposedness and color gradients; the initial weights are refined with a Gaussian filter, producing a seamlessly fused image with an expanded dynamic range. For dynamic imagery, an adaptive color dissimilarity and dynamic equalization step reduces ghosting artifacts. Comparative assessments against existing methods, both visual and quantitative, confirm the superior performance of the proposed model.
Citations: 0
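
A minimal sketch of the general pyramidal-fusion pattern the abstract refers to: per-image weight maps blended through Laplacian pyramids. The weight used here is the classic well-exposedness term, not the paper's adaptive well-exposedness, color-gradient, or de-ghosting components:

```python
import cv2
import numpy as np

def well_exposedness(img: np.ndarray, sigma: float = 0.2) -> np.ndarray:
    """Classic per-pixel weight: prefer intensities near mid-gray (img in [0, 1])."""
    w = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))
    return np.prod(w, axis=2) + 1e-12          # combine the color channels

def fuse_exposures(images: list, levels: int = 5) -> np.ndarray:
    """Blend a list of aligned float32 images in [0, 1] with Laplacian pyramids."""
    weights = np.stack([well_exposedness(im) for im in images])
    weights /= weights.sum(axis=0, keepdims=True)       # normalize across images
    fused_pyr = None
    for im, w in zip(images, weights):
        gp_w = [w.astype(np.float32)]                   # Gaussian pyramid of weights
        gp_i = [im.astype(np.float32)]                  # Gaussian pyramid of the image
        for _ in range(levels):
            gp_w.append(cv2.pyrDown(gp_w[-1]))
            gp_i.append(cv2.pyrDown(gp_i[-1]))
        lp_i = [gp_i[l] - cv2.pyrUp(gp_i[l + 1], dstsize=gp_i[l].shape[1::-1])
                for l in range(levels)] + [gp_i[-1]]    # Laplacian pyramid of the image
        contrib = [lp * gw[..., None] for lp, gw in zip(lp_i, gp_w)]
        fused_pyr = contrib if fused_pyr is None else [f + c for f, c in zip(fused_pyr, contrib)]
    out = fused_pyr[-1]
    for l in range(levels - 1, -1, -1):                 # collapse the fused pyramid
        out = cv2.pyrUp(out, dstsize=fused_pyr[l].shape[1::-1]) + fused_pyr[l]
    return np.clip(out, 0.0, 1.0)
```
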
DetailCaptureYOLO: Accurately Detecting Small Targets in UAV Aerial Images
IF 2.6 | Q4 | Computer Science
Journal of Visual Communication and Image Representation, Volume 106, Article 104349. Pub Date: 2024-11-28. DOI: 10.1016/j.jvcir.2024.104349
Fengxi Sun, Ning He, Runjie Li, Hongfei Liu, Yuxiang Zou
Abstract: Unmanned aerial vehicle (UAV) aerial imagery is dominated by small objects, so obtaining feature maps with more detailed information is crucial for target detection. This paper presents an improved algorithm based on YOLOv9, named DetailCaptureYOLO, with a strong ability to capture detailed features. First, a dynamic fusion path-aggregation network is proposed to dynamically fuse multi-level and multi-scale feature maps, effectively preserving information integrity and yielding richer features. In addition, more flexible dynamic upsampling and wavelet-transform-based downsampling operators optimize the sampling operations. Finally, Inner-IoU is used within Powerful-IoU, effectively enhancing the network's ability to detect small targets. The proposed neck improvement can be transferred to mainstream object detection algorithms: applied to YOLOv9 on the VisDrone dataset, it improves AP50, mAP, and AP-small by 8.5%, 5.5%, and 7.2%, respectively; applied to other algorithms, it improves AP50 by 5.1%-6.5%. Experimental results demonstrate that the proposed method excels at detecting small targets and exhibits strong transferability. Code: https://github.com/SFXSunFengXi/DetailCaptureYOLO
Citations: 0
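
Wavelet-transform-based downsampling keeps all sub-band information instead of discarding it, as strided convolution or pooling does. A minimal Haar-wavelet downsampling layer, written as a generic illustration rather than the paper's exact operator, could look like this:

```python
import torch
import torch.nn as nn

class HaarDownsample(nn.Module):
    """Downsample by 2 with a Haar transform: each 2x2 block is mapped to four
    sub-bands stacked along the channel axis, so no pixel information is lost.
    Input (B, C, H, W) -> output (B, 4C, H/2, W/2) for even H and W."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = x[:, :, 0::2, 0::2]    # top-left of each 2x2 block
        b = x[:, :, 0::2, 1::2]    # top-right
        c = x[:, :, 1::2, 0::2]    # bottom-left
        d = x[:, :, 1::2, 1::2]    # bottom-right
        ll = (a + b + c + d) / 2   # low-frequency approximation
        lh = (c + d - a - b) / 2   # difference between rows
        hl = (b + d - a - c) / 2   # difference between columns
        hh = (a + d - b - c) / 2   # diagonal difference
        return torch.cat([ll, lh, hl, hh], dim=1)

# A 1x1 convolution after HaarDownsample would typically mix the 4C sub-band
# channels back down to the desired width.
x = torch.rand(1, 64, 128, 128)
print(HaarDownsample()(x).shape)   # torch.Size([1, 256, 64, 64])
```
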
Global–local prompts guided image-text embedding, alignment and aggregation for multi-label zero-shot learning
IF 2.6 | Q4 | Computer Science
Journal of Visual Communication and Image Representation, Volume 106, Article 104347. Pub Date: 2024-11-28. DOI: 10.1016/j.jvcir.2024.104347
Tiecheng Song, Yu Huang, Feng Yang, Anyong Qin, Yue Zhao, Chenqiang Gao
Abstract: Multi-label zero-shot learning (MLZSL) aims to classify images into multiple unseen label classes, a practical yet challenging task. Recent methods have used vision-language models (VLMs) for MLZSL, but they do not adequately consider the global and local semantic relationships needed to align images and texts, yielding limited classification performance. This paper proposes a novel MLZSL approach, global–local prompts guided image-text embedding, alignment and aggregation (GLP-EAA), to alleviate this problem. On top of a parameter-frozen VLM, the image is divided into patches and a simple adapter produces global and local image embeddings, while global–local prompts yield text embeddings of different granularities. Global–local alignment losses then establish image-text consistency at each granularity level. Finally, global and local scores are aggregated to compute the multi-label classification loss, and the aggregated scores are also used for inference. The approach thus integrates prompt learning, image-text alignment, and classification-score aggregation into a unified learning framework. Experimental results on the NUS-WIDE and MS-COCO datasets demonstrate superiority over state-of-the-art methods on both ZSL and generalized ZSL tasks.
Citations: 0
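
The aggregation step can be pictured as combining an image-level similarity with the best-matching patch similarity per label. The sketch below is an assumed, simplified form with placeholder embeddings and a fixed mixing weight, not the paper's GLP-EAA architecture:

```python
import torch
import torch.nn.functional as F

def aggregate_scores(global_img: torch.Tensor,   # (B, D)    global image embedding
                     local_img: torch.Tensor,    # (B, P, D) patch-level embeddings
                     global_txt: torch.Tensor,   # (L, D)    one text embedding per label
                     local_txt: torch.Tensor,    # (L, D)
                     alpha: float = 0.5,
                     temperature: float = 0.07) -> torch.Tensor:
    """Illustrative global/local score aggregation for multi-label prediction.
    Local scores take the best-matching patch per label; alpha balances the two."""
    g = F.normalize(global_img, dim=-1) @ F.normalize(global_txt, dim=-1).t()   # (B, L)
    patch_sim = torch.einsum("bpd,ld->bpl",
                             F.normalize(local_img, dim=-1),
                             F.normalize(local_txt, dim=-1))                     # (B, P, L)
    local = patch_sim.max(dim=1).values                                          # (B, L)
    return (alpha * g + (1 - alpha) * local) / temperature   # logits for a multi-label loss

logits = aggregate_scores(torch.rand(2, 512), torch.rand(2, 49, 512),
                          torch.rand(81, 512), torch.rand(81, 512))
print(logits.shape)   # torch.Size([2, 81])
```
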
FormerPose: An efficient multi-scale fusion Transformer network based on RGB-D for 6D pose estimation
IF 2.6 | Q4 | Computer Science
Journal of Visual Communication and Image Representation, Volume 106, Article 104346. Pub Date: 2024-11-28. DOI: 10.1016/j.jvcir.2024.104346
Pihong Hou, Yongfang Zhang, Yi Wu, Pengyu Yan, Fuqiang Zhang
Abstract: 6D pose estimation based on RGB-D plays a crucial role in object localization and is widely used in robotics. However, traditional CNN-based methods often face limitations, particularly in visually complex scenes where objects have few distinctive features or are occluded. To address these limitations, this paper proposes FormerPose, a holistic 6D pose estimation method that directly regresses the object pose with an efficient multi-scale fusion Transformer network based on RGB-D input. FormerPose efficiently extracts the color and geometric features of objects at different scales and fuses them using self-attention and a dense-fusion method, making it suitable for more constrained scenes. The network achieves an improved trade-off between computational efficiency and model performance, producing superior results on benchmark datasets including LineMOD, LineMOD-Occlusion, and YCB-Video. A series of robot grasping experiments further verifies the robustness and practicality of the method.
Citations: 0
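
As a rough illustration of fusing per-point color and geometric features with self-attention before direct pose regression: the dimensions, layers, and pose parameterization below are assumptions for the sketch, not FormerPose's design:

```python
import torch
import torch.nn as nn

class DenseFusionAttention(nn.Module):
    """Illustrative fusion of per-point color and geometric features followed by
    self-attention and a direct pose-regression head."""
    def __init__(self, color_dim=128, geo_dim=128, fused_dim=256, heads=4):
        super().__init__()
        self.proj = nn.Linear(color_dim + geo_dim, fused_dim)
        self.attn = nn.MultiheadAttention(fused_dim, heads, batch_first=True)
        self.head = nn.Linear(fused_dim, 7)   # quaternion (4) + translation (3)

    def forward(self, color_feat, geo_feat):
        # color_feat, geo_feat: (B, N, C) features for N sampled object points.
        fused = self.proj(torch.cat([color_feat, geo_feat], dim=-1))
        fused, _ = self.attn(fused, fused, fused)   # model long-range relations
        pooled = fused.mean(dim=1)                  # global object descriptor
        return self.head(pooled)                    # direct pose regression

pose = DenseFusionAttention()(torch.rand(2, 500, 128), torch.rand(2, 500, 128))
print(pose.shape)   # torch.Size([2, 7])
```
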
Contour-based object forecasting for autonomous driving
IF 2.6 | Q4 | Computer Science
Journal of Visual Communication and Image Representation, Volume 106, Article 104343. Pub Date: 2024-11-28. DOI: 10.1016/j.jvcir.2024.104343
Jaeseok Jang, Dahyun Kim, Dongkwon Jin, Chang-Su Kim
Abstract: This paper proposes a novel algorithm, contour-based object forecasting (COF), which simultaneously performs contour-based segmentation and depth estimation of objects in future frames for autonomous driving systems. The algorithm consists of encoding, future-forecasting, decoding, and 3D-rendering stages. First, features are extracted from the observed frames, comprising past and current frames. Second, a future forecast module predicts the features of future frames from these causal features. Third, the predicted features are decoded into contour and depth estimates, and object depth maps aligned with the segmentation masks are obtained through depth completion using the predicted contours. Finally, the forecasted objects are rendered in 3D space from the prediction results. Experimental results demonstrate that the algorithm reliably forecasts the contours and depths of objects in future frames and that the 3D renderings intuitively visualize the future locations of the objects.
Citations: 0
Person re-identification transformer with patch attention and pruning
IF 2.6 | Q4 | Computer Science
Journal of Visual Communication and Image Representation, Volume 106, Article 104348. Pub Date: 2024-11-26. DOI: 10.1016/j.jvcir.2024.104348
Fabrice Ndayishimiye, Gang-Joon Yoon, Joonjae Lee, Sang Min Yoon
Abstract: Person re-identification (Re-ID), widely used in surveillance and tracking systems, aims to match individuals across different camera views as they move through a scene, maintaining identity across views. Recent advances have introduced convolutional neural networks (CNNs) and vision transformers (ViTs) as promising solutions: CNN-based methods excel at local feature extraction, while ViTs have emerged as effective alternatives that capture long-range dependencies through multi-head self-attention without relying on convolution and downsampling. Person Re-ID nevertheless still faces challenges such as changes in illumination, viewpoint, and pose, low resolution, and partial occlusion. To address the limitations of widely used person Re-ID datasets and improve generalization, this paper presents a novel person Re-ID method that enhances global and local information interactions using self-attention modules within a ViT network and leverages dynamic pruning to extract and prioritize the most informative image patches. The resulting patch-selection-and-pruning model is a robust feature extractor even under partial occlusion, background clutter, and illumination variation. Empirical validation demonstrates superior performance compared to previous approaches and adaptability across various domains.
Citations: 0
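
Dynamic patch pruning in ViTs is commonly driven by attention scores; the sketch below shows one generic variant of that pattern (keeping the patch tokens most attended by the class token), as an assumption rather than the paper's specific selection rule:

```python
import torch

def prune_patch_tokens(tokens: torch.Tensor, cls_attention: torch.Tensor,
                       keep_ratio: float = 0.7) -> torch.Tensor:
    """Keep only the patch tokens the [CLS] token attends to most strongly.
    tokens: (B, N, D) patch tokens; cls_attention: (B, N) attention of CLS to
    each patch (e.g., averaged over heads)."""
    B, N, D = tokens.shape
    k = max(1, int(N * keep_ratio))
    idx = cls_attention.topk(k, dim=1).indices                   # (B, k) most-attended patches
    return tokens.gather(1, idx.unsqueeze(-1).expand(B, k, D))   # (B, k, D) kept tokens

kept = prune_patch_tokens(torch.rand(2, 196, 768), torch.rand(2, 196))
print(kept.shape)   # torch.Size([2, 137, 768])
```
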