Signal Processing-Image Communication: Latest Articles

Struck-out handwritten word detection and restoration for automatic descriptive answer evaluation
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2024-09-30 DOI: 10.1016/j.image.2024.117214
Dajian Zhong, Shivakumara Palaiahnakote, Umapada Pal, Yue Lu
Abstract: Unlike objective-type evaluation, descriptive answer evaluation is challenging because answers are unpredictable and written in a free style, and it has therefore received special attention from many researchers. Automatic answer evaluation avoids human intervention in marking, eliminates marking bias and, most importantly, saves substantial manpower. Developing an efficient and accurate system involves several open challenges; one of them is cleaning the document, which includes removing struck-out words and restoring them. In this paper, we propose a system for struck-out handwritten word detection and restoration for automatic descriptive answer evaluation. The work has two stages. In the first stage, we combine ResNet50 with a diagonal-line (principal and secondary diagonal) segmentation module to detect words, and then classify struck-out words with a classification network. In the second stage, we combine a U-Net backbone with a Bi-LSTM that restores struck-out words by predicting, from the relationships between pixel sequences, the pixels that carry the actual text information. Experimental results on our dataset and on standard datasets show that the proposed model is effective for struck-out word detection and restoration, and a comparative study shows that it outperforms state-of-the-art methods on both tasks.
(Volume 130, Article 117214)
Citations: 0
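As a concrete starting point, here is a minimal sketch of the classification step of the detection stage: a ResNet50 backbone, as named in the abstract, with a binary struck-out/clean head. The paper's diagonal-line segmentation module and the U-Net/Bi-LSTM restoration stage are not reproduced; the class name, input size, and label convention are illustrative assumptions.

```python
# Hedged sketch: ResNet50 with a 2-way head for struck-out vs. clean word crops.
import torch
import torch.nn as nn
from torchvision import models

class StruckOutClassifier(nn.Module):          # name is an assumption, not the paper's
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # {clean, struck-out}
        self.net = backbone

    def forward(self, word_crops):             # (B, 3, H, W) detected word images
        return self.net(word_crops)            # (B, 2) logits

model = StruckOutClassifier().eval()
with torch.no_grad():
    logits = model(torch.randn(4, 3, 224, 224))
    is_struck_out = logits.argmax(dim=1)       # 1 = struck-out, by convention here
```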
Full-reference calibration-free image quality assessment
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2024-09-23 DOI: 10.1016/j.image.2024.117212
Paolo Giannitrapani, Elio D. Di Claudio, Giovanni Jacovitti
Abstract: Objective Image Quality Assessment (IQA) methods often lack linearity of their quality estimates with respect to scores expressed by human subjects, and IQA metrics therefore undergo a calibration process based on subjective quality examples. However, example-based training presents a generalization challenge that hampers comparison of results across different applications and operating conditions. In this paper, new Full-Reference (FR) techniques are introduced that provide estimates linearly correlated with human scores without calibration. We show that, on natural images, applying estimation theory and psychophysical principles to images degraded by Gaussian blur leads to a so-called canonical IQA method whose estimates are linearly correlated with both the subjective scores and the viewing distance. We then show that any mainstream IQA method can be reduced to the canonical method by converting its metric on the basis of a single specimen image. The proposed scheme is extended to wide classes of degraded images, e.g., noisy and compressed images. The resulting calibration-free FR IQA methods allow comparability and interoperability across different imaging systems and viewing distances. A comparison of their statistical performance with state-of-the-art calibration-prone methods is finally provided, showing that the presented model is a valid alternative to the final five-parameter calibration step of IQA methods; the two parameters of the model have a clear operational meaning and are easily determined in practical applications. The enhanced performance is achieved across multiple viewing-distance databases by independently realigning the blur values associated with each distance.
(Volume 130, Article 117212)
Citations: 0
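The "single specimen image" conversion can be pictured as follows. This is a hedged sketch of one plausible reading of the abstract: compute a mainstream metric (SSIM here) between a specimen image and its Gaussian-blurred versions, then invert that monotone curve so any metric value maps onto a canonical blur scale. The authors' actual conversion and their two-parameter model are not reproduced; this only illustrates metric-to-canonical-scale remapping.

```python
# Hedged sketch: remap SSIM values onto an equivalent Gaussian-blur sigma
# measured on a single specimen image.
import numpy as np
from skimage import data, filters
from skimage.metrics import structural_similarity as ssim

specimen = data.camera() / 255.0                    # reference specimen in [0, 1]
sigmas = np.linspace(0.1, 5.0, 25)
# SSIM of the specimen against its blurred versions: a monotone decreasing curve.
curve = [ssim(specimen, filters.gaussian(specimen, sigma=s), data_range=1.0)
         for s in sigmas]

def canonical_blur(metric_value):
    # Invert the monotone SSIM-vs-sigma curve by linear interpolation.
    return float(np.interp(metric_value, curve[::-1], sigmas[::-1]))

score = ssim(specimen, filters.gaussian(specimen, sigma=2.0), data_range=1.0)
print(canonical_blur(score))                        # ~2.0, the canonical blur value
```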
Improved multi-focus image fusion using online convolutional sparse coding based on sample-dependent dictionary
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2024-09-19 DOI: 10.1016/j.image.2024.117213
Sidi He, Chengfang Zhang, Haoyue Li, Ziliang Feng
Abstract: Multi-focus image fusion merges multiple images captured from different focused regions of a scene to create a fully focused image. Convolutional sparse coding (CSC) methods are commonly employed for accurate extraction of focused regions, but they often disregard computational cost. To overcome this, an online convolutional sparse coding (OCSC) technique was introduced, but its performance remains limited by the number of filters it can afford. To address these limitations, a Sample-Dependent Dictionary-based Online Convolutional Sparse Coding (SCSC) approach was proposed: SCSC enables the use of additional filters while maintaining low time and space complexity on high-dimensional or large data. Leveraging the computational efficiency and effective global feature extraction of SCSC, we propose a novel method for multi-focus image fusion. Our method performs a two-layer decomposition of each source image, yielding a base layer capturing the predominant features and a detail layer containing finer details. Merging the fused base and detail layers yields the reconstructed final image. The proposed method significantly mitigates artifacts, preserves fine details at focus boundaries, and shows notable gains in both visual quality and objective evaluation of multi-focus image fusion.
(Volume 130, Article 117213)
Citations: 0
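A minimal sketch of only the two-layer decompose-and-merge step described in the abstract, with a box filter standing in for whatever base-layer extraction the authors use; the SCSC dictionary learning itself is not reproduced, and the kernel size is an assumed knob.

```python
# Hedged sketch: base/detail split, average the bases, max-abs select the details.
import cv2
import numpy as np

def fuse_two_layer(img_a, img_b, ksize=31):
    a, b = img_a.astype(np.float32), img_b.astype(np.float32)
    base_a = cv2.blur(a, (ksize, ksize))        # base layer: low-frequency content
    base_b = cv2.blur(b, (ksize, ksize))
    det_a, det_b = a - base_a, b - base_b       # detail layers: fine structure
    fused_base = 0.5 * (base_a + base_b)        # average the smooth content
    fused_det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return np.clip(fused_base + fused_det, 0, 255).astype(np.uint8)
```

Max-absolute selection on the detail layer is a common focus-selection heuristic: the in-focus image contributes the stronger local high-frequency response.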
SynFlowMap: A synchronized optical flow remapping for video motion magnification
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2024-09-18 DOI: 10.1016/j.image.2024.117203
Jonathan A.S. Lima, Cristiano J. Miosso, Mylène C.Q. Farias
Abstract: Motion magnification refers to the process of spatially amplifying small movements in a video to reveal important information about a scene. Several motion-magnification methods have been proposed, but most introduce perceptible and annoying visual artifacts. In this paper, we propose a method that first analyzes the optical flow between each original frame and the corresponding frame motion-magnified by another method. It then uses the resulting optical-flow map and the original video to synthesize a combined motion-magnified video. The method can amplify motion by larger factors, invert the direction of motion, and combine filtered motion from multiple frequencies and Eulerian methods. Among other advantages, the proposed approach eliminates artifacts caused by Eulerian motion-magnification methods. We present an extensive qualitative and quantitative analysis of the results compared with the main Eulerian approaches. A final contribution of this work is a new video database that enables quantitative evaluation of motion magnification.
(Volume 130, Article 117203)
Citations: 0
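The remapping idea can be sketched as follows, under stated assumptions: Farneback flow stands in for the authors' flow estimator, and a backward-warp small-motion approximation stands in for their synthesis. Flow is estimated between a clean frame and its counterpart magnified by any Eulerian method, rescaled, and used to warp the clean frame, so the magnified motion is reproduced without the Eulerian intensity artifacts.

```python
# Hedged sketch: re-render magnified motion by warping the artifact-free original.
import cv2
import numpy as np

def synflow_remap(original_gray, magnified_gray, gain=1.0):
    # Inputs: 8-bit single-channel frames. Flow maps original -> magnified.
    flow = cv2.calcOpticalFlowFarneback(original_gray, magnified_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = original_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward warp, small-motion approximation: out(q) ~ original(q - gain*flow(q)).
    map_x = (grid_x - gain * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - gain * flow[..., 1]).astype(np.float32)
    return cv2.remap(original_gray, map_x, map_y, cv2.INTER_LINEAR)
```

With gain > 1 the motion is amplified further, and gain < 0 inverts its direction, matching the capabilities the abstract claims.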
Distributed virtual selective-forwarding units and SDN-assisted edge computing for optimization of multi-party WebRTC videoconferencing
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2024-09-12 DOI: 10.1016/j.image.2024.117173
R. Arda Kırmızıoğlu, A. Murat Tekalp, Burak Görkemli
Abstract: Network service providers (NSPs) have a growing interest in placing network intelligence and services at network edges by deploying software-defined networking (SDN) and network function virtualization infrastructure. In multi-party WebRTC videoconferencing using scalable video coding, a selective forwarding unit (SFU) provides connectivity between peers with heterogeneous bandwidths and terminals. An important question is where in the network to place the SFU service so as to minimize end-to-end delay between all pairs of peers. Clearly, no single cloud SFU location is optimal for all possible peer locations. We propose placing virtual SFUs at network edges, leveraging NSP edge datacenters to optimize end-to-end delay and the usage of overall network resources. The main advantage of the distributed edge-SFU framework is that each peer's video stream travels the shortest path to reach other peers, as in a mesh connection model, while each peer uploads a single stream to its edge SFU, avoiding the upload bottleneck. While the proposed distributed edge-SFU framework applies to both best-effort and managed service models, this paper proposes a premium managed, edge-integrated multi-party WebRTC service architecture with bandwidth and delay guarantees within access networks, achieved by SDN-assisted slicing of edge networks. The performance of the proposed distributed edge-SFU service architecture is demonstrated by experimental results.
(Volume 130, Article 117173)
Citations: 0
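As an illustrative sketch (not the paper's SDN controller logic): the basic placement decision assigns each peer to its lowest-RTT edge datacenter, so every peer uploads one stream to a nearby virtual SFU rather than to a distant central cloud SFU. The RTT table is a hypothetical input, e.g. from controller measurements.

```python
# Hedged sketch: nearest-edge assignment of peers to virtual SFU sites.
def assign_edge_sfus(rtt_ms):
    """rtt_ms: {peer: {edge_site: RTT in ms}} -> {peer: chosen edge_site}."""
    return {peer: min(sites, key=sites.get) for peer, sites in rtt_ms.items()}

rtt = {"alice": {"edge-A": 8, "edge-B": 31},
       "bob":   {"edge-A": 27, "edge-B": 6}}
print(assign_edge_sfus(rtt))   # {'alice': 'edge-A', 'bob': 'edge-B'}
```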
Modulated deformable convolution based on graph convolution network for rail surface crack detection
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2024-09-10 DOI: 10.1016/j.image.2024.117202
Shuzhen Tong, Qing Wang, Xuan Wei, Cheng Lu, Xiaobo Lu
Abstract: Accurate detection of rail surface cracks is essential but difficult because of noise, low contrast, and density inhomogeneity. In this paper, to handle the complex situations in rail surface crack detection, we propose a modulated deformable convolution based on a graph convolution network, named MDCGCN. The MDCGCN is a novel convolution that calculates the offsets and modulation scalars of a modulated deformable convolution by applying a graph convolution network to the feature map. It improves the performance of different networks in rail surface crack detection while only slightly reducing inference speed. Finally, we demonstrate the numerical accuracy, computational efficiency, and effectiveness of our method on the public segmentation dataset RSDD and on our self-built detection dataset SEU-RSCD, and we explore an appropriate structure for integrating the MDCGCN into the UNet baseline network.
(Volume 130, Article 117202)
Citations: 0
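The mechanics of a modulated deformable convolution whose offsets and modulation scalars are predicted from the feature map can be sketched with torchvision's deform_conv2d; a plain 3x3 convolution stands in for the paper's graph-convolution predictor, which is not reproduced here.

```python
# Hedged sketch: predict offsets + modulation from features, apply deform_conv2d.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class ModulatedDeformBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        # Predicts 2*k*k offsets and k*k modulation scalars per spatial location.
        self.predictor = nn.Conv2d(in_ch, 3 * k * k, 3, padding=1)
        self.k = k

    def forward(self, x):
        pred = self.predictor(x)                       # stand-in for the GCN
        off, mod = pred[:, :2 * self.k**2], pred[:, 2 * self.k**2:]
        return deform_conv2d(x, off, self.weight,
                             mask=torch.sigmoid(mod),  # modulation in (0, 1)
                             padding=self.k // 2)

y = ModulatedDeformBlock(16, 32)(torch.randn(1, 16, 64, 64))  # (1, 32, 64, 64)
```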
A global reweighting approach for cross-domain semantic segmentation
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2024-09-07 DOI: 10.1016/j.image.2024.117197
Yuhang Zhang, Shishun Tian, Muxin Liao, Guoguang Hua, Wenbin Zou, Chen Xu
Abstract: Unsupervised domain adaptation for semantic segmentation attracts much research attention because pixel-level annotation is expensive. Since samples differ in how hard they are to adapt, each sample's weight should be set independently, a practice called reweighting. However, existing reweighting methods compute only local reweighting information from predicted results or from context in batches of images from the two domains, which may lead to over-alignment or under-alignment. To handle this issue, we propose a global reweighting approach. Specifically, we first define the target centroid distance, which describes the distance between the source batch data and the target centroid. We then employ the Fréchet Inception Distance metric to evaluate the domain divergence and embed it into the target centroid distance. Finally, a global reweighting strategy is proposed to enhance knowledge transferability in the source-domain supervision. Extensive experiments demonstrate that our approach achieves competitive performance and also improves the performance of other methods.
(Volume 130, Article 117197)
Citations: 0
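An illustrative reading of the reweighting idea (the paper's exact formula, including how the FID term is embedded, is not reproduced): weight each source sample by its feature distance to the target-domain centroid, so source samples that lie closer to the target distribution contribute more to the supervised loss. The softmax form and the temperature tau are assumptions.

```python
# Hedged sketch: centroid-distance-based source-sample weights.
import torch

def global_reweight(src_feats, tgt_feats, tau=1.0):
    centroid = tgt_feats.mean(dim=0)                        # target centroid
    dist = (src_feats - centroid).norm(dim=1)               # per-sample distance
    # Closer samples get larger weights; scaling keeps the mean weight at 1.
    return torch.softmax(-dist / tau, dim=0) * len(src_feats)

weights = global_reweight(torch.randn(8, 256), torch.randn(32, 256))
# Usage: weighted supervised loss = (weights * per_sample_loss).mean()
```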
Memory positional encoding for image captioning
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2024-09-07 DOI: 10.1016/j.image.2024.117201
Xiaobao Yang, Shuai He, Jie Zhang, Sugang Ma, Zhiqiang Hou, Wei Sun
Abstract: Transformer-based architectures represent the state of the art in image captioning. Because of its inherently parallel structure, the Transformer cannot perceive the order of input tokens, so positional encoding is an indispensable component of Transformer-based models. However, most existing absolute positional encodings (APE) have limitations for image captioning: their spatial positional features are predefined and do not generalize well to other forms of data, such as visual data, and the positional features are decoupled from one another and lack internal correlation, which affects the accuracy of the spatial positional context representation of visual or textual semantics to a certain extent. We therefore propose a memory positional encoding (MPE) that generalizes to both the visual encoder and the sequence decoder of image-captioning models. In MPE, each positional feature is generated recursively by a learnable network with a memory function, so the currently generated positional feature effectively inherits information from the previous n positions. In addition, existing positional encodings provide positional features with fixed values and scale; that is, they provide the same positional encoding for different inputs, which is unreasonable. To address these issues of scale and value in practical applications, we further explore a dynamic memory positional encoding (DMPE) based on MPE. DMPE dynamically adjusts and generates positional features according to the input, giving each input a unique positional representation. Extensive experiments on MSCOCO validate the effectiveness of MPE and DMPE.
(Volume 130, Article 117201)
Citations: 0
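A minimal sketch of the recursive generation idea, assuming a GRU cell as the "learnable network with memory": positional features are emitted step by step, so each position's feature inherits state from earlier positions, unlike fixed sinusoidal APE. This is one plausible reading of the abstract, not the authors' released code.

```python
# Hedged sketch: positions generated recurrently, each inheriting prior state.
import torch
import torch.nn as nn

class RecurrentPositionalEncoding(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.cell = nn.GRUCell(d_model, d_model)
        self.start = nn.Parameter(torch.zeros(d_model))   # learned initial state

    def forward(self, seq_len):
        h = self.start.unsqueeze(0)                       # state for position 0
        positions = []
        for _ in range(seq_len):                          # each step carries memory
            h = self.cell(torch.zeros_like(h), h)
            positions.append(h)
        return torch.stack(positions, dim=1)              # (1, seq_len, d_model)

pe = RecurrentPositionalEncoding(512)(20)  # broadcast-add to token embeddings
```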
Style Optimization Networks for real-time semantic segmentation of rainy and foggy weather
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2024-09-07 DOI: 10.1016/j.image.2024.117199
Yifang Huang, Haitao He, Hongdou He, Guyu Zhao, Peng Shi, Pengpeng Fu
Abstract: Semantic segmentation is an essential task in computer vision. Existing semantic segmentation models achieve good results under good weather and lighting conditions, but their effectiveness degrades seriously when the external environment changes. We therefore focus on semantic segmentation in rainy and foggy weather. Fog commonly accompanies rainy conditions and reduces image visibility. In addition, to satisfy the application requirements of mobile devices, the model's computational cost and real-time performance are major concerns of our research. In this paper, we propose a novel Style Optimization Network (SONet) architecture containing a Style Optimization Module (SOM) that dynamically learns style information and a Key information Extraction Module (KEM) that extracts important spatial and contextual information, improving the model's learning ability and robustness under rainy and foggy conditions. Meanwhile, we achieve real-time performance by using lightweight modules and a backbone network with low computational complexity. To validate the effectiveness of SONet, we synthesized a rainy and foggy version of the CityScapes dataset and evaluated the accuracy and complexity of our model. Our model achieves a segmentation accuracy of 75.29% MIoU and 83.62% MPA on an NVIDIA TITAN Xp GPU. Several comparative experiments show that SONet performs well on semantic segmentation under rainy and foggy weather and, owing to its lightweight design, offers a good trade-off between accuracy and model complexity.
(Volume 130, Article 117199)
Citations: 0
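For reference, the two reported metrics, MIoU and MPA, are computed from a confusion matrix as in this compact sketch (standard definitions, not code from the paper):

```python
# MIoU and MPA from a class-confusion matrix.
import numpy as np

def miou_mpa(conf):                  # conf[i, j]: true class i predicted as j
    tp = np.diag(conf).astype(float)
    iou = tp / (conf.sum(0) + conf.sum(1) - tp)   # per-class intersection/union
    pa = tp / conf.sum(1)                         # per-class pixel accuracy
    return iou.mean(), pa.mean()

conf = np.array([[50, 2],
                 [3, 45]])
print(miou_mpa(conf))                # (MIoU, MPA)
```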
A novel theoretical analysis on optimal pipeline of multi-frame image super-resolution using sparse coding
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2024-09-07 DOI: 10.1016/j.image.2024.117198
Mohammad Mahdi Afrasiabi, Reshad Hosseini, Aliazam Abbasfar
Abstract: Super-resolution is the process of obtaining a high-resolution (HR) image from one or more low-resolution (LR) images. Single-image super-resolution (SISR) deals with one LR image, while multi-frame super-resolution (MFSR) employs several LR images to reach the HR output. The MFSR pipeline consists of alignment, fusion, and reconstruction. We conduct a theoretical analysis using sparse coding (SC) and the iterative shrinkage-thresholding algorithm to fill the gap in mathematical justification of the execution order of the optimal MFSR pipeline. Our analysis recommends executing alignment and fusion before the reconstruction stage (whether reconstruction is performed by deconvolution or by SISR techniques). The suggested order ensures better performance in terms of peak signal-to-noise ratio and the structural similarity index, and the optimal pipeline also reduces computational complexity compared with intuitive approaches that apply SISR to each input LR image. We also demonstrate the usefulness of SC, leveraging the sparsity assumption on natural images, for analyzing computer vision tasks such as MFSR. Simulation results support the findings of our theoretical analysis, both quantitatively and qualitatively.
(Volume 130, Article 117198)
Citations: 0
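The recommended execution order (align, then fuse, then reconstruct once) can be sketched as below; ECC translation alignment, mean fusion, and bicubic-plus-unsharp reconstruction are stand-ins for the paper's estimation-theoretic components, chosen only to make the ordering concrete. Note that reconstruction runs a single time at the end, instead of once per frame as in the intuitive SISR-first approach.

```python
# Hedged sketch of the align -> fuse -> reconstruct ordering.
import cv2
import numpy as np

def mfsr_pipeline(frames):                        # list of uint8 grayscale LR frames
    ref = frames[0].astype(np.float32)
    aligned = [ref]
    for f in frames[1:]:                          # 1) alignment (translation ECC)
        warp = np.eye(2, 3, dtype=np.float32)
        _, warp = cv2.findTransformECC(ref, f.astype(np.float32), warp,
                                       cv2.MOTION_TRANSLATION)
        aligned.append(cv2.warpAffine(f.astype(np.float32), warp,
                                      (ref.shape[1], ref.shape[0])))
    fused = np.mean(aligned, axis=0)              # 2) fusion of aligned frames
    up = cv2.resize(fused, None, fx=2, fy=2,      # 3) single final reconstruction
                    interpolation=cv2.INTER_CUBIC)
    blur = cv2.GaussianBlur(up, (0, 0), 1.0)      # unsharp mask as a mild deblur
    return np.clip(1.5 * up - 0.5 * blur, 0, 255).astype(np.uint8)
```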