IEEE Open Journal of Signal Processing: Latest Articles

Efficient Moving Object Segmentation in LiDAR Point Clouds Using Minimal Number of Sweeps
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-01-20 DOI: 10.1109/OJSP.2025.3532199
Zoltan Rozsa;Akos Madaras;Tamas Sziranyi
{"title":"Efficient Moving Object Segmentation in LiDAR Point Clouds Using Minimal Number of Sweeps","authors":"Zoltan Rozsa;Akos Madaras;Tamas Sziranyi","doi":"10.1109/OJSP.2025.3532199","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3532199","url":null,"abstract":"LiDAR point clouds are a rich source of information for autonomous vehicles and ADAS systems. However, they can be challenging to segment for moving objects as - among other things - finding correspondences between sparse point clouds of consecutive frames is difficult. Traditional methods rely on a (global or local) map of the environment, which can be demanding to acquire and maintain in real-world conditions and the presence of the moving objects themselves. This paper proposes a novel approach using as minimal sweeps as possible to decrease the computational burden and achieve mapless moving object segmentation (MOS) in LiDAR point clouds. Our approach is based on a multimodal learning model with single-modal inference. The model is trained on a dataset of LiDAR point clouds and related camera images. The model learns to associate features from the two modalities, allowing it to predict dynamic objects even in the absence of a map and the camera modality. We propose semantic information usage for multi-frame instance segmentation in order to enhance performance measures. We evaluate our approach to the SemanticKITTI and Apollo real-world autonomous driving datasets. Our results show that our approach can achieve state-of-the-art performance on moving object segmentation and utilize only a few (even one) LiDAR frames.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"118-128"},"PeriodicalIF":2.9,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10848132","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LIMMITS'24: Multi-Speaker, Multi-Lingual INDIC TTS With Voice Cloning
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-01-20 DOI: 10.1109/OJSP.2025.3531782
Sathvik Udupa;Jesuraja Bandekar;Abhayjeet Singh;Deekshitha G;Saurabh Kumar;Sandhya Badiger;Amala Nagireddi;Roopa R;Prasanta Kumar Ghosh;Hema A. Murthy;Pranaw Kumar;Keiichi Tokuda;Mark Hasegawa-Johnson;Philipp Olbrich
{"title":"LIMMITS'24: Multi-Speaker, Multi-Lingual INDIC TTS With Voice Cloning","authors":"Sathvik Udupa;Jesuraja Bandekar;Abhayjeet Singh;Deekshitha G;Saurabh Kumar;Sandhya Badiger;Amala Nagireddi;Roopa R;Prasanta Kumar Ghosh;Hema A. Murthy;Pranaw Kumar;Keiichi Tokuda;Mark Hasegawa-Johnson;Philipp Olbrich","doi":"10.1109/OJSP.2025.3531782","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3531782","url":null,"abstract":"The Multi-speaker, Multi-lingual Indic Text to Speech (TTS) with voice cloning (LIMMITS'24) challenge is organized as part of the ICASSP 2024 signal processing grand challenge. LIMMITS'24 aims at the development of voice cloning for the multi-speaker, multi-lingual Text-to-Speech (TTS) model. Towards this, 80 hours of TTS data has been released in each of Bengali, Chhattisgarhi, English (Indian), and Kannada languages. This is in addition to Telugu, Hindi, and Marathi data released during the LIMMITS'23 challenge. The challenge encourages the advancement of TTS in Indian Languages as well as the development of multi-speaker voice cloning techniques for TTS. The three tracks of LIMMITS'24 have provided an opportunity for various researchers and practitioners around the world to explore the state of the art in research for voice cloning with TTS.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"293-302"},"PeriodicalIF":2.9,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10845816","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143489278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Posterior-Based Analysis of Spatio-Temporal Features for Sign Language Assessment
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-01-17 DOI: 10.1109/OJSP.2025.3531781
Neha Tarigopula;Sandrine Tornay;Ozge Mercanoglu Sincan;Richard Bowden;Mathew Magimai.-Doss
{"title":"Posterior-Based Analysis of Spatio-Temporal Features for Sign Language Assessment","authors":"Neha Tarigopula;Sandrine Tornay;Ozge Mercanoglu Sincan;Richard Bowden;Mathew Magimai.-Doss","doi":"10.1109/OJSP.2025.3531781","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3531781","url":null,"abstract":"Sign Language conveys information through multiple channels composed of manual (handshape, hand movement) and non-manual (facial expression, mouthing, body posture) components. Sign language assessment involves giving granular feedback to a learner, in terms of correctness of the manual and non-manual components, aiding the learner's progress. Existing methods rely on handcrafted skeleton-based features for hand movement within a KL-HMM framework to identify errors in manual components. However, modern deep learning models offer powerful spatio-temporal representations for videos to represent hand movement and facial expressions. Despite their success in classification tasks, these representations often struggle to attribute errors to specific sources, such as incorrect handshape, improper movement, or incorrect facial expressions. To address this limitation, we leverage and analyze the spatio-temporal representations from Inflated 3D Convolutional Networks (I3D) and integrate them into the KL-HMM framework to assess sign language videos on both manual and non-manual components. By applying masking and cropping techniques, we isolate and evaluate distinct channels of hand movement, and facial expressions using the I3D model and handshape using the CNN-based model. Our approach outperforms traditional methods based on handcrafted features, as validated through experiments on the SMILE-DSGS dataset, and therefore demonstrates that it can enhance the effectiveness of sign language learning tools.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"284-292"},"PeriodicalIF":2.9,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10845152","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143465818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Correction to “Energy Efficient Signal Detection Using SPRT and Ordered Transmissions in Wireless Sensor Networks”
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-01-17 DOI: 10.1109/OJSP.2024.3519916
Shailee Yagnik;Ramanarayanan Viswanathan;Lei Cao
{"title":"Correction to “Energy Efficient Signal Detection Using SPRT and Ordered Transmissions in Wireless Sensor Networks”","authors":"Shailee Yagnik;Ramanarayanan Viswanathan;Lei Cao","doi":"10.1109/OJSP.2024.3519916","DOIUrl":"https://doi.org/10.1109/OJSP.2024.3519916","url":null,"abstract":"In [1, p. 1124], a footnote is needed on (13) as shown below: begin{equation*}qquadqquadquad{{alpha }^# } < left( {1 - {{c}_1}} right)alpha + left( {1 - left( {1 - {{c}_1}} right)alpha } right)alphaqquadqquadquad hbox{(13)$^{1}$} end{equation*}","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"16-16"},"PeriodicalIF":2.9,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10845022","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Formant Tracking by Combining Deep Neural Network and Linear Prediction
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-01-16 DOI: 10.1109/OJSP.2025.3530876
Sudarsana Reddy Kadiri;Kevin Huang;Christina Hagedorn;Dani Byrd;Paavo Alku;Shrikanth Narayanan
{"title":"Formant Tracking by Combining Deep Neural Network and Linear Prediction","authors":"Sudarsana Reddy Kadiri;Kevin Huang;Christina Hagedorn;Dani Byrd;Paavo Alku;Shrikanth Narayanan","doi":"10.1109/OJSP.2025.3530876","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3530876","url":null,"abstract":"Formant tracking is an area of speech science that has recently undergone a technology shift from classical model-driven signal processing methods to modern data-driven deep learning methods. In this study, these two domains are combined in formant tracking by refining the formants estimated by a data-driven deep neural network (DNN) with formant estimates given by a model-driven linear prediction (LP) method. In the refinement process, the three lowest formants, initially estimated by the DNN-based method, are frame-wise replaced with local spectral peaks identified by the LP method. The LP-based refinement stage can be seamlessly integrated into the DNN without any training. As an LP method, the study advocates the use of quasiclosed phase forward-backward (QCP-FB) analysis. Three spectral representations are compared as DNN inputs: mel-frequency cepstral coefficients (MFCCs), the spectrogram, and the complex spectrogram. Formant tracking performance was evaluated by comparing the proposed refined DNN tracker with seven reference trackers, which included both signal processing and deep learning based methods. As evaluation data, ground truth formants of the Vocal Tract Resonance (VTR) corpus were used. The results demonstrate that the refined DNN trackers outperformed all conventional trackers. The best results were obtained by using the MFCC input for the DNN. The proposed MFCC refinement (MFCC-DNN<sub>QCP-FB</sub>) reduced estimation errors by 0.8 Hz, 12.9 Hz, and 11.7 Hz for the first (F1), second (F2), and third (F3) formants, respectively, compared to the Deep Formants refinement (DeepF<sub>QCP-FB</sub>). When compared to the model-driven KARMA tracking method, the proposed refinement reduced estimation errors by 2.3 Hz, 55.5 Hz, and 143.4 Hz for F1, F2, and F3, respectively. A detailed evaluation across various phonetic categories and gender groups showed that the proposed hybrid refinement approach improves formanttracking performance across most test conditions.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"222-230"},"PeriodicalIF":2.9,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10843356","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143430569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing Classification Models With Sophisticated Counterfactual Images
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-01-16 DOI: 10.1109/OJSP.2025.3530843
Xiang Li;Ren Togo;Keisuke Maeda;Takahiro Ogawa;Miki Haseyama
{"title":"Enhancing Classification Models With Sophisticated Counterfactual Images","authors":"Xiang Li;Ren Togo;Keisuke Maeda;Takahiro Ogawa;Miki Haseyama","doi":"10.1109/OJSP.2025.3530843","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3530843","url":null,"abstract":"In deep learning, training data, which are mainly from realistic scenarios, often carry certain biases. This causes deep learning models to learn incorrect relationships between features when using these training data. However, because these models have <italic>black boxes</i>, these problems cannot be solved effectively. In this paper, we aimed to 1) improve existing processes for generating language-guided counterfactual images and 2) employ counterfactual images to efficiently and directly identify model weaknesses in learning incorrect feature relationships. Furthermore, 3) we combined counterfactual images into datasets to fine-tune the model, thus correcting the model's perception of feature relationships. Through extensive experimentation, we confirmed the improvement in the quality of the generated counterfactual images, which can more effectively enhance the classification ability of various models.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"89-98"},"PeriodicalIF":2.9,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10843353","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Mixture of Emotion Dependent Experts: Facial Expressions Recognition in Videos Through Stacked Expert Models
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-01-16 DOI: 10.1109/OJSP.2025.3530793
Ali N. Salman;Karen Rosero;Lucas Goncalves;Carlos Busso
{"title":"Mixture of Emotion Dependent Experts: Facial Expressions Recognition in Videos Through Stacked Expert Models","authors":"Ali N. Salman;Karen Rosero;Lucas Goncalves;Carlos Busso","doi":"10.1109/OJSP.2025.3530793","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3530793","url":null,"abstract":"Recent advancements in <italic>dynamic facial expression recognition</i> (DFER) have predominantly utilized static features, which are theoretically inferior to dynamic features. However, models fully trained with dynamic features often suffer from over-fitting due to the limited size and diversity of the training data for fully <italic>supervised learning</i> (SL) models. A significant challenge with existing models based on static features in recognizing emotions from videos is their tendency to form biased representations, often unbalanced or skewed towards more prevalent or basic emotional features present in the static domain, especially with posed expression. Therefore, this approach under-represents the nuances present in the dynamic domain. To address this issue, our study introduces a novel approach that we refer to as <italic>mixture of emotion-dependent experts</i> (MoEDE). This strategy relies on emotion-specific feature extractors to produce more diverse emotional static features to train DFER systems. Each emotion-dependent expert focuses exclusively on one emotional category, formulating the problem as binary classifiers. Our DFER model combines these static representations with recurrent models, modeling their temporal relationships. The proposed MoEDE DFER approach achieves a macro F1-score of 74.5%, marking a significant improvement over the baseline, which presented a macro F1-score of 70.9% . The DFER baseline is similar to MoEDE, but it uses a single static feature extractor rather than stacked extractors. Additionally, our proposed approach shows consistent improvements compared to other four popular baselines.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"323-332"},"PeriodicalIF":2.9,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10843404","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143629647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Feasibility Study of Location Authentication for IoT Data Using Power Grid Signatures
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-01-16 DOI: 10.1109/OJSP.2025.3530847
Mudi Zhang;Charana Sonnadara;Sahil Shah;Min Wu
{"title":"Feasibility Study of Location Authentication for IoT Data Using Power Grid Signatures","authors":"Mudi Zhang;Charana Sonnadara;Sahil Shah;Min Wu","doi":"10.1109/OJSP.2025.3530847","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3530847","url":null,"abstract":"Ambient signatures related to the power grid offer an under-utilized opportunity to verify the time and location of sensing data collected by the Internet-of-Things (IoT). Such power signatures as the Electrical Network Frequency (ENF) have been used in multimedia forensics to answer questions about the time and location of audio-visual recordings. Going beyond multimedia data, this paper investigates a refined power signature of Electrical Network Voltage (ENV) for IoT sensing data and carries out a feasibility study of location verification for IoT data. ENV reflects the variations of the power system's supply voltage over time and is also present in the optical sensing data, akin to ENF. A physical model showing the presence of ENV in the optical sensing data is presented along with the corresponding signal processing mechanisms to estimate and utilize ENV signals from the power and optical sensing data as location stamps. Experiments are conducted in the State of Maryland of the United States to demonstrate the feasibility of using ENV signals for location authentication of IoT data.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"405-416"},"PeriodicalIF":2.9,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10843385","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143740238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Jointly Learning From Unimodal and Multimodal-Rated Labels in Audio-Visual Emotion Recognition
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-01-15 DOI: 10.1109/OJSP.2025.3530274
Lucas Goncalves;Huang-Cheng Chou;Ali N. Salman;Chi-Chun Lee;Carlos Busso
{"title":"Jointly Learning From Unimodal and Multimodal-Rated Labels in Audio-Visual Emotion Recognition","authors":"Lucas Goncalves;Huang-Cheng Chou;Ali N. Salman;Chi-Chun Lee;Carlos Busso","doi":"10.1109/OJSP.2025.3530274","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3530274","url":null,"abstract":"<italic>Audio-visual emotion recognition</i> (AVER) has been an important research area in <italic>human-computer interaction</i> (HCI). Traditionally, audio-visual emotional datasets and corresponding models derive their ground truths from annotations obtained by raters after watching the audio-visual stimuli. This conventional method, however, neglects the nuanced human perception of emotional states, which varies when annotations are made under different emotional stimuli conditions—whether through unimodal or multimodal stimuli. This study investigates the potential for enhanced AVER system performance by integrating diverse levels of annotation stimuli, reflective of varying perceptual evaluations. We propose a two-stage training method to train models with the labels elicited by audio-only, face-only, and audio-visual stimuli. Our approach utilizes different levels of annotation stimuli according to which modality is present within different layers of the model, effectively modeling annotation at the unimodal and multi-modal levels to capture the full scope of emotion perception across unimodal and multimodal contexts. We conduct the experiments and evaluate the models on the CREMA-D emotion database. The proposed methods achieved the best performances in macro-/weighted-F1 scores. Additionally, we measure the model calibration, performance bias, and fairness metrics considering the age, gender, and race of the AVER systems.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"165-174"},"PeriodicalIF":2.9,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10842047","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143404005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sparse Regularization With Reverse Sorted Sum of Squares via an Unrolled Difference-of-Convex Approach
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-01-14 DOI: 10.1109/OJSP.2025.3529312
Takayuki Sasaki;Kazuya Hayase;Masaki Kitahara;Shunsuke Ono
{"title":"Sparse Regularization With Reverse Sorted Sum of Squares via an Unrolled Difference-of-Convex Approach","authors":"Takayuki Sasaki;Kazuya Hayase;Masaki Kitahara;Shunsuke Ono","doi":"10.1109/OJSP.2025.3529312","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3529312","url":null,"abstract":"This paper proposes a sparse regularization method with a novel sorted regularization function. Sparse regularization is commonly used to solve underdetermined inverse problems. Traditional sparse regularization functions, such as <inline-formula><tex-math>$L_{1}$</tex-math></inline-formula>-norm, suffer from problems like amplitude underestimation and vanishing perturbations. The reverse ordered weighted <inline-formula><tex-math>$L_{1}$</tex-math></inline-formula>-norm (ROWL) addresses these issues but introduces new challenges. These include developing an algorithm grounded in theory, not heuristics, reducing computational complexity, enabling the automatic determination of numerous parameters, and ensuring the number of iterations remains feasible. In this study, we propose a novel sparse regularization function called the reverse sorted sum of squares (RSSS) and then construct an unrolled algorithm that addresses both the aforementioned two problems and these four challenges. The core idea of our proposed method lies in transforming the optimization problem into a difference-of-convex programming problem, for which solutions are known. In experiments, we apply the RSSS regularization method to image deblurring and super-resolution tasks and confirmed its superior performance compared to conventional methods, all with feasible computational complexity.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"57-67"},"PeriodicalIF":2.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10840312","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0