2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW): Latest Publications

Deep Learning-Based Modulation Classification for OFDM Systems Without Symbol-Level Synchronization
Byungjun Kim, V. Sathyanarayanan, C. Mecklenbräuker, P. Gerstoft
{"title":"Deep Learning-Based Modulation Classification for OFDM Systems Without Symbol-Level Synchronization","authors":"Byungjun Kim, V. Sathyanarayanan, C. Mecklenbräuker, P. Gerstoft","doi":"10.1109/ICASSPW59220.2023.10193676","DOIUrl":"https://doi.org/10.1109/ICASSPW59220.2023.10193676","url":null,"abstract":"Deep learning (DL)-based modulation classification of incoherently received orthogonal frequency division multiplexing (OFDM) signals is studied. We propose a novel preprocessing algorithm to build features characterizing the modulation of OFDM signals, which are insensitive to synchronization error. With obtained features, pilot subcarrier indices used for CFO correction may also be estimated. The features obtained with the proposed algorithm are classified with a convolutional neural network (CNN)-based classifier. We have evaluated classification performance with simulated and hardware-generated data. Using these features, the modulation classifier outperforms existing DL-based classifiers which assume symbol-level synchronization with up to 25% classification accuracy performance gain.","PeriodicalId":158726,"journal":{"name":"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131701428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
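As an illustration of the classification stage described in the abstract above, the following minimal PyTorch sketch feeds a hand-built constellation-density feature into a small CNN. The feature construction (a 2-D histogram of complex samples) is a generic stand-in, not the synchronization-insensitive preprocessing the authors propose, and the layer sizes and class count are arbitrary assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def constellation_histogram(iq, bins=32, lim=2.0):
    # Generic 2-D density of complex samples; a stand-in feature, NOT the
    # synchronization-insensitive features proposed in the paper.
    h, _, _ = np.histogram2d(iq.real, iq.imag, bins=bins,
                             range=[[-lim, lim], [-lim, lim]])
    h = h / h.sum()
    return torch.tensor(h, dtype=torch.float32).unsqueeze(0)  # shape (1, bins, bins)

class SmallCNN(nn.Module):
    # Tiny CNN classifier over the feature map (layer sizes are arbitrary).
    def __init__(self, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 8 * 8, n_classes))

    def forward(self, x):
        return self.net(x)

# Toy usage: a noisy QPSK-like batch of samples (random placeholder data).
rng = np.random.default_rng(0)
iq = (rng.choice([-1.0, 1.0], 1024) + 1j * rng.choice([-1.0, 1.0], 1024)) / np.sqrt(2)
iq = iq + 0.1 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
x = constellation_histogram(iq).unsqueeze(0)   # add batch dimension -> (1, 1, 32, 32)
print(SmallCNN()(x).shape)                     # torch.Size([1, 4])
```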
Angular Central Gaussian and Watson Mixture Models for Assessing Dynamic Functional Brain Connectivity During a Motor Task
A. S. Olsen, Emil Ortvald, K.H. Madsen, Mikkel N. Schmidt, Morten Mørup
{"title":"Angular Central Gaussian and Watson Mixture Models for Assessing Dynamic Functional Brain Connectivity During a Motor Task","authors":"A. S. Olsen, Emil Ortvald, K.H. Madsen, Mikkel N. Schmidt, Morten Mørup","doi":"10.1109/ICASSPW59220.2023.10193021","DOIUrl":"https://doi.org/10.1109/ICASSPW59220.2023.10193021","url":null,"abstract":"The development of appropriate models for dynamic functional connectivity is imperative to gain a better understanding of the brain both during rest and while performing a task. Leading eigenvector dynamics analysis is among the favored methods for assessing frame-wise connectivity, but eigenvectors are distributed on the sign-symmetric unit hypersphere, which is typically disregarded during modeling. Here we develop both mixture model and Hidden Markov model formulations for two sign-symmetric spherical statistical distributions and display their performance on synthetic data and task-fMRI data involving a finger-tapping task.","PeriodicalId":158726,"journal":{"name":"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131980212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
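For readers unfamiliar with the frame-wise connectivity representation this abstract refers to, the sketch below computes leading eigenvectors of instantaneous phase-coherence matrices from a (time x regions) signal array; these unit vectors are defined only up to sign, which is exactly the sign symmetry the Watson and angular central Gaussian mixtures are meant to respect. The pipeline details (Hilbert-transform phases, cosine coherence) follow the common LEiDA recipe and are assumptions here, not taken from the paper.

```python
import numpy as np
from scipy.signal import hilbert

def leading_eigenvectors(bold):
    # bold: (T, P) array of regional time series (e.g. fMRI BOLD).
    # Returns (T, P) unit-norm leading eigenvectors of the frame-wise
    # phase-coherence matrices; each vector is defined only up to sign.
    phases = np.angle(hilbert(bold, axis=0))       # instantaneous phase per region
    T, P = phases.shape
    V = np.empty((T, P))
    for t in range(T):
        coh = np.cos(phases[t][:, None] - phases[t][None, :])   # (P, P) coherence
        _, U = np.linalg.eigh(coh)
        v = U[:, -1]                               # eigenvector of the largest eigenvalue
        V[t] = v / np.linalg.norm(v)
    return V

rng = np.random.default_rng(0)
V = leading_eigenvectors(rng.standard_normal((200, 10)))
print(V.shape)      # (200, 10); rows live on the sign-symmetric unit hypersphere
```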
In-Band Full-Duplex Solutions in the Paradigm of Integrated Sensing and Communication
A. Harutyunyan, P. Sen
{"title":"In-Band Full-Duplex Solutions in the Paradigm of Integrated Sensing and Communication","authors":"A. Harutyunyan, P. Sen","doi":"10.1109/ICASSPW59220.2023.10193639","DOIUrl":"https://doi.org/10.1109/ICASSPW59220.2023.10193639","url":null,"abstract":"The paper discusses different aspects in favor of using in-band full-duplex frontends for integrated sensing and communication (ISAC), considered for deployment of future 5G/6G infrastructure. Possible scenarios for practical utilization of the technology are discussed with additional focus on self-interference cancellation issue. An possible system implementation on abstract level is presented for cellular communication scenario. The main contribution of the paper is to highlight the hardware realization challenges and potential solutions, that can mitigate additional signal processing overheads for ISAC systems.","PeriodicalId":158726,"journal":{"name":"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128041270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
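Since the abstract highlights self-interference cancellation as the central issue, here is a minimal NumPy sketch of the standard digital baseline: estimate an FIR model of the self-interference channel from the known transmit samples by least squares and subtract the reconstructed interference. This is a textbook illustration, not one of the hardware solutions the paper discusses; the tap count and signal model are placeholder assumptions.

```python
import numpy as np

def digital_si_cancellation(tx, rx, n_taps=8):
    # Fit an FIR model of the self-interference (SI) channel from the known
    # transmit samples via least squares, then subtract the reconstructed SI
    # from the receive samples. Returns the residual after cancellation.
    N = len(tx)
    X = np.zeros((N, n_taps), dtype=tx.dtype)
    for k in range(n_taps):
        X[k:, k] = tx[:N - k]                      # delayed copies of the transmit signal
    h = np.linalg.lstsq(X, rx, rcond=None)[0]      # estimated SI channel taps
    return rx - X @ h

rng = np.random.default_rng(1)
tx = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
si = np.convolve(tx, [0.9, 0.3, 0.1])[:4096]       # simulated 3-tap SI channel
rx = si + 0.01 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
res = digital_si_cancellation(tx, rx)
print(10 * np.log10(np.mean(np.abs(rx) ** 2) / np.mean(np.abs(res) ** 2)))  # cancellation (dB)
```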
Site Diversity Switching Prediction at Q Band Using Deep Learning Techniques in Satellite Communications
Maria Kaselimi, A. J. Roumeliotis, A. Z. Papafragkakis, A. Panagopoulos, N. Doulamis
{"title":"Site Diversity Switching Prediction AT Q Band Using Deep Learning Techniques in Satellite Communications","authors":"Maria Kaselimi, A. J. Roumeliotis, A. Z. Papafragkakis, A. Panagopoulos, N. Doulamis","doi":"10.1109/ICASSPW59220.2023.10193159","DOIUrl":"https://doi.org/10.1109/ICASSPW59220.2023.10193159","url":null,"abstract":"An efficient deep learning (DL) architecture for switching prediction in site diversity schemes for Q band (39.402GHz) links is presented. The paper proposes the implementation of a DL detector (D) model in each station, that raises a flag when a rain event occurs, exploiting the benefits of transformer networks. When the event is detected, a DL regressor (R) model is triggered to predict future attenuation values for the specific event in each station. Both detector and regressor models consist of attention mechanisms that identify temporal dependencies between the input sequence elements. The experimental evaluation along with state of the art techniques indicate promising results.","PeriodicalId":158726,"journal":{"name":"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122212291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
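As a rough illustration of the detector (D) stage, the sketch below wires a small transformer encoder over a window of received-signal samples and pools it into a single event/no-event logit. All dimensions, the pooling choice, and the input format are illustrative assumptions; the paper's actual architecture and the regressor (R) stage are not reproduced here.

```python
import torch
import torch.nn as nn

class RainEventDetector(nn.Module):
    # Flags whether a window of received-signal-level samples contains a rain
    # event. All sizes are illustrative, not the paper's architecture.
    def __init__(self, d_model=32, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(1, d_model)          # scalar samples -> model dimension
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)   # self-attention over time
        self.head = nn.Linear(d_model, 1)          # single event / no-event logit

    def forward(self, x):                          # x: (batch, seq_len, 1)
        h = self.encoder(self.proj(x))
        return self.head(h.mean(dim=1)).squeeze(-1)   # mean-pool over time

x = torch.randn(8, 60, 1)                          # e.g. 60 samples per window, batch of 8
print(RainEventDetector()(x).shape)                # torch.Size([8])
```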
Human Activity Recognition From Motion and Acoustic Sensors Using Contrastive Learning
Rui Zhou, Running Zhao, Edith C. H. Ngai
{"title":"Human Activity Recognition From Motion and Acoustic Sensors Using Contrastive Learning","authors":"Rui Zhou, Running Zhao, Edith C. H. Ngai","doi":"10.1109/ICASSPW59220.2023.10192969","DOIUrl":"https://doi.org/10.1109/ICASSPW59220.2023.10192969","url":null,"abstract":"In this paper, we formulate human activity recognition as a downstream task of pretrained multimodal contrastive learning (MCL) models and break the convention of the one-modality-to-one-modality contrastive paradigm by allowing the models to have more than one source modality. Different from the prevailing assumption in MCL that one source modality and one target modality are the counterparts of each other, this work considers the possibility where it takes multiple source modalities with complementary information to match up to a target modality. In particular, we leverage a large-scale pretrained audio-language contrastive model and extend it to accepting IMU and audio input. The experiment results indicate the superiority of using complementary source modalities over using any source modality alone with 10.3% to 35.0% performance gain.","PeriodicalId":158726,"journal":{"name":"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115329544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
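The core idea of matching several complementary source modalities to one target can be written down as a symmetric InfoNCE loss over fused embeddings. The sketch below is a minimal PyTorch version with simple additive fusion of IMU and audio embeddings; the fusion rule, embedding size, and temperature are assumptions, and no pretrained audio-language model is involved.

```python
import torch
import torch.nn.functional as F

def multi_source_infonce(imu_emb, audio_emb, text_emb, temperature=0.07):
    # Symmetric InfoNCE between a fused (IMU + audio) source embedding and a
    # target embedding; additive fusion is a placeholder for a learned fusion.
    src = F.normalize(imu_emb + audio_emb, dim=-1)
    tgt = F.normalize(text_emb, dim=-1)
    logits = src @ tgt.t() / temperature           # (batch, batch) similarity matrix
    labels = torch.arange(len(src), device=src.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

loss = multi_source_infonce(torch.randn(16, 128),
                            torch.randn(16, 128),
                            torch.randn(16, 128))
print(float(loss))
```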
A Unified Semantic Segmentation and Object Detection Framework for Synthetic Aperture Sonar Imagery
Shannon-Morgan Steele
{"title":"A Unified Semantic Segmentation and Object Detection Framework for Synthetic Aperture Sonar Imagery","authors":"Shannon-Morgan Steele","doi":"10.1109/ICASSPW59220.2023.10193155","DOIUrl":"https://doi.org/10.1109/ICASSPW59220.2023.10193155","url":null,"abstract":"Manually identifying objects in synthetic aperture sonar (SAS) imagery is costly and time consuming, making identification through computer vision and deep learning techniques an appealing alternative. Depending on the application, a generalized map (semantic segmentation) and/or a characterization of each individual object (object detection) may be desired. Here, we demonstrate a framework that allows us to simultaneously generate both semantic segmentation maps and object detections with a single deep learning model by chaining together a U-Net model with k-means clustering and connected components. This framework streamlines the model training phase by allowing us to utilize a set of semantically segmented training data to yield both semantic segmentation and bounding box predictions. We demonstrate that the deep learning model can achieve accurate predictions with a small training set through transfer learning from a convolutional neural network pretrained on optical imagery. Results from this unified framework will be presented on images of boulders collected during various surveys using a Kraken Robotics miniature SAS (MINSAS).","PeriodicalId":158726,"journal":{"name":"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124146083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
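The post-processing chain (segmentation map to connected components to bounding boxes) can be illustrated with SciPy in a few lines. The sketch below thresholds a per-pixel probability map and labels connected components to obtain boxes; note that the paper uses k-means on the network output rather than a fixed threshold, so the threshold and minimum-area filter here are simplifying assumptions.

```python
import numpy as np
from scipy import ndimage

def detections_from_segmentation(prob_map, threshold=0.5, min_area=20):
    # Threshold a per-pixel object-probability map (e.g. U-Net output), label
    # connected components, and return one bounding box per component.
    mask = prob_map > threshold
    labels, _ = ndimage.label(mask)
    boxes = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        ys, xs = sl
        if (ys.stop - ys.start) * (xs.stop - xs.start) >= min_area:
            boxes.append((xs.start, ys.start, xs.stop, ys.stop))  # (x0, y0, x1, y1)
    return mask, boxes

rng = np.random.default_rng(2)
prob = 0.1 * rng.random((128, 128))                # background noise
prob[40:60, 70:100] = 0.95                         # one synthetic "boulder"
mask, boxes = detections_from_segmentation(prob)
print(boxes)                                       # [(70, 40, 100, 60)]
```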
Topological Analysis of Low Dimensional Phase Space Trajectories of High Dimensional EEG Signals For Classification of Interictal Epileptiform Discharges
A. Stiehl, M. Flammer, F. Anselstetter, N. Ille, H. Bornfleth, S. Geißelsöder, C. Uhl
{"title":"Topological Analysis of Low Dimensional Phase Space Trajectories of High Dimensional EEG Signals For Classification of Interictal Epileptiform Discharges","authors":"A. Stiehl, M. Flammer, F. Anselstetter, N. Ille, H. Bornfleth, S. Geißelsöder, C. Uhl","doi":"10.1109/ICASSPW59220.2023.10193167","DOIUrl":"https://doi.org/10.1109/ICASSPW59220.2023.10193167","url":null,"abstract":"A new topology based feature extraction method for classification of interictal epileptiform discharges (IEDs) in EEG recordings from patients with epilepsy is proposed. After dimension reduction of the recorded EEG signal, using dynamical component analysis (DyCA) or principal component analysis (PCA), a persistent homology analysis of the resulting phase space trajectories is performed. Features are extracted from the persistent homology analysis and used to train and evaluate a support vector machine (SVM). Classification results based on these persistent features are compared with statistical features of the dimension-reduced signals and combinations of all of these features. Combining the persistent and statistical features improves the results (accuracy 94.7 %) compared to using only statistical feature extraction, whereas applying only persistent features does not achieve sufficient performance. For this classification example the choice of the dimension reduction technique does not significantly influence the classification performance of the algorithm.","PeriodicalId":158726,"journal":{"name":"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124225333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
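A minimal version of the "persistence features plus SVM" pipeline might look like the sketch below: compute persistence diagrams of a low-dimensional trajectory with the ripser package, summarize each diagram with a couple of statistics, and train a scikit-learn SVM. The specific summaries (maximum lifetime, total persistence), the toy circle-versus-blob data, and the choice of ripser are illustrative assumptions, not the authors' feature set or data.

```python
import numpy as np
from ripser import ripser                  # pip install ripser
from sklearn.svm import SVC

def persistence_features(traj, maxdim=1):
    # Summary statistics of the persistence diagrams of a low-dimensional
    # trajectory (rows = time samples after PCA / DyCA). The chosen summaries
    # (max lifetime, total persistence per dimension) are illustrative only.
    dgms = ripser(traj, maxdim=maxdim)["dgms"]
    feats = []
    for dgm in dgms:
        finite = dgm[np.isfinite(dgm[:, 1])]
        life = finite[:, 1] - finite[:, 0]
        feats += [life.max() if life.size else 0.0, life.sum()]
    return np.array(feats)

def noisy_circle(rng, n=80):
    a = rng.uniform(0, 2 * np.pi, n)
    return np.c_[np.cos(a), np.sin(a)] + 0.05 * rng.standard_normal((n, 2))

# Toy classification: loop-like trajectories vs. blob-like trajectories.
rng = np.random.default_rng(3)
X = np.array([persistence_features(noisy_circle(rng)) for _ in range(20)] +
             [persistence_features(0.3 * rng.standard_normal((80, 2))) for _ in range(20)])
y = np.array([1] * 20 + [0] * 20)
print(SVC().fit(X, y).score(X, y))
```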
Sign Language Avatar Animation Search: An Ani2Ani Search Application
Mathew Huerta-Enochian, Chang-Seog Ko
{"title":"Sign Language Avatar Animation Search: An Ani2Ani Search Application","authors":"Mathew Huerta-Enochian, Chang-Seog Ko","doi":"10.1109/ICASSPW59220.2023.10193345","DOIUrl":"https://doi.org/10.1109/ICASSPW59220.2023.10193345","url":null,"abstract":"To improve sign language animation asset management, we developed a general animation search system supporting multiple input modalities. The system reads animations as pose sequences, embeds them into fixed-length representation vectors using engineered or learned features, and scores animation similarity using distance in the embedding space. We present an overview of the system, use-case scenarios, and feedback from deaf users.","PeriodicalId":158726,"journal":{"name":"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123194465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
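The embed-then-rank scheme in this abstract can be illustrated with a deliberately simple embedding: mean/std pooling of a (frames x joint-features) pose sequence followed by cosine-similarity ranking. The pooling features below stand in for the engineered or learned features of the actual system, and the pose data is random placeholder content.

```python
import numpy as np

def embed_animation(pose_seq):
    # Fixed-length embedding of a variable-length (frames x joint-features)
    # pose sequence via mean/std pooling; a stand-in for the engineered or
    # learned features of the real system.
    return np.concatenate([pose_seq.mean(axis=0), pose_seq.std(axis=0)])

def search(query_seq, library):
    # Rank library animations by cosine similarity to the query embedding.
    q = embed_animation(query_seq)
    scored = []
    for name, seq in library.items():
        e = embed_animation(seq)
        scored.append((name, float(q @ e / (np.linalg.norm(q) * np.linalg.norm(e)))))
    return sorted(scored, key=lambda s: -s[1])

rng = np.random.default_rng(4)
library = {f"sign_{i}": rng.standard_normal((int(rng.integers(30, 90)), 42)) for i in range(5)}
query = library["sign_2"] + 0.1 * rng.standard_normal(library["sign_2"].shape)
print(search(query, library)[0])        # best match should be sign_2
```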
Truedepth Measurements of Facial Expressions: Sensitivity to the Angle Between Camera and Face
L. Esselink, M. Oomen, F. Roelofsen
{"title":"Truedepth Measurements of Facial Expressions: Sensitivity to the Angle Between Camera and Face","authors":"L. Esselink, M. Oomen, F. Roelofsen","doi":"10.1109/ICASSPW59220.2023.10193107","DOIUrl":"https://doi.org/10.1109/ICASSPW59220.2023.10193107","url":null,"abstract":"Facial expressions play an important role in communication, especially in sign languages. Linguistic analysis of the exact contribution of facial expressions, as well as the creation of realistic conversational avatars, especially sign language avatars, requires accurate measurements of the facial expressions of humans while engaged in linguistic interaction. Several recent projects have employed a TrueDepth camera to make such measurements. The present paper investigates how reliable this technique is. In particular, we consider the extent to which the obtained measurements are affected by the angle between the camera and the face. Overall, we find that there are generally significant, and often rather substantial differences between measurements from different angles. However, when the measured facial features are highly activated, measurements from different angles are generally strongly correlated.","PeriodicalId":158726,"journal":{"name":"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122662375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
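A small sketch of the kind of agreement analysis this abstract describes: per-feature Pearson correlation between blendshape time series captured from two camera angles. The 52-coefficient, ARKit-style layout and the simulated off-axis data are assumptions for illustration only, not the paper's recordings.

```python
import numpy as np
from scipy.stats import pearsonr

def angle_agreement(frontal, off_axis):
    # Per-feature Pearson correlation between blendshape time series captured
    # from a frontal camera and from an off-axis camera.
    # Rows = frames, columns = blendshape coefficients.
    return np.array([pearsonr(frontal[:, j], off_axis[:, j])[0]
                     for j in range(frontal.shape[1])])

rng = np.random.default_rng(5)
frontal = rng.random((300, 52))                          # 52 ARKit-style coefficients, 300 frames
off_axis = 0.8 * frontal + 0.2 * rng.random((300, 52))   # simulated angled capture
print(angle_agreement(frontal, off_axis).mean())         # mean per-feature correlation
```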
EEG Source Estimation Using Deep Prior Without a Subject's Individual Lead Field
Naoki Hojo, H. Yano, R. Takashima, T. Takiguchi, Seiji Nakagawa
{"title":"EEG Source Estimation Using Deep Prior Without a Subject’s Individual Lead Field","authors":"Naoki Hojo, H. Yano, R. Takashima, T. Takiguchi, Seiji Nakagawa","doi":"10.1109/ICASSPW59220.2023.10193746","DOIUrl":"https://doi.org/10.1109/ICASSPW59220.2023.10193746","url":null,"abstract":"Estimating current sources in the brain using an electroencephalogram (EEG) is affected by the accuracy of the lead field, which represents signal propagation from the cortical sources to the scalp. To accurately compute the lead field, one must know the subject’s head structure. However, imaging methods for brain structure require large-scale equipment. In this paper, we propose a novel method of EEG source estimation that does not require the lead field of each subject obtained in advance. The current sources in the brain and the lead field are simultaneously estimated using implicit prior distributions expressed by an untrained convolutional neural network (CNN), namely Deep Prior, and a pre-trained CNN using the lead field of an average subject, respectively. The proposed method requires only a noisy EEG observation and the lead field of the average subject. We showed that the proposed method was more accurate than the conventional methods, and was also as accurate as the Deep Prior-based method with the lead field of each subject.","PeriodicalId":158726,"journal":{"name":"2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125125111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
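The deep-prior idea in this abstract, parameterizing the unknown source activity with an untrained CNN and fitting the observation through a fixed lead field, can be sketched in a few lines of PyTorch. Everything below (channel/source/time dimensions, the random "average" lead field, the tiny network) is a placeholder; in particular, the paper's additional pre-trained CNN that refines the lead field is omitted.

```python
import torch
import torch.nn as nn

# Illustrative sizes: 64 scalp channels, 500 cortical sources, 100 time samples.
n_ch, n_src, n_t = 64, 500, 100
L_avg = torch.randn(n_ch, n_src)       # placeholder for the average subject's lead field
eeg = torch.randn(n_ch, n_t)           # placeholder noisy EEG observation

prior = nn.Sequential(                 # untrained "deep prior" network over time
    nn.Conv1d(16, 64, 3, padding=1), nn.ReLU(),
    nn.Conv1d(64, n_src, 3, padding=1))
z = torch.randn(1, 16, n_t)            # fixed random input code

opt = torch.optim.Adam(prior.parameters(), lr=1e-3)
for step in range(200):                # fit the observation through the fixed lead field
    s = prior(z)[0]                    # (n_src, n_t) estimated source currents
    loss = ((L_avg @ s - eeg) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))                     # data-fit residual after optimization
```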