2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP): Latest Publications

Model-Free Learning of Optimal Deterministic Resource Allocations in Wireless Systems via Action-Space Exploration
2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP) Pub Date : 2021-08-23 DOI: 10.1109/mlsp52302.2021.9596327
Hassaan Hashmi, Dionysios S. Kalogerias
Wireless systems resource allocation refers to perpetual and challenging nonconvex constrained optimization tasks, which are especially timely in modern communications and networking setups involving multiple users with heterogeneous objectives and imprecise or even unknown models and/or channel statistics. In this paper, we propose a technically grounded and scalable primal-dual deterministic policy gradient method for efficiently learning optimal parameterized resource allocation policies. Our method not only efficiently exploits gradient availability of popular universal policy representations, such as deep neural networks, but is also truly model-free, as it relies on consistent zeroth-order gradient approximations of the associated random network services constructed via low-dimensional perturbations in action space, thus fully bypassing any dependence on critics. Both theory and numerical simulations confirm the efficacy and applicability of the proposed approach, as well as its superiority over the current state of the art in terms of both near-optimal performance and scalability.
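The action-space zeroth-order estimator described in this abstract can be sketched as a two-point probe along random unit directions. This is a minimal illustrative version, assuming a scalar service function `f` and list-valued actions; it is not the paper's exact construction:

```python
import random

def zeroth_order_grad(f, a, delta=1e-2):
    """Two-point zeroth-order gradient estimate of f at action vector a.

    Probes f along a random unit direction u in action space; the estimate
    d * ((f(a + delta*u) - f(a - delta*u)) / (2*delta)) * u equals the true
    gradient in expectation, without ever differentiating f (illustrative).
    """
    u = [random.gauss(0.0, 1.0) for _ in a]
    norm = sum(x * x for x in u) ** 0.5
    u = [x / norm for x in u]  # uniform direction on the unit sphere
    a_plus = [ai + delta * ui for ai, ui in zip(a, u)]
    a_minus = [ai - delta * ui for ai, ui in zip(a, u)]
    slope = (f(a_plus) - f(a_minus)) / (2.0 * delta)
    return [len(a) * slope * ui for ui in u]
```

Averaging many such probes recovers the gradient in expectation, which is what makes a critic-free, model-free method possible.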
Citations: 2
Conditional Sound Generation Using Neural Discrete Time-Frequency Representation Learning
2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP) Pub Date : 2021-07-21 DOI: 10.1109/mlsp52302.2021.9596430
Xubo Liu, Turab Iqbal, Jinzheng Zhao, Qiushi Huang, Mark D. Plumbley, Wenwu Wang
Deep generative models have recently achieved impressive performance in speech and music synthesis. However, compared to the generation of those domain-specific sounds, generating general sounds (such as sirens or gunshots) has received less attention, despite their wide applications. In previous work, the SampleRNN method was considered for sound generation in the time domain. However, SampleRNN is potentially limited in capturing long-range dependencies within sounds, as it only back-propagates through a limited number of samples. In this work, we propose a method for generating sounds via neural discrete time-frequency representation learning, conditioned on sound classes. This offers an advantage in efficiently modelling long-range dependencies and retaining local fine-grained structures within sound clips. We evaluate our approach on the UrbanSound8K dataset against SampleRNN, using performance metrics that measure the quality and diversity of generated sounds. Experimental results show that our method offers comparable performance in quality and significantly better performance in diversity.
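The discretization step behind neural discrete representation learning can be illustrated with a plain nearest-codebook assignment. This is a hedged sketch: the actual model learns the codebook jointly with an encoder and decoder, which this toy version omits:

```python
def vector_quantize(frames, codebook):
    """Map each continuous feature frame to the index of its nearest
    codebook vector (squared Euclidean distance) -- the discretization
    step that turns time-frequency features into discrete tokens."""
    def sq_dist(x, c):
        return sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return [min(range(len(codebook)), key=lambda k: sq_dist(f, codebook[k]))
            for f in frames]
```

A conditional generator can then model sequences of these discrete indices instead of raw samples, which is what enables efficient long-range modelling.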
Citations: 32
Hierarchical Graph Neural Nets can Capture Long-Range Interactions
2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP) Pub Date : 2021-07-15 DOI: 10.1109/mlsp52302.2021.9596069
Ladislav Rampášek, Guy Wolf
Graph neural networks (GNNs) based on message passing between neighboring nodes are known to be insufficient for capturing long-range interactions in graphs. In this paper we study hierarchical message passing models that leverage a multi-resolution representation of a given graph. This facilitates learning of features that span large receptive fields without loss of local information, an aspect not studied in preceding work on hierarchical GNNs. We introduce Hierarchical Graph Net (HGNet), which for any two connected nodes guarantees the existence of message-passing paths of at most logarithmic length w.r.t. the input graph size. Yet, under mild assumptions, its internal hierarchy maintains asymptotic size equivalent to that of the input graph. We observe that HGNet outperforms conventional stacking of GCN layers, particularly on molecular property prediction benchmarks. Finally, we propose two benchmarking tasks designed to elucidate the capability of GNNs to leverage long-range interactions in graphs.
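The logarithmic-path guarantee can be made concrete with a toy balanced binary coarsening. The helpers below are illustrative stand-ins under that assumption, not HGNet's actual construction:

```python
import math

def hierarchy_path_length(n):
    """Edges between two leaves routed through a balanced binary coarsening
    hierarchy over n nodes: up to a shared super-node and back down, at most
    2 * ceil(log2(n)) hops (the flavor of HGNet's guarantee)."""
    return 2 * math.ceil(math.log2(n))

def coarsen_levels(nodes):
    """Merge node pairs level by level until one super-node remains. The
    super-nodes added across all levels total fewer than the input size,
    so the hierarchy stays asymptotically as large as the graph itself."""
    levels = [list(nodes)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([tuple(prev[i:i + 2]) for i in range(0, len(prev), 2)])
    return levels
```

For 8 input nodes the hierarchy adds 4 + 2 + 1 = 7 super-nodes, matching the "size equivalent to the input graph" claim in miniature.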
Citations: 6
Affinity Mixup for Weakly Supervised Sound Event Detection
2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP) Pub Date : 2021-06-21 DOI: 10.1109/mlsp52302.2021.9596270
M. Izadi, R. Stevenson, L. Kloepper
The weakly supervised sound event detection (WSSED) problem is the task of predicting the presence of sound events and their corresponding starting and ending points in a weakly labeled dataset. A weak dataset associates each training sample (a short recording) with one or more present sources. Networks that rely solely on convolutional and recurrent layers cannot directly relate multiple frames in a recording. Motivated by attention and graph neural networks, we introduce the concept of affinity mixup (AM) to incorporate time-level similarities and make connections between frames. This regularization technique mixes up features in different layers using an adaptive affinity matrix. Our proposed affinity mixup network (AMN) improves event-F1 scores over state-of-the-art techniques by 8.2%.
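The mixing idea can be sketched as a softmax-weighted combination of frame features, with the adaptive affinity matrix built from pairwise similarities. This is a minimal illustrative version, not the paper's exact layer:

```python
import math

def affinity_mixup(features, temperature=1.0):
    """Mix each frame's feature vector with every other frame's, weighted
    by a softmax over pairwise dot-product similarities (the adaptive
    affinity matrix). Connects frames that convolutional/recurrent layers
    alone cannot directly relate (illustrative sketch)."""
    n = len(features)
    def sim(x, y):
        return sum(xi * yi for xi, yi in zip(x, y))
    mixed = []
    for i in range(n):
        w = [math.exp(sim(features[i], features[j]) / temperature)
             for j in range(n)]
        z = sum(w)
        w = [wi / z for wi in w]  # row of the affinity matrix
        mixed.append([sum(w[j] * features[j][d] for j in range(n))
                      for d in range(len(features[0]))])
    return mixed
```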
Citations: 1
Transfer Bayesian Meta-Learning Via Weighted Free Energy Minimization
2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP) Pub Date : 2021-06-20 DOI: 10.1109/mlsp52302.2021.9596239
Yunchuan Zhang, Sharu Theresa Jose, O. Simeone
Meta-learning optimizes the hyperparameters of a training procedure, such as its initialization, kernel, or learning rate, based on data sampled from a number of auxiliary tasks. A key underlying assumption is that the auxiliary tasks, known as meta-training tasks, share the same generating distribution as the tasks to be encountered at deployment time, known as meta-test tasks. This may, however, not be the case when the test environment differs from the meta-training conditions. To address shifts in the task generating distribution between meta-training and meta-testing phases, this paper introduces weighted free energy minimization (WFEM) for transfer meta-learning. We instantiate the proposed approach for non-parametric Bayesian regression and classification via Gaussian Processes (GPs). The method is validated on a toy sinusoidal regression problem, as well as on classification using the miniImagenet and CUB data sets, through comparison with standard meta-learning of GP priors as implemented by PACOH.
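Hedging on notation, a task-weighted free-energy objective of the kind named in the title generically takes the following form, with weights $w_t$ correcting for the shift between meta-training and meta-test task distributions (illustrative, not the paper's exact criterion):

```latex
\mathcal{F}(q) \;=\; \sum_{t=1}^{T} w_t \,\mathbb{E}_{q(\theta)}\!\left[ L_t(\theta) \right]
\;+\; \mathrm{KL}\!\left( q(\theta) \,\|\, p(\theta) \right)
```

Minimizing over the posterior $q(\theta)$ trades off weighted empirical task losses against divergence from the prior, which is the usual free-energy structure.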
Citations: 0
Learning Signal Representations for EEG Cross-Subject Channel Selection and Trial Classification
2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP) Pub Date : 2021-06-20 DOI: 10.1109/mlsp52302.2021.9596522
M. Massi, F. Ieva
EEG is a powerful non-invasive system that finds applications in several domains and research areas. Most EEG systems are multi-channel in nature, but multiple channels might include noisy and redundant information and increase the computational time of automated EEG decoding algorithms. To improve the signal-to-noise ratio, improve accuracy, and reduce computational time, one may combine channel selection with feature extraction and dimensionality reduction. However, as EEG signals present high inter-subject variability, we introduce a novel algorithm for subject-independent channel selection through representation learning of EEG recordings. The algorithm exploits channel-specific 1D-CNNs as supervised feature extractors to maximize class separability, and reduces a high-dimensional multi-channel signal into a unique one-dimensional representation from which it selects the most relevant channels for classification. The algorithm can be transferred to new signals from new subjects and obtain novel, highly informative trial vectors of controlled dimensionality to be fed to any kind of classifier.
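As a toy stand-in for the channel-specific supervised feature extractors, channels can be ranked by a simple class-separability score. Assumptions in this sketch: one scalar feature per channel per trial, binary labels, and a plain distance-between-class-means score rather than learned 1D-CNN features:

```python
def select_channels(trials, labels, k=2):
    """Rank channels by the distance between per-class channel means and
    keep the top k indices -- a crude proxy for selecting channels whose
    learned representations maximize class separability."""
    n_ch = len(trials[0])
    scores = []
    for c in range(n_ch):
        m0 = [t[c] for t, y in zip(trials, labels) if y == 0]
        m1 = [t[c] for t, y in zip(trials, labels) if y == 1]
        scores.append(abs(sum(m0) / len(m0) - sum(m1) / len(m1)))
    return sorted(range(n_ch), key=lambda c: -scores[c])[:k]
```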
Citations: 1
Privacy Assessment of Federated Learning Using Private Personalized Layers
2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP) Pub Date : 2021-06-15 DOI: 10.1109/mlsp52302.2021.9596237
T. Jourdan, A. Boutet, Carole Frindel
Federated Learning (FL) is a collaborative scheme to train a learning model across multiple participants without sharing data. While FL is a clear step forward towards enforcing users' privacy, different inference attacks have been developed. In this paper, we quantify the utility and privacy trade-off of an FL scheme using private personalized layers. While this scheme has been proposed as a local adaptation to improve the accuracy of the model through local personalization, it also has the advantage of minimizing the information about the model exchanged with the server. However, the privacy of such a scheme has never been quantified. Our evaluations using a motion sensor dataset show that personalized layers speed up the convergence of the model and slightly improve the accuracy for all users compared to a standard FL scheme, while better preventing both attribute and membership inferences compared to an FL scheme using local differential privacy.
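The core of a personalized-layers FL scheme is that only shared parameters are averaged at the server while personalized layers never leave the client. A minimal dict-based sketch, assuming each client model is a flat name-to-parameter mapping (names like `"conv"` and `"head"` are hypothetical):

```python
def federated_average(client_models, personal_keys):
    """Average shared parameters across clients; parameters listed in
    personal_keys stay local and are never sent to the server -- which is
    exactly what limits the information exposed to inference attacks."""
    shared = [k for k in client_models[0] if k not in personal_keys]
    avg = {k: sum(m[k] for m in client_models) / len(client_models)
           for k in shared}
    # each client keeps its personal layers, overwrites shared ones
    return [{**m, **avg} for m in client_models]
```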
Citations: 5
MLP Singer: Towards Rapid Parallel Korean Singing Voice Synthesis
2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP) Pub Date : 2021-06-15 DOI: 10.1109/MLSP52302.2021.9596184
Jaesung Tae, Hyeongju Kim, Younggun Lee
Recent developments in deep learning have significantly improved the quality of synthesized singing voice audio. However, prominent neural singing voice synthesis systems suffer from slow inference speed due to their autoregressive design. Inspired by MLP-Mixer, a novel architecture introduced in the vision literature for attention-free image classification, we propose MLP Singer, a parallel Korean singing voice synthesis system. To the best of our knowledge, this is the first work that uses an entirely MLP-based architecture for voice synthesis. Listening tests demonstrate that MLP Singer outperforms a larger autoregressive GAN-based system in terms of both audio quality and synthesis speed. In particular, MLP Singer achieves a real-time factor of up to 200 on CPUs and 3400 on GPUs, enabling order-of-magnitude faster generation in both environments. Source code is available at https://github.com/neosapience/mlp-singer.
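The attention-free Mixer idea behind MLP Singer alternates mixing across time steps and across feature channels. A plain-list sketch with linear mixing only; the real block also has nonlinearities, layer normalization, and residual connections:

```python
def mixer_block(x, w_tok, w_ch):
    """One attention-free Mixer step on x, a list of T frames of C features:
    first mix information across time (token mixing with T x T matrix
    w_tok), then across features (channel mixing with C x C matrix w_ch).
    All frames are processed in parallel -- no autoregression."""
    T, C = len(x), len(x[0])
    # token mixing: combine the same feature across all time steps
    mixed = [[sum(w_tok[t][s] * x[s][c] for s in range(T)) for c in range(C)]
             for t in range(T)]
    # channel mixing: combine features within each time step
    return [[sum(w_ch[c][d] * mixed[t][d] for d in range(C)) for c in range(C)]
            for t in range(T)]
```

Because every output frame is a fixed linear map of all input frames, generation is parallel across time, which is the source of the reported speedups.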
Citations: 9
Two-Way Spectrum Pursuit for CUR Decomposition and its Application in Joint Column/Row Subset Selection
2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP) Pub Date : 2021-06-13 DOI: 10.1109/mlsp52302.2021.9596233
Ashkan Esmaeili, M. Joneidi, Mehrdad Salimitari, Umar Khalid, N. Rahnavard
The problem of simultaneous column and row subset selection is addressed in this paper. The column space and row space of a matrix are spanned by its left and right singular vectors, respectively. However, the singular vectors are not among the actual columns/rows of the matrix. In this paper, an iterative approach is proposed to capture the most structural information of columns/rows by selecting a subset of actual columns/rows. This algorithm, referred to as two-way spectrum pursuit (TWSP), provides an efficient solution for the CUR matrix decomposition. TWSP is applicable in a wide range of applications since it enjoys linear complexity w.r.t. the number of original columns/rows. We demonstrate the application of TWSP for joint channel and sensor selection in cognitive radio networks and for efficient supervised data reduction.
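Given column and row index subsets (as a method like TWSP would select), the CUR factors follow from two pseudoinverses. A sketch assuming NumPy, with `U` chosen as the standard least-squares coupling matrix; the selection step itself is not shown:

```python
import numpy as np

def cur_from_subsets(A, col_idx, row_idx):
    """Build a CUR approximation A ~= C @ U @ R from chosen actual columns
    and rows of A. U = pinv(C) @ A @ pinv(R) is the least-squares optimal
    coupling; if the chosen columns/rows span A's column/row spaces, the
    reconstruction is exact."""
    C = A[:, col_idx]
    R = A[row_idx, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R
```

Unlike an SVD, the factors C and R here are actual columns and rows of the data, which keeps the decomposition interpretable (e.g., as selected channels or sensors).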
Citations: 2
A Unified PAC-Bayesian Framework for Machine Unlearning via Information Risk Minimization
2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP) Pub Date : 2021-06-01 DOI: 10.1109/mlsp52302.2021.9596170
Sharu Theresa Jose, O. Simeone
Machine unlearning refers to mechanisms that can remove the influence of a subset of training data upon request from a trained model without incurring the cost of re-training from scratch. This paper develops a unified PAC-Bayesian framework for machine unlearning that recovers two recent design principles, variational unlearning [1] and the forgetting Lagrangian [2], as information risk minimization problems [3]. Accordingly, both criteria can be interpreted as PAC-Bayesian upper bounds on the test loss of the unlearned model that take the form of free energy metrics.
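Hedging on notation, PAC-Bayesian bounds of the free-energy form referenced in the abstract generically read (illustrative, not the paper's exact bound):

```latex
\mathcal{F}(q) \;=\; \mathbb{E}_{q(\theta)}\!\left[ \hat{L}(\theta) \right]
\;+\; \frac{1}{\lambda}\,\mathrm{KL}\!\left( q(\theta) \,\|\, p(\theta) \right)
```

Here $q$ is the unlearned posterior, $\hat{L}$ an empirical loss, $p$ a reference prior, and $\lambda > 0$ a temperature; both unlearning criteria are recovered as objectives of this shape.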
Citations: 5