{"title":"Generalizing Graph Signal Processing: High Dimensional Spaces, Models and Structures","authors":"Xingchao Jian, Feng Ji, Wee Peng Tay","doi":"10.1561/2000000119","DOIUrl":"https://doi.org/10.1561/2000000119","url":null,"abstract":"","PeriodicalId":12340,"journal":{"name":"Found. Trends Signal Process.","volume":"61 1","pages":"209-290"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84946119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Introduction to Quantum Machine Learning for Engineers","authors":"O. Simeone","doi":"10.48550/arXiv.2205.09510","DOIUrl":"https://doi.org/10.48550/arXiv.2205.09510","url":null,"abstract":"In the current noisy intermediate-scale quantum (NISQ) era, quantum machine learning is emerging as a dominant paradigm to program gate-based quantum computers. In quantum machine learning, the gates of a quantum circuit are parameterized, and the parameters are tuned via classical optimization based on data and on measurements of the outputs of the circuit. Parameterized quantum circuits (PQCs) can efficiently address combinatorial optimization problems, implement probabilistic generative models, and carry out inference (classification and regression). This monograph provides a self-contained introduction to quantum machine learning for an audience of engineers with a background in probability and linear algebra. It first describes the necessary background, concepts, and tools necessary to describe quantum operations and measurements. Then, it covers parameterized quantum circuits, the variational quantum eigensolver, as well as unsupervised and supervised quantum machine learning formulations.","PeriodicalId":12340,"journal":{"name":"Found. Trends Signal Process.","volume":"27 1","pages":"1-223"},"PeriodicalIF":0.0,"publicationDate":"2022-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78875524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Signal Decomposition Using Masked Proximal Operators","authors":"Bennet E. Meyers, Stephen P. Boyd","doi":"10.1561/9781638281030","DOIUrl":"https://doi.org/10.1561/9781638281030","url":null,"abstract":"We consider the well-studied problem of decomposing a vector time series signal into components with different characteristics, such as smooth, periodic, nonnegative, or sparse. We describe a simple and general framework in which the components are defined by loss functions (which include constraints), and the signal decomposition is carried out by minimizing the sum of losses of the components (subject to the constraints). When each loss function is the negative log-likelihood of a density for the signal component, this framework coincides with maximum a posteriori probability (MAP) estimation; but it also includes many other interesting cases. Summarizing and clarifying prior results, we give two distributed optimization methods for computing the decomposition, which find the optimal decomposition when the component class loss functions are convex, and are good heuristics when they are not. Both methods require only the masked proximal operator of each of the component loss functions, a generalization of the well-known proximal operator that handles missing entries in its argument. Both methods are distributed, i.e., handle each component separately. We derive tractable methods for evaluating the masked proximal operators of some loss functions that, to our knowledge, have not appeared in the literature.","PeriodicalId":12340,"journal":{"name":"Found. Trends Signal Process.","volume":"11 1","pages":"1-78"},"PeriodicalIF":0.0,"publicationDate":"2022-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75350048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Online Component Analysis, Architectures and Applications","authors":"João B. O. Souza Filho, Lan-Da Van, T. Jung, Paulo S. R. Diniz","doi":"10.1561/2000000112","DOIUrl":"https://doi.org/10.1561/2000000112","url":null,"abstract":"","PeriodicalId":12340,"journal":{"name":"Found. Trends Signal Process.","volume":"20 1","pages":"224-429"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84281818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wireless for Machine Learning: A Survey","authors":"Henrik Hellström, J. M. B. D. Silva, M. Amiri, Mingzhe Chen, Viktoria Fodor, H. Poor, C. Fischione","doi":"10.1561/2000000114","DOIUrl":"https://doi.org/10.1561/2000000114","url":null,"abstract":"","PeriodicalId":12340,"journal":{"name":"Found. Trends Signal Process.","volume":"138 1","pages":"290-399"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85581373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bilevel Methods for Image Reconstruction","authors":"Caroline Crockett, J. Fessler","doi":"10.1561/2000000111","DOIUrl":"https://doi.org/10.1561/2000000111","url":null,"abstract":"This review discusses methods for learning parameters for image reconstruction problems using bilevel formulations. Image reconstruction typically involves optimizing a cost function to recover a vector of unknown variables that agrees with collected measurements and prior assumptions. State-of-the-art image reconstruction methods learn these prior assumptions from training data using various machine learning techniques, such as bilevel methods. One can view the bilevel problem as formalizing hyperparameter optimization, as bridging machine learning and cost function based optimization methods, or as a method to learn variables best suited to a specific task. More formally, bilevel problems attempt to minimize an upper-level loss function, where variables in the upper-level loss function are themselves minimizers of a lower-level cost function. This review contains a running example problem of learning tuning parameters and the coefficients for sparsifying filters used in a regularizer. Such filters generalize the popular total variation regularization method, and learned filters are closely related to convolutional neural networks approaches that are rapidly gaining in popularity. Here, the lower-level problem is to reconstruct an image using a regularizer with learned sparsifying filters; the corresponding upper-level optimization problem involves a measure of reconstructed image quality based on training data. This review discusses multiple perspectives to motivate the use of bilevel methods and to make them more easily accessible to different audiences. We then turn to ways to optimize the bilevel problem, providing pros and cons of the variety of proposed approaches. Finally we overview bilevel applications in image reconstruction. 1 ar X iv :2 10 9. 09 61 0v 1 [ m at h. O C ] 2 0 Se p 20 21","PeriodicalId":12340,"journal":{"name":"Found. Trends Signal Process.","volume":"270 1","pages":"121-289"},"PeriodicalIF":0.0,"publicationDate":"2021-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87081518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Operating Characteristics for Classical and Quantum Binary Hypothesis Testing","authors":"Catherine Medlock, A. Oppenheim","doi":"10.1561/2000000106","DOIUrl":"https://doi.org/10.1561/2000000106","url":null,"abstract":"This monograph addresses operating characteristics for binary hypothesis testing in both classical and quantum settings and overcomplete quantum measurements for quantum binary state discrimination. We specifically explore decision and measurement operating characteristics defined as the tradeoff between probability of detection and probability of false alarm as parameters of the pre-decision operator and the binary decision rule are varied. In the classical case we consider in detail the Neyman-Pearson optimality of the operating characteristics when they are generated using threshold tests on a scalar score variable rather than threshold tests on the likelihood ratio. In the quantum setting, informationally overcomplete POVMs are explored to provide robust quantum binary state discrimination. We focus on equal trace rank one POVMs which can be specified by arrangements of points on a sphere that we refer to as an Etro sphere. Catherine A. Medlock and Alan V. Oppenheim (2021), “Operating Characteristics for Classical and Quantum Binary Hypothesis Testing”, Foundations and Trends® in Signal Processing: Vol. 15, No. 1, pp 1–120. DOI: 10.1561/2000000106. Full text available at: http://dx.doi.org/10.1561/2000000106","PeriodicalId":12340,"journal":{"name":"Found. Trends Signal Process.","volume":"150 1","pages":"1-120"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83499864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-Driven Multi-Microphone Speaker Localization on Manifolds","authors":"Bracha Laufer-Goldshtein, R. Talmon, S. Gannot","doi":"10.1561/2000000098","DOIUrl":"https://doi.org/10.1561/2000000098","url":null,"abstract":"","PeriodicalId":12340,"journal":{"name":"Found. Trends Signal Process.","volume":"23 1","pages":"1-161"},"PeriodicalIF":0.0,"publicationDate":"2020-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77314943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compressed Sensing with Applications in Wireless Networks","authors":"Markus Leinonen, M. Codreanu, G. Giannakis","doi":"10.1561/2000000107","DOIUrl":"https://doi.org/10.1561/2000000107","url":null,"abstract":"Many natural signals possess only a few degrees of freedom. For instance, the occupied radio spectrum may be intermittently concentrated to only a few frequency bands of the system bandwidth. This special structural feature – signal sparsity – is conducive in designing efficient signal processing techniques for wireless networks. In particular, the signal sparsity can be leveraged by the recently emerged joint sampling and compression paradigm, compressed sensing (CS). This monograph reviews several recent CS advancements in wireless networks with an aim to improve the quality of signal reconstruction or detection while reducing the use of energy, radio, and computation resources. The monograph covers a diversity of compressive data reconstruction, gathering, and detection frameworks in cellular, cognitive, and wireless sensor networking systems. The monograph first gives an overview of the principles of CS for the readers unfamiliar with the topic. For the researchers knowledgeable in CS, the monograph provides in-depth reviews of several interesting CS advancements in designing tailored CS reconstruction techniques for wireless applications. The monograph can serve as a basis for the researchers intended to start working in the field, and altogether, lays a foundation for further research in the covered areas.","PeriodicalId":12340,"journal":{"name":"Found. Trends Signal Process.","volume":"68 1","pages":"1-282"},"PeriodicalIF":0.0,"publicationDate":"2019-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81236996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}