{"title":"Estimating Partially Observed Graph Signals by Learning Spectrally Concentrated Graph Kernels","authors":"Gülce Turhan, Elif Vural","doi":"10.1109/mlsp52302.2021.9596282","DOIUrl":"https://doi.org/10.1109/mlsp52302.2021.9596282","url":null,"abstract":"Graph models provide flexible tools for the representation and analysis of signals defined over irregular domains such as social or sensor networks. However, in real applications data observations are often not available over the whole graph, due to practical problems such as sensor failure or connection loss. In this paper, we study the estimation of partially observed graph signals on multiple graphs. We learn a sparse representation of partially observed graph signals over spectrally concentrated graph dictionaries. Our dictionary model consists of several sub-dictionaries each of which is generated from a Gaussian kernel centered at a certain graph frequency in order to capture a particular spectral component of the graph signals at hand. The problem of jointly learning the spectral kernels and the sparse codes is solved with an alternating optimization approach. Finally, the incomplete entries of the given graph signals are estimated using the learnt dictionaries and the sparse coefficients. Experimental results on synthetic and real graph data sets suggest that the proposed method yields promising performance in comparison to reference solutions.","PeriodicalId":156116,"journal":{"name":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129219603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognizing Activities from Egocentric Images with Appearance and Motion Features","authors":"Yanhua Chen, Mingtao Pei, Z. Nie","doi":"10.1109/mlsp52302.2021.9596178","DOIUrl":"https://doi.org/10.1109/mlsp52302.2021.9596178","url":null,"abstract":"With the development of wearable cameras, recognizing activities from egocentric images has attracted the interest of many researchers. The motion of the camera wearer is an important cue for the activity recognition, and is either explicitly used by optical flow for videos or implicitly used by fusing several images for images. In this paper, based on the observation that the two consecutive images captured by the wearable camera contain the motion information of the camera wearer, we propose to use the camera wearer's rotation and translation computed from the two consecutive images as the motion features. The motion features are combined with appearance features extracted by a CNN as the activity features, and the activity is classified by a random decision forest. We test our method on two egocentric image datasets. The experimental results show that by adding the motion information, the accuracy of activity recognition has been significantly improved.","PeriodicalId":156116,"journal":{"name":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"28 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134011978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Low-Dimensional Spectral Image Representation for Compressive Spectral Reconstruction","authors":"Brayan Monroy, Jorge Bacca, H. Arguello","doi":"10.1109/mlsp52302.2021.9596541","DOIUrl":"https://doi.org/10.1109/mlsp52302.2021.9596541","url":null,"abstract":"Model-based deep learning techniques are the state-of-the-art in compressive spectral imaging reconstruction. These methods integrate deep neural networks (DNN) as spectral image representation used as prior information in the optimization problem, showing optimal results at the expense of increasing the dimensionality of the non-linear representation, i.e., the number of parameters to be recovered. This paper proposes an autoencoder-based network that guarantees a low-dimensional spectral representation through feature reduction, which can be used as prior in the compressive spectral imaging reconstruction. Additionally, based on the experimental observation that the obtained low dimensional spectral representation preserves the spatial structure of the scene, this work exploits the sparsity in the generated feature space by using the Wavelet basis to reduce even more the dimensionally of the inverse problem. The proposed method shows improvements up to 2 dB against state-of-the-art methods.","PeriodicalId":156116,"journal":{"name":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"28 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131237036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Trained Video Anomaly Detection Based on Teacher-Student Model","authors":"Xusheng Wang, Mingtao Pei, Z. Nie","doi":"10.1109/mlsp52302.2021.9596140","DOIUrl":"https://doi.org/10.1109/mlsp52302.2021.9596140","url":null,"abstract":"Anomaly detection in videos is a challenging problem in computer vision. Most existing methods need supervised information to train their models, which limits their applications in real world scenario. Therefore, self-trained methods which do not need manually labels receive increasing attentions recently. In this paper, we propose a novel self-trained video anomaly detection method based on teacher-student model. The teacher-student architecture can significantly improve the performance of self-trained video anomaly detection by utilizing the unlabeled samples. We test our method on two surveillance datasets. Experiment results show that our method achieves better performance than state-of-the-art unsupervised methods on both datasets and achieves comparable performance as semi-supervised methods, which experimentally proves the effectiveness of our method.","PeriodicalId":156116,"journal":{"name":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132658232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal Graph Coarsening for Interpretable, MRI-Based Brain Graph Neural Network","authors":"","doi":"10.1109/mlsp52302.2021.9596100","DOIUrl":"https://doi.org/10.1109/mlsp52302.2021.9596100","url":null,"abstract":"","PeriodicalId":156116,"journal":{"name":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129544324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Convolutional Neural Network Denoising of Focused Ion Beam Micrographs","authors":"Minxu Peng, Mertcan Cokbas, U. D. Gallastegi, P. Ishwar, J. Konrad, B. Kulis, V. Goyal","doi":"10.1109/mlsp52302.2021.9596272","DOIUrl":"https://doi.org/10.1109/mlsp52302.2021.9596272","url":null,"abstract":"Most research on deep learning algorithms for image denoising has focused on signal-independent additive noise. Focused ion beam (FIB) microscopy with direct secondary electron detection has an unusual Neyman Type A (compound Poisson) measurement model, and sample damage poses fundamental challenges in obtaining training data. Model-based estimation is difficult and ineffective because of the nonconvexity of the negative log likelihood. In this paper, we develop deep learning-based denoising methods for FIB micrographs using synthetic training data generated from natural images. To the best of our knowledge, this is the first attempt in the literature to solve this problem with deep learning. Our results show that the proposed methods slightly outperform a total variation-regularized model-based method that requires time-resolved measurements that are not conventionally available. Improvements over methods using conventional measurements and less accurate noise modeling are dramatic - around 10 dB in peak signal-to-noise ratio.","PeriodicalId":156116,"journal":{"name":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"106 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113961743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robustness-Aware Filter Pruning for Robust Neural Networks Against Adversarial Attacks","authors":"Hyuntak Lim, Si-Dong Roh, Sangki Park, Ki-Seok Chung","doi":"10.1109/mlsp52302.2021.9596121","DOIUrl":"https://doi.org/10.1109/mlsp52302.2021.9596121","url":null,"abstract":"Today, neural networks show remarkable performance in various computer vision tasks, but they are vulnerable to adversarial attacks. By adversarial training, neural networks may improve robustness against adversarial attacks. However, it is a time-consuming and resource-intensive task. An earlier study analyzed adversarial attacks on the image features and proposed a robust dataset that would contain only features robust to the adversarial attack. By training with the robust dataset, neural networks can achieve a decent accuracy under adversarial attacks without carrying out time-consuming adversarial perturbation tasks. However, even if a network is trained with the robust dataset, it may still be vulnerable to adversarial attacks. In this paper, to overcome this limitation, we propose a new method called Robustness-aware Filter Pruning (RFP). To the best of our knowledge, it is the first attempt to utilize a filter pruning method to enhance the robustness against the adversarial attack. In the proposed method, the filters that are involved with non-robust features are pruned. With the proposed method, 52.1 % accuracy against one of the most powerful adversarial attacks is achieved, which is 3.8% better than the previous robust dataset training while maintaining clean image test accuracy. Also, our method achieves the best performance when compared with the other filter pruning methods on robust dataset.","PeriodicalId":156116,"journal":{"name":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125927668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Online DOA Estimation for Noninteger Linear Antenna Arrays in Coarray Domain","authors":"Yitian Chen, H. Nosrati, E. Aboutanios","doi":"10.1109/mlsp52302.2021.9596114","DOIUrl":"https://doi.org/10.1109/mlsp52302.2021.9596114","url":null,"abstract":"We study low complexity direction of arrival (DOA) estimation in noninteger nonuniform antenna arrays with the same number as or more uncorrelated sources than sensors. We employ the maximum entropy (ME) method to solve the matrix completion problem that arises due to having an incomplete set of lags in the coarray. In order to decrease the computational complexity associated with the determinant maximization in the ME completion method, we present a projection-free online convex optimization (OCO) based on the conditional gradient method. We then frame the problem as a sequence of DOA estimation scenarios with varying directions in which a tight bound on total regret minimization is guaranteed by the employed unsupervised learning technique. We evaluate the performance using numerical examples and demonstrate that the proposed method decreases the root mean squared error (RMSE) as the iterations increase. Furthermore, our method approaches the RMSE of the offline method, exhibiting the same saturation behavior as the CRB.","PeriodicalId":156116,"journal":{"name":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"51 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130064400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Constrained Linearly Involved Generalized Moreau Enhanced Model and Its Proximal Splitting Algorithm","authors":"Wataru Yata, M. Yamagishi, I. Yamada","doi":"10.1109/mlsp52302.2021.9596347","DOIUrl":"https://doi.org/10.1109/mlsp52302.2021.9596347","url":null,"abstract":"In this paper, we propose a constrained LiGME (cLiGME) model by incorporating newly multiple convex constraints into the LiGME (Linearly involved Generalized Moreau Enhanced) model which was established recently for many scenarios in sparsity-rank-aware least squares estimation. The cLiGME model can exploit flexibly a priori knowledge on the target to be estimated while keeping the advantage of the LiGME model, i.e., mathematically sound mechanism for nonconvex enhancements of linearly involved convex regularizers. For the cLiGME model, we present a new proximal splitting type algorithm of guaranteed convergence to a global minimizer and demonstrate its effectiveness with a simple numerical experiment.","PeriodicalId":156116,"journal":{"name":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126509991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object Detection in SAR Via Generative Knowledge Transfer","authors":"Xin Lou, Han Wang","doi":"10.1109/mlsp52302.2021.9596254","DOIUrl":"https://doi.org/10.1109/mlsp52302.2021.9596254","url":null,"abstract":"To address the data acquisition and labeling problem for object detection in SAR images, a generative transfer learning framework consists with a knowledge transfer network and a object detection network is proposed. The knowledge transfer network generates pseudo SAR images whose spatial distribution are consistent with labeled optical images and feature distribution are similar to SAR images. These pseudo SAR images are further used to improve generalization performance of convolutional neural network based detection models. Experimental results on SAR SHIP Detection Datasets (SSDD) and AIR-SARShip-1.0 datasets confirm that the pseudo SAR images generated by our method can benefit the final detection prediction even no labeled SAR image is given at the training stage.","PeriodicalId":156116,"journal":{"name":"2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133342084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}