{"title":"Automatic Quantification of Facial Asymmetry Using Facial Landmarks","authors":"A. M. N. Taufique, A. Savakis, J. Leckenby","doi":"10.1109/WNYIPW.2019.8923078","DOIUrl":"https://doi.org/10.1109/WNYIPW.2019.8923078","url":null,"abstract":"One-sided facial paralysis causes uneven movements of facial muscles on the sides of the face. Physicians currently assess facial asymmetry in a subjective manner based on their clinical experience. This paper proposes a novel method to provide an objective and quantitative asymmetry score for frontal faces. Our metric has the potential to help physicians for diagnosis as well as monitoring the rehabilitation of patients with one-sided facial paralysis. A deep learning based landmark detection technique is used to estimate style invariant facial landmark points and dense optical flow is used to generate motion maps from a short sequence of frames. Six face regions are considered corresponding to the left and right parts of the forehead, eyes, and mouth. Motion is computed and compared between the left and the right parts of each region of interest to estimate the symmetry score. For testing, asymmetric sequences are synthetically generated from a facial expression dataset. A score equation is developed to quantify symmetry in both symmetric and asymmetric face sequences.","PeriodicalId":275099,"journal":{"name":"2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122802829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Affective Video Recommender System","authors":"Yashowardhan Soni, Cecilia Ovesdotter Alm, Reynold J. Bailey","doi":"10.1109/WNYIPW.2019.8923087","DOIUrl":"https://doi.org/10.1109/WNYIPW.2019.8923087","url":null,"abstract":"Video recommendation is the task of providing users with customized media content conventionally done by considering historical user ratings. We develop classifiers that learn from human faces toward a video recommender system that utilizes displayed emotional reactions to previously seen videos for predicting preferences. We use a dataset collected from subjects who watched videos selected to elicit different emotions, to model two related problems: (1) prediction of user rating and (2) whether a user would recommend a particular video. The classifiers are trained on two forms of face-based features: facial expressions and skin-estimated pulse. In addition, the impact of data augmentation and instance size are studied.","PeriodicalId":275099,"journal":{"name":"2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW)","volume":"190 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124218408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Human and Predictive Moderation of Online Science Discourse","authors":"Elizabeth Lucas, Cecilia Ovesdotter Alm, Reynold Bailey","doi":"10.1109/WNYIPW.2019.8923109","DOIUrl":"https://doi.org/10.1109/WNYIPW.2019.8923109","url":null,"abstract":"Manual moderation activities can be fatiguing, emotionally exhausting, and potentially traumatizing, yet moderation is essential to the health of the discussion community. Communities, therefore, can benefit from automated moderation systems. We report on a study with a survey about moderation behaviors and an annotation task involving forum comments to aid curating a deeper understanding of moderation towards predictive support. We also create models for distinguishing between acceptable and unacceptable scientific forum comments and discuss results given moderators responses.","PeriodicalId":275099,"journal":{"name":"2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129369917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reduced Complexity Tree-Search Detector for Hybrid Space-Time Codes","authors":"G. González, J. Cortez, M. Bazdresch","doi":"10.1109/WNYIPW.2019.8923103","DOIUrl":"https://doi.org/10.1109/WNYIPW.2019.8923103","url":null,"abstract":"Space-time block codes are a well-known technique to exploit the capacity of multiple-input, multiple-output wireless channels. A space-time code specifies how the transmitted symbols are spread over both space and time. The resulting spatial- and time-diversity are exploited by the receiver to overcome deep fades and to increase the data rate. The search for receiver algorithms with high performance, low complexity, and applicability to a variety of codes is an active research area. In this paper, we present a receiver algorithm for hybrid space-time codes, which combine layers from purely spatial codes with layers from orthogonal codes, and obtains both diversity and multiplexing gains. The proposed decoder works in systems with three or four transmit antennas, and at least two receive antennas. The decoder performs a low-complexity tree search and achieves a bit error rate within a fraction of a decibel of the optimum. The low complexity results from adopting an improved search stop criteria, used in recent sphere decoders. The reduction is especially significant in correlated channels.","PeriodicalId":275099,"journal":{"name":"2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW)","volume":"254 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133990671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evolution of Graph Classifiers","authors":"Miguel Domingue, Rohan Narendra Dhamdhere, Naga Durga Harish Kanamarlapudi, Sunand Raghupathi, R. Ptucha","doi":"10.1109/WNYIPW.2019.8923110","DOIUrl":"https://doi.org/10.1109/WNYIPW.2019.8923110","url":null,"abstract":"Architecture design and hyperparameter selection for deep neural networks often involves guesswork. The parameter space is too large to try all possibilities, meaning one often settles for a suboptimal solution. Some works have proposed automatic architecture and hyperparameter search, but are constrained to image applications. We propose an evolution framework for graph data which is extensible to generic graphs. Our evolution mutates a population of neural networks to search the architecture and hyperparameter space. At each stage of the neuroevolution process, neural network layers can be added or removed, hyperparameters can be adjusted, or additional epochs of training can be applied. Probabilities of the mutation selection based on recent successes help guide the learning process for efficient and accurate learning. We achieve state-of-the-art on MUTAG protein classification from a small population of 10 networks and gain interesting insight into how to build effective network architectures incrementally.","PeriodicalId":275099,"journal":{"name":"2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116522231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Complex Neural Networks for Radio Frequency Fingerprinting","authors":"J. Stankowicz, Josh Robinson, Joseph M. Carmack, Scott Kuzdeba","doi":"10.1109/WNYIPW.2019.8923089","DOIUrl":"https://doi.org/10.1109/WNYIPW.2019.8923089","url":null,"abstract":"We use deep learning to design a radio frequency (RF) fingerprint algorithm that takes complex-valued wireless signals as input, and outputs the identity of the device that transmitted the signal. We study how performance accuracy varies due to changes in input representation, choices of labels, and treatment of complex values. We report sensitivity to number of devices, training set size, signal-to-noise ratio, and environmental channel. Training data are real-time transmissions from thousands of devices.","PeriodicalId":275099,"journal":{"name":"2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116587636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A CCA Approach for Multiview Analysis to Detect Rigid Gas Permeable Lens Base Curve","authors":"S. Hashemi, H. Veisi, E. Jafarzadehpur, R. Rahmani, Zeinabolhoda Heshmati","doi":"10.1109/WNYIPW.2019.8923063","DOIUrl":"https://doi.org/10.1109/WNYIPW.2019.8923063","url":null,"abstract":"Multi-view learning has been one of the focuses in medical image analysis in recent years. The combination of various image properties for medical decision making has had a high impact in the medical field. The Pentacam four refractive is one of the sources for detecting Rigid Gas Permeable (RGP) lenses properties for irregular astigmatism patients. We present a radial-sectoral segmentation approach to analyze the Pentacam four refractive maps individually. Canonical Correlation Analysis (CCA) and a two hidden layer neural network is applied as a means of multi-view learning and base curve identification. The combination of the segmentation method with CCA combinatory feature vector, results in a 0.970 coefficient of determination in RGP base curve identification. This result considerably improves current findings and confirms optometrist findings based on the importance of the image maps. The proposed method has a great impact on reducing patient chair time and optometrist and patient satisfaction.","PeriodicalId":275099,"journal":{"name":"2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133109038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast Detection Based on Customized Complex Valued Convolutional Neural Network for Generalized Spatial Modulation Systems","authors":"Akram Marseet, Taissir Y. Elganimi","doi":"10.1109/WNYIPW.2019.8923057","DOIUrl":"https://doi.org/10.1109/WNYIPW.2019.8923057","url":null,"abstract":"In this paper, a customized Auto-Encoder Complex Valued Convolutional Neural Network (AE-CV-CNN) that has been developed in a prior work is applied to Single Symbol Generalized Spatial Modulation (SS-GSM) scheme with new extracted features. The achieved reductions in the computational complexity at the receiver is at least 63.64% for M-PSK schemes compared to the complexity of Maximum Likelihood (ML) detection algorithm. This Fast detection algorithm is based on a proposed Low Complexity ML (LC-ML) detector that affords a complexity reduction of at least 40.91%. With these proposed algorithms, the complexity is reduced as the spatial constellation size increases. Furthermore, in comparison to other sub optimal detection algorithms, the computational complexity in terms of real valued multiplications of the AE-CV-CNN applied to LC-ML is independent of the spatial spectrum efficiency which means that the total spectrum efficiency increases with larger spatial constellation size at no additional complexity.","PeriodicalId":275099,"journal":{"name":"2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114597059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"System Signals Monitoring and Processing for Colluded Application Attacks Detection in Android OS","authors":"I. Khokhlov, Michael Perez, L. Reznik","doi":"10.1109/WNYIPW.2019.8923113","DOIUrl":"https://doi.org/10.1109/WNYIPW.2019.8923113","url":null,"abstract":"This paper investigates a novel colluded application attack's influence on the system's technological signals of an Android OS smartphone. This attack requires two or more applications to collaborate in order to bypass permission restriction mechanisms and leak private data. We implement this attack on a real stock Android OS smartphone and record such technological signals as overall memory consumption, CPU utilization, and CPU frequency. These recordings are studied in order to investigate the feasibility of their employment in building the attack classifiers. In developing those classifiers, we employed various machine learning techniques processing these technological signals. Such machine learning techniques as a feed-forward and long-short term memory neural networks were investigated and compared against each other. The results achieved are presented and analyzed.","PeriodicalId":275099,"journal":{"name":"2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128514027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synthetic Data Augmentation for Improving Low-Resource ASR","authors":"Bao Thai, Robert Jimerson, Dominic Arcoraci, Emily Tucker Prud'hommeaux, R. Ptucha","doi":"10.1109/WNYIPW.2019.8923082","DOIUrl":"https://doi.org/10.1109/WNYIPW.2019.8923082","url":null,"abstract":"Although the application of deep learning to automatic speech recognition (ASR) has resulted in dramatic reductions in word error rate for languages with abundant training data, ASR for languages with few resources has yet to benefit from deep learning to the same extent. In this paper, we investigate various methods of acoustic modeling and data augmentation with the goal of improving the accuracy of a deep learning ASR framework for a low-resource language with a high baseline word error rate. We compare several methods of generating synthetic acoustic training data via voice transformation and signal distortion, and we explore several strategies for integrating this data into the acoustic training pipeline. We evaluate our methods on an indigenous language of North America with minimal training resources. We show that training initially via transfer learning from an existing high-resource language acoustic model, refining weights using a heavily concentrated synthetic dataset, and finally fine-tuning to the target language using limited synthetic data reduces WER by 15% over just transfer learning using deep recurrent methods. Further, we show improvements over traditional frameworks by 19% using a similar multistage training with deep convolutional approaches.","PeriodicalId":275099,"journal":{"name":"2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131106639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}