2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA) — Latest Publications

Bilateral hemiface feature representation learning for pose robust facial expression recognition
Wissam J. Baddar, Yong Man Ro
DOI: 10.1109/APSIPA.2016.7820781 | Published: 2016-12-15
Abstract: We propose bilateral hemiface feature representation learning via convolutional neural networks (CNNs) for pose robust facial expression recognition. The proposed method considers two characteristics of facial expressions. First, features from local patches are more robust to pose variations. Second, human faces are bilaterally symmetrical across the left and right hemifaces. To incorporate these characteristics, a CNN is devised to learn feature representations from local patches. Feature representations are then learned from each hemiface separately. To reduce the effect of self-occlusion, a shared feature representation is learned by combining both hemiface feature representations. The shared representation adaptively learns to utilize the hemiface representations according to the head pose. Experiments conducted on the Multi-PIE dataset showed that the proposed bilateral hemiface feature representation is pose robust and compares favorably to state-of-the-art methods.
Citations: 1
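The adaptive, pose-dependent combination of hemiface features described above can be sketched as a gated fusion. The sigmoid gate and the scalar `gate_score` input are illustrative assumptions, not the paper's learned CNN layers:

```python
import math

def gated_hemiface_fusion(left_feat, right_feat, gate_score):
    """Combine left/right hemiface feature vectors with a pose-dependent gate.

    gate_score: raw scalar (e.g. from a pose-sensitive layer); a sigmoid
    maps it to the mixing weight given to the left hemiface.
    """
    w = 1.0 / (1.0 + math.exp(-gate_score))  # sigmoid gate in (0, 1)
    return [w * l + (1.0 - w) * r for l, r in zip(left_feat, right_feat)]

# A strongly right-turned head (left hemiface self-occluded) should push
# the gate toward the visible right hemiface.
shared = gated_hemiface_fusion([1.0, 0.0], [0.0, 1.0], gate_score=-4.0)
```

With a negative gate score the fused vector is dominated by the right-hemiface features, which is the behavior the shared representation is trained to produce under self-occlusion.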
Robust blind deconvolution for PMMW images with sparsity presentation
Tingting Liu, Zengzhao Chen, Hai Liu, Sanya Liu, Zhaoli Zhang, Taihe Cao
DOI: 10.1109/APSIPA.2016.7820680 | Published: 2016-12-01
Abstract: Passive millimeter-wave (PMMW) images often suffer from low resolution, noise, and blurring. In this paper, we propose a blind image deconvolution method for PMMW images that simultaneously estimates the point spread function (PSF) and the restored image. The data fidelity term is constructed under a Gaussian noise assumption, and the regularization term is a hyper-Laplacian function ‖x‖0.6, fitted according to high-resolution PMMW images. Moreover, a data-selection matrix is proposed to select the regions most helpful for estimating an accurate PSF. The proposed method has been applied to simulated and real PMMW image experiments. Comparative results demonstrate that it significantly outperforms state-of-the-art deconvolution methods on both qualitative and quantitative assessments.
Citations: 2
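The objective described above, a Gaussian data-fidelity term plus a hyper-Laplacian gradient prior with exponent 0.6, can be written out for a 1-D signal as follows. The `convolve_same` helper, the weight `lam`, and the finite-difference gradient are illustrative choices, not the paper's exact formulation:

```python
def convolve_same(x, k):
    """Centered 1-D correlation with zero padding (equals convolution for
    symmetric kernels), returning an output the same length as x."""
    n, m = len(x), len(k)
    half = m // 2
    out = []
    for i in range(n):
        s = 0.0
        for j in range(m):
            idx = i + j - half
            if 0 <= idx < n:
                s += x[idx] * k[j]
        out.append(s)
    return out

def deconv_objective(x, y, kernel, lam=0.01, alpha=0.6):
    """Blind deconvolution cost: ||k*x - y||^2 + lam * sum |grad x|^alpha.

    The quadratic fidelity term reflects the Gaussian noise assumption;
    the alpha = 0.6 prior is the hyper-Laplacian sparsity term.
    """
    residual = [a - b for a, b in zip(convolve_same(x, kernel), y)]
    fidelity = sum(r * r for r in residual)
    prior = sum(abs(x[i + 1] - x[i]) ** alpha for i in range(len(x) - 1))
    return fidelity + lam * prior
```

Alternating minimization of such an objective over `x` (the image) and `kernel` (the PSF) is the usual blind-deconvolution strategy; this sketch only evaluates the cost.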
Investigation of noun-verb dissociation based on EEG source reconstruction
Bin Zhao, J. Dang, Gaoyan Zhang
DOI: 10.1109/APSIPA.2016.7820817 | Published: 2016-12-01
Abstract: To clarify whether grammatical category or semantic meaning is the underlying determinant of noun-verb dissociation in brain topography, this study recorded 128-channel electroencephalographic (EEG) signals from 22 subjects while they listened to four types of auditory stimuli: (i) unambiguous nouns (UN), (ii) unambiguous verbs (UV), (iii) noun-biased ambiguous words (AN), and (iv) verb-biased ambiguous words (AV). A current density source reconstruction algorithm with a standardized low-resolution electromagnetic tomography constraint was then applied to the EEG signals to uncover the brain dynamics during word processing. In our results, noun-verb dissociation appeared in the periods of 150–250 ms and 380–450 ms, during which activation differences in the visual occipital cortex and motor frontal cortex were observed in both the UN-UV and AN-AV contrasts. The results suggest that semantic differences might underlie the noun-verb dissociation.
Citations: 2
Quality preserving depth estimation in sequential stereo images
Ji-Hun Mun, Yo-Sung Ho
DOI: 10.1109/APSIPA.2016.7820869 | Published: 2016-12-01
Abstract: The computational complexity of local stereo matching is governed by the disparity search range. For depth estimation in sequential stereo images, this high complexity is a problem for real-time processing. In this paper, we propose a temporal correlation based stereo matching method for sequential images. Restricting disparity ranges with temporal information alone can yield inaccurate disparities, since the estimated depth map accuracy gradually degrades. To preserve depth map quality in the temporal stereo matching procedure, we adopt guided image filtering for matching cost aggregation. Since the guided image filter has a structure similar to the bilateral filter, it preserves object boundaries even within restricted disparity search ranges. Disparity values inaccurately estimated from the temporal correlation are compensated by the filtering based cost aggregation. Experimental results confirm that the proposed method preserves depth map quality in temporal-domain stereo matching.
Citations: 0
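Guided image filtering, which the method above adopts for cost aggregation, can be sketched in 1-D following He et al.'s standard formulation; in stereo matching the reference image is the guide and each disparity's cost slice is the filtered signal. The radius `r` and regularizer `eps` below are illustrative values:

```python
def box_mean(x, r):
    """Mean of x over a window of radius r, truncated at the borders."""
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def guided_filter_1d(guide, src, r=2, eps=1e-4):
    """Edge-preserving smoothing of `src` steered by `guide`.

    Locally fits src ~ a * guide + b; where the guide has an edge
    (high local variance), a -> 1 and the edge is preserved.
    """
    mean_i = box_mean(guide, r)
    mean_p = box_mean(src, r)
    corr_ip = box_mean([i * p for i, p in zip(guide, src)], r)
    corr_ii = box_mean([i * i for i in guide], r)
    var_i = [c - m * m for c, m in zip(corr_ii, mean_i)]
    cov_ip = [c - mi * mp for c, mi, mp in zip(corr_ip, mean_i, mean_p)]
    a = [c / (v + eps) for c, v in zip(cov_ip, var_i)]
    b = [mp - ai * mi for mp, ai, mi in zip(mean_p, a, mean_i)]
    mean_a, mean_b = box_mean(a, r), box_mean(b, r)
    return [ma * i + mb for ma, i, mb in zip(mean_a, guide, mean_b)]
```

Filtering a step edge by itself leaves the step essentially intact, which is the boundary-preserving behavior the abstract relies on.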
Improving BLSTM RNN based Mandarin speech recognition using accent dependent bottleneck features
Jiangyan Yi, Hao Ni, Zhengqi Wen, J. Tao
DOI: 10.1109/APSIPA.2016.7820723 | Published: 2016-12-01
Abstract: This paper proposes an accent adaptation approach that uses accent dependent bottleneck (BN) features to improve the performance of a multi-accent Mandarin speech recognition system. The adaptation architecture uses two neural networks. First, a deep neural network (DNN) acoustic model acts as a feature extractor for accent dependent BN (BN-DNN) features; its input features are MFCC features appended with i-vectors. Second, a bidirectional long short-term memory (BLSTM) recurrent neural network (RNN) acoustic model performs accent-specific adaptation; its input features are the accent dependent BN features appended with MFCC features. Experiments on the RASC863 and CASIA regional accent speech corpora show that the proposed method obtains a clear improvement over the BLSTM RNN baseline model.
Citations: 8
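The two feature-splicing steps described above (MFCC + i-vector into the BN-DNN, then BN + MFCC into the BLSTM) amount to frame-wise concatenation. The function names below are hypothetical:

```python
def bn_dnn_input(mfcc_frames, ivector):
    """BN-DNN input: each MFCC frame with the utterance i-vector appended."""
    return [list(frame) + list(ivector) for frame in mfcc_frames]

def blstm_input(bn_frames, mfcc_frames):
    """BLSTM RNN input: accent dependent BN features appended with MFCCs."""
    assert len(bn_frames) == len(mfcc_frames), "frame counts must match"
    return [list(b) + list(m) for b, m in zip(bn_frames, mfcc_frames)]

# Two toy MFCC frames (dim 2) spliced with a dim-3 utterance i-vector.
frames = bn_dnn_input([[0.1, 0.2], [0.3, 0.4]], ivector=[1.0, 2.0, 3.0])
```

The i-vector is fixed per utterance, so it is repeated for every frame, while the BN features vary frame by frame.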
Analysis of adaptation rate of the FXLMS algorithm
Kiyonori Terauchi, K. Motonaka, Y. Kajikawa, S. Miyoshi
DOI: 10.1109/APSIPA.2016.7820696 | Published: 2016-12-01
Abstract: We analyze the behavior of active noise control using a statistical-mechanical method. The principal assumption in the analysis is that the impulse responses of the primary path and the adaptive filter are sufficiently long. In particular, we analyze the adaptation rate of the mean square error (MSE) using two measures. The first is the initial rate of decrease of the MSE. The second is an adaptation constant, defined as the negative of the maximum eigenvalue of the coefficient matrix of the differential equations that describe the dynamical behavior of the macroscopic variables. With these two measures, we theoretically show that the optimal step size depends on whether one focuses on the initial rate of decrease of the MSE or on the MSE after sufficient adaptation time.
Citations: 0
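For reference, a minimal FXLMS loop, the algorithm whose adaptation rate the paper analyzes. The toy tap count, step size, and signal model below are illustrative, not the paper's statistical-mechanical setup:

```python
import math

def fxlms(reference, desired, sec_path, n_taps=8, mu=0.05):
    """Filtered-x LMS: adapt w so the anti-noise cancels `desired` at the mic.

    `sec_path` is the secondary-path impulse response estimate; it filters
    both the adaptive filter output and the reference (the 'filtered-x').
    """
    w = [0.0] * n_taps
    x_hist = [0.0] * max(n_taps, len(sec_path))
    y_hist = [0.0] * len(sec_path)
    fx_hist = [0.0] * n_taps
    errors = []
    for x, d in zip(reference, desired):
        x_hist = [x] + x_hist[:-1]
        y = sum(wi * xi for wi, xi in zip(w, x_hist))            # anti-noise source
        y_hist = [y] + y_hist[:-1]
        anti = sum(si * yi for si, yi in zip(sec_path, y_hist))  # at error mic
        e = d - anti                                             # residual error
        fx = sum(si * xi for si, xi in zip(sec_path, x_hist))    # filtered-x sample
        fx_hist = [fx] + fx_hist[:-1]
        w = [wi + mu * e * fxi for wi, fxi in zip(w, fx_hist)]   # LMS update
        errors.append(e)
    return w, errors

# Toy run: primary path = one-sample delay with 0.5 gain, identity secondary path.
ref = [math.sin(0.3 * n) for n in range(400)]
primary = [0.0] + [0.5 * r for r in ref[:-1]]
w, errors = fxlms(ref, primary, sec_path=[1.0])
```

The step size `mu` trades off exactly the two regimes the paper studies: a larger `mu` speeds the initial decrease of the error, while a smaller one lowers the residual MSE after long adaptation.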
A noise masking method with adaptive thresholds based on CASA
Feng Bao, W. Abdulla
DOI: 10.1109/APSIPA.2016.7820880 | Published: 2016-12-01
Abstract: In this paper, we propose a novel noise masking method based on Computational Auditory Scene Analysis (CASA) using an adaptive factor. Although CASA has had some success in speech separation and speech enhancement, the fixed thresholds used for segregation and labeling strongly affect processing performance. To address this, the proposed method utilizes the normalized cross-correlation coefficients between the power spectra of noisy speech and pure noise to find an adaptive threshold, so that the pitch contour and time-frequency units can be obtained more accurately. A revised algorithm then smooths the current binary mask value by checking the time-frequency units in adjacent frames and neighboring channels around the current unit, removing erroneous local masks. Two kinds of signal-to-noise ratio test results show that the proposed method outperforms conventional spectral subtraction, Wiener filtering, and CASA methods.
Citations: 3
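The adaptive-threshold idea, mapping the normalized cross-correlation between noisy-speech and noise power spectra to a masking threshold, can be sketched as below. The linear mapping between `lo` and `hi` is a hypothetical stand-in for the paper's rule:

```python
def normalized_cross_correlation(a, b):
    """Pearson-style normalized cross-correlation of two power spectra."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def adaptive_mask_threshold(noisy_psd, noise_psd, lo=0.3, hi=0.7):
    """Map spectral similarity to a masking threshold: the more the noisy
    frame resembles pure noise, the more aggressive the mask."""
    rho = normalized_cross_correlation(noisy_psd, noise_psd)
    rho = max(0.0, min(1.0, rho))  # keep the threshold in [lo, hi]
    return lo + (hi - lo) * rho

# A frame whose spectrum matches the noise estimate gets the top threshold.
thr = adaptive_mask_threshold([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
```

The point of the adaptivity is that frames dominated by noise are masked aggressively while speech-dominated frames are not, which a single fixed threshold cannot achieve.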
Head pose estimation using random forest and texture analysis
Min-Joo Kang, Hana Lee, Jewon Kang
DOI: 10.1109/APSIPA.2016.7820742 | Published: 2016-12-01
Abstract: In this paper, we propose a new head pose estimation technique based on Random Forest (RF) and Multi-scale Block Local Binary Pattern (MB-LBP) features. The proposed technique aims to learn randomized trees with useful attributes that improve estimation accuracy and tolerance to occlusions and illumination. Specifically, a number of MB-LBP feature spaces are generated from a face image, and random inputs and random features, such as the MB-LBP scale parameter and the block coordinate in the pool, are used to build each tree. Furthermore, we develop a split function that exploits the properties of the uniform LBP, applied at each internal node of the tree to maximize the information gain at that node. The randomized trees assembled into the RF make the final decision under a maximum a posteriori criterion. Experimental results demonstrate that the proposed technique performs impressively in head pose estimation under various conditions of illumination, pose, expression, and facial occlusion.
Citations: 1
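An MB-LBP code compares the mean intensity of the eight neighbouring blocks against the centre block, which is what makes it robust to illumination at coarse scales. The sketch below assumes a square `block` size and clockwise bit ordering, details the abstract does not fix:

```python
def mb_lbp(image, x, y, block):
    """Multi-scale Block LBP code with the centre block's top-left corner at
    (x, y): compare mean intensities of the 8 neighbouring blocks against the
    centre block, one bit per neighbour, clockwise from top-left."""
    def block_mean(bx, by):
        vals = [image[by + j][bx + i] for j in range(block) for i in range(block)]
        return sum(vals) / len(vals)

    center = block_mean(x, y)
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if block_mean(x + dx * block, y + dy * block) >= center:
            code |= 1 << bit
    return code
```

Varying `block` gives the multiple scales from which the random feature pool is drawn; the tree-building step then samples the scale and block coordinate at random, as the abstract describes.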
Canine emotional states assessment with heart rate variability
Eri Nakahara, Yuki Maruno, Takatomi Kubo, Rina Ouchi, Maki Katayama, K. Fujiwara, M. Nagasawa, T. Kikusui, K. Ikeda
DOI: 10.1109/APSIPA.2016.7820868 | Published: 2016-12-01
Abstract: A person's emotions affect their performance in a task, and the same holds for a rescue dog working after a disaster. Hence, enabling the handler to estimate a rescue dog's emotions can improve the dog's performance and welfare. Emotions are also reflected in physiological signals such as heart rate variability (HRV); indeed, HRV carries emotional information in both humans and dogs. To make emotion estimation more practical, we propose a method for estimating emotion from the HRV of dogs and evaluate its performance on real data. The method classified positive, negative, and neutral emotions with 88% accuracy within each subject and 72% across all subjects. These accuracies are high enough for practical use with rescue dogs.
Citations: 2
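HRV features of the kind used for such classification can be computed directly from RR intervals. The sketch below shows standard time-domain measures (SDNN, RMSSD); the abstract does not list the exact features the method uses, so these are an assumption:

```python
def hrv_features(rr_ms):
    """Standard time-domain HRV features from RR intervals in milliseconds.

    SDNN: standard deviation of all RR intervals (overall variability).
    RMSSD: root mean square of successive differences (beat-to-beat
    variability, commonly tied to parasympathetic activity).
    """
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = (sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1)) ** 0.5
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd,
            "mean_hr": 60000.0 / mean_rr}
```

A per-window feature vector like this would then feed any standard classifier to separate positive, negative, and neutral states.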
Fast HEVC screen content coding by skipping unnecessary checking of intra block copy mode based on CU activity and gradient
Sik-Ho Tsang, Wei Kuang, Yui-Lam Chan, W. Siu
DOI: 10.1109/APSIPA.2016.7820900 | Published: 2016-12-01
Abstract: The Intra Block Copy (IntraBC) mode is a very efficient coding tool in the screen content coding (SCC) extension of High Efficiency Video Coding (HEVC), finding repeating patterns within the same frame. However, it also brings impractically high computational complexity to SCC, which can be double that of conventional HEVC, since exhaustive block matching is performed within the same frame even with the constraints already applied to the IntraBC mode. To reduce the complexity, we propose to skip unnecessary IntraBC mode checking based on the activity and gradient within the coding unit (CU). With the proposed methods, the encoding time increase over conventional HEVC is reduced from 90.0% to 62.2% on average, while coding efficiency is maintained with only a negligible bitrate increase.
Citations: 15
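The skip decision based on CU activity and gradient can be sketched as below. The variance and gradient definitions and the threshold values are illustrative assumptions, not the paper's derived decision rules:

```python
def should_skip_intrabc(cu, activity_thresh=100.0, grad_thresh=50.0):
    """Skip the costly IntraBC block-matching search for smooth CUs.

    activity: sample variance of the CU (natural-content regions are smooth).
    grad: mean absolute horizontal + vertical difference (screen content
    such as text has sharp edges and high gradient).
    """
    n = len(cu) * len(cu[0])
    mean = sum(sum(row) for row in cu) / n
    activity = sum((p - mean) ** 2 for row in cu for p in row) / n
    grad = 0.0
    for j, row in enumerate(cu):
        for i, p in enumerate(row):
            if i + 1 < len(row):
                grad += abs(row[i + 1] - p)
            if j + 1 < len(cu):
                grad += abs(cu[j + 1][i] - p)
    grad /= n
    # Low activity and low gradient suggest natural, smooth content where
    # IntraBC rarely wins, so the exhaustive same-frame search is skipped.
    return activity < activity_thresh and grad < grad_thresh
```

A flat CU is classified as skippable, while a checkerboard-like CU (typical of screen content) keeps the IntraBC check, which is the direction of saving the abstract reports.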