2016 International Conference on Audio, Language and Image Processing (ICALIP): Latest Publications

Research of tax assessment based on improved Fuzzy Neural Network
2016 International Conference on Audio, Language and Image Processing (ICALIP) Pub Date : 2016-07-01 DOI: 10.1109/ICALIP.2016.7846649
Jingjing Wang, Xiaoqing Yu, Pengfei Li
{"title":"Research of tax assessment based on improved Fuzzy Neural Network","authors":"Jingjing Wang, Xiaoqing Yu, Pengfei Li","doi":"10.1109/ICALIP.2016.7846649","DOIUrl":"https://doi.org/10.1109/ICALIP.2016.7846649","url":null,"abstract":"Recently, establishment and maintenance of the tax assessment indicators system is still in the stage of manual operation. The accuracy of tax assessment depends on the officials' judgment and analysis which bring them huge amount of work. Furthermore, the evaluation results are affected by manual factors and not reliable. To improve tax assessment, this paper proposes a tax assessment model based on PSO-FNN-Adaboost. In this proposed model, PSO (Particle Swarm Optimization) is used to optimize FNN (Fuzzy Neural Network) weak classifier, and then Adaboost is utilized to combine multiple PSO-FNN weak classifiers into a strong classifier. The experiment is designed to validate the proposed model. The original model based on PSO-FNN-Adaboost method is trained to get the classification model of the assessment levels. Then the classification model is tested. The experimental results show that the proposed model improved the prediction performance of tax assessment. Compared with single PSO-FNN weak classifier, the accuracy of PSO-FNN-Adaboost strong classifier is increased by 5%.","PeriodicalId":184170,"journal":{"name":"2016 International Conference on Audio, Language and Image Processing (ICALIP)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130767281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Analysis on the influence of number of loudspeakers on the error in binaural pressure spectra for spatial Ambisonics reproduction
2016 International Conference on Audio, Language and Image Processing (ICALIP) Pub Date : 2016-07-01 DOI: 10.1109/ICALIP.2016.7846605
Jiang Jianliang, Mai Haiming, Xie Bosun, Rao Dan, Liu Yang
{"title":"Analysis on the influence of number of loudspeakers on the error in binarual pressure spectra for spatial Ambisonics reproduction","authors":"Jiang Jianliang, Mai Haiming, Xie Bosun, Rao Dan, Liu Yang","doi":"10.1109/ICALIP.2016.7846605","DOIUrl":"https://doi.org/10.1109/ICALIP.2016.7846605","url":null,"abstract":"Ambisonics is a series of spatial sound reproduction system with flexible number of loudspeakers. Based on spatial harmonics decomposition and truncation at various orders, it aims at approximate reconstruction of target sound field within a certain region and below a certain frequency limit. The region and frequency limit are determined by spatial sampling theorem of sound field. The present work evaluates the influence of number of loudspeakers on the errors in binaural pressure spectra for spatial Ambisonics reproduction. Binaural pressures for target sound field and Ambisonics reproduction are calculated and normalized square amplitude error in binaural pressures is used as error index. The results indicate that, for each order Ambisonics reproduction, and when the number of loudspeakers exceeds the low limit, increasing the number of loudspeakers reduces, or at least does not increase the errors in binaural pressures. Around the frequency limit given by spatial sampling theorem, the reduction on the errors in binaural pressures by increasing number of loudspeakers is significant. On the other hand, below or above that frequency limit moderately, the reduction is insignificant in terms of auditory perception.","PeriodicalId":184170,"journal":{"name":"2016 International Conference on Audio, Language and Image Processing (ICALIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130772243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
An improved artificial bee colony algorithm for range image registration
2016 International Conference on Audio, Language and Image Processing (ICALIP) Pub Date : 2016-07-01 DOI: 10.1109/ICALIP.2016.7846584
Xiao Lu, TaiFeng Li, Liang Gao, H. Qiu
{"title":"An improved artificial bee colony algorithm for range image registration","authors":"Xiao Lu, TaiFeng Li, Liang Gao, H. Qiu","doi":"10.1109/ICALIP.2016.7846584","DOIUrl":"https://doi.org/10.1109/ICALIP.2016.7846584","url":null,"abstract":"Range image registration is an attractive topic in image processing field. It aims at finding an optimal transformation or correspondence between images captured from different views. Iterative closest point (ICP) algorithm is the most well-known method for registration. However, it needs a pre-alignment typically provided by the user. To overcome this drawback of ICP algorithm, many intelligent algorithms have been introduced to solve the registration problem. In this paper, we present an improved artificial bee colony (ABC) algorithm for range image registration. Inspired by particle swarm optimization algorithm, a new solution-updating strategy is proposed and introduced into the ABC. Consequently, the search ability of ABC algorithm is obviously enhanced while more precise results are obtained. Dozens of experiments are conducted to compare the performance of the proposed algorithm and other registration methods. The results demonstrate that the improved algorithm outperforms other algorithms.","PeriodicalId":184170,"journal":{"name":"2016 International Conference on Audio, Language and Image Processing (ICALIP)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126922205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Text line segmentation using Viterbi algorithm for the palm leaf manuscripts of Dai
2016 International Conference on Audio, Language and Image Processing (ICALIP) Pub Date : 2016-07-01 DOI: 10.1109/ICALIP.2016.7846561
Ge Peng, Pengfei Yu, Haiyan Li, Lesheng He
{"title":"Text line segmentation using Viterbi algorithm for the palm leaf manuscripts of Dai","authors":"Ge Peng, Pengfei Yu, Haiyan Li, Lesheng He","doi":"10.1109/ICALIP.2016.7846561","DOIUrl":"https://doi.org/10.1109/ICALIP.2016.7846561","url":null,"abstract":"The text line segmentation process is a key step in an optical character recognition (OCR) system. Several common approaches, such as projection-based methods and stochastic methods, have been put forward to fulfill this task. However, most of existing methods cannot be directly applied to process the palm leaf manuscripts of Dai which the images have poor quality and include smudges, creases, stroke deformation and character touching. To solve this problem, an improved Viterbi algorithm based on Hidden Markov Model (HMM) is proposed to find all possible segmentation paths firstly. And then, a path filtering method is used to detect the optimal paths for the segmented text blocks. The performance of the method is compared with relevant methods and the experimental results demonstrate the effectiveness of the proposed method.","PeriodicalId":184170,"journal":{"name":"2016 International Conference on Audio, Language and Image Processing (ICALIP)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114457704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
An event extraction method by semantic role analysis
2016 International Conference on Audio, Language and Image Processing (ICALIP) Pub Date : 2016-07-01 DOI: 10.1109/ICALIP.2016.7846566
Zhang Shun-rui, Xu Yu-qing, Zhou Xin-jian, Yue Hui, Zhu Xiao-wen
{"title":"An event extraction method by semantic role analysis","authors":"Zhang Shun-rui, Xu Yu-qing, Zhou Xin-jian, Yue Hui, Zhu Xiao-wen","doi":"10.1109/ICALIP.2016.7846566","DOIUrl":"https://doi.org/10.1109/ICALIP.2016.7846566","url":null,"abstract":"The paper does study on event extraction from news on the internet by the method of semantic role analysis. We annotate the sentence in the news from argument annotator, extract the argument structure of the head verb, convert the arguments to specific semantic roles of the verb, and then match the semantic roles to the event elements. This paper puts forward and studies on how to use VerbNet and SemLink resources to match the verb's arguments and event elements. The experiment was carried out on the 1000 news corpus crawled from the web, and the result shows that the F value is up to 70.6% and has certain application value.","PeriodicalId":184170,"journal":{"name":"2016 International Conference on Audio, Language and Image Processing (ICALIP)","volume":"186 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121840516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Correction of distortion mask speech based on parameter estimation of AR model
2016 International Conference on Audio, Language and Image Processing (ICALIP) Pub Date : 2016-07-01 DOI: 10.1109/ICALIP.2016.7846621
Wang Guang-yan, Zhao Chen-Yu, Xue Xiaozhen, Zhang Jing, Zhao Xiao-qun
{"title":"Correction of distortion mask speech based on parameter estimation of AR model","authors":"Wang Guang-yan, Zhao Chen-Yu, Xue Xiaozhen, Zhang Jing, Zhao Xiao-qun","doi":"10.1109/ICALIP.2016.7846621","DOIUrl":"https://doi.org/10.1109/ICALIP.2016.7846621","url":null,"abstract":"The generation model of speech signal has been regarded as an all-pole AR model. Distortion will happen when normal speech is disturbed or interfered. In this paper, we proposed a new signal model excited by the non-white noise signal to represent transfer function of a closed oxygen mask. Using LPC method to find the parameters of the all-pole signal model from the practical distortion signal, the prediction model is in accordance with the theoretical estimated of AR model Consequently, we can design the transfer function of the inverse filter with respect to the transfer function of the estimated model. The inverse filter is in series connection with the distortion filter in order to correct the distortion speech recorded by wearing the mask. By comparing the waveforms, normalized spectrums and spectrograms among the normal speech, the distortion speech, and the corrected speech using the proposed method, the experiment results indicate the feasibility and availability of the proposed method.","PeriodicalId":184170,"journal":{"name":"2016 International Conference on Audio, Language and Image Processing (ICALIP)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122438825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Image segmentation based on PCNN model combined with automatic wave and synaptic integration
2016 International Conference on Audio, Language and Image Processing (ICALIP) Pub Date : 2016-07-01 DOI: 10.1109/ICALIP.2016.7846616
Caihong Zhu, Shiyang Chen, Jinyong Gao, Wang Xia
{"title":"Image segmentation based on PCNN model combined with automatic wave and synaptic integration","authors":"Caihong Zhu, Shiyang Chen, Jinyong Gao, Wang Xia","doi":"10.1109/ICALIP.2016.7846616","DOIUrl":"https://doi.org/10.1109/ICALIP.2016.7846616","url":null,"abstract":"A PCNN model combined with synaptic integration and automatic wave is presented in this paper. The fired neurons and unfired neurons in neighborhood are taken as excitatory and inhibitory synapses respectively, and the result of synaptic integration serves as the PCNN linking input; the firing map of the image spreads in decaying automatic wave, then the segmentation result is obtained when the map turn to be stable. The experimental results demonstrate the proposed model perform well in edge areas and restrains the over segmentation phenomenon, the shape measure and the contrast measure are improved at the same time.","PeriodicalId":184170,"journal":{"name":"2016 International Conference on Audio, Language and Image Processing (ICALIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129799667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adaptable P2P panoramic video streaming system based on WebRTC and WebGL
2016 International Conference on Audio, Language and Image Processing (ICALIP) Pub Date : 2016-07-01 DOI: 10.1109/ICALIP.2016.7846552
Zhenhua Hao
{"title":"Adaptable P2P panoramic video streaming system based on WebRTC and WebGL","authors":"Zhenhua Hao","doi":"10.1109/ICALIP.2016.7846552","DOIUrl":"https://doi.org/10.1109/ICALIP.2016.7846552","url":null,"abstract":"Panoramic video has existed for a relatively long time. Different from panorama pictures, panoramic video requires front-end interaction. The view of dynamic scenes can be controlled via both interactive devices and smart phones. In this paper, we describe the initiative of a convenient browser-to-browser application for streaming video using WebRTC tested on a peer-to-peer module. With optional adaption to various kinds of panoramic cameras, this web-based system permits the user to experience interactively web-based panoramic video streams.","PeriodicalId":184170,"journal":{"name":"2016 International Conference on Audio, Language and Image Processing (ICALIP)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128992986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An adaptive anchor frame detection algorithm based on background detection for news video analysis
2016 International Conference on Audio, Language and Image Processing (ICALIP) Pub Date : 2016-07-01 DOI: 10.1109/ICALIP.2016.7846669
Ruilin Xu, Chun-Yu Tsai, J. Kender
{"title":"An adaptive anchor frame detection algorithm based on background detection for news video analysis","authors":"Ruilin Xu, Chun-Yu Tsai, J. Kender","doi":"10.1109/ICALIP.2016.7846669","DOIUrl":"https://doi.org/10.1109/ICALIP.2016.7846669","url":null,"abstract":"When analyzing news videos, finding an efficient way of extracting visual memes is very important. Videos might be very long and visual meme extraction itself is computationally expensive, so it is essential to make this process as efficient as possible. A way to do this is to eliminate as many key frames as possible even before extracting the visual memes. Since anchor person frames contribute little to the content of the news videos, we should remove these frames. This paper proposes an efficient and effective algorithm to detect anchor frames from videos, significantly improving the efficiency of visual meme extraction.","PeriodicalId":184170,"journal":{"name":"2016 International Conference on Audio, Language and Image Processing (ICALIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128304200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Extracting topic keywords from Sina Weibo text sets
2016 International Conference on Audio, Language and Image Processing (ICALIP) Pub Date : 2016-07-01 DOI: 10.1109/ICALIP.2016.7846663
S. Xu, Juncai Guo, Xue Chen
{"title":"Extracting topic keywords from Sina Weibo text sets","authors":"S. Xu, Juncai Guo, Xue Chen","doi":"10.1109/ICALIP.2016.7846663","DOIUrl":"https://doi.org/10.1109/ICALIP.2016.7846663","url":null,"abstract":"Sina Weibo is one of the most popular microblogging website in China. It has more than 500 million registered users and the daily production of posters is over 100 million, with a market penetration similar to Twitter. Mining the useful information from large volume of fragmented short texts is a fundamental but very challenging research work. This paper proposes a method LET(LDA&Entropy&Tex-trank) to extract topic keywords from Sina Weibo topics text sets. LET considers both topic influence of keywords and topic discrimination of keyword that combines the merits of LDA, Entropy and TextRank. In addition, we design a new standard evaluation method KESS (topic KEywords Sta-ndard Sequence). Based on KESS, we can compute the offset loss scores for the four different keywords extraction methods. Extensive simulations show that LET is a comparatively efficient and effective method to obtain topic words from hot topics of Sina Weibo.","PeriodicalId":184170,"journal":{"name":"2016 International Conference on Audio, Language and Image Processing (ICALIP)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121014898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3