2018 International Conference on Audio, Language and Image Processing (ICALIP): Latest Publications

Offshore Oil Slicks Extraction by Landsat Data Based on eCognition Software in South China Sea
2018 International Conference on Audio, Language and Image Processing (ICALIP). Pub Date: 2018-07-01. DOI: 10.1109/ICALIP.2018.8455845
Xu Xing, Jinli Shen
Abstract: Extraction of oil slicks from remote sensing data is fundamental to the investigation of oil spills and natural oil seepage. Traditional manual interpretation is time-consuming and laborious when applied to tens of thousands of remote sensing scenes. To improve the efficiency of oil-slick extraction, eCognition, an object-oriented image analysis package, is used to build a rule set that enables batch processing of oil-slick extraction. Landsat TM data of a known oil-seep site in the Gulf of Mexico serve as training data for the rule set. After image segmentation, hierarchical classification, training-sample selection, feature-space optimization, and nearest-neighbor classification, oil slicks and several other sea-surface targets were extracted in shapefile format within ten minutes. The results show that rule-set extraction in eCognition is faster and more accurate than manual interpretation. Applying the rule set to the South China Sea yielded two significant findings: pollution around the White Tiger oil field is severe, with oil slicks detectable on almost all available Landsat TM scenes; and the first natural oil and gas seep in the South China Sea has come to light.
Citations: 1
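The rule-set pipeline Xing and Shen describe (segment the image into objects, then classify the objects by their features) can be illustrated outside eCognition. Below is a minimal Python sketch under assumed thresholds, not the paper's actual rules: threshold a normalized band (slicks damp sun glint and appear dark), flood-fill dark connected components as "image objects", and keep those above a minimum area.

```python
def extract_dark_slick_candidates(band, dark_thresh=0.3, min_area=5):
    """Object-based sketch of rule-set extraction.
    band: 2D list of reflectance-like values in [0, 1].
    dark_thresh and min_area are illustrative, not from the paper."""
    h, w = len(band), len(band[0])
    seen = [[False] * w for _ in range(h)]
    candidates = []
    for y in range(h):
        for x in range(w):
            if seen[y][x] or band[y][x] >= dark_thresh:
                continue
            # Flood-fill one dark connected component (an "image object").
            stack, pixels = [(y, x)], []
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                pixels.append((cy, cx))
                for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and band[ny][nx] < dark_thresh:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if len(pixels) >= min_area:  # rule: discard tiny objects as noise
                candidates.append({"area": len(pixels)})
    return candidates
```

A real rule set would add feature-space tests (shape, contrast to surrounding water) and a nearest-neighbor classifier over training samples, as the abstract describes.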
Chinese Image Caption Based on Deep Learning
2018 International Conference on Audio, Language and Image Processing (ICALIP). Pub Date: 2018-07-01. DOI: 10.1109/ICALIP.2018.8455856
Ziyue Luo, Huixian Kang, P. Yao, W. Wan
Abstract: Automatically generating a sentence for a given image, a task known as image captioning, has attracted increasing attention from researchers in computer vision. In this paper, we present a method to generate Chinese captions for images. We adopt a Chinese dataset called the ICC dataset and propose an improved language-generation model. Experiments show that generating Chinese captions with the ICC dataset and our model is reasonable and efficient.
Citations: 2
Evaluating the Effect of Transitions on the Viewing Experience for VR Video
2018 International Conference on Audio, Language and Image Processing (ICALIP). Pub Date: 2018-07-01. DOI: 10.1109/ICALIP.2018.8455352
Tingting Zhang, Feng Tian, Xiaofei Hou, Qirong Xie, Fei Yi
Abstract: Traditional transition effects have begun to be applied to VR movies, but studies evaluating VR movie transitions are scarce. We design and evaluate different scene transitions in VR movies together with configuration experiments on the region of interest (ROI), and measure their impact on users' viewing experience in terms of perceived continuity, spatial awareness, presence, and comfort. Our results indicate that scene transitions and ROI configurations affect users differently, and an ROI deviation of less than 55 degrees is suggested. At an ROI deviation of 0 degrees, cuts, iris wipes, gradient wipes, and spherical blurs best balance the subjective and objective evaluations; at 55 degrees, iris wipes, spherical blurs, and Möbius zooms perform best.
Citations: 3
ICALIP 2018
2018 International Conference on Audio, Language and Image Processing (ICALIP). Pub Date: 2018-07-01. DOI: 10.1109/icalip.2018.8455275
Cover of proceedings.
Citations: 0
ICALIP2018 Table of Contents
2018 International Conference on Audio, Language and Image Processing (ICALIP). Pub Date: 2018-07-01. DOI: 10.1109/icalip.2018.8455739
Danbei Wang, Zhengzheng Cui, Huiming, Ding, Shuqi Yan, Zhifeng Xie, Kaiyue Li, Lizhen Shi, Qunfei Zhao, Yun Long, Yuzhang Wang, Feng Tian, Yunwen Zhu, Ruiwen Hu, Zhao, Chen, Xingxing, Yu, Guangchen, Junfeng Zhou, Zhuochen Lei, Xiaoqing Yu
Oral Session I: Image and Video Processing.
Citations: 0
Template Oriented Text Summarization via Knowledge Graph
2018 International Conference on Audio, Language and Image Processing (ICALIP). Pub Date: 2018-07-01. DOI: 10.1109/ICALIP.2018.8455241
Pin Wu, Quan Zhou, Zhidan Lei, Weijian Qiu, Xiaoqiang Li
Abstract: People are flooded with massive amounts of semi-structured and unstructured text in their daily work, and a fast-paced lifestyle forces them to extract focused information from that text quickly, so a technology that automatically produces abstracts is urgently needed. Traditional extractive summarization can only pick out keywords or key sentences, and although popular sequence-to-sequence methods improve greatly on traditional ones, they cannot incorporate background information to reach a higher level of abstraction. We therefore propose a method based on knowledge-graph technology to automatically generate abstracts. It not only achieves higher-level abstraction of the text, but also supports template selection and question answering to produce personalized abstracts. Experiments on the CNN/DailyMail dataset show that the resulting abstracts reflect more of the text's information, align better with human reading habits, enable personalized extraction, and achieve ROUGE scores close to the best reported results.
Citations: 11
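The template-plus-knowledge-graph idea can be sketched in a few lines: given (subject, relation, object) triples from a knowledge graph, fill a chosen template's slots by relation name. The triples, relation names, and template below are invented for illustration; the paper's actual templates and graph construction are not specified here.

```python
def summarize(triples, template):
    """Fill a summary template from knowledge-graph triples.
    triples: iterable of (subject, relation, object) tuples.
    template: string with {relation}-named slots."""
    slots = {rel: obj for _, rel, obj in triples}  # index objects by relation
    return template.format(**slots)

# Hypothetical example data, not from the paper or its dataset.
triples = [("ACME", "founded_by", "A. Smith"),
           ("ACME", "headquartered_in", "Shanghai")]
template = ("The company was founded by {founded_by} "
            "and is headquartered in {headquartered_in}.")
```

Choosing a different template for the same triples is what makes the output "personalized" in the sense the abstract describes.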
Outdoor Navigation with Handheld Augmented Reality
2018 International Conference on Audio, Language and Image Processing (ICALIP). Pub Date: 2018-07-01. DOI: 10.1109/ICALIP.2018.8455285
Lianyao Wu, Xiaoqing Yu
Abstract: Recently, outdoor augmented reality (AR) has become readily available, and many smartphone navigation applications provide AR capabilities. However, most of them only show virtual points of interest (POIs) on the map or overlay virtual information on real scenes captured by the camera. We found these applications helpful when users are looking for interesting places, but inconvenient for users who intend to reach a destination; arrows indicating directions on navigation maps are also unintuitive while en route. In this paper, we propose an outdoor navigation system that combines Baidu Map with AR, in which a virtual model guides users to their destinations.
Citations: 4
Post-Secondary Filtering Improvement of GSC Beamforming Algorithm
2018 International Conference on Audio, Language and Image Processing (ICALIP). Pub Date: 2018-07-01. DOI: 10.1109/ICALIP.2018.8455622
Lu Zhang, Mingjiang Wang, Qiquan Zhang, Hu Chen
Abstract: In this paper, we propose an innovative post-secondary filtering improvement to the time-domain generalized sidelobe canceller (GSC) that incorporates human acoustic perception. The secondary filter applies the minimum controlled recursive averaging (MCRA) algorithm as a decision unit to choose the proper noise reduction approach for suppressing pure noise segments or noisy speech segments. Its central aim is to push noise below the threshold of human hearing, and experimental results demonstrate that the improved algorithm clearly eliminates residual noise, including point noise and diffuse noise. The improved algorithm also further strengthens the GSC's ability to suppress speech interference.
Citations: 0
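For readers unfamiliar with the GSC structure the paper builds on, here is a minimal time-domain sketch for a two-microphone broadside array: a fixed delay-and-sum branch, a blocking matrix that cancels the target, and an NLMS adaptive noise canceller. The MCRA-based secondary post-filter that is the paper's actual contribution is omitted, and all parameters are illustrative.

```python
import numpy as np

def gsc_broadside(mics, mu=0.01, taps=8):
    """Minimal time-domain GSC for a 2-mic broadside array
    (target at 0 degrees, so no steering delays are needed).
    mics: array of shape (2, n_samples). Sketch only; the paper's
    MCRA-driven secondary filter is not implemented here."""
    fixed = mics.mean(axis=0)        # fixed beamformer: delay-and-sum
    blocked = mics[0] - mics[1]      # blocking matrix: cancels the target
    w = np.zeros(taps)               # adaptive noise canceller weights
    out = np.zeros_like(fixed)
    for n in range(taps, len(fixed)):
        x = blocked[n - taps:n][::-1]          # noise-reference tap line
        e = fixed[n] - w @ x                   # subtract estimated noise
        w += (mu / (x @ x + 1e-8)) * e * x     # NLMS update
        out[n] = e
    return out
```

When both microphones carry an identical target signal, the blocking branch is zero and the GSC passes the fixed-beamformer output through unchanged, which is the structure's defining property.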
Region-Based Convolutional Neural Networks for Profiled Fiber Recognition
2018 International Conference on Audio, Language and Image Processing (ICALIP). Pub Date: 2018-07-01. DOI: 10.1109/ICALIP.2018.8455689
Zhao Chen, Xinxin Wang, Yuxin Zheng, Yan Wan
Abstract: Recognizing profiled fibers in electron-microscope (EM) images is one of the main tasks in textile quality inspection. However, many existing methods require fiber samples to be manually located and segmented from EM images before being fed to classifiers, and classic pattern recognition methods cannot handle delicate spatial patterns such as sample overlap and shape variability within the same kind of fiber. As an advanced object detection method, region-based convolutional neural networks (R-CNN) show great potential for profiled fiber recognition. Adapting the R-CNN framework, we carry out the task in three major steps: hierarchical segmentation by selective search, automatic generation of fiber samples by self-designed selection rules, and cascade classification by a CNN along with other classifiers. Within the system, the selection rules applied to region proposals from selective search are tailored to the spatial features of fiber samples, which are then fed to a fine-tuned deep classification network that recognizes multiple fiber shapes. Experimental results indicate that the proposed method outperforms the backpropagation network (BP) and the support vector machine (SVM), especially when cascade classification is employed.
Citations: 0
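The "self-designed selection rules" step, which filters selective-search proposals by spatial features before CNN classification, might look like the following sketch. The area and aspect-ratio thresholds are assumptions for illustration, not values from the paper.

```python
def filter_fiber_proposals(proposals, min_area=100, max_aspect=4.0):
    """Keep region proposals plausible as fiber cross-sections.
    proposals: list of (x, y, w, h) boxes from a proposal generator
    such as selective search. Thresholds are illustrative."""
    kept = []
    for (x, y, w, h) in proposals:
        area = w * h
        aspect = max(w, h) / max(1, min(w, h))  # elongation of the box
        if area >= min_area and aspect <= max_aspect:
            kept.append((x, y, w, h))
    return kept
```

Only the surviving boxes would be cropped and passed to the fine-tuned classification network, which is what keeps the cascade cheap.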
An Efficient 3D Reconstruction System for Chinese Ancient Architectures
2018 International Conference on Audio, Language and Image Processing (ICALIP). Pub Date: 2018-07-01. DOI: 10.1109/ICALIP.2018.8455527
Hao Xu, Yanjun Jin, W. Wan
Abstract: Multi-view 3D reconstruction has been studied for decades by researchers from many countries, yet it remains a hot topic in computer vision. A 3D reconstruction system is generally realized by incremental Structure-from-Motion, which consists of several procedures such as feature detection, feature matching, and triangulation. In this paper, we propose a speed-up 3D reconstruction system that combines several efficient algorithms and can recover 3D structure from a set of unordered images of ancient Chinese architecture. Experimental results show that the system performs well both in efficiency and in the final reconstructed appearance of various buildings.
Citations: 1
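The triangulation step of the incremental Structure-from-Motion pipeline mentioned above can be shown concretely: linear (DLT) triangulation of one point from two views. This is the standard textbook formulation, not code from the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) image coordinates of the point in each view.
    Each observation contributes two linear constraints on the
    homogeneous point X; the solution is the null vector of A."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # right singular vector of smallest value
    return X[:3] / X[3]        # homogeneous -> Euclidean coordinates
```

In a full incremental SfM system this runs after feature matching and pose estimation, once for every matched track, and the result is then refined by bundle adjustment.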