2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC): Latest Publications

Analyzing Gully Planform Changes in GIS Based on Multi-level Topological Relations
2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC) Pub Date: 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177461
Feng Guoqiang, Leng Liang, Ye Yinghui, Han Dong-liang
{"title":"Analyzing Gully Planform Changes in GIS Based on Multi-level Topological Relations","authors":"Feng Guoqiang, Leng Liang, Ye Yinghui, Han Dong-liang","doi":"10.1109/ICIVC50857.2020.9177461","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177461","url":null,"abstract":"Topological relations can be used to describe qualitative geometric position relations between spatial objects in geospatial world, which plays important roles in spatial query, spatial analysis and spatial reasoning. People can apply topological relations to describe the morphological changes of real objects, such as changes of cadastral parcels, rivers, water systems, etc. Gully planform changes (GPCs) reflect the state of surface soil erosion, so it is important and valuable to describe GPCs in detail. In this paper, based on a hierarchical topological relation description method and combined with the features of GPCs in GIS, we propose a simple hierarchical topological relationship description method to describe GPCs. This method can be used to completely describe GPCs, and is more concise and efficient than the former hierarchical topological relation description method in describing GPCs.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"29 1","pages":"292-295"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82513028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
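As a rough illustration of the abstract's core idea — relating two snapshots of a region topologically — the sketch below classifies the coarse relation between two rasterized gully footprints with plain set operations. This is a minimal 4-intersection-style check under assumed toy data, not the authors' multi-level method.

```python
def topological_relation(a, b):
    """Classify the coarse topological relation between two regions,
    each given as a set of raster cells (row, col) covering the region.
    Illustrative only: a 4-intersection-style check, not the paper's
    multi-level hierarchical description."""
    if not (a & b):
        return "disjoint"
    if a == b:
        return "equal"
    if a < b:        # a strictly inside b
        return "inside"
    if b < a:        # b strictly inside a
        return "contains"
    return "overlap"

# Two snapshots of a gully footprint on a toy raster: the later
# footprint (t2) has grown, so the earlier one (t1) lies inside it.
t1 = {(r, c) for r in range(2, 5) for c in range(2, 5)}
t2 = {(r, c) for r in range(1, 6) for c in range(1, 6)}
print(topological_relation(t1, t2))  # inside
```

A hierarchical scheme like the paper's would refine "overlap" further (e.g. by boundary/interior intersections); the coarse labels above are only the top level of such a hierarchy.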
Exploiting Sparse Topics Mining for Temporal Event Summarization
2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC) Pub Date: 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177457
Zhen Yang, Yingzhe Yao, Shanshan Tu
{"title":"Exploiting Sparse Topics Mining for Temporal Event Summarization","authors":"Zhen Yang, Yingzhe Yao, Shanshan Tu","doi":"10.1109/ICIVC50857.2020.9177457","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177457","url":null,"abstract":"Information explosion, both in cyberspace world and real world, nowadays has brought about pressing needs for comprehensive summary of information. The challenge for constructing a quality one lies in filtering out information of low relevance and mining out highly sparse relevant topics in the vast sea of data. It is a typical imbalanced learning task and we need to achieve a precise summary of temporal event via an accurate description and definition of the useful information and redundant information. In response to such challenge, we introduced: (1) a uniform framework of temporal event summarization with minimal residual optimization matrix factorization as its key part; and (2) a novel neighborhood preserving semantic measure (NPS) to capture the sparse candidate topics under that low-rank matrix factorization model. To evaluate the effectiveness of the proposed solution, a series of experiments are conducted on an annotated KBA corpus. 
The results of these experiments show that the solution proposed in this study can improve the quality of temporal summarization as compared with the established baselines.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"074 1","pages":"322-331"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89799160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
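The abstract's key component is a low-rank factorization that minimizes the reconstruction residual of a term-document matrix. The sketch below is an illustrative stand-in under toy data: a plain gradient-descent factorization M ≈ W·H, not the paper's minimal-residual algorithm or its NPS measure.

```python
import random

def factorize(M, rank=2, steps=3000, lr=0.01, reg=0.01):
    """Approximate a term-document matrix M (list of rows) as W @ H by
    gradient descent on the squared residual ||M - WH||^2 with light
    L2 regularization. Illustrative stand-in for the paper's
    minimal-residual low-rank factorization."""
    random.seed(0)
    n, m = len(M), len(M[0])
    W = [[random.random() for _ in range(rank)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(rank)]
    for _ in range(steps):
        for i in range(n):
            for j in range(m):
                e = M[i][j] - sum(W[i][k] * H[k][j] for k in range(rank))
                for k in range(rank):
                    W[i][k] += lr * (2 * e * H[k][j] - reg * W[i][k])
                    H[k][j] += lr * (2 * e * W[i][k] - reg * H[k][j])
    return W, H

# Toy 4x4 term-document count matrix with two latent "topics"
# (documents 0-1 share one topic, documents 2-3 the other).
M = [[5, 4, 0, 0],
     [5, 4, 0, 0],
     [0, 0, 3, 4],
     [0, 0, 3, 4]]
W, H = factorize(M)
residual = sum((M[i][j] - sum(W[i][k] * H[k][j] for k in range(2))) ** 2
               for i in range(4) for j in range(4))
print(round(residual, 4))
```

Because M has exact rank 2, the rank-2 factorization can drive the residual close to zero; on real, noisy term-document data the residual that remains is what the paper's sparse-topic mining works with.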
Straw Burning Detection Method Based on Improved Frame Difference Method and Deep Learning
2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC) Pub Date: 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177456
Shiwei Wang, Feng Yu, Changlong Zhou, Minghua Jiang
{"title":"Straw Burning Detection Method Based on Improved Frame Difference Method and Deep Learning","authors":"Shiwei Wang, Feng Yu, Changlong Zhou, Minghua Jiang","doi":"10.1109/ICIVC50857.2020.9177456","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177456","url":null,"abstract":"Straw burning has serious pollution to the air. Only by finding the location of straw burning can we stop the pollution caused by straw burning. The detection of straw burning can start from two aspects: flame and smoke. Because straw burning is usually accompanied by strong smoke, we decide to determine whether there is straw burning through smoke. The existing smoke detection methods all has various shortcomings, such as not using the dynamic characteristics of smoke, and inefficient and complex processing. Therefore, this paper proposes a smoke detection method based on improved frame difference method and Faster R-CNN. For smoke detection, first uses the improved frame difference method to extracts candidate regions, and then uses the Faster R-CNN model for smoke detection. For the extracted candidate areas, this paper proposes a variety of schemes to expands the candidate areas to ensure that the complete smoke information could be obtained to the maximum extent. Through the experiment, we get the best expansion scheme. 
Experiments shows that the improved frame difference method has obvious effects, compared to Faster R-CNN model method, the maximum accuracy rate has improved by 10.6%.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"37 1","pages":"29-33"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88345097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
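The candidate-region step described above — difference two frames, threshold, then expand the detected region — can be sketched as follows. This is a minimal illustration on toy grayscale frames with a fixed expansion margin, not the paper's improved method or its compared expansion schemes.

```python
def frame_difference_region(prev, curr, thresh=25, expand=2):
    """Extract a motion candidate region from two grayscale frames
    (lists of equal-length rows): threshold the absolute per-pixel
    difference, take the bounding box of changed pixels, and expand
    it by `expand` pixels on each side (clamped to the image)."""
    h, w = len(curr), len(curr[0])
    moving = [(r, c) for r in range(h) for c in range(w)
              if abs(curr[r][c] - prev[r][c]) > thresh]
    if not moving:
        return None
    rows = [r for r, _ in moving]
    cols = [c for _, c in moving]
    return (max(min(rows) - expand, 0), max(min(cols) - expand, 0),
            min(max(rows) + expand, h - 1), min(max(cols) + expand, w - 1))

# Toy 8x8 frames: a bright 2x2 "smoke" patch appears in the second frame.
prev = [[0] * 8 for _ in range(8)]
curr = [row[:] for row in prev]
for r in (3, 4):
    for c in (3, 4):
        curr[r][c] = 200
print(frame_difference_region(prev, curr))  # (1, 1, 6, 6)
```

In the paper's pipeline, the expanded box would then be cropped and passed to the Faster R-CNN detector; the expansion guards against the difference mask under-covering the smoke plume.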
A New Method for Polygon Detection Based on Hough Parameter Space and USAN Region
2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC) Pub Date: 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177469
Li Shupei, Z. Hui, Zhang Zhisheng, Xia Zhijie
{"title":"A New Method for Polygon Detection Based on Hough Parameter Space and USAN Region","authors":"Li Shupei, Z. Hui, Zhang Zhisheng, Xia Zhijie","doi":"10.1109/ICIVC50857.2020.9177469","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177469","url":null,"abstract":"This paper propose a new approach that combine Hough Transform (HT) and corner detection to detect polygons, which consider integrated characteristics not the individual characteristics. We establish a Polygon Parameter Space (PPS) to fit and characterize polygons, which consist of angles, coordinates, USAN values and every two lines of intersections. Firstly, canny operator is used to extract edges map, applied HT to detect line along edges of polygon shape and compute PPS. Secondly, corner detection among intersections is realized by comparing USAN value with angle of intersections, an adaptive threshold and adjusted brightness of nucleus of USAN is introduced to obtain accurate vertices from corners. Finally, we propose an algorithm based on Deep First Search (DFS) to fit the set of vertices regardless convex polygons (CVPs) or concave polygons (CCPs) according to parameters in PPS. 
The experimental results show that the proposed approach can effectively detect polygons with a less running time and higher accuracy, and shows the advantage of detecting the CVP and CCP shapes of broken vertices.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"1 1","pages":"44-49"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76291715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
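The CVP/CCP distinction the abstract relies on comes down to whether a vertex is convex or reflex, which the sign of a cross product decides. The sketch below labels each vertex of a toy polygon this way; it illustrates only that one geometric test, not the paper's HT + USAN + DFS pipeline.

```python
def vertex_types(poly):
    """For a simple polygon given as counter-clockwise (x, y) vertices,
    label each vertex 'convex' or 'concave' via the z-component of the
    cross product of its two adjacent edges (positive = left turn =
    convex, for CCW orientation)."""
    n = len(poly)
    labels = []
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = poly[i - 1], poly[i], poly[(i + 1) % n]
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        labels.append("convex" if cross > 0 else "concave")
    return labels

# An arrow-like polygon (CCW) with one reflex vertex at (1, 1).
arrow = [(0, 0), (2, 0), (2, 2), (1, 1), (0, 2)]
print(vertex_types(arrow))
# ['convex', 'convex', 'convex', 'concave', 'convex']
```

A concave polygon is then simply one whose label list contains at least one 'concave' entry, which is how a vertex-set fitter can branch between CVP and CCP handling.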
Exploring the Properties of Points Generation Network
2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC) Pub Date: 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177470
Di Chen, Yi Wu
{"title":"Exploring the Properties of Points Generation Network","authors":"Di Chen, Yi Wu","doi":"10.1109/ICIVC50857.2020.9177470","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177470","url":null,"abstract":"With the development of deep learning, learning-based 3D reconstruction has attracted a substantial amount of attention and various single-image 3D reconstruction networks have been proposed. However, due to self-occlusion, the information captured in a single image is highly limited, resulting in inaccuracy and instability in reconstruction results. In this paper, a feature combination module is proposed to enable existing single-image 3D reconstruction networks to perform 3D reconstruction from multiview images. In addition, we study the impact of the number of the input multiview images as well as the network output points on reconstruction quality, in order to determine the required number of the input multiview images and the output points for reasonable reconstruction. In experiment, point cloud generations with different number of input images and output points are conducted. Experimental results show that the Chamfer distance decreases by 20%∼30% with the optimal number of input multiview images of five and at least 1000 output points.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"39 6 1","pages":"272-277"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80911067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
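The quality metric quoted above is the Chamfer distance between point clouds. A minimal sketch of the standard symmetric form (averaged squared nearest-neighbour distances in both directions) on toy point sets — the brute-force O(|A||B|) version, not an implementation from the paper:

```python
def chamfer_distance(A, B):
    """Symmetric Chamfer distance between two 3D point sets: for each
    point, the squared distance to its nearest neighbour in the other
    set, averaged per set and summed over both directions."""
    def sq(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    a_to_b = sum(min(sq(p, q) for q in B) for p in A) / len(A)
    b_to_a = sum(min(sq(p, q) for p in A) for q in B) / len(B)
    return a_to_b + b_to_a

# Toy data: a unit-cube corner cloud vs. the same cloud shifted 0.1 in x.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
shifted = [(x + 0.1, y, z) for x, y, z in cube]
print(round(chamfer_distance(cube, shifted), 3))  # 0.02
```

For the thousands of output points studied in the paper, a k-d tree would replace the brute-force nearest-neighbour search.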
Improved Three-Frame-Difference Algorithm for Infrared Moving Target
2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC) Pub Date: 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177468
X. Luo, Ke-bin Jia, Pengyu Liu, Daoquan Xiong, Xiuchen Tian
{"title":"Improved Three-Frame-Difference Algorithm for Infrared Moving Target","authors":"X. Luo, Ke-bin Jia, Pengyu Liu, Daoquan Xiong, Xiuchen Tian","doi":"10.1109/ICIVC50857.2020.9177468","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177468","url":null,"abstract":"An improved three-frame-difference algorithm is proposed. The algorithm contains two sub-algorithm‐‐‐‐double-layer three-frame-difference algorithm, and history location data statistics and analysis algorithm. Double-layer three-frame-difference algorithm can fill the incomplete parts, and the history location data statistics and analysis algorithm can eliminate noise. Two examples containing one target (with size about 10*40 pixels) and two targets (with size about 80*160 pixels) respectively are chosen. Results of them prove that the improved three-frame-difference algorithm can resolve the problems of traditional one-layer three-frame-difference algorithm, and get the accurate results.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"66 1","pages":"108-112"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80218848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
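For context, the traditional single-layer baseline the paper improves on works by AND-ing two thresholded frame differences. A minimal sketch on toy 1x5 frames — the paper's double-layer filling and history-statistics steps are omitted:

```python
def three_frame_difference(f1, f2, f3, thresh=20):
    """Classic three-frame difference: threshold |f2-f1| and |f3-f2|
    and AND the two masks, giving the moving-target mask for the
    middle frame f2. Frames are lists of equal-length pixel rows."""
    h, w = len(f2), len(f2[0])
    return [[1 if (abs(f2[r][c] - f1[r][c]) > thresh and
                   abs(f3[r][c] - f2[r][c]) > thresh) else 0
             for c in range(w)] for r in range(h)]

# A bright 1-pixel target moving right across three 1x5 frames:
# only its position in the middle frame survives the AND.
f1 = [[0, 200, 0, 0, 0]]
f2 = [[0, 0, 200, 0, 0]]
f3 = [[0, 0, 0, 200, 0]]
print(three_frame_difference(f1, f2, f3))  # [[0, 0, 1, 0, 0]]
```

The AND also explains the baseline's known weakness the paper targets: slow or large uniform targets leave holes in the mask, which the double-layer variant is designed to fill.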
A Novel 3D Surface Reconstruction Method with Posterior Constraints of Edge Detection
2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC) Pub Date: 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177450
Hongtao Wu, Ying Meng, Bingqing Niu
{"title":"A Novel 3D Surface Reconstruction Method with Posterior Constraints of Edge Detection","authors":"Hongtao Wu, Ying Meng, Bingqing Niu","doi":"10.1109/ICIVC50857.2020.9177450","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177450","url":null,"abstract":"This paper proposed a novel 3D surface reconstruction method with posterior constraints of edge detection applying to general digital camera. The intrinsic parameters are calibrated with Zhang calibration method. After matching the images taken at two different orientations, the fundamental matrix and corresponding motion parameters by two different orientations are estimated, selecting optical center coordinate system of left camera as world coordinate system, and the projection matrix corresponding the two orientations is obtained. At last, the 3D coordinates of object feature point is computed and object surface is displayed with VRML technology. This system is simple, in addition, the proposed method is suit for general digital camera.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"441 1","pages":"55-58"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82918862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Recognition Method for Underwater Acoustic Target Based on DCGAN and DenseNet
2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC) Pub Date: 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177493
Yingjie Gao, Yuechao Chen, Fangyong Wang, Yalong He
{"title":"Recognition Method for Underwater Acoustic Target Based on DCGAN and DenseNet","authors":"Yingjie Gao, Yuechao Chen, Fangyong Wang, Yalong He","doi":"10.1109/ICIVC50857.2020.9177493","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177493","url":null,"abstract":"The scarcity and access difficulty of labeled underwater acoustic samples have created a bottleneck in introducing deep learning methods into recognition tasks of underwater acoustic targets. In this paper, a recognition method based on the combination of Deep Convolutional Generative Adversarial Network (DCGAN) and Densely Connected Convolutional Networks (DenseNet) for underwater acoustic targets is proposed aiming at these problems. On the basis of meeting the adaption requirements of the deep learning model for the input form, the sample set of wavelet time-frequency graph for the underwater acoustic target was constructed, combined with the prior knowledge of conventional sonar signal processing. The DCGAN model for generation of underwater acoustic sample and the DenseNet model for recognition of underwater acoustic target are designed, and the quality of generated samples is optimized through three stages of iterative training, thus expanding the training set, and improving the recognition effect of underwater acoustic target.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"1 1","pages":"215-221"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82970421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Improved Digital Orthorectification Map Generation Approach Using the Integrating of ZY3 and GF3 Image
2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC) Pub Date: 2020-07-01 DOI: 10.1109/ICIVC50857.2020.9177473
Li Guo, Xia Wang, Mingyu Yue
{"title":"Improved Digital Orthorectification Map Generation Approach Using the Integrating of ZY3 and GF3 Image","authors":"Li Guo, Xia Wang, Mingyu Yue","doi":"10.1109/ICIVC50857.2020.9177473","DOIUrl":"https://doi.org/10.1109/ICIVC50857.2020.9177473","url":null,"abstract":"The integrating of ZY3 and GF3 satellite image can achieved long-term acquisition of DOM product, however, the selection of point in SAR image was a difficult task, and the edge error cannot be guaranteed if the optical and SAR images were produced individually, an improved DOM product generation approach was proposed in order to ensure the accuracy and efficiency of DOM generation. 29 scenes of ZY3 image and 20 scenes of GF3 image in Hainan island were selected as experimental image, and the results showed that the horizontal accuracy of 73.47% of orthorectification result were better than 1 pixel (10 m), and 26.53% of orthorectification result were better than 2 pixels (20 m), which can meet the horizontal accuracy requirement of 1:50000 scale surveying and mapping in China. At the same time, an improved approach using the dodging and mosaic line editing was proposed to integrate the orthrectification image, it can be seen from this article that the color transition of ZY3 data region and the hue transition of GF3 data region was more natural, and the manual editing was not big. 
Therefore, the efficiency and accuracy of the improved approach of DOM generation proposed in this paper can be guaranteed in large areas, which can be used as a reference for users when generating the DOM using the integrating image of ZY3 and GF3.","PeriodicalId":6806,"journal":{"name":"2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC)","volume":"50 1","pages":"82-85"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84158370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0