Proceedings of the 2nd International Conference on Vision, Image and Signal Processing: Latest Publications

Autonomous Intelligence for FMV ISR Sensors With A Human In The Loop Decision Support System
Rory A. Lewis
DOI: 10.1145/3271553.3271572 | Published: 2018-08-27
Abstract: As the United States Air Force moves towards autonomous labelling of FMV from ISR sensors, it has experienced unforeseen technical and legal challenges. On the technical side, this research effort identifies these obstacles and presents solutions for them, with a detailed step-by-step analysis of the processes, their testing, and prototypes. On the legal side, the USAF's goal of infusing artificial intelligence into autonomous labelling of FMV is also being challenged by a formidable, looming legal threat: new laws that will force the USAF to include 'humans in the loop' of its artificial intelligence and machine learning systems [20], [7], [15]. Again, we analyze these legal threats and present solutions that allow the inclusion of a human in the loop. It is important to note that our solutions to these technical and legal challenges form a two-pronged approach that yields a Bench-to-Battlefield, Government off-the-shelf (GOTS) autonomous FMV labelling system that will, over time, learn and grow in its ISR identification abilities.
Citations: 0
Hiding MP3 in Colour Image using Whale Optimization
A. Chaudhary, M. K. Chaube
DOI: 10.1145/3271553.3271588 | Published: 2018-08-27
Abstract: Information is the key in today's world, and secure transmission is a must. Cryptography concerns making ciphers and managing keys; it assumes the cipher can be detected and therefore must be strong enough to resist deciphering. But what if the cipher, i.e. the information, is not visible at all? Then it cannot be detected. Steganography makes this possible. This paper discusses how to hide an MP3 file in a digital image such that it becomes difficult to conclude that hidden audio data exists. We utilize the k least significant bits (k-LSB) of pixels in an image to embed audio data bits into selected pixels. The pixels are chosen so that the distortion introduced by embedding is minimized. This requires comparing all possible permutations of pixel values, which may lead to exponential time complexity; for faster computation, the Whale Optimization Algorithm is used to search for the optimal solution.
Citations: 1
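For reference, the following is a minimal sketch of the plain k-LSB embedding step described in the abstract, assuming NumPy arrays. It writes the payload into pixels in sequential order; the paper's Whale-Optimization pixel-selection stage is omitted, and the function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, payload: bytes, k: int = 2) -> np.ndarray:
    """Embed payload bytes into the k least significant bits of a flat view
    of a uint8 cover image. Pixels are taken sequentially here; the paper
    instead selects pixels via Whale Optimization to minimize distortion."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    # Pad the bit stream so it splits evenly into k-bit groups.
    pad = (-len(bits)) % k
    bits = np.concatenate([bits, np.zeros(pad, dtype=np.uint8)])
    groups = bits.reshape(-1, k)
    # Convert each k-bit group into an integer value in [0, 2^k - 1].
    weights = 1 << np.arange(k - 1, -1, -1)
    values = (groups * weights).sum(axis=1).astype(np.uint8)

    stego = cover.copy().reshape(-1)
    if len(values) > stego.size:
        raise ValueError("payload too large for this cover image")
    mask = np.uint8((0xFF << k) & 0xFF)   # clears the k low bits of a pixel
    stego[:len(values)] = (stego[:len(values)] & mask) | values
    return stego.reshape(cover.shape)
```

Extraction reverses the process by reading the k low bits of the same pixel sequence and repacking them into bytes.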
ISR-Brain Machine Intelligence for Unmanned Aircraft Systems
Rory A. Lewis
DOI: 10.1145/3271553.3271594 | Published: 2018-08-27
Abstract: This paper presents a system for extrapolating knowledge and classification rules from existing ISR FMV and creating an ISR-Brain. As combat operations have grown to depend upon assured, live ISR support, US forces face formidable challenges in integrating artificial intelligence (AI) capabilities with existing ISR systems. The common challenge is the gap between the rate at which advances in commercial and academic AI are deployed and the rate at which innovative AI systems are developed and utilized in military domains. ISR, the USAF, and SOCOM need a means to seamlessly integrate military and commercial state-of-the-art systems. The ISR-Brain presented here is capable of converting classifiers in existing ISR FMV into machine learning rules for real-time, multi-source, multi-enclave ISR sensor data, and is adaptable to ongoing research efforts with A2, SOCOM, JIEDO, MITRE, and Project MAVEN to develop and test an ISR-Brain that integrates with all ISR sensors and predicts future Troops in Contact (TIC) and IED events.
Citations: 0
Active Object Detection Through Dynamic Incorporation of Dempster-Shafer Fusion for Robotic Applications
A. S.PouryaHoseini, M. Nicolescu, M. Nicolescu
DOI: 10.1145/3271553.3271564 | Published: 2018-08-27
Abstract: Employing multiple sensing capabilities in a robotic platform offers significant advantages in increasing the recognition abilities of robots. Specifically, for vision-based object detection in a real-world environment, acquiring information from different viewpoints can be decisive for correct classification in the presence of occlusions or to disambiguate between similar objects. For this reason, an active-vision object detection system is proposed in this paper. It is implemented in a robotic environment that uses a 3D camera mounted on the robot's head and an RGB camera on its hand. The system detects and recognizes objects seen from the head camera while computing a confidence score for the classification. In the case of an unreliable classification, another stage of object recognition is dynamically requested, this time from the viewpoint of the hand camera. The objects detected by the two cameras are matched, and their classification decisions are fused through a novel fusion approach based on the Dempster-Shafer evidence theory. Experimental results show sizable improvements in object recognition performance compared to a traditional single-camera configuration, as well as applicability to handling partial occlusions.
Citations: 2
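As background for the fusion step, here is a minimal sketch of Dempster's rule of combination for two sources that assign mass over sets of object classes. The paper proposes its own fusion approach built on this theory; the class names and mass values below are purely illustrative assumptions.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions given as {frozenset_of_classes: mass}.
    Including the full frame of discernment as a key models ignorance."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass that falls on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are irreconcilable")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Example: head-camera and hand-camera beliefs over {cup, bowl}.
Theta = frozenset({"cup", "bowl"})               # frame of discernment
head = {frozenset({"cup"}): 0.6, frozenset({"bowl"}): 0.1, Theta: 0.3}
hand = {frozenset({"cup"}): 0.5, frozenset({"bowl"}): 0.2, Theta: 0.3}
print(dempster_combine(head, hand))
```

The normalization by 1 - K (the conflict mass) is what lets agreeing evidence from the two viewpoints reinforce a class while disagreeing evidence is discounted.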
A Modified Retinex Algorithm for Visualization of High Dynamic Range Images
Yongjun Zhuang, Lei Liang, Dongqun Xu
DOI: 10.1145/3271553.3271557 | Published: 2018-08-27
Abstract: To solve the problem of high dynamic range (HDR) image visualization in consumer electronics products, this paper proposes an improved Retinex algorithm. The proposed algorithm improves the traditional center/surround Retinex algorithm in three respects. First, a fast bilateral filter is used instead of a Gaussian filter, which both avoids halo artifacts and improves computing speed. Second, a semi-automatic gain/offset method independent of the color channels is developed, which automatically computes the histogram clipping points from the image content and effectively improves the contrast of the final image. Finally, a single user parameter is derived by consolidating the many parameters of the traditional Retinex; this parameter lets the user trade off computational speed against image quality. Compared with other types of HDR image visualization algorithms, the proposed algorithm offers excellent detail visibility and color fidelity. It can be used in consumer digital cameras, monitors, and image post-rendering software.
Citations: 1
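A minimal single-scale Retinex sketch with a bilateral filter in place of the Gaussian surround is shown below, using OpenCV. The paper's semi-automatic gain/offset and single-parameter design are not reproduced; the percentile-based rescaling and the filter parameters here are stand-in assumptions.

```python
import cv2
import numpy as np

def retinex_bilateral(img_bgr: np.ndarray, d: int = 9,
                      sigma_color: float = 75.0,
                      sigma_space: float = 75.0) -> np.ndarray:
    """Single-scale Retinex: log(image) - log(surround), with the surround
    estimated by an edge-preserving bilateral filter to suppress halos."""
    img = img_bgr.astype(np.float32) + 1.0                 # avoid log(0)
    surround = cv2.bilateralFilter(img, d, sigma_color, sigma_space)
    retinex = np.log(img) - np.log(surround + 1.0)
    # Simple percentile clipping as a stand-in for the paper's
    # semi-automatic, channel-independent gain/offset step.
    lo, hi = np.percentile(retinex, (1, 99))
    out = np.clip((retinex - lo) / (hi - lo + 1e-6), 0.0, 1.0)
    return (out * 255).astype(np.uint8)
```

Because the bilateral filter preserves strong edges, the estimated illumination does not bleed across object boundaries, which is what removes the halo artifact of the Gaussian surround.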
Fast Robust Image Feature Matching Algorithm Improvement and Optimization
Peiyu Chen, Y. Li, Guanghong Gong
DOI: 10.1145/3271553.3271585 | Published: 2018-08-27
Abstract: This paper quantitatively analyzes different types of image changes according to the characteristics of each algorithm and puts forward different optimal algorithms for different types of images. First, four classical matching algorithms are selected and compared for scale, photometric, and rotational robustness. To overcome the limited robustness of any single algorithm, three improved algorithms are proposed, based on combining the SURF and ORB algorithms with one or more feature-point screening steps to improve accuracy. Second, the improved algorithms are tested on images containing multiple types of changes at the same time; the results show that they are strongly robust and effectively improve image matching accuracy. Finally, simulation results show that selecting the optimal algorithm according to the features of the image maximizes the advantages of the different algorithms and satisfies both the required number of matching points and the matching accuracy.
Citations: 1
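For orientation, below is a minimal OpenCV sketch of ORB feature matching with a nearest/second-nearest distance ratio test, one common feature-point screening step. It does not reproduce the paper's SURF+ORB combination or its additional screening stages (SURF lives in opencv-contrib and is not always available); parameter values are illustrative.

```python
import cv2

def match_orb(img1, img2, ratio: float = 0.75):
    """Detect ORB keypoints in two grayscale images and keep matches
    that pass the distance ratio test."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)      # Hamming for binary ORB descriptors
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return kp1, kp2, good
```

Further screening (e.g., a RANSAC homography check on the surviving matches) would correspond to the additional filtering stages the abstract alludes to.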
Estimating AP Location Using Crowdsourced Wi-Fi Fingerprints with Inaccurate Location Labels
Changmin Sung, Seungwoo Chae, Daehyun Kang, Dongsoo Han
DOI: 10.1145/3271553.3271582 | Published: 2018-08-27
Abstract: The fundamental property of Wi-Fi signals exploited in Wi-Fi positioning is that signal strength decreases as radio signals propagate. Since access points (APs) are the sources of each Wi-Fi signal, knowing AP locations helps to exploit Wi-Fi fingerprints. However, investigating AP locations is a demanding task: measuring the absolute coordinates of AP locations is necessary for an extensive investigation, but manual measurement costs excessive human labor. In this paper, we propose a method for estimating AP locations using Wi-Fi fingerprints labeled with coordinates obtained from commercial location providers on smartphones. Both the Wi-Fi fingerprints and the location labels are collected implicitly from smartphone users. We apply nonlinear regression with the log-distance path loss (LDPL) model to the collected location-labeled Wi-Fi fingerprints to estimate AP locations. Estimated AP locations can accelerate radio map matching, simplify radio map construction, and assist AP location-based Wi-Fi positioning systems.
Citations: 2
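A minimal sketch of the nonlinear-regression step with the LDPL model is given below, assuming SciPy and fingerprints expressed as (x, y, RSSI) samples in a local planar frame. The initial guesses and the minimum-distance clamp are assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_ap_location(points: np.ndarray, rssi: np.ndarray):
    """points: (N, 2) fingerprint coordinates in meters; rssi: (N,) measured dBm.
    Fits the AP position (x, y), reference power P0 at 1 m, and path-loss
    exponent n in the LDPL model: RSSI = P0 - 10 * n * log10(d)."""
    def residuals(theta):
        x, y, p0, n = theta
        d = np.maximum(np.hypot(points[:, 0] - x, points[:, 1] - y), 0.1)
        return p0 - 10.0 * n * np.log10(d) - rssi

    # Start the AP at the fingerprint centroid with typical indoor parameters.
    x0 = [points[:, 0].mean(), points[:, 1].mean(), -40.0, 2.5]
    sol = least_squares(residuals, x0)
    return sol.x   # (x_ap, y_ap, P0, n)
```

Because the location labels are noisy, a robust loss (e.g., `loss="huber"` in `least_squares`) is a natural variation to reduce the influence of badly labeled fingerprints.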
Similarity-Aware Kanerva Coding for On-Line Reinforcement Learning
Wei Li, W. Meleis
DOI: 10.1145/3271553.3271609 | Published: 2018-08-27
Abstract: A major challenge in reinforcement learning (RL) is the use of a tabular representation to store learned policies over a large number of states or state-action pairs. Function approximation is a promising tool to overcome this deficiency: it uses parameterized functions instead of a table to represent learned knowledge and enables generalization. However, existing schemes cannot solve realistic RL problems, whose demands for approximation accuracy and efficiency grow rapidly. In this paper, we extend the architecture of Sparse Distributed Memories (SDMs) and propose a novel on-line methodology, similarity-aware Kanerva coding (SAK), that closely represents the learned knowledge for very large-scale problems with significantly fewer parameterized components. SAK directly measures the real distances between state variables in all dimensions and reformulates a new state-similarity metric with an improved definition of state closeness. As a result, our scheme accurately distributes and generalizes knowledge among related states. We further improve SAK's efficiency by allowing only a limited number of sufficiently similar prototype states to be activated for value approximation, so that the risk of over-generalization is reduced. In addition, SAK eliminates size tuning and prototype reallocation for the prototype set, resulting not only in broader scalability but also in significant savings in the number of prototypes and the computational overhead needed for RL. Our extensive experimental results show that SAK achieves more than 48% improvement over existing schemes in learning quality, and reveals that SAK consistently learns good policies with small overhead and short training times, even with roughly tuned scheme parameters.
Citations: 1
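To make the prototype-activation idea concrete, here is a minimal sketch of Kanerva-style value approximation in which only the most similar prototypes activate for a query state. It illustrates the general mechanism only; SAK's specific similarity metric, activation rule, and update details are not reproduced, and all constants are assumptions.

```python
import numpy as np

class PrototypeApproximator:
    """Approximate Q(s, a) as a weighted sum over the most similar prototypes."""

    def __init__(self, prototypes: np.ndarray, n_actions: int,
                 k_active: int = 10, alpha: float = 0.1):
        self.prototypes = prototypes                      # (P, dim) prototype states
        self.theta = np.zeros((len(prototypes), n_actions))
        self.k_active = k_active                          # limit on active prototypes
        self.alpha = alpha                                # learning rate

    def _activation(self, state: np.ndarray) -> np.ndarray:
        # Similarity = negative Euclidean distance; only the k closest activate.
        sim = -np.linalg.norm(self.prototypes - state, axis=1)
        phi = np.zeros_like(sim)
        phi[np.argsort(sim)[-self.k_active:]] = 1.0 / self.k_active
        return phi

    def value(self, state: np.ndarray, action: int) -> float:
        return float(self._activation(state) @ self.theta[:, action])

    def update(self, state: np.ndarray, action: int, target: float):
        # Standard linear TD-style update toward the bootstrapped target.
        phi = self._activation(state)
        td_error = target - phi @ self.theta[:, action]
        self.theta[:, action] += self.alpha * td_error * phi
```

Capping the number of active prototypes is what bounds generalization: states share value estimates only with prototypes that are genuinely close to them.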
Image Resolution Enhancement for Remote Sensing Applications
C. Kwan
DOI: 10.1145/3271553.3271590 | Published: 2018-08-27
Abstract: We present a brief overview of recent image resolution enhancement algorithms with emphasis on remote sensing applications. Because resolution may have different meanings, we emphasize that our focus in this paper is on spatial, spectral, and temporal resolution enhancement algorithms. We discuss and review recent and representative algorithms for enhancing the spatial, spectral, spatial-spectral, and spatio-temporal resolution of remote sensing images. Several interesting applications related to the fusion of Landsat and MODIS images, the fusion of color and hyperspectral images, and the fusion of Mars rover images are presented. Finally, some future directions in this research area are highlighted.
Citations: 10
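As one concrete instance of the spatial-resolution-enhancement family this survey covers, below is a minimal Brovey-transform pansharpening sketch that fuses a low-resolution multispectral image with a high-resolution panchromatic band. This is a generic textbook method used here for illustration, not an algorithm taken from the paper.

```python
import numpy as np

def brovey_pansharpen(ms_upsampled: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """ms_upsampled: (H, W, B) multispectral bands already resampled onto the
    panchromatic grid; pan: (H, W) high-resolution panchromatic band.
    Each band is scaled by the ratio of the pan value to the band-mean
    intensity (Brovey transform), injecting spatial detail from the pan band."""
    ms = ms_upsampled.astype(np.float64)
    intensity = ms.mean(axis=2) + 1e-6          # avoid division by zero
    ratio = (pan.astype(np.float64) / intensity)[..., np.newaxis]
    return ms * ratio
```

The same fuse-then-rescale idea generalizes to the Landsat/MODIS and color/hyperspectral fusion problems the survey discusses, with more elaborate models replacing the simple intensity ratio.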
Improvement of Embedding Channel-Wise Activation in Soft-Attention Neural Image Captioning
Yanke Li
DOI: 10.1145/3271553.3271592 | Published: 2018-08-27
Abstract: This paper addresses image captioning with the soft-attention algorithm. We first review relevant work on the topic as background and then explain the original model in detail. On top of the plain soft-attention model, we propose two approaches for further improvement: an SE attention model, which adds an extra channel-wise activation layer, and a bi-directional attention model, which explores the feasibility of a two-way attention order. We implement both methods under limited experimental conditions and additionally swap the original encoder for a state-of-the-art structure. Quantitative results and example demonstrations show that the proposed methods achieve better performance than the baselines. Finally, suggestions for future work building on the proposed methods are summarized for completeness.
Citations: 1
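The "channel-wise activation layer" follows the squeeze-and-excitation idea; a minimal PyTorch sketch of an SE block is given below as a reference point. The reduction ratio and where the block is inserted relative to the attention module are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global-average-pool each channel, pass the
    result through a small bottleneck MLP, and rescale the feature map
    channel-wise with the learned gates."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        squeeze = x.mean(dim=(2, 3))             # (B, C) global average pooling
        scale = self.fc(squeeze).view(b, c, 1, 1)
        return x * scale                         # channel-wise reweighting
```

Applied to the CNN feature map that feeds the soft-attention decoder, the block lets the captioner emphasize informative channels before spatial attention weights are computed.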