2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR): Latest Publications

Camera Auto-calibration for Planar Aerial Imagery, Supported by Camera Metadata
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2017-10-01 | DOI: 10.1109/AIPR.2017.8457959
Abdullah Akay, H. Aliakbarpour, K. Palaniappan, G. Seetharaman
Abstract: UAVs in both civil and defense aviation have found numerous application areas over the last decade, resulting in sophisticated systems that extract high-level information from UAV visual data. Camera auto-calibration is always the first step when the intrinsic parameters of the cameras are not available. However, this process is not trivial, as aerial imagery mostly contains planar scenes, which constitute a degenerate condition for conventional methods. In this paper, we propose a hybrid approach that incorporates circular-point and camera-position constraints as a single optimization term to automatically calibrate the camera of a UAV on planar scenes. The experimental results show that our proposed hybrid method is more robust and accurate than its conventional counterparts.
Citations: 1
Vector Learning for Cross Domain Representations
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2017-10-01 | DOI: 10.1109/AIPR.2017.8457957
Shagan Sah, Chi Zhang, Thang Nguyen, D. Peri, Ameya Shringi, R. Ptucha
Abstract: Recently, generative adversarial networks have gained a lot of popularity for image generation tasks. However, such models involve complex learning mechanisms and demand very large relevant datasets. This work borrows concepts from image and video captioning models to form an image-generation framework. The model is trained in a similar fashion to a recurrent captioning model and uses the learned weights for image generation. This is done in the inverse direction, where the input is a caption and the output is an image. The vector representations of the sentence and frames are extracted from an encoder-decoder model that is initially trained on similar sentence-image pairs. Our model conditions image generation on a natural-language caption. We leverage a sequence-to-sequence model to generate synthetic captions with the same meaning, making image generation more robust. One key advantage of our method is that traditional image-captioning datasets can be used to obtain synthetic sentence paraphrases. Results indicate that images generated from multiple captions are better at capturing the semantic meaning of the family of captions.
Citations: 3
Addressing supply chain risks of microelectronic devices through computer vision
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2017-10-01 | DOI: 10.1109/AIPR.2017.8457956
Zhenhua Chen, Tingyi Wanyan, Ramya Rao, Benjamin Cutilli, J. Sowinski, David J. Crandall, R. Templeman
Abstract: Microelectronics are at the heart of nearly all modern devices, ranging from small embedded integrated circuits (ICs) inside household products to complex microprocessors that power critical infrastructure systems. Devices often consist of numerous ICs from a variety of different manufacturers and procured through different vendors, all of whom may be trusted to varying degrees. Ensuring the quality, safety, and security of these components is a critical challenge. One possible solution is to use automated imaging techniques to check devices' physical appearance against known reference models in order to detect counterfeit or malicious components. This analysis can be performed at both a macro level (i.e., ensuring that the packaging of the IC appears legitimate and undamaged) and a micro level (i.e., comparing microscopic, transistor-level imagery of the circuit itself to detect suspicious deviations from a reference model). This latter analysis in particular is very challenging, considering that modern devices can contain billions of transistors. In this paper, we review the problem of microelectronics counterfeiting, discuss the potential application of computer vision to microelectronics inspection, present initial results, and recommend directions for future work.
Citations: 4
Analysis of Attitude Jitter on the Performance of Feature Detection
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2017-10-01 | DOI: 10.1109/AIPR.2017.8457963
H. Sharif, Borja Martinez Calvo, Christian Pfaab
Abstract: This study explored a vision algorithm's performance during a shaker test to help reproduce the effects of vibration caused by the reaction wheels of a spacecraft. In this paper, we analyze the robustness of the feature detection technique by submitting the thermal and visible imaging cameras to sinusoidal vibrations as they simultaneously execute feature detection of the target.
Citations: 0
TraffickCam: Crowdsourced and Computer Vision Based Approaches to Fighting Sex Trafficking
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2017-10-01 | DOI: 10.1109/AIPR.2017.8457947
Abby Stylianou, Jesse T. Schreier, Richard Souvenir, Robert Pless
Abstract: According to a 2016 study by researchers at the University of New Hampshire, over sixty percent of child sex trafficking survivors were at one point advertised online [13]. These advertisements often include photos of the victim posed provocatively in a hotel room. It is imperative that law enforcement be able to quickly identify where these photos were taken to determine where a trafficker moves their victims. In previous work, we proposed a system to crowdsource the collection of hotel room photos that could be searched using different local feature and image descriptors. In this work, we present the fully realized crowdsourcing platform, called TraffickCam, report on its usage by the public, and present a production system for fast national search by image, based on features extracted from a neural network trained explicitly for this purpose.
Citations: 15
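The search-by-image system above ranks hotel-room photos by the similarity of features extracted from a neural network. The paper's actual network and index are not described here; as a hypothetical sketch, nearest-neighbor retrieval over such feature vectors by cosine similarity could look like this (the gallery entries and hotel identifiers are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_feature, gallery):
    # Rank (hotel_id, feature) gallery entries by similarity to the query,
    # most similar first.
    ranked = sorted(gallery,
                    key=lambda item: cosine_similarity(query_feature, item[1]),
                    reverse=True)
    return [hotel_id for hotel_id, _ in ranked]

# Toy gallery of three hotels with 3-D "features" (real CNN features
# would have hundreds or thousands of dimensions).
gallery = [
    ("hotel_a", [0.9, 0.1, 0.0]),
    ("hotel_b", [0.1, 0.9, 0.2]),
    ("hotel_c", [0.8, 0.2, 0.1]),
]
best_match = search([1.0, 0.1, 0.0], gallery)[0]
```

A production system at national scale would replace the exhaustive sort with an approximate nearest-neighbor index, but the ranking criterion is the same.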
Using Machine Learning techniques for identification of Chronic Traumatic Encephalopathy related Spectroscopic Biomarkers
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2017-10-01 | DOI: 10.1109/AIPR.2017.8457949
Marcia S. Louis, M. Alosco, B. Rowland, HuiHun Liao, Joseph Wang, I. Koerte, M. Shenton, R. Stern, A. Joshi, A. Lin
Abstract: Contact-sport athletes, military personnel, and civilians who suffer multiple head traumas can develop Chronic Traumatic Encephalopathy (CTE), a progressive, degenerative brain disease currently diagnosed only postmortem by characteristic tau deposition in the brain. There is, therefore, a need for an in-vivo diagnostic for CTE, so that the disease can be diagnosed and managed while the individual is still alive. However, no definitive in-vivo diagnosis exists, because the heterogeneous clinical symptoms often overlap with those of other neurodegenerative diseases. Magnetic Resonance Spectroscopy (MRS) is a suitable candidate for CTE diagnosis because multiple head traumas change neurochemicals in the brain that MRS can detect. These changes can be subtle, and group differences are not sufficient for clinical diagnosis. This paper proposes a machine-learning-based approach to capture the neuro-spectroscopic signatures corresponding to CTE-related impairments in NFL players. The classification model uses concentration estimates of metabolites to classify players as 'Impaired' or 'Non-impaired'. The model using the metabolite concentrations of creatine, choline, N-acetyl-aspartate, glutamate, and macromolecules achieved an Area Under the Curve (AUC) of 0.72 and a prediction accuracy of 75%. While these metabolites have been shown to be altered in previous concussion studies, other metabolites may improve the diagnostic accuracy. In order to include more metabolites, two-dimensional correlated spectroscopy (L-COSY), which resolves overlapping metabolites, was also acquired. The L-COSY model, which included 15 metabolites, increased prediction accuracy to 87% with an AUC of 0.83. With the aid of machine learning, these metabolites may serve as potential biomarkers of CTE-related impairment, allowing CTE diagnostics in athletes prior to death.
Citations: 4
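The reported AUCs of 0.72 and 0.83 can be read as a rank statistic: the probability that a randomly chosen impaired player receives a higher classifier score than a randomly chosen non-impaired one (ties count one half). A minimal sketch of that computation, with illustrative scores and labels rather than the paper's data:

```python
def roc_auc(scores, labels):
    # AUC as the Mann-Whitney rank statistic: the fraction of
    # (positive, negative) pairs in which the positive sample
    # is scored higher, counting ties as one half.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores; 1 = impaired, 0 = non-impaired.
scores = [0.9, 0.4, 0.6, 0.2]
labels = [1, 1, 0, 0]
auc = roc_auc(scores, labels)  # one of four pairs is mis-ranked, so 0.75
```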
Trading spatial resolution for improved accuracy in remote sensing imagery: an empirical study using synthetic data
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2017-10-01 | DOI: 10.1109/AIPR.2017.8457961
Jordan M. Malof, Sravya Chelikani, L. Collins, Kyle Bradbury
Abstract: We consider the problem of detecting objects (such as trees, rooftops, roads, or cars) in remote sensing data including, for example, color or hyperspectral imagery. Many detection algorithms applied to this problem operate by assigning a decision statistic to all, or a subset, of spatial locations in the imagery for classification purposes. In this work we investigate a recently proposed method, called Local Averaging for Improved Predictions (LAIP), which can be used to trade the classification accuracy of detector decision statistics against their spatial precision. We explore the behavior of LAIP on controlled synthetic data as we vary several experimental conditions: (a) the difficulty of the detection problem, (b) the spatial area over which LAIP is applied, and (c) how it behaves when the estimated ROC curve of the detector becomes increasingly inaccurate. These results provide basic insights about the conditions under which LAIP is effective.
Citations: 1
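LAIP trades the spatial precision of per-location decision statistics for accuracy. The method as published is informed by the detector's estimated ROC curve, which is not detailed here; the core spatial-aggregation step, however, can be sketched as a neighborhood mean over the decision-statistic map (the window radius below is a free parameter, not a value from the paper):

```python
def local_average(stats, radius):
    # Replace each location's decision statistic with the mean over a
    # (2*radius+1) x (2*radius+1) neighborhood, clipped at the image
    # border. Averaging lowers the variance of the statistic at the
    # cost of spatial precision.
    h, w = len(stats), len(stats[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [stats[ii][jj]
                    for ii in range(max(0, i - radius), min(h, i + radius + 1))
                    for jj in range(max(0, j - radius), min(w, j + radius + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

# A noisy 2x2 statistic map: with radius 1 every neighborhood covers
# all four cells, so the output is uniformly their mean.
smoothed = local_average([[0.0, 2.0], [2.0, 0.0]], radius=1)
```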
Informativeness of Degraded Data in Training a Classification System
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2017-10-01 | DOI: 10.1109/AIPR.2017.8457972
N. Ronquillo, Josh Harguess
Abstract: Many recent solutions have been proposed to mitigate the vulnerability of machine learning models to limited or degraded data. However, the effects of using degraded data for training or testing a classification system have not been fundamentally studied. In this work, we propose a methodology for studying the effects of degradations (due to additive noise, compression artifacts, and blur) based on the active learning framework for studying the informativeness of data samples. We provide experimental results on the action recognition video dataset UCF101 to validate its utility. We shed light on the importance of studying the effects of degraded data by showing the extent to which degraded samples can be more informative than unedited, high-quality samples in training a classification system.
Citations: 0
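Two of the degradations studied, additive noise and blur, are easy to reproduce on a grayscale image (compression artifacts require a codec and are omitted). A sketch under assumed settings; the noise level and the 3x3 blur window below are illustrative choices, not the paper's parameters:

```python
import random

def degrade(image, noise_std=10.0, blur=False, seed=0):
    # Degrade a grayscale image (list of rows of pixel values in
    # [0, 255]): optionally apply a 3x3 box blur, then add Gaussian
    # noise and clamp back into the valid range.
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    if blur:
        blurred = [[0.0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                vals = [image[ii][jj]
                        for ii in range(max(0, i - 1), min(h, i + 2))
                        for jj in range(max(0, j - 1), min(w, j + 2))]
                blurred[i][j] = sum(vals) / len(vals)
        image = blurred
    return [[min(255.0, max(0.0, p + rng.gauss(0.0, noise_std)))
             for p in row] for row in image]

# Degraded copy of a flat gray test image.
degraded = degrade([[100.0] * 4 for _ in range(3)], noise_std=5.0, blur=True)
```

Sweeping `noise_std` (or the blur on/off flag) over a dataset yields the family of degraded training sets whose informativeness the paper measures.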
Unsupervised Learning Method for Plant and Leaf Segmentation
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2017-10-01 | DOI: 10.1109/AIPR.2017.8457935
Noor M. Al-Shakarji, Yasmin M. Kassim, K. Palaniappan
Abstract: Plant phenotyping is a recent application of computer vision in agriculture and food security. To automatically recognize plant species, we first need to extract the plant and its associated substructures. Manual segmentation of plant structures is tedious, error-prone, and expensive. Automatic plant segmentation is useful for leaf extraction, identification, and counting. We have developed a robust and fast unsupervised approach for plant extraction and leaf detection. A K-means-based mask (of the pot) followed by the Expectation-Maximization (EM) algorithm is adapted to estimate a mixture model for identifying the foreground area of the plant. We applied EM to the three RGB channels to separate foreground versus background for plant localization. K-means is used to extract the circular plant pot as an intermediate result, which is fused with the EM result for noise removal, since the images suffer from contrast and illumination variations. For leaf segmentation, we use the distance transform and watershed segmentation to localize the leaves individually, followed by a stem-link algorithm to connect the stem with the corresponding leaves. The results were evaluated with the same algorithms used in the plant phenotyping contest [1]. In our work, we used the A1 and A2 datasets to test our algorithm. We achieved promising scores on some evaluation metrics and comparable scores on the others.
Citations: 13
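The pipeline above starts by splitting pixels into plant and background clusters with K-means. A minimal sketch of that first step, reduced to a 1-D two-means on a single per-pixel value (e.g. a greenness or intensity score); the full method instead fits an EM mixture model over the three RGB channels:

```python
def two_means(values, iters=20):
    # 1-D k-means with k = 2, seeded at the extremes of the data:
    # iteratively assign each value to its nearest center, then move
    # each center to the mean of its assigned values.
    centers = [min(values), max(values)]
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            idx = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            groups[idx].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def plant_mask(values):
    # Label as plant the pixels closer to the brighter cluster center.
    lo, hi = sorted(two_means(values))
    return [abs(v - hi) < abs(v - lo) for v in values]

# Toy pixel values: three dark background pixels, three bright plant pixels.
mask = plant_mask([10.0, 12.0, 11.0, 200.0, 210.0, 205.0])
```

The resulting binary mask is what the subsequent distance-transform and watershed stages would consume to split the plant region into individual leaves.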
ROS Navigation Stack for Smart Indoor Agents
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) | Pub Date: 2017-10-01 | DOI: 10.1109/AIPR.2017.8457966
Rasika Kangutkar, Jacob Lauzon, Alexander Synesael, Nicholas Jenis, Kruthika Simha, R. Ptucha
Abstract: Advances in compute power, sensor technology, and machine learning have facilitated a plethora of assistive and personal agents. These agents are poised to make our lives more efficient, safer, feature-rich, and more enjoyable. With so much activity in this area, there has been significant progress on algorithms for localization, path planning, path guiding, and obstacle avoidance. Similarly, numerous frameworks for human-computer interaction, obstacle recognition, object tracking, and advanced reasoning have been introduced. This research introduces a navigation stack written in Python using the Robot Operating System for modular indoor agent development. The localization system makes use of deep learning and particle filters and is easily trained to localize in new environments. The obstacle avoidance system can be changed to reflect the agent's size, required safety margin, sensor properties, and behavior. Different path planning algorithms can be substituted into the path guiding system. The created navigation stack was tested on an assistive-technology wheelchair, exhibiting state-of-the-art localization, collision avoidance, and navigation in complex scenarios.
Citations: 3
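The localization system above combines deep learning with particle filters. As an illustrative sketch only (not the paper's implementation), one predict-update-resample cycle of a 1-D particle filter against a single range-measured landmark, with invented noise parameters:

```python
import math
import random

def particle_filter_step(particles, control, measurement, landmark, rng):
    # Predict: move each particle by the control input plus motion noise.
    moved = [p + control + rng.gauss(0.0, 0.1) for p in particles]
    # Update: weight each particle by how well its expected range to a
    # known landmark matches the measurement (Gaussian likelihood,
    # variance 0.25, chosen arbitrarily for this sketch).
    weights = []
    for p in moved:
        err = measurement - abs(landmark - p)
        weights.append(math.exp(-err * err / (2 * 0.25)))
    # Resample particles in proportion to the weights.
    return rng.choices(moved, weights=weights, k=len(moved))

# Stationary robot at an unknown position in [0, 5]; a landmark at
# x = 5 is measured at range 4, so the true position is near x = 1.
rng = random.Random(0)
particles = [rng.uniform(0.0, 5.0) for _ in range(300)]
for _ in range(3):
    particles = particle_filter_step(particles, 0.0, 4.0, 5.0, rng)
estimate = sum(particles) / len(particles)
```

In the real stack the likelihood would come from range sensors or learned observation models over a 2-D map, but the predict-update-resample structure is the same.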