2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR): Latest Publications

Internet of Things Anomaly Detection using Machine Learning
2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2019-10-01 DOI: 10.1109/AIPR47015.2019.9174569
L. Njilla, Larry Pearlstein, Xin-Wen Wu, Adam Lutz, Soundararajan Ezekiel
Abstract: In recent years, an increasing number of devices beyond traditional computers are being connected to the Internet. The Internet of Things (IoT) integrates real-world sensors, such as smart devices or environmental sensors, with the Internet, allowing for real-time monitoring of conditions. IoT devices are often resource-constrained because their sensors are designed for specific purposes, so typical methods of intrusion and anomaly detection cannot be used. Moreover, given the volume of raw input data from these sensors, detecting anomalies among the noise and other background data can be computationally intensive. A possible solution is to use machine learning models trained on both normal and abnormal behavior to detect when anomalies occur. In this study, we explore the use of techniques such as autoencoders to handle the high dimensionality of sensor datasets while learning their normal operating conditions. An autoencoder is a neural network that attempts to reconstruct its input data by combining two networks: an encoder and a decoder. The encoder maps its input into a lower-dimensional space while capturing the interactions and correlations between variables. Encoding the data lets the network learn the interactions between parameters under normal conditions, so that the decoder's reconstruction represents non-anomalous behavior. When data containing anomalies are fed into the network, errors occur in the reconstruction. The reconstruction error, measured with a distance function, can then be used to determine whether an observation is anomalous.
Citations: 4
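The reconstruction-error scheme described in this abstract can be sketched with a linear autoencoder, which is equivalent to PCA; the paper's actual networks are nonlinear, and the data, latent size, and threshold below are purely illustrative assumptions.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Fit a linear autoencoder (equivalent to PCA): the encoder projects
    centered data onto the top-k principal directions; the decoder maps back."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T  # d x k encoding matrix
    encode = lambda Z: (Z - mu) @ W
    decode = lambda C: C @ W.T + mu
    return encode, decode

def reconstruction_error(x, encode, decode):
    """Distance between an observation and its reconstruction."""
    return float(np.linalg.norm(x - decode(encode(x))))

# Train only on "normal" readings: two correlated sensor channels plus noise.
rng = np.random.default_rng(0)
t = rng.normal(size=(500, 1))
X_normal = np.hstack([t, 2 * t]) + 0.01 * rng.normal(size=(500, 2))

encode, decode = fit_linear_autoencoder(X_normal, k=1)

normal_err = reconstruction_error(np.array([1.0, 2.0]), encode, decode)
# An anomaly breaks the learned correlation between the two channels.
anomaly_err = reconstruction_error(np.array([1.0, -2.0]), encode, decode)
print(normal_err, anomaly_err)  # anomaly_err is much larger
```

An observation would be flagged as anomalous when its error exceeds a threshold calibrated on held-out normal data.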
What’s the Point? Using Extended Feature Sets For Semantic Segmentation in Point Clouds
2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2019-10-01 DOI: 10.1109/AIPR47015.2019.9174600
Nina M. Varney, V. Asari
Abstract: A recent focus on expanding deep learning to non-traditional input data has driven strong growth in research on deep learning for point sets. Due to the high collection cost and lack of available labeled data, there is little research into deep learning with aerial LiDAR. In this paper, we present a new benchmark labeled dataset, called “Surrey Aerial 3”, for evaluating networks on aerial LiDAR data. The dataset covers over 6 km² and has three classes in multiple environments. We present our architecture, “Curvature Weighted PointNet++”, which replaces PointNet++’s random batch selection with a way to select batches based on key points of interest chosen from the Eigen feature space. We extend the hierarchical feature space with additional layers of context to address the need for an extended field of view in aerial LiDAR.
Citations: 0
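The abstract's key-point selection rests on per-point eigen features. As an illustrative sketch (not the paper's pipeline), one common such feature is "surface variation", the smallest eigenvalue of a local neighborhood's covariance divided by the eigenvalue sum; it is near zero on flat surfaces and larger near edges, so it can rank curvature-heavy key points. The synthetic patches below are assumptions.

```python
import numpy as np

def surface_variation(points, center, radius):
    """Eigen feature lambda_min / (l1 + l2 + l3) of the covariance of the
    points within `radius` of `center`. Near 0 on planes, larger on curved
    or edge regions, so it can score key points of interest."""
    nbrs = points[np.linalg.norm(points - center, axis=1) <= radius]
    eigvals = np.linalg.eigvalsh(np.cov(nbrs.T))  # ascending order
    return float(eigvals[0] / eigvals.sum())

rng = np.random.default_rng(1)
# A flat patch in the z = 0 plane...
plane = np.c_[rng.uniform(-1, 1, (200, 2)), np.zeros(200)]
# ...and a curved patch (paraboloid) with real extent in z.
xy = rng.uniform(-1, 1, (200, 2))
curved = np.c_[xy, (xy ** 2).sum(axis=1)]

flat_score = surface_variation(plane, np.zeros(3), radius=1.0)
curved_score = surface_variation(curved, np.zeros(3), radius=1.0)
print(flat_score, curved_score)  # curved patch scores higher
```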
Exploration of Carbon Nanotube Forest Synthesis-Structure Relationships Using Physics-Based Simulation and Machine Learning
2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2019-10-01 DOI: 10.1109/AIPR47015.2019.9316542
T. Hajilounezhad, Zakariya A. Oraibi, Ramakrishna Surya, F. Bunyak, M. Maschmann, P. Calyam, K. Palaniappan
Abstract: The parameter space of carbon nanotube (CNT) forest synthesis is vast and multidimensional, making experimental and/or numerical exploration of the synthesis prohibitive. We propose a more practical approach: exploring the synthesis-process relationships of CNT forests with machine learning (ML) algorithms that infer the underlying complex physical processes. To date, no ML model linking CNT forest morphology to synthesis parameters has been demonstrated. In this work, we use a physics-based numerical model to generate CNT forest morphology images with known synthesis parameters to train such an ML algorithm. The synthesis variables of CNT diameter and CNT number density are varied to generate a total of 12 distinct CNT forest classes. Images of the resultant CNT forests at different time steps during the growth and self-assembly process are then used as the training dataset. Based on the CNT forest structural morphology, multiple single and combined histogram-based texture descriptors are used as features to build a random forest (RF) classifier that predicts class labels from the correlation of CNT forest physical attributes with the growth parameters. The model achieved an accuracy of up to 83.5% in predicting the synthesis conditions of CNT number density and diameter. These results are a first step toward rapidly characterizing CNT forest attributes using machine learning. Identifying the relevant process-structure interactions for CNT forests using physics-based simulations and machine learning could rapidly advance the design, development, and adoption of CNT forest applications with varied morphologies and properties.
Citations: 14
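The histogram-descriptor-plus-classifier pipeline can be sketched in miniature. The sketch below uses the simplest possible histogram descriptor and, to stay dependency-free, substitutes a nearest-centroid classifier for the paper's random forest; the synthetic "morphology classes" and all parameters are assumptions.

```python
import numpy as np

def histogram_descriptor(img, bins=16):
    """Global intensity histogram, normalized to sum to 1 - the simplest of
    the histogram-based texture descriptors such a pipeline might combine."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def nearest_centroid_predict(train_feats, train_labels, feat):
    """Stand-in classifier (the paper trains a random forest): assign the
    class whose mean descriptor is closest to `feat`."""
    classes = sorted(set(train_labels))
    cents = {c: np.mean([f for f, l in zip(train_feats, train_labels) if l == c],
                        axis=0)
             for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(feat - cents[c]))

rng = np.random.default_rng(2)
# Two synthetic "morphology classes": dark-biased vs bright-biased textures.
make = lambda lo, hi, n: [rng.uniform(lo, hi, (32, 32)) for _ in range(n)]
imgs = make(0.0, 0.5, 10) + make(0.5, 1.0, 10)
labels = ["sparse"] * 10 + ["dense"] * 10
feats = [histogram_descriptor(im) for im in imgs]

query = histogram_descriptor(rng.uniform(0.0, 0.5, (32, 32)))
pred = nearest_centroid_predict(feats, labels, query)
print(pred)  # "sparse"
```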
Improving Industrial Safety Gear Detection through Re-ID conditioned Detector
2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2019-10-01 DOI: 10.1109/AIPR47015.2019.9174597
Manikandan Ravikiran, Shibashish Sen
Abstract: Industrial safety gear such as hardhats, vests, gloves, and goggles is vital to worker safety. With the advancement of vision technologies, most industries are moving toward automatic safety monitoring systems for enforcement. However, most industrial safety monitoring systems are plagued by the following problems. First, object detection, the principal component of such a system, suffers from false and missed detections, which are extremely costly, resulting in wrong safety monitoring alerts and safety hazards. Further, while video object detection has seen large traction through the ImagenetDet and MOT17Det challenges, to the best of our knowledge there is no work to date in the context of industrial safety. Finally, unlike areas of object detection where large datasets are available, existing research on detecting industrial safety gear is mostly restricted to hardhats due to the lack of large datasets. In this work, we address these challenges by presenting a unified industrial safety system. As part of this system, we first introduce a safety gear detection dataset consisting of 5k images with the aforementioned classes of safety gear and present an exhaustive benchmark of state-of-the-art single-frame object detection. Second, to address wrong/missed detections, we propose to exploit temporal information from contiguous frames by conditioning object detection in the current frame on the results of re-identification of objects computed in prior frames. Finally, we conduct extensive experiments with the developed Re-ID conditioned object detection system and various state-of-the-art object detectors, showing that the proposed system produces mAP of 85%, 87%, 92%, and 78%, with an average improvement of 5% mAP across the aforementioned safety gears, under complex conditions of illumination, posture, and occlusion.
Citations: 2
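One simple way to realize "conditioning current-frame detection on prior-frame re-identification" is to boost the score of a weak detection that overlaps a box re-identification confirmed earlier. The sketch below shows that idea with IoU matching; the rule, thresholds, and boxes are illustrative assumptions, not the paper's exact mechanism.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def condition_on_reid(detections, prior_tracks, boost=0.2, thresh=0.3):
    """Illustrative temporal conditioning: raise the score of a current-frame
    detection when it overlaps a box that re-identification confirmed in
    prior frames; detections without temporal support are left unchanged."""
    out = []
    for box, score in detections:
        if any(iou(box, t) >= thresh for t in prior_tracks):
            score = min(1.0, score + boost)
        out.append((box, score))
    return out

prior = [(10, 10, 50, 90)]            # hardhat wearer re-identified earlier
dets = [((12, 12, 52, 88), 0.45),     # weak detection of the same person
        ((200, 40, 240, 120), 0.45)]  # detection with no temporal support
conditioned = condition_on_reid(dets, prior)
print(conditioned)  # first detection boosted to 0.65, second unchanged
```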
Surgery Task Classification Using Procrustes Analysis
2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2019-10-01 DOI: 10.1109/AIPR47015.2019.9174566
Safaa Albasri, M. Popescu, James Keller
Abstract: Recognizing surgical tasks is a crucial step toward automatic training in robotic surgery. In this work, we propose a classification framework for surgical task recognition based on three components: Dynamic Time Warping (DTW), Procrustes analysis (PA), and the fuzzy k-nearest neighbor (FkNN) classifier. First, DTW processes multi-channel motion trajectories of different lengths by stretching and compressing both signals so that their lengths become identical. Second, Procrustes analysis is used as a distance measure between two sequences based on shape-similarity transformations: rotation, reflection, scaling, and translation. Finally, a fuzzy k-nearest neighbor algorithm distinguishes between tasks by assigning a fuzzy class membership based on their distances. We evaluated our framework on a real raw kinematic surgical-robot dataset and validated the proposed model using Leave One Supertrial Out (LOSO) and Leave One User Out (LOUO) cross-validation schemes. Our results show improvements in the classification of the three robot-assisted minimally invasive surgery (RMIS) tasks: suturing, needle-passing, and knot-tying.
Citations: 2
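The DTW component of the framework is a classic dynamic program and can be sketched directly (the Procrustes and FkNN stages are omitted here); the 1-D "gesture" sequences below are illustrative assumptions.

```python
import numpy as np

def dtw_distance(s, t):
    """Classic dynamic-time-warping distance: aligns two sequences of
    possibly different lengths by stretching/compressing along time,
    accumulating the cheapest pointwise alignment cost."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return float(D[n, m])

# The same motion performed at different speeds aligns perfectly...
slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
print(dtw_distance(slow, fast))        # 0.0
# ...while a different trajectory does not.
print(dtw_distance(slow, [3, 2, 1, 0]))
```

In the full framework, a distance of this kind between a query trajectory and labeled trajectories would feed the FkNN membership assignment.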
Adaptive Online Learning for Human-Robot Teaming in Dynamic Environments
2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2019-10-01 DOI: 10.1109/AIPR47015.2019.9174572
Alexander D. Wissner-Gross, Noah Weston, Manuel M. Vindiola
Abstract: Robotic and vehicular autonomy in contested, dynamic environments has historically been limited to teleoperation and simple programmed behaviors due to the low survivability of available AI and machine-learning techniques in the face of novel situations. Here we report that recent few-shot machine-learning models trained using interactive, human-centered vehicular simulations can enable collaborative learning that is both adaptive (dynamically recognizing unfamiliar environmental conditions) and online (learning at each time step). Specifically, we show that our human-machine teaming approach enables simulated vehicles to anticipate novel adversities imposed in real time, both externally by their terrain and internally by their own mechanics, using only images captured by their front-facing cameras. We conclude by discussing the implications of our work for enhancing the future survivability of human-robot teams in large-scale, cluttered, contested environments.
Citations: 0
4-D Scene Alignment in Surveillance Video
2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2019-06-04 DOI: 10.1109/AIPR47015.2019.9174582
R. Wagner, Daniel E. Crispell, Patrick Feeney, J. Mundy
Abstract: Designing robust activity detectors for fixed-camera surveillance video requires knowledge of the 3-D scene. This paper presents an automatic camera calibration process that provides a mechanism to reason about the spatial proximity between objects at different times. It combines a CNN-based camera pose estimator with a vertical scale provided by pedestrian observations to establish the 4-D scene geometry. Unlike some previous methods, people do not need to be tracked, nor do the head and feet need to be explicitly detected. The method is robust to individual height variations and camera parameter estimation errors.
Citations: 2
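To make the "vertical scale from pedestrian observations" idea concrete: if pedestrians at a similar depth subtend known pixel heights, an assumed population-average body height yields a meters-per-pixel scale. This is a hypothetical sketch of the concept only; the paper's actual estimator and prior may differ, and the observations below are invented.

```python
import statistics

# Population-average prior in meters - an assumption, not taken from the paper.
ASSUMED_PERSON_HEIGHT_M = 1.7

def vertical_scale(pixel_heights, person_height_m=ASSUMED_PERSON_HEIGHT_M):
    """Meters-per-pixel vertical scale from pedestrian pixel heights.
    Using the median keeps the estimate robust to individual height
    variation and outliers, echoing the robustness the paper claims."""
    return person_height_m / statistics.median(pixel_heights)

# Pixel heights of detected pedestrians, including one outlier (a child).
obs = [85, 86, 85, 83, 52]
print(vertical_scale(obs))  # 1.7 / 85 = 0.02 m per pixel
```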