Latest Publications from the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)

Blur vs. Block: Investigating the Effectiveness of Privacy-Enhancing Obfuscation for Images
Yifang Li, Nishant Vishwamitra, Bart P. Knijnenburg, Hongxin Hu, Kelly E. Caine
DOI: 10.1109/CVPRW.2017.176 | Pages: 1343-1351 | Published: July 2017
Abstract: Computer vision can lead to privacy issues such as unauthorized disclosure of private information and identity theft, but it may also be used to preserve user privacy. For example, using computer vision, we may be able to identify sensitive elements of an image and obfuscate those elements, thereby protecting private information or identity. However, there is a lack of research investigating the effectiveness of applying obfuscation techniques to parts of images as a privacy-enhancing technology. In particular, we know very little about how well obfuscation works for human viewers or about users' attitudes towards these mechanisms. In this paper, we report results from an online experiment with 53 participants that investigates the effectiveness of two exemplar obfuscation techniques, "blurring" and "blocking", and explores users' perceptions of these obfuscations in terms of image satisfaction, information sufficiency, enjoyment, and social presence. Results show that although "blocking" is more effective at de-identification than "blurring" or leaving the image "as is", users' attitudes towards "blocking" are the most negative, which creates a conflict between privacy protection and user experience. Future work should explore alternative obfuscation techniques that protect users' privacy while also providing a good viewing experience.
Citations: 62
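As a concrete illustration of the two obfuscation techniques compared in this paper, the sketch below applies Gaussian blurring and solid-color blocking to a region of an image with OpenCV. It is a minimal sketch: the region coordinates, kernel size, and file names are hypothetical, not the stimuli preparation used by the authors.

```python
import cv2

def blur_region(image, x, y, w, h, kernel=(51, 51)):
    """Obfuscate a region by Gaussian blurring (hypothetical parameters)."""
    out = image.copy()
    roi = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, kernel, 0)
    return out

def block_region(image, x, y, w, h, color=(0, 0, 0)):
    """Obfuscate a region by painting an opaque box over it."""
    out = image.copy()
    cv2.rectangle(out, (x, y), (x + w, y + h), color, thickness=-1)
    return out

if __name__ == "__main__":
    img = cv2.imread("photo.jpg")  # any test image
    cv2.imwrite("blurred.jpg", blur_region(img, 100, 80, 120, 160))
    cv2.imwrite("blocked.jpg", block_region(img, 100, 80, 120, 160))
```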
The First Automatic Method for Mapping the Pothole in Seagrass
M. Rahnemoonfar, M. Yari, Abdullah F. Rahman, Richard J. Kline
DOI: 10.1109/CVPRW.2017.39 | Pages: 267-274 | Published: July 2017
Abstract: There is a vital need to map seagrass ecosystems in order to determine worldwide abundance and distribution. Currently there is no established method for mapping potholes or scars in seagrass. Detection of seagrass with optical remote sensing is challenged by the fact that light is attenuated as it passes through the water column and reflects back from the benthos. Optical remote sensing of seagrass is only possible if the water is shallow and relatively clear. In reality, coastal waters are commonly turbid, and seagrasses can grow under 10 meters of water or even deeper. One of the most precise sensors for mapping seagrass disturbance is side-scan sonar. Underwater acoustic mapping produces a high-definition, two-dimensional sonar image of seagrass ecosystems. This paper proposes a methodology that detects seagrass potholes in sonar images. Side-scan sonar images usually contain speckle noise and uneven illumination across the image. Moreover, disturbances present complex patterns on which most segmentation techniques fail. In the first stage, image quality is improved using adaptive thresholding and wavelet denoising techniques. In the next step, a novel level-set technique is applied to identify the pothole patterns. Our method is robust to noise and uneven illumination, and it can detect complex pothole patterns. We tested the proposed approach on a collection of underwater sonar images taken from Laguna Madre in Texas. Experimental results, compared against the ground truth, show the efficiency of the proposed method.
Citations: 4
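The pipeline described in the abstract (wavelet denoising, adaptive thresholding, then a level-set segmentation) can be approximated with scikit-image. The sketch below is a generic stand-in, not the authors' implementation: it uses morphological Chan-Vese as the level-set step, and the file path, block size, and iteration count are assumptions.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import denoise_wavelet
from skimage.filters import threshold_local
from skimage.segmentation import morphological_chan_vese

# Load a grayscale sonar image (path is hypothetical).
sonar = img_as_float(io.imread("sonar_scan.png", as_gray=True))

# 1. Suppress speckle noise with wavelet denoising.
denoised = denoise_wavelet(sonar, method="BayesShrink", mode="soft")

# 2. Compensate uneven illumination with an adaptive (local) threshold.
local_thresh = threshold_local(denoised, block_size=51)
normalized = denoised - local_thresh

# 3. Evolve a level set for 100 iterations to delineate pothole regions.
potholes = morphological_chan_vese(normalized, 100, init_level_set="checkerboard")

io.imsave("pothole_mask.png", (potholes * 255).astype(np.uint8))
```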
Classification of Puck Possession Events in Ice Hockey
Moumita Roy Tora, Jianhui Chen, J. Little
DOI: 10.1109/CVPRW.2017.24 | Pages: 147-154 | Published: July 2017
Abstract: Group activity recognition in sports is often challenging due to the complex dynamics and interactions among the players. In this paper, we propose a recurrent neural network to classify puck possession events in ice hockey. Our method extracts features from the whole frame and from the appearances of the players using a pre-trained convolutional neural network. In this way, our model captures the context information, individual attributes, and interactions among the players. Our model requires only the player positions in the image and does not need any explicit annotations of individual actions or player trajectories, greatly simplifying the input of the system. We evaluate our model on a new Ice Hockey Dataset. Experimental results show that our model produces competitive results on this challenging dataset with much simpler inputs compared with previous work.
Citations: 43
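The model couples a pre-trained CNN feature extractor with a recurrent classifier. The minimal PyTorch sketch below assumes per-frame features have already been extracted (e.g., 2048-d pooled activations) and shows only a recurrent classification head; the feature dimension, hidden size, and number of event classes are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PuckEventClassifier(nn.Module):
    """LSTM over per-frame CNN features -> event label (sketch)."""
    def __init__(self, feat_dim=2048, hidden_dim=256, num_events=5):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_events)

    def forward(self, frame_feats):
        # frame_feats: (batch, time, feat_dim) precomputed CNN features
        _, (h_n, _) = self.lstm(frame_feats)
        return self.fc(h_n[-1])  # classify from the final hidden state

# Usage with dummy data: 4 clips, 16 frames each.
model = PuckEventClassifier()
clips = torch.randn(4, 16, 2048)
logits = model(clips)  # (4, 5) event scores
```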
Exploration of Social and Web Image Search Results Using Tensor Decomposition
Liuqing Yang, E. Papalexakis
DOI: 10.1109/CVPRW.2017.239 | Pages: 1915-1920 | Published: July 2017
Abstract: How do socially popular images differ from authoritative images indexed by web search engines? Empirically, social images on, e.g., Twitter often tend to look more diverse and ultimately more "personal", contrary to images returned by web image search, some of which are so-called "stock" images. Are there image features, which we can automatically learn, that differentiate the two types of image search results, or features that the two have in common? This paper outlines a vision towards achieving this result. We propose a tensor-based approach that learns key features of social and web image search results, and provides a comprehensive framework for analyzing and understanding the similarities and differences between the two types of content. We demonstrate our preliminary results in a small-scale study, and conclude with future research directions for this exciting and novel application.
Citations: 0
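For readers unfamiliar with the approach, a data tensor can be built by stacking image-feature matrices for social versus web results and factored with a CP (PARAFAC) decomposition. The TensorLy sketch below runs on random data and only shows the mechanics; the tensor shape and rank are arbitrary choices, not the paper's setup.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical tensor: (images x features x source), where the last
# mode indexes {social, web} search results.
X = tl.tensor(np.random.rand(200, 64, 2))

# Rank-5 CP decomposition; each factor matrix describes one mode.
weights, factors = parafac(X, rank=5)
images_f, features_f, source_f = factors

# source_f is (2, 5): how strongly each latent component loads on
# social vs. web results, which is what such a comparison inspects.
print(source_f)
```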
FORMS-Locks: A Dataset for the Evaluation of Similarity Measures for Forensic Toolmark Images
M. Keglevic, Robert Sablatnig
DOI: 10.1109/CVPRW.2017.236 | Pages: 1890-1897 | Published: July 2017
Abstract: We present a toolmark dataset created using lock cylinders seized during criminal investigations of break-ins. A total of 197 cylinders from 48 linked criminal cases were photographed under a comparison microscope used by forensic experts for toolmark comparisons. To allow an assessment of the influence of different lighting conditions, all images were captured using a ring light with 11 different lighting settings. Further, matching image regions in the toolmark images were manually annotated. In addition to the annotated toolmark images and the annotation tool, extracted toolmark patches are provided for training and testing to allow a quantitative comparison of the performance of different similarity measures. Finally, results from an evaluation using a publicly available state-of-the-art image descriptor based on deep learning are presented to provide a baseline for future publications.
Citations: 1
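The dataset is intended for benchmarking similarity measures between toolmark patches. A minimal evaluation loop, sketched below in plain NumPy, scores query patches against reference patches by cosine similarity of L2-normalized descriptors; the descriptor itself is abstracted away (random vectors stand in for it), and the deep descriptor used in the paper is not reproduced here.

```python
import numpy as np

def cosine_similarity_matrix(desc_a, desc_b):
    """Pairwise cosine similarity between two sets of descriptors."""
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    return a @ b.T

# Dummy 128-d descriptors for 10 query and 50 reference patches.
queries = np.random.randn(10, 128)
references = np.random.randn(50, 128)
scores = cosine_similarity_matrix(queries, references)

# Rank references per query; a real evaluation would compare the
# top-ranked matches against the annotated matching regions.
ranking = np.argsort(-scores, axis=1)
```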
Robust FEC-CNN: A High Accuracy Facial Landmark Detection System
Zhenliang He, Jie Zhang, Meina Kan, S. Shan, Xilin Chen
DOI: 10.1109/CVPRW.2017.255 | Pages: 2044-2050 | Published: July 2017
Abstract: Facial landmark detection, a typical and crucial task in computer vision, is widely used in face recognition, face animation, facial expression analysis, etc. In the past decades, many efforts have been devoted to designing robust facial landmark detection algorithms. However, it remains a challenging task due to extreme poses, exaggerated facial expressions, unconstrained illumination, etc. In this work, we propose an effective facial landmark detection system, termed Robust FEC-CNN (RFC), which achieves impressive results on facial landmark detection in the wild. Considering the favorable ability of deep convolutional neural networks, we resort to FEC-CNN as a basic method to characterize the complex nonlinear mapping from face appearance to shape. Moreover, a face bounding box invariance technique is adopted to reduce the sensitivity of landmark localization to the face detector, while a model ensemble strategy is adopted to further enhance landmark localization performance. We participated in the Menpo Facial Landmark Localisation in-the-Wild Challenge, where our RFC significantly outperforms the baseline approach APS. Extensive experiments on the Menpo Challenge dataset and the IBUG dataset demonstrate the superior performance of the proposed RFC.
Citations: 28
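Two of the system's robustness components, bounding-box perturbation and model ensembling, reduce sensitivity to the face detector. The NumPy sketch below illustrates the ensembling step in its simplest plausible form: landmark sets predicted by several models (or from several jittered face boxes) for the same face are averaged per landmark. The predictor interface and point count are hypothetical.

```python
import numpy as np

def ensemble_landmarks(predictions):
    """Average landmark sets from multiple predictors (sketch).

    predictions: list of (num_landmarks, 2) arrays in image coordinates,
    e.g., one per model or per jittered face bounding box.
    """
    return np.mean(np.stack(predictions, axis=0), axis=0)

# Three hypothetical 68-point predictions for one face.
preds = [np.random.rand(68, 2) * 200 for _ in range(3)]
fused = ensemble_landmarks(preds)  # (68, 2) averaged landmarks
```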
Real-Time Driver Drowsiness Detection for Embedded System Using Model Compression of Deep Neural Networks
B. Reddy, Ye-Hoon Kim, Sojung Yun, Chanwon Seo, Junik Jang
DOI: 10.1109/CVPRW.2017.59 | Pages: 438-445 | Published: July 2017
Abstract: A driver's status is crucial because driver inattention or drowsiness is one of the main causes of motor vehicle accidents. A drowsiness detector in a car can prevent numerous accidents. Since accidents occur because of a single moment of negligence, a driver monitoring system that works in real time is necessary. Such a detector should be deployable to an embedded device and perform with high accuracy. In this paper, we propose a novel approach to real-time drowsiness detection based on deep learning that can be implemented on a low-cost embedded board while maintaining high accuracy. The main contribution of our paper is the compression of a heavy baseline model into a lightweight model deployable on an embedded board. Moreover, a minimized network structure was designed based on facial landmark input to recognize whether the driver is drowsy. The proposed model achieved an accuracy of 89.5% on 3-class classification and a speed of 14.9 frames per second (FPS) on a Jetson TK1.
Citations: 158
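The abstract mentions a minimized network that classifies drowsiness from facial-landmark input. The PyTorch sketch below shows one plausible shape for such a lightweight head: a small MLP over flattened 68-point landmarks with three output classes. The layer sizes and class semantics are assumptions; the paper's compressed architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class DrowsinessNet(nn.Module):
    """Tiny MLP over facial landmarks -> 3 drowsiness classes (sketch)."""
    def __init__(self, num_landmarks=68, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_landmarks * 2, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, landmarks):
        # landmarks: (batch, 68, 2) detected facial keypoints
        return self.net(landmarks.flatten(start_dim=1))

model = DrowsinessNet()
batch = torch.randn(8, 68, 2)
logits = model(batch)  # (8, 3): hypothetical classes, e.g. alert / yawning / drowsy
```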
Intel(R) RealSense(TM) Stereoscopic Depth Cameras
L. Keselman, J. Woodfill, A. Grunnet-Jepsen, A. Bhowmik
DOI: 10.1109/CVPRW.2017.167 | Pages: 1267-1276 | Published: July 2017
Abstract: We present a comprehensive overview of the stereoscopic Intel RealSense RGBD imaging systems. We discuss these systems' mode of operation and functional behavior, and include models of their expected performance, shortcomings, and limitations. We provide information about the systems' optical characteristics and correlation algorithms, and how these properties can affect different applications, including 3D reconstruction and gesture recognition. Our discussion covers the Intel RealSense R200 and RS400.
Citations: 116
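Stereoscopic systems such as the R200 and RS400 triangulate depth from the disparity between left and right imagers. The standard relation Z = f * B / d (focal length f in pixels, baseline B in meters, disparity d in pixels) underlies the kind of performance models the paper discusses; the sketch below evaluates it with made-up camera parameters, not actual RealSense calibration values.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth (meters) from stereo disparity: Z = f * B / d."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(d > 0, focal_px * baseline_m / d, np.inf)

# Hypothetical parameters: 70 mm baseline, 600 px focal length.
print(depth_from_disparity([60, 30, 15, 1], focal_px=600.0, baseline_m=0.07))
# Smaller disparity -> larger depth, and coarser depth resolution,
# which is why stereo error grows quadratically with range.
```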
Robust Hand Detection and Classification in Vehicles and in the Wild
T. Le, Kha Gia Quach, Chenchen Zhu, C. Duong, Khoa Luu, M. Savvides
DOI: 10.1109/CVPRW.2017.159 | Pages: 1203-1210 | Published: July 2017
Abstract: Robust hand detection and classification is one of the most crucial pre-processing steps in supporting human-computer interaction, driver behavior monitoring, virtual reality, etc. This problem, however, is very challenging due to the numerous variations of hand images in real-world scenarios. This work presents a novel approach named Multiple Scale Region-based Fully Convolutional Networks (MS-RFCN) to robustly detect and classify human hand regions under various challenging conditions, e.g., occlusions, illumination, and low resolution. In this approach, the whole image is passed through the proposed fully convolutional network to compute score maps. Those score maps, with their position-sensitive properties, help to efficiently address the dilemma between translation invariance in classification and translation variance in detection. The method is evaluated on challenging hand databases, i.e., the Vision for Intelligent Vehicles and Applications (VIVA) Challenge and the Oxford hand dataset, and compared against various recent hand detection methods. The experimental results show that our proposed MS-RFCN approach consistently achieves state-of-the-art hand detection results, i.e., Average Precision (AP) / Average Recall (AR) of 95.1% / 94.5% at level 1 and 86.0% / 83.4% at level 2 on the VIVA challenge. In addition, the proposed method achieves state-of-the-art results for the left/right-hand and driver/passenger classification tasks on the VIVA database, with significant improvements in AP/AR of ~7% and ~13% for the two classification tasks, respectively. The hand detection performance of MS-RFCN reaches 75.1% AP and 77.8% AR on the Oxford database.
Citations: 54
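MS-RFCN builds on R-FCN-style position-sensitive score maps, which resolve the tension between translation invariance (good for classification) and translation variance (needed for localization). The NumPy sketch below shows the core pooling step for a single region of interest on a k x k grid; the shapes are illustrative only, and the paper's multi-scale machinery is omitted.

```python
import numpy as np

def ps_roi_pool(score_maps, roi, num_classes, k=3):
    """Position-sensitive RoI pooling for one RoI (illustrative sketch).

    score_maps: (k*k*num_classes, H, W) convolutional score maps.
    roi: (x0, y0, x1, y1) in feature-map coordinates.
    """
    x0, y0, x1, y1 = roi
    xs = np.linspace(x0, x1, k + 1).astype(int)
    ys = np.linspace(y0, y1, k + 1).astype(int)
    votes = np.zeros(num_classes)
    for i in range(k):            # grid row
        for j in range(k):        # grid column
            bin_idx = i * k + j   # each bin reads its own channel group
            ch = score_maps[bin_idx * num_classes:(bin_idx + 1) * num_classes]
            cell = ch[:, ys[i]:ys[i + 1] + 1, xs[j]:xs[j + 1] + 1]
            votes += cell.mean(axis=(1, 2))
    return votes / (k * k)        # average voting over the k*k bins

maps = np.random.randn(3 * 3 * 2, 40, 40)  # 2 classes, 3x3 grid
print(ps_roi_pool(maps, roi=(5, 5, 20, 25), num_classes=2))
```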
Investigating Nuisance Factors in Face Recognition with DCNN Representation
C. Ferrari, G. Lisanti, S. Berretti, A. Bimbo
DOI: 10.1109/CVPRW.2017.86 | Pages: 583-591 | Published: July 2017
Abstract: Deep learning based approaches have proved dramatically effective in addressing many computer vision applications, including "face recognition in the wild". It has been extensively demonstrated that methods exploiting Deep Convolutional Neural Networks (DCNNs) are powerful enough to overcome, to a great extent, many problems that negatively affected computer vision algorithms based on hand-crafted features. These problems include variations in illumination, pose, expression, and occlusion, to mention a few. The DCNNs' excellent discriminative power comes from the fact that they learn low- and high-level representations directly from the raw image data. Considering this, it can be assumed that the performance of a DCNN is influenced by the characteristics of the raw image data fed to the network. In this work, we evaluate the effect of different bounding box dimensions, alignment, positioning, and data source on face recognition using DCNNs, and present a thorough evaluation on two well-known, public DCNN architectures.
Citations: 11
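One of the nuisance factors studied is the amount of context included around the face, controlled by the bounding-box size fed to the DCNN. The sketch below enlarges a detector box by a configurable factor before cropping, the kind of knob such an evaluation sweeps; the scale values and box coordinates are arbitrary examples, not the paper's settings.

```python
import numpy as np

def crop_with_scale(image, box, scale=1.3):
    """Crop a face with the detector box enlarged by `scale` (sketch)."""
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    nw, nh = w * scale, h * scale
    x0 = int(max(cx - nw / 2, 0))
    y0 = int(max(cy - nh / 2, 0))
    x1 = int(min(cx + nw / 2, image.shape[1]))
    y1 = int(min(cy + nh / 2, image.shape[0]))
    return image[y0:y1, x0:x1]

img = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in image
for s in (1.0, 1.3, 1.6):                      # context sweep
    crop = crop_with_scale(img, box=(200, 120, 150, 180), scale=s)
    print(s, crop.shape)
```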