Trusting the Computer in Computer Vision: A Privacy-Affirming Framework

A. Chen, M. Biglari-Abhari, K. Wang

2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1360-1367

DOI: 10.1109/CVPRW.2017.178

Published: 2017-07-21
Citations: 11
Abstract
The use of surveillance cameras continues to increase, ranging from conventional applications such as law enforcement to newer scenarios with looser requirements such as gathering business intelligence. Humans still play an integral part in using and interpreting the footage from these systems, but are also a significant factor in causing unintentional privacy breaches. As computer vision methods continue to improve, we argue in this position paper that system designers should reconsider the role of machines in surveillance, and how automation can be used to help protect privacy. We explore this by discussing the impact of the human-in-the-loop, the potential for using abstraction and distributed computing to further privacy goals, and an approach for determining when video footage should be hidden from human users. We propose that in an ideal surveillance scenario, a privacy-affirming framework causes collected camera footage to be processed by computers directly, and never shown to humans. This implicitly requires humans to establish trust, to believe that computer vision systems can generate sufficiently accurate results without human supervision, so that if information about people must be gathered, unintentional data collection is mitigated as much as possible.