A simple baseline for evaluating Expression Transfer and Anonymisation in Video Transfer

Gabriel Haddon-Hill, Keerthy Kusumam, M. Valstar
{"title":"评价视频传输中表达转移和匿名化的简单基线","authors":"Gabriel Haddon-Hill, Keerthy Kusumam, M. Valstar","doi":"10.1109/aciiw52867.2021.9666292","DOIUrl":null,"url":null,"abstract":"Video-to-video synthesis methods provide increasingly accessible solutions for training models on privacy-sensitive and limited-size datasets frequently encountered in domains such as affect analysis. However, there are no existing baselines that explicitly measure the extent of reliable expression transfer or privacy preservation in the generated data. In this paper, we evaluate a general-purpose video transfer method, vid2vid, on these two key tasks: expression transfer and anonymisation of identities, as well as its suitability for training affect prediction models. We provide results that form a strong baseline for future comparisons, and further motivate the need for purpose-built methods for conducting expression-preserving video transfer. Our results indicate that a significant limitation of vid2vid's expression transfer arises from conditioning on facial landmarks and optical flow, which do not carry sufficient information to preserve facial expressions. Finally, we demonstrate that vid2vid can adequately anonymise videos in some cases, though not consistently, and that the anonymisation can be improved by applying random perturbations to input landmarks, at the cost of reduced expression transfer.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A simple baseline for evaluating Expression Transfer and Anonymisation in Video Transfer\",\"authors\":\"Gabriel Haddon-Hill, Keerthy Kusumam, M. Valstar\",\"doi\":\"10.1109/aciiw52867.2021.9666292\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Video-to-video synthesis methods provide increasingly accessible solutions for training models on privacy-sensitive and limited-size datasets frequently encountered in domains such as affect analysis. However, there are no existing baselines that explicitly measure the extent of reliable expression transfer or privacy preservation in the generated data. In this paper, we evaluate a general-purpose video transfer method, vid2vid, on these two key tasks: expression transfer and anonymisation of identities, as well as its suitability for training affect prediction models. We provide results that form a strong baseline for future comparisons, and further motivate the need for purpose-built methods for conducting expression-preserving video transfer. Our results indicate that a significant limitation of vid2vid's expression transfer arises from conditioning on facial landmarks and optical flow, which do not carry sufficient information to preserve facial expressions. 
Finally, we demonstrate that vid2vid can adequately anonymise videos in some cases, though not consistently, and that the anonymisation can be improved by applying random perturbations to input landmarks, at the cost of reduced expression transfer.\",\"PeriodicalId\":105376,\"journal\":{\"name\":\"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-09-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/aciiw52867.2021.9666292\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/aciiw52867.2021.9666292","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Video-to-video synthesis methods provide increasingly accessible solutions for training models on privacy-sensitive and limited-size datasets frequently encountered in domains such as affect analysis. However, there are no existing baselines that explicitly measure the extent of reliable expression transfer or privacy preservation in the generated data. In this paper, we evaluate a general-purpose video transfer method, vid2vid, on these two key tasks: expression transfer and anonymisation of identities, as well as its suitability for training affect prediction models. We provide results that form a strong baseline for future comparisons, and further motivate the need for purpose-built methods for conducting expression-preserving video transfer. Our results indicate that a significant limitation of vid2vid's expression transfer arises from conditioning on facial landmarks and optical flow, which do not carry sufficient information to preserve facial expressions. Finally, we demonstrate that vid2vid can adequately anonymise videos in some cases, though not consistently, and that the anonymisation can be improved by applying random perturbations to input landmarks, at the cost of reduced expression transfer.
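
The abstract notes that vid2vid's anonymisation can be strengthened by applying random perturbations to the input facial landmarks, at the cost of weaker expression transfer. As a minimal sketch of that idea (not the authors' implementation; the Gaussian noise model, the array shapes, and the sigma value are assumptions for illustration), the perturbation step could look like this in Python:

import numpy as np

def perturb_landmarks(landmarks, sigma=2.0, rng=None):
    """Add i.i.d. Gaussian jitter to 2-D facial landmark coordinates.

    landmarks: float array of shape (num_frames, num_points, 2) holding the
               per-frame pixel coordinates produced by a landmark detector.
    sigma:     jitter standard deviation in pixels (a hypothetical value; the
               abstract does not state the perturbation magnitude).
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(loc=0.0, scale=sigma, size=landmarks.shape)
    return landmarks + noise

# Usage: jitter the landmarks of a 100-frame clip with 68 points per frame
# before rasterising them into the conditioning input of the synthesis model.
clip_landmarks = np.random.default_rng(0).uniform(0, 255, size=(100, 68, 2))
perturbed = perturb_landmarks(clip_landmarks, sigma=2.0)

Per the abstract, increasing this kind of jitter makes identities harder to recover from the generated frames, but it also degrades how faithfully the facial expressions are transferred.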