{"title":"智能水印防御深度伪造图像操纵","authors":"Luochen Lv","doi":"10.1109/ICCCS52626.2021.9449287","DOIUrl":null,"url":null,"abstract":"Deepfake image manipulation has become a serious security threat to the social network. Currently, there are limited studies on protective methods that are against Deepfake image manipulation. To tackle this problem, we here propose an adversary attack based smart watermark model, which adds unperceptive watermarks to images so that the images become adversary examples to Deepfake models. When the Deepfake manipulates these watermarked images, the manipulated images become blur. The manipulation thus can be easily recognized by human and machines. Our experiments have shown that our model outperforms the SOTA and can be used to effectively prevent Deepfake manipulation.","PeriodicalId":376290,"journal":{"name":"2021 IEEE 6th International Conference on Computer and Communication Systems (ICCCS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Smart Watermark to Defend against Deepfake Image Manipulation\",\"authors\":\"Luochen Lv\",\"doi\":\"10.1109/ICCCS52626.2021.9449287\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deepfake image manipulation has become a serious security threat to the social network. Currently, there are limited studies on protective methods that are against Deepfake image manipulation. To tackle this problem, we here propose an adversary attack based smart watermark model, which adds unperceptive watermarks to images so that the images become adversary examples to Deepfake models. When the Deepfake manipulates these watermarked images, the manipulated images become blur. The manipulation thus can be easily recognized by human and machines. Our experiments have shown that our model outperforms the SOTA and can be used to effectively prevent Deepfake manipulation.\",\"PeriodicalId\":376290,\"journal\":{\"name\":\"2021 IEEE 6th International Conference on Computer and Communication Systems (ICCCS)\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-04-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 6th International Conference on Computer and Communication Systems (ICCCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCCS52626.2021.9449287\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 6th International Conference on Computer and Communication Systems (ICCCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCCS52626.2021.9449287","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Smart Watermark to Defend against Deepfake Image Manipulation
Deepfake image manipulation has become a serious security threat to social networks, yet studies on protective methods against it remain limited. To tackle this problem, we propose an adversarial-attack-based smart watermark model that adds imperceptible watermarks to images, turning them into adversarial examples for Deepfake models. When a Deepfake model manipulates these watermarked images, the manipulated outputs become blurred, so the manipulation can be easily recognized by both humans and machines. Our experiments show that our model outperforms the state of the art and can effectively prevent Deepfake manipulation.
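The abstract does not specify how the watermark is generated, but "adversarial-attack-based" suggests a gradient-crafted perturbation against a (surrogate) Deepfake model. Below is a minimal PGD-style sketch of that idea in PyTorch; the function name `smart_watermark`, the `deepfake_model` argument, and the loss (pushing the model's output away from its clean manipulation so it degrades toward blur) are illustrative assumptions, not the paper's published method.

```python
import torch
import torch.nn.functional as F

def smart_watermark(image, deepfake_model, eps=8 / 255, alpha=2 / 255, steps=40):
    """Craft an imperceptible adversarial watermark (PGD-style sketch).

    `image` is a float tensor in [0, 1]; `deepfake_model` is assumed to be
    a differentiable surrogate of the manipulation network. The perturbation
    is kept inside an L-infinity ball of radius `eps`, so the watermark
    stays invisible to human viewers.
    """
    x = image.clone().detach()
    clean_out = deepfake_model(x).detach()           # the undefended fake
    delta = torch.zeros_like(x, requires_grad=True)  # the watermark

    for _ in range(steps):
        out = deepfake_model(x + delta)
        # Gradient *ascent*: push the manipulated output as far as possible
        # from the clean manipulation, degrading the fake toward blur.
        loss = F.mse_loss(out, clean_out)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)                       # imperceptibility bound
            delta.copy_((x + delta).clamp(0.0, 1.0) - x)  # keep pixels valid
        delta.grad.zero_()

    return (x + delta).detach()  # watermarked image, safe to publish
```

In use, the watermarked image would be shared in place of the original; if a Deepfake pipeline later manipulates it, the output is visibly degraded, which is the detection signal the abstract describes.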