Fragile fingerprint for protecting the integrity of the Vision Transformer
Xin Wang, S. Ni, Jie Wang, Yifan Shang, Linji Zhang
2023 4th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT)
DOI: 10.1109/AINIT59027.2023.10212509
Published: 2023-06-16
{"title":"脆弱的指纹,以保护视觉变压器的完整性","authors":"Xin Wang, S. Ni, Jie Wang, Yifan Shang, Linji Zhang","doi":"10.1109/AINIT59027.2023.10212509","DOIUrl":null,"url":null,"abstract":"Nowadays, with the rapidly development of deep learning, deep learning models have been widely deployed in various fields and generated significant commercial interest. Some technology companies upload their trained models to cloud servers and serve them to the end-users. Many works have shown that the convolutional neural networks are vulnerable to some model modification attacks, which raise concerns about integrity authentication of the convolutional neural models. Additionally, Transformers based on attention mechanism are now commonly used in computer vision applications, and the need to verify the integrity of ViTs arises if the ViT model is deployed in the safety critical systems. In this paper, we propose a fragile fingerprint method for verifying the integrity of the ViTs, which is based on the targeted adversarial examples. Compared with the existing works, the proposed fingerprint method does not modify the ViTs. We generate some fragile fingerprints, which are classified as the targeted label. In the verification stage, if the fingerprints are successfully classified as targeted label with 100% success rate, we can claim that the ViTs is not modified. Otherwise, when the fingerprint verification success rate is lower than 100%, we can claim that the integrity of ViTs is compromised. Experimental results demonstrate that the fingerprints can effectively verify the integrity of the ViTs when the ViTs is modified by model attacks, even though only a small number of weights of ViTs are changed.","PeriodicalId":276778,"journal":{"name":"2023 4th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fragile fingerprint for protecting the integrity of the Vision Transformer\",\"authors\":\"Xin Wang, S. Ni, Jie Wang, Yifan Shang, Linji Zhang\",\"doi\":\"10.1109/AINIT59027.2023.10212509\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Nowadays, with the rapidly development of deep learning, deep learning models have been widely deployed in various fields and generated significant commercial interest. Some technology companies upload their trained models to cloud servers and serve them to the end-users. Many works have shown that the convolutional neural networks are vulnerable to some model modification attacks, which raise concerns about integrity authentication of the convolutional neural models. Additionally, Transformers based on attention mechanism are now commonly used in computer vision applications, and the need to verify the integrity of ViTs arises if the ViT model is deployed in the safety critical systems. In this paper, we propose a fragile fingerprint method for verifying the integrity of the ViTs, which is based on the targeted adversarial examples. Compared with the existing works, the proposed fingerprint method does not modify the ViTs. We generate some fragile fingerprints, which are classified as the targeted label. In the verification stage, if the fingerprints are successfully classified as targeted label with 100% success rate, we can claim that the ViTs is not modified. 
Otherwise, when the fingerprint verification success rate is lower than 100%, we can claim that the integrity of ViTs is compromised. Experimental results demonstrate that the fingerprints can effectively verify the integrity of the ViTs when the ViTs is modified by model attacks, even though only a small number of weights of ViTs are changed.\",\"PeriodicalId\":276778,\"journal\":{\"name\":\"2023 4th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT)\",\"volume\":\"18 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 4th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AINIT59027.2023.10212509\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 4th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AINIT59027.2023.10212509","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
With the rapid development of deep learning, deep learning models have been widely deployed across many fields and have generated significant commercial interest. Some technology companies upload their trained models to cloud servers and serve them to end-users. Many works have shown that convolutional neural networks are vulnerable to model modification attacks, raising concerns about the integrity authentication of such models. Transformers based on the attention mechanism are now also widely used in computer vision, so the integrity of Vision Transformers (ViTs) must be verifiable when a ViT model is deployed in safety-critical systems. In this paper, we propose a fragile fingerprint method, based on targeted adversarial examples, for verifying the integrity of ViTs. Unlike existing works, the proposed method does not modify the ViT itself. We generate fragile fingerprints that the unmodified model classifies as a chosen target label. In the verification stage, if all fingerprints are classified as their target labels (a 100% success rate), we conclude that the ViT has not been modified; if the success rate falls below 100%, we conclude that the integrity of the ViT has been compromised. Experimental results demonstrate that the fingerprints effectively verify the integrity of a ViT under model modification attacks, even when only a small number of the ViT's weights are changed.
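The abstract describes a two-stage protocol: craft targeted adversarial examples to serve as fragile fingerprints, then check that the deployed model still classifies every fingerprint as its target label with a 100% success rate. The paper gives no implementation details here, so the PyTorch sketch below is only one plausible instantiation: the PGD-style generation loop, the function names (`generate_fingerprint`, `verify_integrity`), and the eps/alpha/steps values are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F


def generate_fingerprint(model, x, target_label, eps=8 / 255, alpha=1 / 255, steps=50):
    """Craft one fragile fingerprint as a targeted adversarial example.

    PGD-style sketch; eps/alpha/steps are assumed values, not from the paper.
    `x` is a single image batch of shape [1, C, H, W] with pixels in [0, 1].
    """
    model.eval()
    target = torch.tensor([target_label])
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Targeted attack: descend the loss so the model assigns the target label.
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Stay within the eps-ball around x and within the valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv.detach()


@torch.no_grad()
def verify_integrity(model, fingerprints, target_labels):
    """Integrity check: every fingerprint must still map to its target label.

    Returns (intact, success_rate); any rate below 100% signals that the
    model's weights have been modified.
    """
    model.eval()
    preds = model(fingerprints).argmax(dim=1)
    success_rate = (preds == target_labels).float().mean().item()
    return success_rate == 1.0, success_rate
```

Usage under the same assumptions: given a pretrained ViT classifier `vit` and a seed image, `fp = generate_fingerprint(vit, seed_image, target_label=7)` produces a fingerprint, and `verify_integrity(vit, fp, torch.tensor([7]))` returns whether the model is judged intact. Because adversarial examples sit close to the decision boundary, even small weight perturbations tend to flip their predicted labels, which is what makes such fingerprints fragile.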