{"title":"Deepfake Detection Based on Incompatibility Between Multiple Modes","authors":"Yu-xin Zhang, Jinyu Zhan, Wei Jiang, Zhufeng Fan","doi":"10.1109/ICITES53477.2021.9637096","DOIUrl":"https://doi.org/10.1109/ICITES53477.2021.9637096","url":null,"abstract":"We propose a multi-modal detection method for deepfake videos, called Incompatibility Between Multiple Modes (IBMM) detection. The algorithm can detect whether a video is real or fake, and may be embedded in monitoring equipment in the future. The model adopts EfficientNet and a simple 3D-CNN, and it identifies deepfake videos through three modes. In the facial motion mode and the lip motion mode, we use EfficientNet for feature learning. This network uses a set of fixed scaling coefficients to scale the dimensions of the network uniformly and achieves good results in learning image features. In the audio mode, we adopt a 3D-CNN to train on the one-hot encoding diagram of the audio data. For a single mode, we use the cross-entropy loss to measure the irrationality of that mode. Across different modes, the contrastive loss is used to measure the incongruity between modes, such as the incompatibility between lip motion and voice. Experimental results show that, compared with other existing deepfake detection methods, the method presented in this paper achieves higher accuracy (95.87%) on the DFDC dataset, an improvement of 5.21% over existing methods.","PeriodicalId":370828,"journal":{"name":"2021 International Conference on Intelligent Technology and Embedded Systems (ICITES)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130430964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
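The cross-modal incongruity term in the abstract above can be illustrated with a minimal margin-based contrastive loss between two modal embeddings; this is a generic sketch, not the paper's exact formulation, and the function name, margin value, and embedding shapes are assumptions:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, compatible, margin=1.0):
    """Margin-based contrastive loss between two modal embeddings
    (e.g. lip motion vs. audio). Hypothetical sketch of the idea:

    compatible = 1 (real video): modes should agree, so the loss
                 penalizes their distance.
    compatible = 0 (fake video): modes are incongruous, so the loss
                 pushes their distance past the margin.
    """
    d = np.linalg.norm(emb_a - emb_b)  # Euclidean distance between embeddings
    return compatible * d ** 2 + (1 - compatible) * max(margin - d, 0.0) ** 2
```

For identical embeddings, the compatible case yields zero loss, while the incompatible case yields the squared margin, so training drives fake-video modes apart.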
{"title":"Research on Cybersecurity Testing for In-vehicle Network","authors":"Feng Luo, Xuan Zhang, Shuo Hou","doi":"10.1109/ICITES53477.2021.9637070","DOIUrl":"https://doi.org/10.1109/ICITES53477.2021.9637070","url":null,"abstract":"The development of technologies such as Information and Communication Technology (ICT), the Internet of Vehicles (IoV), and industrial intelligence has made automotive cybersecurity issues more prominent. Cybersecurity issues have gradually attracted widespread attention in the field of Intelligent Connected Vehicles (ICVs). Cybersecurity testing is an effective means of ensuring the cybersecurity of Cyber-Physical Systems (CPS). Fuzzing and penetration testing are both important security testing methods. SAE J3061 and the forthcoming ISO/SAE 21434 clearly state that fuzzing and penetration testing should be applied in automotive cybersecurity development activities, but provide no specific testing details. The WP.29 regulations also require security tests to verify the effectiveness of security measures during type approval with regard to cybersecurity. However, there is neither a standardized method for conducting automotive cybersecurity testing nor dedicated testing tools. In this paper, we first provide a brief overview of security testing methods applied in the automotive domain. We then present a cybersecurity testing method that extends the Penetration Testing Execution Standard (PTES) from the perspective of testing processes. In addition, we design and develop a security testing tool for the in-vehicle network to assist security analysis. Finally, taking the Controller Area Network with Flexible Data Rate (CAN FD) as an example, the proposed method is applied to the designed testbed.","PeriodicalId":370828,"journal":{"name":"2021 International Conference on Intelligent Technology and Embedded Systems (ICITES)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121374572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
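The abstract above applies fuzzing to CAN FD; a minimal frame generator for such a fuzzer might look like the sketch below. The paper's actual tool is not described in detail, so this is a generic illustration; the CAN FD payload lengths are the valid DLC-encoded sizes from ISO 11898-1, and the function name is hypothetical:

```python
import random

# Valid CAN FD payload lengths in bytes (ISO 11898-1 DLC encoding):
# 0-8 as in classical CAN, then 12, 16, 20, 24, 32, 48, 64.
CANFD_LENGTHS = [0, 1, 2, 3, 4, 5, 6, 7, 8, 12, 16, 20, 24, 32, 48, 64]

def random_canfd_frame(rng):
    """Generate one random CAN FD frame (identifier, payload) for fuzzing.

    Uses an 11-bit standard identifier and a payload whose length is
    drawn only from the DLC-valid sizes, so every frame is well-formed
    on the wire and the fuzzer exercises the receiver's parsing logic.
    """
    can_id = rng.randrange(0x800)              # 11-bit standard identifier
    length = rng.choice(CANFD_LENGTHS)         # DLC-valid payload length
    payload = bytes(rng.randrange(256) for _ in range(length))
    return can_id, payload
```

A real fuzzer would transmit these frames on the bus (e.g. via a CAN FD interface) and monitor the target ECU for crashes or anomalous responses; that transport layer is omitted here.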
{"title":"Expression Recognition and Classification Based on Jump-Layer Optimization of Convolutional Neural Network","authors":"Q. Hu, Ming Ye","doi":"10.1109/ICITES53477.2021.9637082","DOIUrl":"https://doi.org/10.1109/ICITES53477.2021.9637082","url":null,"abstract":"In multi-layer neural networks, the final convolutional layers extract high-level abstract features but lack low-level detailed features. Meanwhile, deeper networks require more training parameters, which increases the difficulty of training and makes the network more prone to gradient vanishing or explosion. In this paper, an optimized jump-layer convolutional neural network (JCCN) structure is proposed to improve the facial expression recognition and classification network model. This method effectively combines low-level detailed features with high-level abstract features through jump-layer connections. The approach mitigates the gradient vanishing caused by excessive network depth and improves backpropagation parameter transfer. The proposed method can reduce the risk of overfitting during network training and enhance the nonlinearity of the data. At the same time, the introduced 1×1 convolution kernel effectively reduces the training cost and the number of parameters. Experimental results show that the network performs well on the FER2013 and CK+ datasets. It is anticipated that facial expression recognition and classification methods based on convolutional networks will benefit from this paper.","PeriodicalId":370828,"journal":{"name":"2021 International Conference on Intelligent Technology and Embedded Systems (ICITES)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127033000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
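The jump-layer fusion described in the abstract above (1×1 convolution to cheaply reduce channels, then merging low-level detail with high-level abstractions) can be sketched as follows. The paper's exact architecture is not given, so the channel sizes, function names, and concatenation-based merge are assumptions:

```python
import numpy as np

def conv1x1(x, w):
    """1x1 convolution as per-pixel channel mixing.
    x: feature map of shape (H, W, C_in); w: kernel of shape (C_in, C_out).
    No spatial context is used, so it is just a matmul over the channel axis."""
    return x @ w  # shape (H, W, C_out)

def jump_layer_merge(low, high, w_reduce):
    """Fuse low-level detailed features with high-level abstract features.

    low:      early-layer map, shape (H, W, C_low)
    high:     late-layer map,  shape (H, W, C_high)
    w_reduce: 1x1 kernel, shape (C_low, C_r), shrinking the low-level
              channels so the jump connection adds few parameters.
    """
    reduced = conv1x1(low, w_reduce)               # cheap channel reduction
    return np.concatenate([reduced, high], axis=-1)  # (H, W, C_r + C_high)
```

The design choice here mirrors the abstract's claim: the 1×1 kernel keeps the skip path inexpensive while still carrying low-level detail forward, and the shortened gradient path through the jump connection counteracts vanishing gradients in deep stacks.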