A Co-Attention Method Based on Generative Adversarial Networks for Multi-view Images
Qi-Xian Huang, Shu-Pei Shi, Guo-Shiang Lin, D. Shen, Hung-Min Sun
2021 IEEE/ACIS 22nd International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)
DOI: 10.1109/SNPD51163.2021.9704964 · Published: 2021-11-24
Citations: 0
Abstract
In this paper, we use a Deep Convolutional Generative Adversarial Network (DCGAN) to generate additional multi-view images and thereby increase the diversity of our dataset. We train the DCGAN on different views rendered from 3D models and interpolate between the leftmost and rightmost random latent vectors, so the generator can produce images spanning the leftmost to the rightmost view. After producing many multi-view images, we feed them to a CNN-based module, the co-attention map generator, which looks for features common to clothing items of the same class seen from different views. Applying the learned generator to all images yields the corresponding co-attention maps. The proposed method functions well for multi-view objects across different types of clothing classes.
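As a rough sketch of the latent-space interpolation described above (not the authors' code): the snippet below assumes a DCGAN-style generator with a 100-dimensional latent vector, the common DCGAN setting. The tiny `Generator` stub and the `interpolate_views` helper are illustrative stand-ins for the paper's trained model.

```python
import torch
import torch.nn as nn

# Stand-in DCGAN-style generator so the sketch runs end to end; the
# paper's actual generator is trained on multi-view clothing renders.
class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.ReLU(),  # 1x1 -> 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),     # 4x4 -> 8x8
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),       # 8x8 -> 16x16
        )

    def forward(self, z):
        # Reshape the flat latent vector to (N, z_dim, 1, 1) for the convs.
        return self.net(z.view(z.size(0), -1, 1, 1))

def interpolate_views(G, z_left, z_right, steps=8):
    """Linearly interpolate between the 'leftmost' and 'rightmost'
    latent vectors and decode each point, yielding a left-to-right
    sweep of generated views."""
    with torch.no_grad():
        return torch.cat([
            G(((1 - a) * z_left + a * z_right).unsqueeze(0))
            for a in torch.linspace(0.0, 1.0, steps)
        ])  # shape: (steps, 3, H, W)

# Usage: sample the two endpoint latent vectors, then sweep between them.
G = Generator()
z_left, z_right = torch.randn(100), torch.randn(100)
views = interpolate_views(G, z_left, z_right, steps=8)
```

Linear interpolation in z-space is the standard DCGAN demonstration of a smooth latent manifold; spherical interpolation (slerp) is a common alternative when the latent prior is Gaussian.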