{"title":"Generating Sea Surface Object Image Using Image-to-Image Translation","authors":"Wenbin Yin, Jun Yu, Zhi-yi Hu","doi":"10.21307/ijanmc-2021-016","DOIUrl":null,"url":null,"abstract":"Abstract Sea objects training, the conditional adversarial networks require a large number of images to solve image-to-image translation problems. In the case of insufficient samples, it leads to network overfitting and poor training results. This project proposes a conditional adversarial generative model that retains the original background features in the absence of paired samples. The goal of this project is to reduce the deviation of the corresponding output from the original input. Firstly, the object images of different categories are labeled with color masks. Second, sea objects are generated randomly in the original background using model of this project. Finally, the generated results of this approach are compared with other approaches. The experimental results show that, compared with results from other conditional adversarial generative models, the generated object images using model of this project have the characteristics of richer texture and clearer structure.","PeriodicalId":193299,"journal":{"name":"International Journal of Advanced Network, Monitoring and Controls","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Advanced Network, Monitoring and Controls","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21307/ijanmc-2021-016","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
For sea-surface object training, conditional adversarial networks require a large number of images to solve image-to-image translation problems. When samples are insufficient, the network overfits and training results are poor. This project proposes a conditional adversarial generative model that retains the original background features in the absence of paired samples. The goal is to reduce the deviation of the generated output from the original input. First, object images of different categories are labeled with color masks. Second, sea objects are generated at random positions in the original background using the proposed model. Finally, the results generated by this approach are compared with those of other approaches. The experiments show that, compared with other conditional adversarial generative models, the object images generated by the proposed model exhibit richer texture and clearer structure.
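To make the conditional adversarial setup described above concrete, the following is a minimal sketch of a pix2pix-style training step, assuming PyTorch. The tiny network sizes, the L1 weight of 100, and the pairing of a color-mask conditioning image with a real sea-surface image are illustrative assumptions, not the paper's exact architecture or training procedure.

```python
# Minimal conditional GAN (pix2pix-style) training step.
# Assumption: a color mask conditions the generator; an L1 term keeps the
# output close to the original image (i.e., preserves background features).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a color-mask conditioning image to a generated sea-surface image."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1),
            nn.Tanh(),
        )
    def forward(self, mask):
        return self.net(mask)

class Discriminator(nn.Module):
    """Scores (mask, image) pairs; full pix2pix uses a deeper PatchGAN critic."""
    def __init__(self, in_ch=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),
        )
    def forward(self, mask, image):
        return self.net(torch.cat([mask, image], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Toy batch: a color mask and the corresponding real sea-surface image.
mask = torch.rand(4, 3, 64, 64)
real = torch.rand(4, 3, 64, 64)

# Discriminator step: distinguish real pairs from generated pairs.
fake = G(mask).detach()
d_real, d_fake = D(mask, real), D(mask, fake)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D while staying close to the real image via L1.
fake = G(mask)
d_fake = D(mask, fake)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, real)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The L1 reconstruction term is one common way to penalize deviation of the output from the original input; whether the paper uses exactly this loss combination is not stated in the abstract.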