Image Haze Removal By Adaptive CycleGAN

Yi-Fan Chen, A. Patel, Chia-Ping Chen

2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), November 2019
DOI: 10.1109/APSIPAASC47483.2019.9023296
Citations: 5

Abstract

We introduce a machine-learning method to remove fog and haze from images. Our model is based on CycleGAN, an image-to-image translation model that can be applied to the dehazing task. The datasets we use for training and testing are created according to the atmospheric scattering model. By changing the adversarial loss from cross-entropy loss to hinge loss, and the reconstruction loss from MAE loss to perceptual loss, we improve the SSIM on the NYU dataset from 0.828 to 0.841. On the Middlebury stereo datasets, we achieve an SSIM of 0.811, which is significantly better than the baseline CycleGAN model.
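The abstract states that the training and testing pairs are synthesized according to the atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)). A minimal sketch of such synthesis is below; the function name and the β and airlight values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def synthesize_haze(clean, depth, beta=1.0, airlight=0.8):
    """Synthesize a hazy image from a clean image and a depth map using
    the atmospheric scattering model:
        I(x) = J(x) * t(x) + A * (1 - t(x)),  t(x) = exp(-beta * d(x)).
    clean:  H x W x 3 float array in [0, 1] (the haze-free image J)
    depth:  H x W float array of scene depth d
    """
    t = np.exp(-beta * depth)[..., None]   # per-pixel transmission map
    return clean * t + airlight * (1.0 - t)
```

At zero depth the transmission is 1 and the output equals the clean image; as depth grows, every pixel fades toward the airlight value.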
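The first modification reported is replacing the cross-entropy adversarial loss with a hinge loss. A sketch of both objectives on raw discriminator scores, using NumPy for clarity (the function names are ours, and this is the standard hinge-GAN formulation rather than the authors' exact code):

```python
import numpy as np

def d_loss_hinge(d_real, d_fake):
    # Discriminator hinge loss: push scores on real images above +1
    # and scores on generated images below -1; no penalty beyond the margin.
    return (np.mean(np.maximum(0.0, 1.0 - d_real))
            + np.mean(np.maximum(0.0, 1.0 + d_fake)))

def g_loss_hinge(d_fake):
    # Generator hinge loss: simply raise the discriminator's score on fakes.
    return -np.mean(d_fake)

def d_loss_bce(d_real_logit, d_fake_logit):
    # Baseline cross-entropy discriminator loss on logits, for comparison:
    # -log sigmoid(real) - log(1 - sigmoid(fake)), written in a stable form.
    return (np.mean(np.log1p(np.exp(-d_real_logit)))
            + np.mean(np.log1p(np.exp(d_fake_logit))))
```

Unlike cross-entropy, the hinge loss stops penalizing samples once they clear the ±1 margin, which is often credited with more stable GAN training.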
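Performance is reported as SSIM, which compares luminance, contrast, and structure between the dehazed output and the ground truth. A simplified single-window version (no sliding Gaussian window, so it illustrates the formula rather than reproducing the exact values in the paper):

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM computed over the whole image as one window.
    Standard stabilizing constants: c1 = (0.01*L)^2, c2 = (0.03*L)^2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Identical images score 1.0; the reported gains (0.828 to 0.841 on NYU) are improvements on this scale.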