2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA) - Latest Publications

Learning Dense Correspondence from Synthetic Environments
Mithun Lal, Anthony Paproki, N. Habili, L. Petersson, Olivier Salvado, C. Fookes
2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA). Pub Date: 2022-03-24. DOI: 10.1109/DICTA56598.2022.10034586
Abstract: Estimation of human shape and pose from a single image is a challenging task. It is an even more difficult problem to map the identified human shape onto a 3D human model. Existing methods map manually labelled human pixels in real 2D images onto the 3D surface, which is prone to human error, and the sparsity of available annotated data often leads to sub-optimal results. We propose to solve the problem of data scarcity by training 2D-3D human mapping algorithms using automatically generated synthetic data for which exact and dense 2D-3D correspondence is known. Such a learning strategy using synthetic environments has high generalisation potential towards real-world data. By varying camera parameters, backgrounds and lighting settings, we created precise ground-truth data that covers a wider distribution. We evaluate the performance of models trained on synthetic data using the Common Objects in Context (COCO) dataset and validation framework. Results show that training 2D-3D mapping network models on synthetic data is a viable alternative to using real data.
Citations: 0
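
A minimal, illustrative sketch of the kind of supervision this abstract describes: regressing per-pixel UV coordinates on the 3D body surface from a synthetic render whose dense 2D-3D correspondence is known exactly. This is not the authors' code; the `UVRegressor` network, the tensor shapes and the dummy batch are invented for the example (PyTorch).

```python
# Sketch only: dense 2D-3D supervision from synthetic renders.
# All names and shapes here are hypothetical, not from the paper.
import torch
import torch.nn as nn

class UVRegressor(nn.Module):
    """Toy fully-convolutional network that predicts a 2-channel UV map
    (coordinates on the 3D body surface) for every pixel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 1),  # U and V output channels
        )

    def forward(self, x):
        return self.net(x)

def training_step(model, optimiser, image, uv_gt, fg_mask):
    """One supervised step on a synthetic sample.

    image:   (B, 3, H, W) rendered person image
    uv_gt:   (B, 2, H, W) exact UV coordinates from the renderer
    fg_mask: (B, 1, H, W) 1 where a body pixel is visible, 0 elsewhere
    """
    uv_pred = model(image)
    # Supervise only pixels that actually lie on the body surface.
    loss = ((uv_pred - uv_gt).abs() * fg_mask).sum() / fg_mask.sum().clamp(min=1)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

model = UVRegressor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
# Dummy batch standing in for a synthetic render with dense correspondence.
img = torch.rand(2, 3, 64, 64)
uv = torch.rand(2, 2, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(training_step(model, optimiser, img, uv, mask))
```
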
Adversarial Attacks against a Satellite-borne Multispectral Cloud Detector
Andrew Du, Yee Wei Law, M. Sasdelli, Bo Chen, Ken Clarke, M. Brown, Tat-Jun Chin
2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA). Pub Date: 2021-12-03. DOI: 10.1109/DICTA56598.2022.10034592
Abstract: Data collected by Earth-observing (EO) satellites are often afflicted by cloud cover. Detecting the presence of clouds, which is increasingly done using deep learning, is a crucial preprocessing step in EO applications. In fact, advanced EO satellites perform deep learning-based cloud detection on board and downlink only clear-sky data to save bandwidth. In this paper, we highlight the vulnerability of deep learning-based cloud detection to adversarial attacks. By optimising an adversarial pattern and superimposing it onto a cloudless scene, we bias the neural network into detecting clouds in the scene. Since the input spectra of cloud detectors include non-visible bands, we generated our attacks in the multispectral domain. This opens up the potential of multi-objective attacks, specifically adversarial biasing in the cloud-sensitive bands and visual camouflage in the visible bands. We also investigated mitigation strategies against the attacks.
Citations: 7
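
As a rough illustration of the attack setting described above (not the authors' implementation), the sketch below optimises an additive multispectral pattern that biases a detector towards predicting "cloudy" on a cloud-free patch. The `ToyCloudNet` detector, the 13-band patch size and the perturbation budget `eps` are assumptions made for the example.

```python
# Sketch only: additive adversarial pattern against a toy cloud detector.
# ToyCloudNet, band count and budgets are hypothetical stand-ins.
import torch
import torch.nn.functional as F

class ToyCloudNet(torch.nn.Module):
    """Stand-in for a pretrained multispectral cloud detector: returns a
    per-image probability that the patch contains cloud."""
    def __init__(self, bands=13):
        super().__init__()
        self.conv = torch.nn.Conv2d(bands, 8, 3, padding=1)
        self.head = torch.nn.Linear(8, 1)

    def forward(self, x):
        feat = torch.relu(self.conv(x)).mean(dim=(2, 3))  # global average pool
        return torch.sigmoid(self.head(feat)).squeeze(1)  # shape (B,)

def optimise_cloud_bias(detector, clear_patch, steps=200, lr=0.01, eps=0.05):
    """Optimise an additive pattern that pushes the detector towards 'cloud'
    on a cloud-free multispectral patch of shape (bands, H, W)."""
    pattern = torch.zeros_like(clear_patch, requires_grad=True)
    target = torch.ones(1)  # 1 = "cloudy"
    opt = torch.optim.Adam([pattern], lr=lr)
    for _ in range(steps):
        adv = (clear_patch + pattern).clamp(0, 1).unsqueeze(0)
        loss = F.binary_cross_entropy(detector(adv), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():          # keep the perturbation small so it
            pattern.clamp_(-eps, eps)  # stays inconspicuous in visible bands
    return pattern.detach()

detector = ToyCloudNet()
clear_patch = torch.rand(13, 32, 32)   # fake 13-band cloud-free patch
pattern = optimise_cloud_bias(detector, clear_patch)
print(detector((clear_patch + pattern).clamp(0, 1).unsqueeze(0)).item())
```
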
Spatial Transformer Networks for Curriculum Learning
Fatemeh Azimi, J. Nies, Sebastián M. Palacio, Federico Raue, Jörn Hees, A. Dengel
2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA). Pub Date: 2021-08-22. DOI: 10.1109/DICTA56598.2022.10034595
Abstract: Curriculum learning is a bio-inspired training technique that is widely adopted in machine learning to improve the optimisation and training of neural networks, in terms of convergence rate or obtained accuracy. The main idea in curriculum learning is to start training with simpler tasks and gradually increase the level of difficulty. A natural question is therefore how to determine or generate these simpler tasks. In this work, we take inspiration from Spatial Transformer Networks (STNs) to form an easy-to-hard curriculum. Since STNs have been shown capable of removing clutter from input images and obtaining higher accuracy in image classification tasks, we hypothesise that images processed by STNs can be seen as easier tasks and used for curriculum learning. To this end, we study multiple strategies for shaping the training curriculum using the data generated by STNs. We perform experiments on the cluttered MNIST and Fashion-MNIST datasets; on the former, we obtain an improvement of 3.8 percentage points in classification accuracy over the baseline, indicating that STNs can serve as a tool for generating the easy-to-hard training schedule required for curriculum learning.
Citations: 1
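
The sketch below illustrates one plausible reading of this idea: an STN produces de-cluttered "easy" views, and training batches are annealed from mostly easy to mostly hard. It is not the paper's implementation; `ToySTN`, the linear annealing schedule and the input sizes are assumptions for the example.

```python
# Sketch only: STN-generated easy views mixed into batches on a linear
# easy-to-hard schedule. Names and the schedule are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySTN(nn.Module):
    """Stand-in spatial transformer: predicts an affine transform per image
    and resamples the input, ideally zooming in on the salient region."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, 7), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(8, 6),
        )
        # Initialise the localisation head to the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

def curriculum_batch(stn, images, epoch, total_epochs):
    """Mix STN-processed (easy) and raw cluttered (hard) images, annealing
    linearly from mostly easy to mostly hard over training."""
    p_easy = max(0.0, 1.0 - epoch / total_epochs)
    with torch.no_grad():
        easy = stn(images)  # de-cluttered view of each image
    use_easy = (torch.rand(images.size(0), 1, 1, 1) < p_easy).float()
    return use_easy * easy + (1.0 - use_easy) * images

stn = ToySTN()
batch = torch.rand(8, 1, 60, 60)  # cluttered-MNIST-style input
mixed = curriculum_batch(stn, batch, epoch=2, total_epochs=20)
print(mixed.shape)
```
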