A. Nair, Luoluo Liu, Akshay Rangamani, Peter Chin, M. Bell, T. Tran
{"title":"部分观察图像的无重建深度卷积神经网络","authors":"A. Nair, Luoluo Liu, Akshay Rangamani, Peter Chin, M. Bell, T. Tran","doi":"10.1109/GLOBALSIP.2018.8646498","DOIUrl":null,"url":null,"abstract":"Conventional image discrimination tasks are performed on fully observed images. In challenging real imaging scenarios, where sensing systems are energy demanding or need to operate with limited bandwidth and exposure-time budgets, or defective pixels, where the data collected often suffers from missing information, and this makes the task extremely hard. In this paper, we leverage Convolutional Neural Networks (CNNs) to extract information from partially observed images. While pre-trained CNNs fail significantly even with such a small percentage of the input missing, our proposed framework demonstrates the ability to overcome it after training on fully-observed and partially-observed images at a few observation ratios. We demonstrate that our method is indeed reconstruction-free, retraining-free and generalizable to previously untrained-on observation ratios and it remains effective in two different visual tasks – image classification and object detection. Our framework performs well even for test images with only 10% of pixels available and outperforms the reconstruct-then-classify pipeline in these challenging scenarios for small observation fractions.","PeriodicalId":119131,"journal":{"name":"2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"RECONSTRUCTION-FREE DEEP CONVOLUTIONAL NEURAL NETWORKS FOR PARTIALLY OBSERVED IMAGES\",\"authors\":\"A. Nair, Luoluo Liu, Akshay Rangamani, Peter Chin, M. Bell, T. 
Tran\",\"doi\":\"10.1109/GLOBALSIP.2018.8646498\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Conventional image discrimination tasks are performed on fully observed images. In challenging real imaging scenarios, where sensing systems are energy demanding or need to operate with limited bandwidth and exposure-time budgets, or defective pixels, where the data collected often suffers from missing information, and this makes the task extremely hard. In this paper, we leverage Convolutional Neural Networks (CNNs) to extract information from partially observed images. While pre-trained CNNs fail significantly even with such a small percentage of the input missing, our proposed framework demonstrates the ability to overcome it after training on fully-observed and partially-observed images at a few observation ratios. We demonstrate that our method is indeed reconstruction-free, retraining-free and generalizable to previously untrained-on observation ratios and it remains effective in two different visual tasks – image classification and object detection. 
Our framework performs well even for test images with only 10% of pixels available and outperforms the reconstruct-then-classify pipeline in these challenging scenarios for small observation fractions.\",\"PeriodicalId\":119131,\"journal\":{\"name\":\"2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP)\",\"volume\":\"37 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/GLOBALSIP.2018.8646498\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/GLOBALSIP.2018.8646498","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
RECONSTRUCTION-FREE DEEP CONVOLUTIONAL NEURAL NETWORKS FOR PARTIALLY OBSERVED IMAGES
Conventional image discrimination tasks are performed on fully observed images. In challenging real-world imaging scenarios, however, sensing systems may be energy-constrained, operate under limited bandwidth and exposure-time budgets, or contain defective pixels; the collected data then suffers from missing information, which makes the task extremely hard. In this paper, we leverage Convolutional Neural Networks (CNNs) to extract information from partially observed images. While pre-trained CNNs fail significantly when even a small percentage of the input is missing, our proposed framework overcomes this degradation after training on fully observed and partially observed images at a few observation ratios. We demonstrate that our method is reconstruction-free, retraining-free, and generalizable to previously untrained-on observation ratios, and that it remains effective in two different visual tasks – image classification and object detection. Our framework performs well even for test images with only 10% of pixels available and outperforms the reconstruct-then-classify pipeline in these challenging small-observation-fraction scenarios.
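The partially observed inputs described in the abstract can be simulated by keeping only a random fraction of pixels (the "observation ratio") and zeroing out the rest. The sketch below, which is an assumption about the masking setup rather than the paper's exact procedure (function name `mask_image` and the Bernoulli per-pixel mask are illustrative), shows how such training or test images might be produced:

```python
import numpy as np

def mask_image(image, observation_ratio, rng=None):
    """Keep roughly `observation_ratio` of pixels; zero out the rest.

    Illustrative masking, assuming i.i.d. Bernoulli per-pixel observation
    (the paper's actual sampling scheme may differ).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Per-pixel mask: True with probability `observation_ratio`.
    mask = rng.random(image.shape[:2]) < observation_ratio
    if image.ndim == 3:  # broadcast mask across color channels
        return image * mask[..., None]
    return image * mask

# Example: keep only ~10% of pixels in a synthetic 32x32 grayscale image,
# matching the hardest regime reported in the abstract.
img = np.ones((32, 32))
masked = mask_image(img, 0.10, rng=np.random.default_rng(0))
```

A classifier trained on such masked images at a few observation ratios could then be evaluated directly on partially observed inputs, with no reconstruction step in between.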