Transfer learning between texture classification tasks using Convolutional Neural Networks

Luiz G. Hafemann, Luiz Oliveira, P. Cavalin, R. Sabourin
{"title":"基于卷积神经网络的纹理分类任务间的迁移学习","authors":"Luiz G. Hafemann, Luiz Oliveira, P. Cavalin, R. Sabourin","doi":"10.1109/IJCNN.2015.7280558","DOIUrl":null,"url":null,"abstract":"Convolutional Neural Networks (CNNs) have set the state-of-the-art in many computer vision tasks in recent years. For this type of model, it is common to have millions of parameters to train, commonly requiring large datasets. We investigate a method to transfer learning across different texture classification problems, using CNNs, in order to take advantage of this type of architecture to problems with smaller datasets. We use a Convolutional Neural Network trained on a source dataset (with lots of data) to project the data of a target dataset (with limited data) onto another feature space, and then train a classifier on top of this new representation. Our experiments show that this technique can achieve good results in tasks with small datasets, by leveraging knowledge learned from tasks with larger datasets. Testing the method on the the Brodatz-32 dataset, we achieved an accuracy of 97.04% - superior to models trained with popular texture descriptors, such as Local Binary Patterns and Gabor Filters, and increasing the accuracy by 6 percentage points compared to a CNN trained directly on the Brodatz-32 dataset. We also present a visual analysis of the projected dataset, showing that the data is projected to a space where samples from the same class are clustered together - suggesting that the features learned by the CNN in the source task are relevant for the target task.","PeriodicalId":6539,"journal":{"name":"2015 International Joint Conference on Neural Networks (IJCNN)","volume":"80 1","pages":"1-7"},"PeriodicalIF":0.0000,"publicationDate":"2015-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"31","resultStr":"{\"title\":\"Transfer learning between texture classification tasks using Convolutional Neural Networks\",\"authors\":\"Luiz G. Hafemann, Luiz Oliveira, P. Cavalin, R. Sabourin\",\"doi\":\"10.1109/IJCNN.2015.7280558\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Convolutional Neural Networks (CNNs) have set the state-of-the-art in many computer vision tasks in recent years. For this type of model, it is common to have millions of parameters to train, commonly requiring large datasets. We investigate a method to transfer learning across different texture classification problems, using CNNs, in order to take advantage of this type of architecture to problems with smaller datasets. We use a Convolutional Neural Network trained on a source dataset (with lots of data) to project the data of a target dataset (with limited data) onto another feature space, and then train a classifier on top of this new representation. Our experiments show that this technique can achieve good results in tasks with small datasets, by leveraging knowledge learned from tasks with larger datasets. Testing the method on the the Brodatz-32 dataset, we achieved an accuracy of 97.04% - superior to models trained with popular texture descriptors, such as Local Binary Patterns and Gabor Filters, and increasing the accuracy by 6 percentage points compared to a CNN trained directly on the Brodatz-32 dataset. 
We also present a visual analysis of the projected dataset, showing that the data is projected to a space where samples from the same class are clustered together - suggesting that the features learned by the CNN in the source task are relevant for the target task.\",\"PeriodicalId\":6539,\"journal\":{\"name\":\"2015 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"80 1\",\"pages\":\"1-7\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"31\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.2015.7280558\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2015.7280558","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 31

Abstract

Convolutional Neural Networks (CNNs) have set the state of the art in many computer vision tasks in recent years. Models of this type commonly have millions of parameters to train and therefore require large datasets. We investigate a method to transfer learning across different texture classification problems using CNNs, in order to apply this type of architecture to problems with smaller datasets. We use a Convolutional Neural Network trained on a source dataset (with lots of data) to project the data of a target dataset (with limited data) onto another feature space, and then train a classifier on top of this new representation. Our experiments show that this technique can achieve good results on tasks with small datasets by leveraging knowledge learned from tasks with larger datasets. Testing the method on the Brodatz-32 dataset, we achieved an accuracy of 97.04%, superior to models trained with popular texture descriptors such as Local Binary Patterns and Gabor filters, and 6 percentage points higher than a CNN trained directly on Brodatz-32. We also present a visual analysis of the projected dataset, showing that the data is projected to a space where samples from the same class are clustered together, suggesting that the features learned by the CNN in the source task are relevant for the target task.
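The pipeline described in the abstract (use a CNN trained on a source task to project the target data onto a new feature space, then train a conventional classifier on that representation) can be illustrated with a short sketch. The paper trains its own CNN on a large source texture dataset; the sketch below instead assumes a torchvision ResNet-18 pretrained on ImageNet as a stand-in feature extractor and a linear SVM from scikit-learn as the downstream classifier, with random placeholder tensors in place of real texture patches, so it shows the structure of the approach rather than the authors' exact setup.

```python
# Sketch of CNN-based transfer learning for a small target dataset.
# Assumptions (not from the paper): ResNet-18/ImageNet as the source network,
# LinearSVC as the classifier, random tensors as placeholder texture patches.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Source network with the final classification layer removed, so the forward
# pass returns the penultimate feature vector (512-d for ResNet-18).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

@torch.no_grad()
def project(images: torch.Tensor) -> torch.Tensor:
    """Project a batch of images (N, 3, 224, 224) onto the source feature space."""
    return feature_extractor(images).flatten(1)

# Placeholder target data; in practice these would be preprocessed texture
# patches from the small target dataset (e.g. Brodatz-32, 32 classes).
train_x, train_y = torch.randn(64, 3, 224, 224), torch.randint(0, 32, (64,))
test_x, test_y = torch.randn(16, 3, 224, 224), torch.randint(0, 32, (16,))

# Train a classifier on top of the projected representation and evaluate it.
clf = LinearSVC()
clf.fit(project(train_x).numpy(), train_y.numpy())
preds = clf.predict(project(test_x).numpy())
print("accuracy on placeholder data:", accuracy_score(test_y.numpy(), preds))
```

The backbone stays frozen and only the classifier is fit, which is why the method remains practical when the target dataset is far too small to train a CNN from scratch.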