Jigsaw Self-Supervised Visual Representation Learning: An Applied Comparative Analysis Study

Yomna A. Kawashti, D. Khattab, M. Aref
DOI: 10.1109/MIUCC55081.2022.9781725
Published in: 2022 2nd International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC), 2022-05-08

Abstract

Self-supervised learning has been gaining momentum in the computer vision community as a promising contender to replace supervised learning. It aims to leverage unlabeled data by training a network on a proxy task and then transferring the learned representations to a downstream task. Jigsaw is one of the proxy tasks used to learn better feature representations in self-supervised learning. In this work, we comparatively evaluated the transferability of jigsaw features using different architectures and a different dataset for jigsaw training. The features extracted from each convolutional block were evaluated on a unified downstream task. The best performance was achieved by the shallower AlexNet architecture, whose second block attained the highest transferability with a mean average precision of 36.17. We conclude that this behavior could be attributed to the smaller scale of the dataset we used: features extracted from earlier, shallower blocks transferred better to a dataset from a different domain.
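To illustrate the jigsaw proxy task described above, the sketch below shows the typical data preparation step: an image is split into a 3x3 grid of patches, the patches are shuffled according to a permutation drawn from a fixed set, and the index of that permutation becomes the classification label the network must predict. This is a minimal, hypothetical sketch, not the paper's implementation; the function names (`split_into_patches`, `make_jigsaw_sample`) and the two-permutation set are illustrative assumptions.

```python
import numpy as np

def split_into_patches(img, grid=3):
    """Split a square image array (H, W, C) into grid*grid equal patches,
    ordered row by row."""
    h = img.shape[0] // grid
    w = img.shape[1] // grid
    return [img[r * h:(r + 1) * h, c * w:(c + 1) * w]
            for r in range(grid) for c in range(grid)]

def make_jigsaw_sample(img, permutations, rng):
    """Shuffle the patches by a randomly chosen permutation; the
    permutation's index is the label for the pretext classification task."""
    patches = split_into_patches(img)
    label = int(rng.integers(len(permutations)))
    perm = permutations[label]
    shuffled = [patches[i] for i in perm]
    return shuffled, label

rng = np.random.default_rng(0)
# In practice a fixed set of maximally distant permutations is chosen from
# the 9! possible orderings (e.g. 100 of them); two suffice for this sketch.
perms = [tuple(range(9)), (8, 7, 6, 5, 4, 3, 2, 1, 0)]
img = rng.random((96, 96, 3))   # stand-in for a real training image
shuffled, label = make_jigsaw_sample(img, perms, rng)
```

During pretext training the network sees `shuffled` and is trained to predict `label`; the convolutional blocks learned this way are then frozen and evaluated on the downstream task, as done per-block in the study.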