A Scalable Unsupervised and Back Propagation Free Learning With SACSOM: A Novel Approach to SOM-Based Architectures

Gaurav R. Hirani;Kevin I-Kai Wang;Waleed H. Abdulla
{"title":"A Scalable Unsupervised and Back Propagation Free Learning With SACSOM: A Novel Approach to SOM-Based Architectures","authors":"Gaurav R. Hirani;Kevin I-Kai Wang;Waleed H. Abdulla","doi":"10.1109/TAI.2024.3504479","DOIUrl":null,"url":null,"abstract":"The field of computer vision is predominantly driven by supervised models, which, despite their efficacy, are computationally expensive and often intractable for many applications. Recently, research has expedited alternative avenues such as self-organizing maps (SOM)-based architectures, which offer significant advantages such as tractability, the absence of back-propagation, and feed-forward unsupervised learning. However, these SOM-based approaches frequently suffer from lower accuracy and limited generalization capabilities. To address these shortcomings, we propose a novel model called split and concur SOM (SACSOM). SACSOM overcomes the limitations of closely related SOM-based algorithms by utilizing multiple parallel branches, each equipped with its own SOM modules that process data independently with varying patch sizes. Furthermore, by creating groups of classes and using respective training samples to train independent subbranches in each branch, our approach accommodates datasets with a large number of classes. SACSOM employs a simple yet effective labeling technique requiring minimal labeled samples. The outputs from each branch, filtered by a threshold, contribute to the final prediction. Experimental validation on MNIST-digit, Fashion-MNIST, CIFAR-10, and CIFAR-100 demonstrates that SACSOM achieves competitive accuracy with significantly reduced computation time. Furthermore, it exhibits superior performance and generalization capabilities, even in high-noise scenarios. The weights of the single-layered SACSOM provide meaningful insights into the patch-based learning pattern, enhancing its tractability and making it ideal from the perspective of explainable AI. This study addresses the limitations of current clustering techniques, such as K-means and traditional SOMs, by proposing a lightweight, manageable, and fast architecture that does not require a GPU, making it suitable for low-powered devices.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 4","pages":"955-967"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10768875/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The field of computer vision is predominantly driven by supervised models, which, despite their efficacy, are computationally expensive and often intractable for many applications. Recently, research has increasingly pursued alternative avenues such as self-organizing map (SOM)-based architectures, which offer significant advantages: tractability, the absence of back-propagation, and feed-forward unsupervised learning. However, these SOM-based approaches frequently suffer from lower accuracy and limited generalization capabilities. To address these shortcomings, we propose a novel model called split and concur SOM (SACSOM). SACSOM overcomes the limitations of closely related SOM-based algorithms by utilizing multiple parallel branches, each equipped with its own SOM modules that process data independently with varying patch sizes. Furthermore, by creating groups of classes and using the respective training samples to train independent subbranches within each branch, our approach accommodates datasets with a large number of classes. SACSOM employs a simple yet effective labeling technique that requires minimal labeled samples. The outputs from each branch, filtered by a threshold, contribute to the final prediction. Experimental validation on MNIST-digit, Fashion-MNIST, CIFAR-10, and CIFAR-100 demonstrates that SACSOM achieves competitive accuracy with significantly reduced computation time. Furthermore, it exhibits superior performance and generalization capabilities, even in high-noise scenarios. The weights of the single-layered SACSOM provide meaningful insights into the patch-based learning pattern, enhancing its tractability and making it appealing from the perspective of explainable AI. This study addresses the limitations of current clustering techniques, such as K-means and traditional SOMs, by proposing a lightweight, manageable, and fast architecture that does not require a GPU, making it suitable for low-powered devices.
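To make the described architecture concrete, the sketch below illustrates the general idea of parallel, back-propagation-free SOM branches operating on patches of different sizes, with threshold-gated voting across branches. This is a minimal illustration only: the class and function names (`SOMBranch`, `sacsom_predict`), the hyperparameters, the majority-vote node labeling, and the vote-share threshold are all assumptions for demonstration, not the authors' exact algorithm from the paper.

```python
import numpy as np

class SOMBranch:
    """One SACSOM-style branch: a flat SOM trained on image patches of a
    fixed size. The update rule, labeling scheme, and scoring below are
    illustrative guesses, not the paper's exact formulation."""

    def __init__(self, patch_size, n_nodes=64, lr=0.5, seed=0):
        self.patch_size = patch_size
        rng = np.random.default_rng(seed)
        self.weights = rng.random((n_nodes, patch_size * patch_size))
        self.lr = lr
        self.node_labels = np.full(n_nodes, -1)  # filled from a few labeled samples

    def _patches(self, img):
        # Non-overlapping square patches, flattened to vectors.
        p = self.patch_size
        h, w = img.shape
        return np.array([img[i:i + p, j:j + p].ravel()
                         for i in range(0, h - p + 1, p)
                         for j in range(0, w - p + 1, p)])

    def train(self, images, epochs=1):
        # Feed-forward and back-propagation-free: each patch pulls only its
        # best-matching unit (BMU) toward itself (neighborhood omitted for brevity).
        for _ in range(epochs):
            for img in images:
                for x in self._patches(img):
                    bmu = np.argmin(np.linalg.norm(self.weights - x, axis=1))
                    self.weights[bmu] += self.lr * (x - self.weights[bmu])

    def label_nodes(self, images, labels):
        # Minimal labeling: each node takes the majority label of the
        # labeled patches it wins.
        votes = {}
        for img, y in zip(images, labels):
            for x in self._patches(img):
                bmu = np.argmin(np.linalg.norm(self.weights - x, axis=1))
                votes.setdefault(bmu, []).append(y)
        for node, ys in votes.items():
            self.node_labels[node] = np.bincount(ys).argmax()

    def predict_scores(self, img, n_classes):
        # Per-class vote histogram over the image's patch BMUs.
        scores = np.zeros(n_classes)
        for x in self._patches(img):
            bmu = np.argmin(np.linalg.norm(self.weights - x, axis=1))
            if self.node_labels[bmu] >= 0:
                scores[self.node_labels[bmu]] += 1
        return scores

def sacsom_predict(branches, img, n_classes, threshold=0.3):
    # Only branches whose top vote share clears the threshold contribute;
    # their score vectors are summed for the final prediction.
    total = np.zeros(n_classes)
    for b in branches:
        s = b.predict_scores(img, n_classes)
        if s.sum() > 0 and s.max() / s.sum() >= threshold:
            total += s
    return int(np.argmax(total))
```

A hypothetical usage on 28x28 grayscale images would build branches with different patch sizes, e.g. `branches = [SOMBranch(7), SOMBranch(14)]`, train each with `b.train(train_images)`, label nodes with a small labeled subset via `b.label_nodes(few_images, few_labels)`, and then call `sacsom_predict(branches, test_image, n_classes=10)`. Because each branch trains independently, the branches can run in parallel on CPU, which is consistent with the lightweight, GPU-free design the abstract describes.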