Understanding Vector-Valued Neural Networks and Their Relationship With Real and Hypercomplex-Valued Neural Networks: Incorporating intercorrelation between features into neural networks [Hypercomplex Signal and Image Processing]
Impact Factor 9.4 | CAS Tier 1 (Engineering & Technology) | JCR Q1, Engineering, Electrical & Electronic
Author: Marcos Eduardo Valle
DOI: 10.1109/MSP.2024.3401621
Journal: IEEE Signal Processing Magazine, vol. 41, no. 3, pp. 49–58
Publication date: 2024-08-20 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10640345/
Cited by: 0
Abstract
Despite the many successful applications of deep learning models for multidimensional signal and image processing, most traditional neural networks process data represented by (multidimensional) arrays of real numbers. The intercorrelation between feature channels is usually expected to be learned from the training data, requiring numerous parameters and careful training. In contrast, vector-valued neural networks (referred to as V-nets) are conceived to process arrays of vectors and naturally consider the intercorrelation between feature channels. Consequently, they usually have fewer parameters and often undergo more robust training than traditional neural networks. This article aims to present a broad framework for V-nets. In this context, hypercomplex-valued neural networks are regarded as vector-valued models with additional algebraic properties. Furthermore, this article explains the relationship between vector-valued and traditional neural networks. To be precise, a V-net can be obtained by placing restrictions on a real-valued model to consider the intercorrelation between feature channels. Finally, I show how V-nets, including hypercomplex-valued neural networks, can be implemented in current deep learning libraries as real-valued networks.
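The abstract's closing claim, that hypercomplex-valued networks can be implemented in real-valued libraries by restricting a real-valued model, can be illustrated with the simplest hypercomplex case. The sketch below is my own illustration, not the article's implementation; the function name `complex_as_real_matrix` is a hypothetical helper. It shows that multiplication by a complex weight is exactly a real 2×2 matrix with a shared-parameter block structure, which is the kind of restriction a real-valued layer would carry.

```python
import numpy as np

def complex_as_real_matrix(w: complex) -> np.ndarray:
    """Real 2x2 matrix equivalent to multiplication by the complex weight w.

    A complex layer is thus a real layer whose weight matrix is constrained
    to the block form [[a, -b], [b, a]] -- two parameters instead of four,
    encoding the intercorrelation between the two feature channels.
    """
    a, b = w.real, w.imag
    return np.array([[a, -b],
                     [b,  a]])

w = 2.0 + 3.0j   # complex-valued weight
x = 1.0 - 1.0j   # complex-valued input

# Direct complex product.
y = w * x

# Same product computed as a real-valued matrix-vector operation on the
# stacked real and imaginary parts of x.
y_real = complex_as_real_matrix(w) @ np.array([x.real, x.imag])

print(y)       # (5+1j)
print(y_real)  # [5. 1.]
```

The same idea extends to quaternions and other hypercomplex algebras: the algebra's multiplication table fixes a larger structured real matrix, and a V-net layer is a real-valued layer constrained to that structure.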
About the journal:
IEEE Signal Processing Magazine is a publication that focuses on signal processing research and applications. It publishes tutorial-style articles, columns, and forums covering a wide range of signal processing topics. The magazine aims to provide the research, educational, and professional communities with the latest technical developments, issues, and events in the field, and serves as the main communication platform for the society, addressing matters that concern all members.