{"title":"TensorProjection layer: A tensor-based dimension reduction method in deep neural networks","authors":"Toshinari Morimoto , Su-Yun Huang","doi":"10.1016/j.neucom.2025.131695","DOIUrl":null,"url":null,"abstract":"<div><div>In this study, we propose a dimension reduction method for features with tensor structure, implemented as a neural network layer called the TensorProjection Layer. This layer applies mode-wise linear projections to the input tensor to reduce its dimensionality, with the projection directions treated as trainable parameters optimized during model training.</div><div>The method is particularly useful for image data, serving as an alternative to pooling layers that reduce spatial redundancy. It can also reduce channel dimensions, making it applicable to various forms of tensor compression. While especially effective for image-based tasks, its application is not limited to them, as long as the intermediate representation is a tensor. We also demonstrate its use in multi-channel time-series and language data, showcasing its flexibility across diverse modalities.</div><div>We evaluate the method by replacing specific layers in standard baseline models with the TensorProjection Layer (TPL), across tasks including medical image classification and segmentation, classification of medical time-series signals, and classification of medical abstract texts. Experimental results suggest that, compared to conventional downsampling techniques such as pooling, the proposed layer offers improved generalization performance, making it a promising alternative for feature summarization in diverse neural network architectures.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"658 ","pages":"Article 131695"},"PeriodicalIF":6.5000,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225023677","RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
In this study, we propose a dimension reduction method for features with tensor structure, implemented as a neural network layer called the TensorProjection Layer. This layer applies mode-wise linear projections to the input tensor to reduce its dimensionality, with the projection directions treated as trainable parameters optimized during model training.
The method is particularly useful for image data, serving as an alternative to pooling layers that reduce spatial redundancy. It can also reduce channel dimensions, making it applicable to various forms of tensor compression. While especially effective for image-based tasks, its application is not limited to them, as long as the intermediate representation is a tensor. We also demonstrate its use in multi-channel time-series and language data, showcasing its flexibility across diverse modalities.
We evaluate the method by replacing specific layers in standard baseline models with the TensorProjection Layer (TPL), across tasks including medical image classification and segmentation, classification of medical time-series signals, and classification of medical abstract texts. Experimental results suggest that, compared to conventional downsampling techniques such as pooling, the proposed layer offers improved generalization performance, making it a promising alternative for feature summarization in diverse neural network architectures.
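The core operation the abstract describes, mode-wise linear projection of a feature tensor, can be illustrated with a small NumPy sketch. This is not the authors' implementation: in the paper the projection matrices are trainable parameters optimized during training, whereas here `mode_product`, `tensor_projection`, and the fixed random matrices `U1`/`U2` are illustrative names and values chosen only to show how each mode's dimension is reduced independently.

```python
import numpy as np

def mode_product(tensor, matrix, mode):
    """Multiply `tensor` by `matrix` along axis `mode`.

    For matrix U of shape (d_new, d_mode), the result replaces
    dimension d_mode of the tensor with d_new.
    """
    # Move the target mode to the front and flatten the remaining modes.
    t = np.moveaxis(tensor, mode, 0)
    flat = t.reshape(t.shape[0], -1)              # (d_mode, prod of other dims)
    out = matrix @ flat                           # (d_new,  prod of other dims)
    out = out.reshape((matrix.shape[0],) + t.shape[1:])
    return np.moveaxis(out, 0, mode)              # restore original mode order

def tensor_projection(x, projections):
    """Apply one projection matrix per mode; None leaves a mode untouched."""
    for mode, U in enumerate(projections):
        if U is not None:
            x = mode_product(x, U, mode)
    return x

# Example: compress a 16x16 spatial feature map with 8 channels to 4x4x8,
# projecting height and width while leaving the channel mode intact.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16, 8))
U1 = rng.standard_normal((4, 16))   # height projection (hypothetical values)
U2 = rng.standard_normal((4, 16))   # width projection (hypothetical values)
y = tensor_projection(x, [U1, U2, None])
print(y.shape)  # (4, 4, 8)
```

Because every mode is handled by its own small matrix, the same function can also shrink the channel mode, matching the abstract's claim that the layer applies to tensor compression beyond spatial downsampling.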
About the journal:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.