Title: VFF-Net: Evolving forward–forward algorithms into convolutional neural networks for enhanced computational insights
Authors: Gilha Lee, Jin Shin, Hyun Kim
DOI: 10.1016/j.neunet.2025.107697
Journal: Neural Networks, Vol. 190, Article 107697 (JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 6.0)
Publication date: 2025-06-16 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0893608025005775
Citations: 0
Abstract
In recent years, significant efforts have been made to overcome the limitations inherent in the traditional back-propagation (BP) algorithm, including overfitting, vanishing/exploding gradients, slow convergence, and its black-box nature. To address these limitations, alternatives to BP have been explored, the best known of which is the forward–forward network (FFN). We propose a visual forward–forward network (VFF-Net) that substantially improves FFNs for deeper networks, focusing on performance in convolutional neural network (CNN) training. VFF-Net employs a label-wise noise labeling method and a cosine-similarity-based contrastive loss that operates directly on intermediate features, addressing both the input-information loss and the performance drop caused by the goodness function when FFNs are applied to CNNs. Furthermore, VFF-Net introduces layer grouping, which groups layers with the same number of output channels so that the method can be applied to well-known existing CNN-based models; this reduces the number of minima that must be optimized and, through an ensemble-like training effect, eases the transfer to CNN-based models. On a model consisting of four convolutional layers, VFF-Net reduces the test error by up to 8.31% on CIFAR-10 and 3.80% on CIFAR-100 compared with an FFN model targeting a conventional CNN. Moreover, the fully connected layer-based VFF-Net achieves a test error of 1.70% on the MNIST dataset, outperforming existing BP training. In conclusion, the proposed VFF-Net significantly narrows the performance gap with BP by improving the FFN, and it is flexible enough to be ported to existing CNN-based models.
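The abstract does not give the exact form of the cosine-similarity-based contrastive loss, so the following is only an illustrative sketch of the general idea: intermediate feature vectors with the same label are pulled toward cosine similarity 1, while pairs with different labels are pushed toward −1. All function names, the pairwise formulation, and the squared-error penalty are assumptions for illustration, not the paper's definition.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (plain lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def pairwise_contrastive_loss(features, labels):
    """Toy cosine-similarity contrastive loss over a batch of intermediate
    features: same-label pairs target similarity +1, different-label pairs
    target -1; the squared gap is averaged over all pairs. This is a
    hypothetical stand-in for the loss described in the abstract."""
    loss, pairs = 0.0, 0
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            sim = cosine_similarity(features[i], features[j])
            target = 1.0 if labels[i] == labels[j] else -1.0
            loss += (target - sim) ** 2
            pairs += 1
    return loss / pairs

# Perfectly aligned same-class features incur zero loss;
# mixing in an orthogonal, differently labeled feature raises it.
aligned = pairwise_contrastive_loss([[1.0, 0.0], [2.0, 0.0]], [0, 0])
mixed = pairwise_contrastive_loss([[1.0, 0.0], [2.0, 0.0], [0.0, 1.0]], [0, 0, 1])
```

A loss of this shape is layer-local: it can be minimized from a layer's own activations without back-propagating through later layers, which is the property that makes it a candidate replacement for the goodness function in FFN-style training.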
About the journal
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.