SpikeCLIP: A contrastive language–image pretrained spiking neural network

Impact Factor: 6.0 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Changze Lv, Tianlong Li, Wenhao Liu, Yufei Gu, Jianhan Xu, Cenyuan Zhang, Muling Wu, Xiaoqing Zheng, Xuanjing Huang
DOI: 10.1016/j.neunet.2025.107475
Journal: Neural Networks, Volume 188, Article 107475
Published: 2025-04-19
URL: https://www.sciencedirect.com/science/article/pii/S0893608025003545
Citations: 0

Abstract

Spiking Neural Networks (SNNs) have emerged as a promising alternative to conventional Artificial Neural Networks (ANNs), demonstrating comparable performance in both visual and linguistic tasks while offering the advantage of improved energy efficiency. Despite these advancements, the integration of linguistic and visual features into a unified representation through spike trains poses a significant challenge, and the application of SNNs to multimodal scenarios remains largely unexplored. This paper presents SpikeCLIP, a novel framework designed to bridge the modality gap in spike-based computation. Our approach employs a two-step recipe: an “alignment pre-training” to align features across modalities, followed by a “dual-loss fine-tuning” to refine the model’s performance. Extensive experiments reveal that SNNs achieve results on par with ANNs while substantially reducing energy consumption across various datasets commonly used for multimodal model evaluation. Furthermore, SpikeCLIP maintains robust image classification capabilities, even when dealing with classes that fall outside predefined categories. This study marks a significant advancement in the development of energy-efficient and biologically plausible multimodal learning systems.
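The abstract does not detail the training objective, but the "alignment pre-training" step presumably builds on the symmetric contrastive (InfoNCE) loss used by CLIP, applied to embeddings decoded from spike trains. The sketch below is a minimal, hypothetical illustration under that assumption: spike-train outputs are rate-decoded into dense vectors (a common SNN readout, not necessarily the paper's), and the standard CLIP loss aligns paired image and text embeddings.

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric CLIP-style contrastive loss over a batch of paired embeddings.

    img_emb, txt_emb: (batch, dim) arrays; row i of each is a matching pair.
    """
    # L2-normalize so dot products become cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (batch, batch) similarity matrix

    # Cross-entropy with the diagonal as the correct pairing, in both directions.
    def xent(l):
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    return 0.5 * (xent(logits) + xent(logits.T))

# Rate-coded stand-in for spike-train embeddings: mean firing rate over T steps.
rng = np.random.default_rng(0)
spikes_img = rng.random((4, 16, 32)) < 0.3      # (batch, T, dim) binary spikes
spikes_txt = rng.random((4, 16, 32)) < 0.3
img_emb = spikes_img.mean(axis=1)               # decode spikes to rate vectors
txt_emb = spikes_txt.mean(axis=1)
loss = info_nce_loss(img_emb, txt_emb)
print(float(loss))
```

In an actual spiking setup the encoders would be trained with surrogate gradients or ANN-to-SNN conversion; this snippet only shows where the contrastive alignment objective sits once spike activity has been reduced to a vector per sample.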


Source journal: Neural Networks (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Average review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. The journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, it aims to encourage the development of biologically inspired artificial intelligence.