A zero-shot high-performance fire detection framework based on large language models
Hongyang Zhao, Yi Liu, Yuhang Han, Xingdong Li, Yanan Guo, Jing Jin
Neurocomputing, Volume 657, Article 131403. Published 2025-09-17. DOI: 10.1016/j.neucom.2025.131403
https://www.sciencedirect.com/science/article/pii/S0925231225020752
Abstract
Fire detection is crucial for minimizing economic damage and safeguarding human lives. Existing methods, including advanced AI and ML techniques, face challenges such as detecting small fires in complex environments and relying on extensive labeled data for training. This paper proposes a novel zero-shot fire detection framework leveraging large language models (LLMs) and contrastive learning-based image–text pre-training models. The framework introduces an enhanced self-attention mechanism for optimizing image embeddings, diverse prompt generation using GPT-3.5 for improved generalization, and a dynamic threshold calculation method based on statistical analysis to enhance detection accuracy and reliability. The proposed method is tested on the public FLAME dataset and a self-collected dataset. Experimental results demonstrate that the proposed method outperforms state-of-the-art models in detecting small fires within complex backgrounds, achieving better detection performance without the need for any training data. This study highlights the potential of zero-shot learning in fire detection and provides a promising solution for real-world fire detection applications.
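The sketch below illustrates the general idea of zero-shot fire detection with a contrastive image–text model, as described in the abstract. It is a minimal illustration, not the authors' implementation: it omits their enhanced self-attention mechanism, uses a few hand-written prompts in place of GPT-3.5-generated ones, and stands in a simple mean-plus-k-standard-deviations rule for their dynamic threshold. Model name, prompt wording, and the `k` parameter are all assumptions.

```python
# Minimal zero-shot fire-detection sketch using a CLIP-style image-text model.
# NOT the paper's method: no enhanced self-attention over image embeddings,
# hand-written prompts instead of GPT-3.5 generation, and a simple
# mean + k*std threshold as a stand-in for the paper's statistical analysis.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Diverse fire / background prompts (the paper generates such variations with an LLM).
fire_prompts = [
    "a photo of a wildfire with visible flames",
    "smoke and fire spreading through a forest",
    "a small fire burning in a complex outdoor scene",
]
background_prompts = [
    "a photo of a forest with no fire",
    "an outdoor scene without flames or smoke",
]

def fire_score(image: Image.Image) -> float:
    """Probability mass the model assigns to the fire prompts for one image."""
    inputs = processor(
        text=fire_prompts + background_prompts,
        images=image, return_tensors="pt", padding=True,
    )
    with torch.no_grad():
        logits = model(**inputs).logits_per_image.squeeze(0)  # shape: (num_prompts,)
    probs = logits.softmax(dim=-1)
    return probs[: len(fire_prompts)].sum().item()

def dynamic_threshold(calibration_scores: list[float], k: float = 2.0) -> float:
    """Illustrative statistical threshold: mean + k * std over fire-free frames."""
    t = torch.tensor(calibration_scores)
    return (t.mean() + k * t.std()).item()

# Usage (hypothetical file paths): calibrate on known fire-free frames,
# then flag any new frame whose fire score exceeds the threshold.
# calib = [fire_score(Image.open(p)) for p in fire_free_paths]
# threshold = dynamic_threshold(calib)
# is_fire = fire_score(Image.open("frame.jpg")) > threshold
```

A per-deployment threshold calibrated on fire-free imagery is one plausible reading of "dynamic threshold calculation based on statistical analysis"; the paper's exact statistics may differ.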
About the journal
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice, and applications.