Enhanced astronomical source classification with integration of attention mechanisms and vision transformers

Impact Factor 1.8 | CAS Q4, Physics & Astrophysics | JCR Q3, ASTRONOMY & ASTROPHYSICS
Srinadh Reddy Bhavanam, Sumohana S. Channappayya, Srijith P. K, Shantanu Desai
{"title":"整合注意力机制和视觉转换器,增强天文源分类能力","authors":"Srinadh Reddy Bhavanam,&nbsp;Sumohana S. Channappayya,&nbsp;Srijith P. K,&nbsp;Shantanu Desai","doi":"10.1007/s10509-024-04357-9","DOIUrl":null,"url":null,"abstract":"<div><p>Accurate classification of celestial objects is essential for advancing our understanding of the universe. MargNet is a recently developed deep learning-based classifier applied to the Sloan Digital Sky Survey (SDSS) Data Release 16 (DR16) dataset to segregate stars, quasars, and compact galaxies using photometric data. MargNet utilizes a stacked architecture, combining a Convolutional Neural Network (CNN) for image modelling and an Artificial Neural Network (ANN) for modelling photometric parameters. Notably, MargNet focuses exclusively on compact galaxies and outperforms other methods in classifying compact galaxies from stars and quasars, even at fainter magnitudes. In this study, we propose enhancing MargNet’s performance by incorporating attention mechanisms and Vision Transformer (ViT)-based models for processing image data. The attention mechanism allows the model to focus on relevant features and capture intricate patterns within images, effectively distinguishing between different classes of celestial objects. Additionally, we leverage ViTs, a transformer-based deep learning architecture renowned for exceptional performance in image classification tasks. We enhance the model’s understanding of complex astronomical images by utilizing ViT’s ability to capture global dependencies and contextual information. Our approach uses a curated dataset comprising 240,000 compact and 150,000 faint objects. The models learn classification directly from the data, minimizing human intervention. Furthermore, we explore ViT as a hybrid architecture that uses photometric features and images together as input to predict astronomical objects. Our results demonstrate that the proposed attention mechanism augmented CNN in MargNet marginally outperforms the traditional MargNet and the proposed ViT-based MargNet models. Additionally, the ViT-based hybrid model emerges as the most lightweight and easy-to-train model with classification accuracy similar to that of the best-performing attention-enhanced MargNet. This advancement in deep learning will contribute to greater success in identifying objects in upcoming surveys like the Vera C. Rubin Large Synoptic Survey Telescope.</p></div>","PeriodicalId":8644,"journal":{"name":"Astrophysics and Space Science","volume":"369 8","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhanced astronomical source classification with integration of attention mechanisms and vision transformers\",\"authors\":\"Srinadh Reddy Bhavanam,&nbsp;Sumohana S. Channappayya,&nbsp;Srijith P. K,&nbsp;Shantanu Desai\",\"doi\":\"10.1007/s10509-024-04357-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Accurate classification of celestial objects is essential for advancing our understanding of the universe. MargNet is a recently developed deep learning-based classifier applied to the Sloan Digital Sky Survey (SDSS) Data Release 16 (DR16) dataset to segregate stars, quasars, and compact galaxies using photometric data. MargNet utilizes a stacked architecture, combining a Convolutional Neural Network (CNN) for image modelling and an Artificial Neural Network (ANN) for modelling photometric parameters. 
Notably, MargNet focuses exclusively on compact galaxies and outperforms other methods in classifying compact galaxies from stars and quasars, even at fainter magnitudes. In this study, we propose enhancing MargNet’s performance by incorporating attention mechanisms and Vision Transformer (ViT)-based models for processing image data. The attention mechanism allows the model to focus on relevant features and capture intricate patterns within images, effectively distinguishing between different classes of celestial objects. Additionally, we leverage ViTs, a transformer-based deep learning architecture renowned for exceptional performance in image classification tasks. We enhance the model’s understanding of complex astronomical images by utilizing ViT’s ability to capture global dependencies and contextual information. Our approach uses a curated dataset comprising 240,000 compact and 150,000 faint objects. The models learn classification directly from the data, minimizing human intervention. Furthermore, we explore ViT as a hybrid architecture that uses photometric features and images together as input to predict astronomical objects. Our results demonstrate that the proposed attention mechanism augmented CNN in MargNet marginally outperforms the traditional MargNet and the proposed ViT-based MargNet models. Additionally, the ViT-based hybrid model emerges as the most lightweight and easy-to-train model with classification accuracy similar to that of the best-performing attention-enhanced MargNet. This advancement in deep learning will contribute to greater success in identifying objects in upcoming surveys like the Vera C. Rubin Large Synoptic Survey Telescope.</p></div>\",\"PeriodicalId\":8644,\"journal\":{\"name\":\"Astrophysics and Space Science\",\"volume\":\"369 8\",\"pages\":\"\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2024-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Astrophysics and Space Science\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10509-024-04357-9\",\"RegionNum\":4,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ASTRONOMY & ASTROPHYSICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Astrophysics and Space Science","FirstCategoryId":"101","ListUrlMain":"https://link.springer.com/article/10.1007/s10509-024-04357-9","RegionNum":4,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ASTRONOMY & ASTROPHYSICS","Score":null,"Total":0}
Citation count: 0

Abstract



Accurate classification of celestial objects is essential for advancing our understanding of the universe. MargNet is a recently developed deep learning-based classifier applied to the Sloan Digital Sky Survey (SDSS) Data Release 16 (DR16) dataset to segregate stars, quasars, and compact galaxies using photometric data. MargNet utilizes a stacked architecture, combining a Convolutional Neural Network (CNN) for image modelling and an Artificial Neural Network (ANN) for modelling photometric parameters. Notably, MargNet focuses exclusively on compact galaxies and outperforms other methods in classifying compact galaxies from stars and quasars, even at fainter magnitudes. In this study, we propose enhancing MargNet’s performance by incorporating attention mechanisms and Vision Transformer (ViT)-based models for processing image data. The attention mechanism allows the model to focus on relevant features and capture intricate patterns within images, effectively distinguishing between different classes of celestial objects. Additionally, we leverage ViTs, a transformer-based deep learning architecture renowned for exceptional performance in image classification tasks. We enhance the model’s understanding of complex astronomical images by utilizing ViT’s ability to capture global dependencies and contextual information. Our approach uses a curated dataset comprising 240,000 compact and 150,000 faint objects. The models learn classification directly from the data, minimizing human intervention. Furthermore, we explore ViT as a hybrid architecture that uses photometric features and images together as input to predict astronomical objects. Our results demonstrate that the proposed attention mechanism augmented CNN in MargNet marginally outperforms the traditional MargNet and the proposed ViT-based MargNet models. Additionally, the ViT-based hybrid model emerges as the most lightweight and easy-to-train model with classification accuracy similar to that of the best-performing attention-enhanced MargNet. This advancement in deep learning will contribute to greater success in identifying objects in upcoming surveys like the Vera C. Rubin Large Synoptic Survey Telescope.
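
To make the stacked hybrid design described in the abstract concrete, the sketch below is a minimal PyTorch illustration, not the authors' released code, of fusing a ViT-style image encoder with a photometric-parameter branch for the three-way star/quasar/galaxy decision. All dimensions, the patch configuration, and the input shapes (five-band cutouts, ten photometric parameters) are illustrative assumptions rather than the configuration reported in the paper.

```python
# Hypothetical sketch of a hybrid ViT-style classifier: a small transformer
# encoder over image patches whose class-token summary is concatenated with
# an embedding of photometric parameters before a 3-way head
# (star / quasar / galaxy). Sizes and layer counts are illustrative only.
import torch
import torch.nn as nn


class HybridViTClassifier(nn.Module):
    def __init__(self, img_size=32, patch_size=8, in_channels=5,
                 n_photometric=10, dim=64, depth=4, heads=4, n_classes=3):
        super().__init__()
        n_patches = (img_size // patch_size) ** 2
        # Patch embedding implemented as a strided convolution.
        self.patch_embed = nn.Conv2d(in_channels, dim,
                                     kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim * 4,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Separate branch for photometric parameters (magnitudes, colours, ...).
        self.photo_mlp = nn.Sequential(
            nn.Linear(n_photometric, dim), nn.GELU(), nn.Linear(dim, dim))
        # Classification head over the fused image + photometry representation.
        self.head = nn.Linear(2 * dim, n_classes)

    def forward(self, images, photometry):
        # images: (B, C, H, W); photometry: (B, n_photometric)
        x = self.patch_embed(images)                    # (B, dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)                # (B, n_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)  # (B, 1, dim)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        img_feat = x[:, 0]                              # class-token summary
        photo_feat = self.photo_mlp(photometry)
        return self.head(torch.cat([img_feat, photo_feat], dim=-1))


if __name__ == "__main__":
    model = HybridViTClassifier()
    imgs = torch.randn(4, 5, 32, 32)   # e.g. five-band (ugriz) cutouts
    photo = torch.randn(4, 10)         # e.g. magnitudes and colours
    logits = model(imgs, photo)        # (4, 3): star / quasar / galaxy
    print(logits.shape)
```

Concatenating the class-token summary with the photometric embedding before the head mirrors the idea of letting image morphology and photometric parameters jointly drive the classification; a real training setup would add data loading, normalisation, and a cross-entropy objective.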

Source journal
Astrophysics and Space Science (Geosciences & Astronomy: Astronomy & Astrophysics)
CiteScore: 3.40
Self-citation rate: 5.30%
Articles per year: 106
Review time: 2-4 weeks
Journal description: Astrophysics and Space Science publishes original contributions and invited reviews covering the entire range of astronomy, astrophysics, astrophysical cosmology, planetary and space science and the astrophysical aspects of astrobiology. This includes both observational and theoretical research, the techniques of astronomical instrumentation and data analysis and astronomical space instrumentation. We particularly welcome papers in the general fields of high-energy astrophysics, astrophysical and astrochemical studies of the interstellar medium including star formation, planetary astrophysics, the formation and evolution of galaxies and the evolution of large scale structure in the Universe. Papers in mathematical physics or in general relativity which do not establish clear astrophysical applications will no longer be considered. The journal also publishes topically selected special issues in research fields of particular scientific interest. These consist of both invited reviews and original research papers. Conference proceedings will not be considered. All papers published in the journal are subject to thorough and strict peer-reviewing. Astrophysics and Space Science features short publication times after acceptance and colour printing free of charge.