Vision Transformers for identifying asteroids interacting with secular resonances

Journal: Icarus (JCR Q2, Astronomy & Astrophysics; Impact Factor 2.5; CAS Zone 2, Physics & Astrophysics)
DOI: 10.1016/j.icarus.2024.116346
Publication date: 2024-10-19
URL: https://www.sciencedirect.com/science/article/pii/S0019103524004068
Citations: 0

Abstract

Currently, more than 1.4 million asteroids are known in the main belt. Future surveys, like those that the Vera C. Rubin Observatory will perform, may increase this number to up to 8 million. While in the past identification of asteroids interacting with secular resonances was performed by a visual analysis of images of resonant arguments, this method is no longer feasible in the age of big data. Deep learning methods based on Convolutional Neural Networks (CNNs) have been used in the recent past to automatically classify databases of several thousands of images of resonant arguments for resonances like the ν6, the g − 2g6 + g5, and the s − s6 − g5 + g6. However, it has been shown that computer vision methods based on the Transformer architecture tend to outperform CNN models if the scale of the image database is large enough. Here, for the first time, we developed a Vision Transformer (ViT) model and applied it to publicly available databases for the three secular resonances quoted above. The ViT architecture outperforms CNN models in speed and accuracy while avoiding overfitting concerns. If hyper-parameter tuning research is undertaken for each analyzed database, ViT models should be preferred over CNN architectures.
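The abstract describes replacing visual inspection (or CNN classification) of resonant-argument plots with a Vision Transformer classifier. As a rough illustration only, the sketch below fine-tunes a pre-trained ViT-B/16 from torchvision on a hypothetical folder of resonant-argument images; the dataset path, class names, epoch count, and other hyper-parameters are assumptions made for this example and are not taken from the paper.

# A minimal fine-tuning sketch, not the authors' implementation: the dataset
# path, class labels, and hyper-parameters below are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Resonant-argument plots are resized to the 224x224 input expected by ViT-B/16
# and normalized with the ImageNet statistics used for the pre-trained weights.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: one sub-folder per label, e.g. "librating"/"circulating".
train_set = datasets.ImageFolder("resonant_arguments/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pre-trained ViT-B/16 backbone with a fresh classification head for the labels.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads = nn.Linear(model.hidden_dim, len(train_set.classes))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # small epoch count, for illustration only
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss = {loss.item():.4f}")

Starting from a pre-trained backbone is one common way to mitigate the data-hunger of Transformers on databases of only a few thousand images, which is the regime the abstract describes; the paper's point that ViT models should be preferred still presupposes hyper-parameter tuning on each database.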
Source journal

Icarus (CAS category: Geosciences & Astronomy - Astronomy & Astrophysics)
CiteScore: 6.30
Self-citation rate: 18.80%
Annual articles: 356
Review time: 2-4 weeks
Journal description: Icarus is devoted to the publication of original contributions in the field of Solar System studies. Manuscripts reporting the results of new research - observational, experimental, or theoretical - concerning the astronomy, geology, meteorology, physics, chemistry, biology, and other scientific aspects of our Solar System or extrasolar systems are welcome. The journal generally does not publish papers devoted exclusively to the Sun, the Earth, celestial mechanics, meteoritics, or astrophysics. Icarus does not publish papers that provide "improved" versions of Bode's law, or other numerical relations, without a sound physical basis. Icarus does not publish meeting announcements or general notices. Reviews, historical papers, and manuscripts describing spacecraft instrumentation may be considered, but only with prior approval of the editor. An entire issue of the journal is occasionally devoted to a single subject, usually arising from a conference on the same topic. The language of publication is English. American or British usage is accepted, but not a mixture of these.