Modeling Images of Proton Events for the TAIGA Project Using a Generative Adversarial Network: Features of the Network Architecture and the Learning Process

J. Dubenskaya, A. Kryukov, A. Demichev
DOI: 10.22323/1.410.0011
Published in: Proceedings of The 5th International Workshop on Deep Learning in Computational Physics — PoS(DLCP2021)
Publication date: 2021-12-03
Citation count: 3

Abstract

High-energy particles interacting with the Earth's atmosphere give rise to extensive air showers emitting Cherenkov light. This light can be detected on the ground by imaging atmospheric Cherenkov telescopes (IACTs). One of the main problems solved during primary processing of experimental data is the separation of signal events (gamma quanta) from the hadronic background, the bulk of which is made up of proton events. To ensure correct gamma event/proton event separation under real conditions, a large amount of experimental data, including model data, is required. Thus, although proton events are considered background, their images are also necessary for accurate registration of gamma quanta. We applied a machine learning method, namely generative adversarial networks (GANs), to generate images of proton events for the TAIGA project. This approach allowed us to significantly increase the speed of image generation. At the same time, testing the results using third-party software showed that over 95% of the generated images are correct and can be used in the experiment. In this article we provide a detailed GAN architecture suitable for generating images of proton events similar to those obtained from IACTs of the TAIGA project. The features of the training process are also discussed, including the number of learning epochs and the selection of appropriate network parameters.
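The GAN setup the abstract describes pairs a generator, which maps random noise to synthetic event images, with a discriminator, which scores images as real or generated. The following is a minimal NumPy sketch of that forward pass only; the layer sizes, the 8x8 toy "camera" shape, and the single-layer networks are illustrative assumptions and do not reflect the actual architecture detailed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
NOISE_DIM, IMG_PIXELS = 16, 64  # hypothetical: 8x8 toy camera image

def init_layer(n_in, n_out):
    # Small random weights and zero biases for a single dense layer
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

G_W, G_b = init_layer(NOISE_DIM, IMG_PIXELS)  # generator parameters
D_W, D_b = init_layer(IMG_PIXELS, 1)          # discriminator parameters

def generator(z):
    # Map a noise batch to fake images with pixel values in [-1, 1]
    return np.tanh(z @ G_W + G_b)

def discriminator(x):
    # Sigmoid output: estimated probability that an image is real
    return 1.0 / (1.0 + np.exp(-(x @ D_W + D_b)))

z = rng.normal(size=(5, NOISE_DIM))  # batch of 5 noise vectors
fake_images = generator(z)           # shape (5, 64)
scores = discriminator(fake_images)  # shape (5, 1), each in (0, 1)
```

In adversarial training, the discriminator is updated to raise its scores on real proton-event images and lower them on generated ones, while the generator is updated to push those scores back up; the training loop and loss functions are omitted here.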