An autoencoder for heterotic orbifolds with arbitrary geometry
Enrique Escalante–Notario, Ignacio Portillo–Castillo, Saúl Ramos–Sánchez
Journal of Physics Communications, published 2024-02-09. DOI: 10.1088/2399-6528/ad246f
Artificial neural networks can be an important tool for improving the search for admissible string compactifications and for characterizing them. In this paper we construct the heterotic orbiencoder, a general deep autoencoder for studying heterotic orbifold models arising from various Abelian orbifold geometries. Our neural network can be easily trained to encode the large parameter spaces of many orbifold geometries simultaneously, independently of the statistical dissimilarities among their training features. In particular, we show that our autoencoder compresses the large parameter space of two promising orbifold geometries into just three parameters with good accuracy. Further, most orbifold models with phenomenologically appealing features appear in bounded regions of this small space. Our results hint at a possible simplification of the classification of (promising) heterotic orbifold models.