{"title":"Symbolic regression as a feature engineering method for machine and deep learning regression tasks","authors":"Assaf Shmuel, Oren Glickman and Teddy Lazebnik","doi":"10.1088/2632-2153/ad513a","DOIUrl":"https://doi.org/10.1088/2632-2153/ad513a","url":null,"abstract":"In the realm of machine and deep learning (DL) regression tasks, the role of effective feature engineering (FE) is pivotal in enhancing model performance. Traditional approaches of FE often rely on domain expertise to manually design features for machine learning (ML) models. In the context of DL models, the FE is embedded in the neural network’s architecture, making it hard for interpretation. In this study, we propose to integrate symbolic regression (SR) as an FE process before a ML model to improve its performance. We show, through extensive experimentation on synthetic and 21 real-world datasets, that the incorporation of SR-derived features significantly enhances the predictive capabilities of both machine and DL regression models with 34%–86% root mean square error (RMSE) improvement in synthetic datasets and 4%–11.5% improvement in real-world datasets. In an additional realistic use case, we show the proposed method improves the ML performance in predicting superconducting critical temperatures based on Eliashberg theory by more than 20% in terms of RMSE. 
These results outline the potential of SR as an FE component in data-driven models, improving them in terms of performance and interpretability.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"49 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141532773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep artificial neural network-powered phase field model for predicting damage characteristic in brittle composite under varying configurations","authors":"Hoang-Quan Nguyen, Ba-Anh Le, Bao-Viet Tran, Thai-Son Vu, Thi-Loan Bui","doi":"10.1088/2632-2153/ad52e8","DOIUrl":"https://doi.org/10.1088/2632-2153/ad52e8","url":null,"abstract":"This work introduces a novel artificial neural network (ANN)-powered phase field model, offering rapid and precise predictions of fracture propagation in brittle materials. To improve the capabilities of the ANN model, we incorporate a loop of conditions into its core to regulate the absolute percentage error for each observation point, that filters and consistently selects the most accurate outcome. This algorithm enables our model to better adapt to the highly sensitive validation data arising from varying configurations. The effectiveness of the approach is illustrated through three examples involving changes in the microgeometry and material properties of steel fiber-reinforced high-strength concrete structures. Indeed, the predicted outcomes from the improved ANN phase field model in terms of stress–strain relationship, and crack propagation path demonstrates an outperformance compared with that based on the extreme gradient boosting method, a leading regression machine learning technique for tabular data. Additionally, the introduced model exhibits a remarkable speed advantage, being 180 times faster than traditional phase field simulations, and provides results at nearly any fiber location, demonstrating superiority over the phase field model. 
This study marks a significant advancement in the application of artificial intelligence for accurately predicting crack propagation paths in composite materials, particularly in cases involving the relative positioning of the fiber and initial crack location.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"63 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141531279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
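The "loop of conditions" regulating the per-point absolute percentage error can be read as a selection filter over repeated model runs. The sketch below is a hypothetical reconstruction under that reading (names and data are illustrative, not the authors' code): for each observation point, keep the candidate prediction with the lowest APE against the reference.

```python
def ape(pred, ref):
    # absolute percentage error of one prediction at one observation point
    return abs(pred - ref) / abs(ref)

def select_best(candidates, reference):
    """Keep, for each observation point, the candidate prediction
    (e.g. from repeated ANN trainings) with the lowest APE."""
    best = []
    for i, ref in enumerate(reference):
        preds = [c[i] for c in candidates]
        best.append(min(preds, key=lambda p: ape(p, ref)))
    return best

# Illustrative reference values (e.g. stresses along a strain curve).
reference = [10.0, 20.0, 30.0]
candidates = [
    [9.0, 21.0, 33.0],   # predictions from trial 1
    [10.5, 19.5, 29.0],  # predictions from trial 2
]
best = select_best(candidates, reference)
print(best)
```

The filter consistently assembles the per-point winners across trials, which is one plausible way a conditional loop could "select the most accurate outcome" at each observation point.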
{"title":"The twin peaks of learning neural networks","authors":"Elizaveta Demyanenko, Christoph Feinauer, Enrico M Malatesta, Luca Saglietti","doi":"10.1088/2632-2153/ad524d","DOIUrl":"https://doi.org/10.1088/2632-2153/ad524d","url":null,"abstract":"Recent works demonstrated the existence of a double-descent phenomenon for the generalization error of neural networks, where highly overparameterized models escape overfitting and achieve good test performance, at odds with the standard bias-variance trade-off described by statistical learning theory. In the present work, we explore a link between this phenomenon and the increase of complexity and sensitivity of the function represented by neural networks. In particular, we study the Boolean mean dimension (BMD), a metric developed in the context of Boolean function analysis. Focusing on a simple teacher-student setting for the random feature model, we derive a theoretical analysis based on the replica method that yields an interpretable expression for the BMD, in the high dimensional regime where the number of data points, the number of features, and the input size grow to infinity. We find that, as the degree of overparameterization of the network is increased, the BMD reaches an evident peak at the interpolation threshold, in correspondence with the generalization error peak, and then slowly approaches a low asymptotic value. The same phenomenology is then traced in numerical experiments with different model classes and training setups. 
Moreover, we find empirically that adversarially initialized models tend to show higher BMD values, and that models that are more robust to adversarial attacks exhibit a lower BMD.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"375 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decoding characteristics of key physical properties in silver nanoparticles by attaining centroids for cytotoxicity prediction through data cleansing","authors":"Anjana S Desai, Anindita Bandopadhyaya, Aparna Ashok, Maneesha4*maneesha@dubai.bits-pilani.ac.i, Neeru Bhagat","doi":"10.1088/2632-2153/ad51cb","DOIUrl":"https://doi.org/10.1088/2632-2153/ad51cb","url":null,"abstract":"This research underscores the profound impact of data cleansing, ensuring dataset integrity and providing a structured foundation for unraveling convoluted connections between diverse physical properties and cytotoxicity. As the scientific community delves deeper into this interplay, it becomes clear that precise data purification is a fundamental aspect of investigating parameters within datasets. The study presents the need for data filtration in the background of machine learning (ML) that has widened its horizon into the field of biological application through the amalgamation of predictive systems and algorithms that delve into the intricate characteristics of cytotoxicity of nanoparticles. The reliability and accuracy of models in the ML landscape hinge on the quality of input data, making data cleansing a critical component of the pre-processing pipeline. The main encounter faced here is the lengthy, broad and complex datasets that have to be toned down for further studies. Through a thorough data cleansing process, this study addresses the complexities arising from diverse sources, resulting in a refined dataset. The filtration process employs K-means clustering to derive centroids, revealing the correlation between the physical properties of nanoparticles, viz, concentration, zeta potential, hydrodynamic diameter, morphology, and absorbance wavelength, and cytotoxicity outcomes measured in terms of cell viability. 
The cell lines considered for determining the centroid values that predict the cytotoxicity of silver nanoparticles are human and animal cell lines, categorized as normal and carcinoma types. The objective of the study is to simplify the high-dimensional data for accurate analysis of the parameters that affect the cytotoxicity of silver NPs through centroids.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"21 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141519163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
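The centroid-derivation step can be illustrated with a bare-bones K-means (Lloyd's algorithm) in plain Python. Everything here is a toy stand-in: the two synthetic blobs mimic a "low-toxicity" and a "high-toxicity" regime in a (concentration, cell-viability) plane, and the deterministic farthest-point initialization is a simplification, not anything from the paper.

```python
import random

random.seed(1)

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=50):
    """Lloyd's algorithm with deterministic farthest-point initialization."""
    centroids = [points[0]]
    while len(centroids) < k:
        # next centroid: the point farthest from all chosen centroids
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[j].append(p)
        centroids = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids

# Toy (concentration, % cell viability) measurements: two regimes.
low_tox = [(10 + random.gauss(0, 1), 90 + random.gauss(0, 2)) for _ in range(20)]
high_tox = [(80 + random.gauss(0, 1), 20 + random.gauss(0, 2)) for _ in range(20)]

centroids = sorted(kmeans(low_tox + high_tox, 2))
print(centroids)
```

The two recovered centroids sit near the blob means, which is the sense in which centroids can summarize how a physical property co-varies with viability.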
{"title":"Machine learning inspired models for Hall effects in non-collinear magnets","authors":"Jonathan Kipp, Fabian R Lux, Thorben Pürling, Abigail Morrison, Stefan Blügel, Daniele Pinna, Yuriy Mokrousov","doi":"10.1088/2632-2153/ad51ca","DOIUrl":"https://doi.org/10.1088/2632-2153/ad51ca","url":null,"abstract":"The anomalous Hall effect has been front and center in solid state research and material science for over a century now, and the complex transport phenomena in nontrivial magnetic textures have gained an increasing amount of attention, both in theoretical and experimental studies. However, a clear path forward to capturing the influence of magnetization dynamics on anomalous Hall effect even in smallest frustrated magnets or spatially extended magnetic textures is still intensively sought after. In this work, we present an expansion of the anomalous Hall tensor into symmetrically invariant objects, encoding the magnetic configuration up to arbitrary power of spin. We show that these symmetric invariants can be utilized in conjunction with advanced regularization techniques in order to build models for the electric transport in magnetic textures which are, on one hand, complete with respect to the point group symmetry of the underlying lattice, and on the other hand, depend on a minimal number of order parameters only. Here, using a four-band tight-binding model on a honeycomb lattice, we demonstrate that the developed method can be used to address the importance and properties of higher-order contributions to transverse transport. 
The efficiency and breadth enabled by this method provide an ideal systematic approach to tackle the inherent complexity of response properties of noncollinear magnets, paving the way to the exploration of electric transport in intrinsically frustrated magnets as well as large-scale magnetic textures.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"154 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141531278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reducing training data needs with minimal multilevel machine learning (M3L)","authors":"Stefan Heinen, Danish Khan, Guido Falk von Rudorff, Konstantin Karandashev, Daniel Jose Arismendi Arrieta, Alastair J A Price, Surajit Nandi, Arghya Bhowmik, Kersti Hermansson, O Anatole von Lilienfeld","doi":"10.1088/2632-2153/ad4ae5","DOIUrl":"https://doi.org/10.1088/2632-2153/ad4ae5","url":null,"abstract":"For many machine learning applications in science, data acquisition, not training, is the bottleneck even when avoiding experiments and relying on computation and simulation. Correspondingly, and in order to reduce cost and carbon footprint, training data efficiency is key. We introduce minimal multilevel machine learning (M3L) which optimizes training data set sizes using a loss function at multiple levels of reference data in order to minimize a combination of prediction error with overall training data acquisition costs (as measured by computational wall-times). Numerical evidence has been obtained for calculated atomization energies and electron affinities of thousands of organic molecules at various levels of theory including HF, MP2, DLPNO-CCSD(T), DFHFCABS, PNOMP2F12, and PNOCCSD(T)F12, and treating them with basis sets TZ, cc-pVTZ, and AVTZ-F12. 
Our M3L benchmarks for reaching chemical accuracy in distinct chemical compound sub-spaces indicate substantial computational cost reductions by factors of ∼1.01, 1.1, 3.8, 13.8, and 25.8 when compared to heuristic sub-optimal multilevel machine learning (M2L) for the data sets QM7b, QM9^LCCSD(T), Electrolyte Genome Project, QM9^CCSD(T)_AE, and QM9^CCSD(T)_EA, respectively.
Furthermore, we use M2L to investigate the performance for 76 density functionals when used within multilevel learning and building on the following levels drawn from the hierarchy of Jacob's Ladder: LDA, GGA, mGGA, and hybrid functionals. Within M2L and the molecules considered, mGGAs do not provide any noticeable advantage over GGAs. Among the functionals considered and in combination with LDA, the three ","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"19 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141531280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
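A hedged toy version of the trade-off that multilevel training-set optimization addresses (not the authors' M3L algorithm): given invented per-level sample costs and power-law learning-curve constants, grid-search the per-level training-set sizes that minimize a modeled error under a fixed compute budget.

```python
import itertools
import math

# Hypothetical (cost per training sample, learning-curve prefactor) per level.
# All numbers below are invented for illustration only.
levels = {
    "HF":      (1.0, 5.0),
    "MP2":     (10.0, 2.0),
    "CCSD(T)": (100.0, 1.0),
}
budget = 2000.0
grid = [10, 20, 50, 100, 200, 500, 1000]  # candidate set sizes per level

best = None
for sizes in itertools.product(grid, repeat=len(levels)):
    cost = sum(c * n for (c, _), n in zip(levels.values(), sizes))
    if cost > budget:
        continue  # exceeds the compute budget
    # modeled error: each level's learned correction decays as a / sqrt(n)
    err = sum(a / math.sqrt(n) for (_, a), n in zip(levels.values(), sizes))
    if best is None or err < best[0]:
        best = (err, sizes, cost)

err, sizes, cost = best
print(f"sizes per level: {sizes}, modeled error: {err:.3f}, cost: {cost:.0f}")
```

Under this toy model the optimum buys many cheap low-level samples and few expensive high-level ones, the qualitative behavior that makes multilevel training data efficient.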
{"title":"Incorporating background knowledge in symbolic regression using a computer algebra system","authors":"Charles Fox, Neil D Tran, F Nikki Nacion, Samiha Sharlin and Tyler R Josephson","doi":"10.1088/2632-2153/ad4a1e","DOIUrl":"https://doi.org/10.1088/2632-2153/ad4a1e","url":null,"abstract":"Symbolic regression (SR) can generate interpretable, concise expressions that fit a given dataset, allowing for more human understanding of the structure than black-box approaches. The addition of background knowledge (in the form of symbolic mathematical constraints) allows for the generation of expressions that are meaningful with respect to theory while also being consistent with data. We specifically examine the addition of constraints to traditional genetic algorithm (GA) based SR (PySR) as well as a Markov-chain Monte Carlo (MCMC) based Bayesian SR architecture (Bayesian Machine Scientist), and apply these to rediscovering adsorption equations from experimental, historical datasets. We find that, while hard constraints prevent GA and MCMC SR from searching, soft constraints can lead to improved performance both in terms of search effectiveness and model meaningfulness, with computational costs increasing by about an order of magnitude. If the constraints do not correlate well with the dataset or expected models, they can hinder the search of expressions. 
We find that incorporating these constraints in Bayesian SR (as the Bayesian prior) works better than modifying the fitness function in the GA.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"26 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141258806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
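The soft-constraint idea (in its GA fitness-function form) can be sketched as follows. This is a schematic stand-in, not PySR's or the Bayesian Machine Scientist's actual machinery: candidate expressions are scored by data misfit plus a penalty for violating a piece of background knowledge, here that adsorption is non-decreasing in pressure.

```python
# Target data from a Langmuir-like isotherm q = p / (1 + p).
data = [(p / 10, (p / 10) / (1 + p / 10)) for p in range(1, 21)]

def mse(f):
    return sum((f(p) - q) ** 2 for p, q in data) / len(data)

def monotone_penalty(f):
    # Background knowledge: adsorbed amount is non-decreasing in pressure.
    grid = [i / 10 for i in range(1, 50)]
    return sum(max(0.0, f(a) - f(b)) for a, b in zip(grid, grid[1:]))

def fitness(f, lam=1.0):
    # Soft constraint: violations are penalized, not outright rejected.
    return mse(f) + lam * monotone_penalty(f)

# A tiny stand-in for a GA population of symbolic expressions.
candidates = {
    "p/(1+p)": lambda p: p / (1 + p),
    "0.5*p":   lambda p: 0.5 * p,
    "1-p":     lambda p: 1 - p,  # violates monotonicity: penalized
}
scores = {name: fitness(f) for name, f in candidates.items()}
best = min(scores, key=scores.get)
print(best, scores)
```

A hard constraint would delete "1-p" from the population entirely; the soft version merely ranks it down, which is why it can guide the search without blocking it.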
{"title":"GPU optimization techniques to accelerate optiGAN-a particle simulation GAN.","authors":"Anirudh Srikanth, Carlotta Trigila, Emilie Roncali","doi":"10.1088/2632-2153/ad51c9","DOIUrl":"10.1088/2632-2153/ad51c9","url":null,"abstract":"<p><p>The demand for specialized hardware to train AI models has increased in tandem with the increase in the model complexity over the recent years. Graphics processing unit (GPU) is one such hardware that is capable of parallelizing operations performed on a large chunk of data. Companies like Nvidia, AMD, and Google have been constantly scaling-up the hardware performance as fast as they can. Nevertheless, there is still a gap between the required processing power and processing capacity of the hardware. To increase the hardware utilization, the software has to be optimized too. In this paper, we present some general GPU optimization techniques we used to efficiently train the optiGAN model, a Generative Adversarial Network that is capable of generating multidimensional probability distributions of optical photons at the photodetector face in radiation detectors, on an 8GB Nvidia Quadro RTX 4000 GPU. We analyze and compare the performances of all the optimizations based on the execution time and the memory consumed using the Nvidia Nsight Systems profiler tool. The optimizations gave approximately a 4.5x increase in the runtime performance when compared to a naive training on the GPU, without compromising the model performance. 
Finally, we discuss optiGAN's future work and how we plan to scale the model on GPUs.</p>","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"5 2","pages":"027001"},"PeriodicalIF":6.3,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11170465/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141331906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transformer-powered surrogates close the ICF simulation-experiment gap with extremely limited data","authors":"Matthew L Olson, Shusen Liu, Jayaraman J Thiagarajan, Bogdan Kustowski, Weng-Keen Wong and Rushil Anirudh","doi":"10.1088/2632-2153/ad4e03","DOIUrl":"https://doi.org/10.1088/2632-2153/ad4e03","url":null,"abstract":"Recent advances in machine learning, specifically transformer architecture, have led to significant advancements in commercial domains. These powerful models have demonstrated superior capability to learn complex relationships and often generalize better to new data and problems. This paper presents a novel transformer-powered approach for enhancing prediction accuracy in multi-modal output scenarios, where sparse experimental data is supplemented with simulation data. The proposed approach integrates transformer-based architecture with a novel graph-based hyper-parameter optimization technique. The resulting system not only effectively reduces simulation bias, but also achieves superior prediction accuracy compared to the prior method. We demonstrate the efficacy of our approach on inertial confinement fusion experiments, where only 10 shots of real-world data are available, as well as synthetic versions of these experiments.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"31 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141195642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autoencoders for discovering manifold dimension and coordinates in data from complex dynamical systems","authors":"Kevin Zeng, Carlos E Pérez De Jesús, Andrew J Fox and Michael D Graham","doi":"10.1088/2632-2153/ad4ba5","DOIUrl":"https://doi.org/10.1088/2632-2153/ad4ba5","url":null,"abstract":"While many phenomena in physics and engineering are formally high-dimensional, their long-time dynamics often live on a lower-dimensional manifold. The present work introduces an autoencoder framework that combines implicit regularization with internal linear layers and L2 regularization (weight decay) to automatically estimate the underlying dimensionality of a data set, produce an orthogonal manifold coordinate system, and provide the mapping functions between the ambient space and manifold space, allowing for out-of-sample projections. We validate our framework’s ability to estimate the manifold dimension for a series of datasets from dynamical systems of varying complexities and compare to other state-of-the-art estimators. We analyze the training dynamics of the network to glean insight into the mechanism of low-rank learning and find that collectively each of the implicit regularizing layers compound the low-rank representation and even self-correct during training. Analysis of gradient descent dynamics for this architecture in the linear case reveals the role of the internal linear layers in leading to faster decay of a ‘collective weight variable’ incorporating all layers, and the role of weight decay in breaking degeneracies and thus driving convergence along directions in which no decay would occur in its absence. We show that this framework can be naturally extended for applications of state-space modeling and forecasting by generating a data-driven dynamic model of a spatiotemporally chaotic partial differential equation using only the manifold coordinates. 
Finally, we demonstrate that our framework is robust to hyperparameter choices.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"9 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141195813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
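As a much simpler linear stand-in for what the autoencoder framework does, one can estimate the dimensionality of data lying on a low-dimensional manifold by counting the dominant eigenvalues of its covariance (here via power iteration with deflation, in plain Python). The toy data below lie on a 1-D linear manifold embedded in 3-D; this illustrates dimension estimation only, not the nonlinear machinery of the paper.

```python
import random

random.seed(0)

# Toy data set on a 1-D linear manifold embedded in 3-D (plus tiny noise).
pts = []
for _ in range(200):
    t = random.uniform(-1, 1)
    pts.append([2 * t, -t, 0.5 * t + random.gauss(0, 0.01)])

n, d = len(pts), 3
mean = [sum(p[i] for p in pts) / n for i in range(d)]
# 3x3 sample covariance matrix
C = [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in pts) / n
      for j in range(d)] for i in range(d)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(d)) for i in range(d)]

def power_iter(M, iters=500):
    # leading eigenpair by power iteration
    v = [1.0, 0.3, -0.7]
    for _ in range(iters):
        w = matvec(M, v)
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0.0:
            break
        v = [x / norm for x in w]
    lam = sum(vi * wi for vi, wi in zip(v, matvec(M, v)))
    return lam, v

eigs, M = [], C
for _ in range(d):
    lam, v = power_iter(M)
    eigs.append(lam)
    # deflate: subtract the found rank-1 component
    M = [[M[i][j] - lam * v[i] * v[j] for j in range(d)] for i in range(d)]

# estimated manifold dimension = number of non-negligible eigenvalues
dim = sum(e > 1e-3 * max(eigs) for e in eigs)
print(eigs, dim)
```

One eigenvalue dominates and the spectral count recovers the manifold dimension of 1; the paper's framework generalizes this idea to curved manifolds through the learned low-rank structure of the autoencoder.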