{"title":"Synergizing human expertise and AI efficiency with language model for microscopy operation and automated experiment design *","authors":"Yongtao Liu, Marti Checa and Rama K Vasudevan","doi":"10.1088/2632-2153/ad52e9","DOIUrl":"https://doi.org/10.1088/2632-2153/ad52e9","url":null,"abstract":"With the advent of large language models (LLMs), in both the open source and proprietary domains, attention is turning to how to exploit such artificial intelligence (AI) systems in assisting complex scientific tasks, such as material synthesis, characterization, analysis and discovery. Here, we explore the utility of LLMs, particularly ChatGPT4, in combination with application program interfaces (APIs) in tasks of experimental design, programming workflows, and data analysis in scanning probe microscopy, using both in-house developed APIs and APIs given by a commercial vendor for instrument control. We find that the LLM can be especially useful in converting ideations of experimental workflows to executable code on microscope APIs. Beyond code generation, we find that the GPT4 is capable of analyzing microscopy images in a generic sense. At the same time, we find that GPT4 suffers from an inability to extend beyond basic analyses for more in-depth technical experimental design. We argue that an LLM specifically fine-tuned for individual scientific domains can potentially be a better language interface for converting scientific ideations from human experts to executable workflows. Such a synergy between human expertise and LLM efficiency in experimentation can open new doors for accelerating scientific research, enabling effective experimental protocols sharing in the scientific community.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":null,"pages":null},"PeriodicalIF":6.8,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141519161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating the ability of PINNs to solve Burgers’ PDE near finite-time blowup","authors":"Dibyakanti Kumar, Anirbit Mukherjee","doi":"10.1088/2632-2153/ad51cd","DOIUrl":"https://doi.org/10.1088/2632-2153/ad51cd","url":null,"abstract":"Physics Informed Neural Networks (PINNs) have been achieving ever newer feats of solving complicated Partial Differential Equations (PDEs) numerically while offering an attractive trade-off between accuracy and speed of inference. A particularly challenging aspect of PDEs is that there exist simple PDEs which can evolve into singular solutions in finite time starting from smooth initial conditions. In recent times some striking experiments have suggested that PINNs might be good at even detecting such finite-time blow-ups. In this work, we embark on a program to investigate this stability of PINNs from a rigorous theoretical viewpoint. Firstly, we derive error bounds for PINNs for Burgers’ PDE, in arbitrary dimensions, under conditions that allow for a finite-time blow-up. Our bounds give a theoretical justification for the functional regularization terms that have been reported to be useful for training PINNs near finite-time blow-up. Then we demonstrate via experiments that our bounds are significantly correlated to the <inline-formula>\u0000<tex-math><?CDATA $ell_2$?></tex-math>\u0000<mml:math overflow=\"scroll\"><mml:mrow><mml:msub><mml:mi>ℓ</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:mrow></mml:math>\u0000<inline-graphic xlink:href=\"mlstad51cdieqn1.gif\" xlink:type=\"simple\"></inline-graphic>\u0000</inline-formula>-distance of the neurally found surrogate from the true blow-up solution, when computed on sequences of PDEs that are getting increasingly close to a blow-up.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":null,"pages":null},"PeriodicalIF":6.8,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141519162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A quantum inspired approach to learning dynamical laws from data—block-sparsity and gauge-mediated weight sharing","authors":"J Fuksa, M Götte, I Roth, J Eisert","doi":"10.1088/2632-2153/ad4f4e","DOIUrl":"https://doi.org/10.1088/2632-2153/ad4f4e","url":null,"abstract":"Recent years have witnessed an increased interest in recovering dynamical laws of complex systems in a largely data-driven fashion under meaningful hypotheses. In this work, we propose a scalable and numerically robust method for this task, utilizing efficient block-sparse tensor train representations of dynamical laws, inspired by similar approaches in quantum many-body systems. Low-rank tensor train representations have been previously derived for dynamical laws of one-dimensional systems. We extend this result to efficient representations of systems with <italic toggle=\"yes\">K</italic>-mode interactions and controlled approximations of systems with decaying interactions. We further argue that natural structure assumptions on dynamical laws, such as bounded polynomial degrees, can be exploited in the form of block-sparse support patterns of tensor-train cores. Additional structural similarities between interactions of certain modes can be accounted for by weight sharing within the ansatz. To make use of these structure assumptions, we propose a novel optimization algorithm, block-sparsity restricted alternating least squares with gauge-mediated weight sharing. The algorithm is inspired by similar notions in machine learning and achieves a significant improvement in performance over previous approaches. We demonstrate the performance of the method numerically on three one-dimensional systems—the Fermi–Pasta–Ulam–Tsingou system, rotating magnetic dipoles and point particles interacting via modified Lennard–Jones potentials, observing a highly accurate and noise-robust recovery.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":null,"pages":null},"PeriodicalIF":6.8,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141531277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reinforcement learning pulses for transmon qubit entangling gates","authors":"Ho Nam Nguyen, Felix Motzoi, Mekena Metcalf, K Birgitta Whaley, Marin Bukov and Markus Schmitt","doi":"10.1088/2632-2153/ad4f4d","DOIUrl":"https://doi.org/10.1088/2632-2153/ad4f4d","url":null,"abstract":"The utility of a quantum computer is highly dependent on the ability to reliably perform accurate quantum logic operations. For finding optimal control solutions, it is of particular interest to explore model-free approaches, since their quality is not constrained by the limited accuracy of theoretical models for the quantum processor—in contrast to many established gate implementation strategies. In this work, we utilize a continuous control reinforcement learning algorithm to design entangling two-qubit gates for superconducting qubits; specifically, our agent constructs cross-resonance and CNOT gates without any prior information about the physical system. Using a simulated environment of fixed-frequency fixed-coupling transmon qubits, we demonstrate the capability to generate novel pulse sequences that outperform the standard cross-resonance gates in both fidelity and gate duration, while maintaining a comparable susceptibility to stochastic unitary noise. We further showcase an augmentation in training and input information that allows our agent to adapt its pulse design abilities to drifting hardware characteristics, importantly, with little to no additional optimization. Our results exhibit clearly the advantages of unbiased adaptive-feedback learning-based optimization methods for transmon gate design.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":null,"pages":null},"PeriodicalIF":6.8,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Symbolic regression as a feature engineering method for machine and deep learning regression tasks","authors":"Assaf Shmuel, Oren Glickman and Teddy Lazebnik","doi":"10.1088/2632-2153/ad513a","DOIUrl":"https://doi.org/10.1088/2632-2153/ad513a","url":null,"abstract":"In the realm of machine and deep learning (DL) regression tasks, the role of effective feature engineering (FE) is pivotal in enhancing model performance. Traditional approaches of FE often rely on domain expertise to manually design features for machine learning (ML) models. In the context of DL models, the FE is embedded in the neural network’s architecture, making it hard for interpretation. In this study, we propose to integrate symbolic regression (SR) as an FE process before a ML model to improve its performance. We show, through extensive experimentation on synthetic and 21 real-world datasets, that the incorporation of SR-derived features significantly enhances the predictive capabilities of both machine and DL regression models with 34%–86% root mean square error (RMSE) improvement in synthetic datasets and 4%–11.5% improvement in real-world datasets. In an additional realistic use case, we show the proposed method improves the ML performance in predicting superconducting critical temperatures based on Eliashberg theory by more than 20% in terms of RMSE. These results outline the potential of SR as an FE component in data-driven models, improving them in terms of performance and interpretability.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":null,"pages":null},"PeriodicalIF":6.8,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141532773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep artificial neural network-powered phase field model for predicting damage characteristic in brittle composite under varying configurations","authors":"Hoang-Quan Nguyen, Ba-Anh Le, Bao-Viet Tran, Thai-Son Vu, Thi-Loan Bui","doi":"10.1088/2632-2153/ad52e8","DOIUrl":"https://doi.org/10.1088/2632-2153/ad52e8","url":null,"abstract":"This work introduces a novel artificial neural network (ANN)-powered phase field model, offering rapid and precise predictions of fracture propagation in brittle materials. To improve the capabilities of the ANN model, we incorporate a loop of conditions into its core to regulate the absolute percentage error for each observation point, that filters and consistently selects the most accurate outcome. This algorithm enables our model to better adapt to the highly sensitive validation data arising from varying configurations. The effectiveness of the approach is illustrated through three examples involving changes in the microgeometry and material properties of steel fiber-reinforced high-strength concrete structures. Indeed, the predicted outcomes from the improved ANN phase field model in terms of stress–strain relationship, and crack propagation path demonstrates an outperformance compared with that based on the extreme gradient boosting method, a leading regression machine learning technique for tabular data. Additionally, the introduced model exhibits a remarkable speed advantage, being 180 times faster than traditional phase field simulations, and provides results at nearly any fiber location, demonstrating superiority over the phase field model. This study marks a significant advancement in the application of artificial intelligence for accurately predicting crack propagation paths in composite materials, particularly in cases involving the relative positioning of the fiber and initial crack location.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":null,"pages":null},"PeriodicalIF":6.8,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141531279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The twin peaks of learning neural networks","authors":"Elizaveta Demyanenko, Christoph Feinauer, Enrico M Malatesta, Luca Saglietti","doi":"10.1088/2632-2153/ad524d","DOIUrl":"https://doi.org/10.1088/2632-2153/ad524d","url":null,"abstract":"Recent works demonstrated the existence of a double-descent phenomenon for the generalization error of neural networks, where highly overparameterized models escape overfitting and achieve good test performance, at odds with the standard bias-variance trade-off described by statistical learning theory. In the present work, we explore a link between this phenomenon and the increase of complexity and sensitivity of the function represented by neural networks. In particular, we study the Boolean mean dimension (BMD), a metric developed in the context of Boolean function analysis. Focusing on a simple teacher-student setting for the random feature model, we derive a theoretical analysis based on the replica method that yields an interpretable expression for the BMD, in the high dimensional regime where the number of data points, the number of features, and the input size grow to infinity. We find that, as the degree of overparameterization of the network is increased, the BMD reaches an evident peak at the interpolation threshold, in correspondence with the generalization error peak, and then slowly approaches a low asymptotic value. The same phenomenology is then traced in numerical experiments with different model classes and training setups. Moreover, we find empirically that adversarially initialized models tend to show higher BMD values, and that models that are more robust to adversarial attacks exhibit a lower BMD.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":null,"pages":null},"PeriodicalIF":6.8,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decoding characteristics of key physical properties in silver nanoparticles by attaining centroids for cytotoxicity prediction through data cleansing","authors":"Anjana S Desai, Anindita Bandopadhyaya, Aparna Ashok, Maneesha4*maneesha@dubai.bits-pilani.ac.i, Neeru Bhagat","doi":"10.1088/2632-2153/ad51cb","DOIUrl":"https://doi.org/10.1088/2632-2153/ad51cb","url":null,"abstract":"This research underscores the profound impact of data cleansing, ensuring dataset integrity and providing a structured foundation for unraveling convoluted connections between diverse physical properties and cytotoxicity. As the scientific community delves deeper into this interplay, it becomes clear that precise data purification is a fundamental aspect of investigating parameters within datasets. The study presents the need for data filtration in the background of machine learning (ML) that has widened its horizon into the field of biological application through the amalgamation of predictive systems and algorithms that delve into the intricate characteristics of cytotoxicity of nanoparticles. The reliability and accuracy of models in the ML landscape hinge on the quality of input data, making data cleansing a critical component of the pre-processing pipeline. The main encounter faced here is the lengthy, broad and complex datasets that have to be toned down for further studies. Through a thorough data cleansing process, this study addresses the complexities arising from diverse sources, resulting in a refined dataset. The filtration process employs K-means clustering to derive centroids, revealing the correlation between the physical properties of nanoparticles, viz, concentration, zeta potential, hydrodynamic diameter, morphology, and absorbance wavelength, and cytotoxicity outcomes measured in terms of cell viability. The cell lines considered for determining the centroid values that predicts the cytotoxicity of silver nanoparticles are human and animal cell lines which were categorized as normal and carcinoma type. The objective of the study is to simplify the high-dimensional data for accurate analysis of the parameters that affect the cytotoxicity of silver NPs through centroids.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":null,"pages":null},"PeriodicalIF":6.8,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141519163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine learning inspired models for Hall effects in non-collinear magnets","authors":"Jonathan Kipp, Fabian R Lux, Thorben Pürling, Abigail Morrison, Stefan Blügel, Daniele Pinna, Yuriy Mokrousov","doi":"10.1088/2632-2153/ad51ca","DOIUrl":"https://doi.org/10.1088/2632-2153/ad51ca","url":null,"abstract":"The anomalous Hall effect has been front and center in solid state research and material science for over a century now, and the complex transport phenomena in nontrivial magnetic textures have gained an increasing amount of attention, both in theoretical and experimental studies. However, a clear path forward to capturing the influence of magnetization dynamics on anomalous Hall effect even in smallest frustrated magnets or spatially extended magnetic textures is still intensively sought after. In this work, we present an expansion of the anomalous Hall tensor into symmetrically invariant objects, encoding the magnetic configuration up to arbitrary power of spin. We show that these symmetric invariants can be utilized in conjunction with advanced regularization techniques in order to build models for the electric transport in magnetic textures which are, on one hand, complete with respect to the point group symmetry of the underlying lattice, and on the other hand, depend on a minimal number of order parameters only. Here, using a four-band tight-binding model on a honeycomb lattice, we demonstrate that the developed method can be used to address the importance and properties of higher-order contributions to transverse transport. The efficiency and breadth enabled by this method provides an ideal systematic approach to tackle the inherent complexity of response properties of noncollinear magnets, paving the way to the exploration of electric transport in intrinsically frustrated magnets as well as large-scale magnetic textures.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":null,"pages":null},"PeriodicalIF":6.8,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141531278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reducing training data needs with minimal multilevel machine learning (M3L)","authors":"Stefan Heinen, Danish Khan, Guido Falk von Rudorff, Konstantin Karandashev, Daniel Jose Arismendi Arrieta, Alastair J A Price, Surajit Nandi, Arghya Bhowmik, Kersti Hermansson, O Anatole von Lilienfeld","doi":"10.1088/2632-2153/ad4ae5","DOIUrl":"https://doi.org/10.1088/2632-2153/ad4ae5","url":null,"abstract":"For many machine learning applications in science, data acquisition, not training, is the bottleneck even when avoiding experiments and relying on computation and simulation. Correspondingly, and in order to reduce cost and carbon footprint, training data efficiency is key. We introduce minimal multilevel machine learning (M3L) which optimizes training data set sizes using a loss function at multiple levels of reference data in order to minimize a combination of prediction error with overall training data acquisition costs (as measured by computational wall-times). Numerical evidence has been obtained for calculated atomization energies and electron affinities of thousands of organic molecules at various levels of theory including HF, MP2, DLPNO-CCSD(T), DFHFCABS, PNOMP2F12, and PNOCCSD(T)F12, and treating them with basis sets TZ, cc-pVTZ, and AVTZ-F12. Our M3L benchmarks for reaching chemical accuracy in distinct chemical compound sub-spaces indicate substantial computational cost reductions by factors of ∼1.01, 1.1, 3.8, 13.8, and 25.8 when compared to heuristic sub-optimal multilevel machine learning (M2L) for the data sets QM7b, QM9<inline-formula>\u0000<tex-math><?CDATA $^mathrm{LCCSD(T)}$?></tex-math>\u0000<mml:math overflow=\"scroll\"><mml:mrow><mml:msup><mml:mi></mml:mi><mml:mrow><mml:mi>LCCSD</mml:mi><mml:mo stretchy=\"false\">(</mml:mo><mml:mi mathvariant=\"normal\">T</mml:mi><mml:mo stretchy=\"false\">)</mml:mo></mml:mrow></mml:msup></mml:mrow></mml:math>\u0000<inline-graphic xlink:href=\"mlstad4ae5ieqn1.gif\" xlink:type=\"simple\"></inline-graphic>\u0000</inline-formula>, Electrolyte Genome Project, QM9<inline-formula>\u0000<tex-math><?CDATA $^mathrm{CCSD(T)}_mathrm{AE}$?></tex-math>\u0000<mml:math overflow=\"scroll\"><mml:mrow><mml:msubsup><mml:mi></mml:mi><mml:mrow><mml:mi>AE</mml:mi></mml:mrow><mml:mrow><mml:mi>CCSD</mml:mi><mml:mo stretchy=\"false\">(</mml:mo><mml:mi mathvariant=\"normal\">T</mml:mi><mml:mo stretchy=\"false\">)</mml:mo></mml:mrow></mml:msubsup></mml:mrow></mml:math>\u0000<inline-graphic xlink:href=\"mlstad4ae5ieqn2.gif\" xlink:type=\"simple\"></inline-graphic>\u0000</inline-formula>, and QM9<inline-formula>\u0000<tex-math><?CDATA $^mathrm{CCSD(T)}_mathrm{EA}$?></tex-math>\u0000<mml:math overflow=\"scroll\"><mml:mrow><mml:msubsup><mml:mi></mml:mi><mml:mrow><mml:mi>EA</mml:mi></mml:mrow><mml:mrow><mml:mi>CCSD</mml:mi><mml:mo stretchy=\"false\">(</mml:mo><mml:mi mathvariant=\"normal\">T</mml:mi><mml:mo stretchy=\"false\">)</mml:mo></mml:mrow></mml:msubsup></mml:mrow></mml:math>\u0000<inline-graphic xlink:href=\"mlstad4ae5ieqn3.gif\" xlink:type=\"simple\"></inline-graphic>\u0000</inline-formula>, respectively. Furthermore, we use M2L to investigate the performance for 76 density functionals when used within multilevel learning and building on the following levels drawn from the hierarchy of Jacobs Ladder: LDA, GGA, mGGA, and hybrid functionals. Within M2L and the molecules considered, mGGAs do not provide any noticeable advantage over GGAs. 
Among the functionals considered and in combination with LDA, the three ","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":null,"pages":null},"PeriodicalIF":6.8,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141531280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
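The optimization behind M3L, choosing per-level training-set sizes that trade prediction error against data-acquisition cost, can be mimicked with the toy below; the power-law learning curves, per-label costs, and error target are invented numbers, whereas the paper fits such quantities to real multilevel reference data.

```python
# Toy illustration of the M3L-style trade-off: given assumed power-law learning
# curves for each level's correction model and per-sample reference costs, find
# the training-set sizes that reach a target error at minimal acquisition cost.
import itertools
import numpy as np

levels = ["cheap (e.g. MP2-like)", "mid (e.g. local CC)", "target (e.g. CCSD(T)-like)"]
a = np.array([8.0, 4.0, 2.0])        # learning-curve prefactors in kcal/mol (invented)
b = np.array([0.4, 0.45, 0.5])       # learning-curve decay exponents (invented)
cost = np.array([1.0, 20.0, 400.0])  # wall-time per training label, arbitrary units
target = 1.0                         # "chemical accuracy" target in kcal/mol

def total_error(n):
    """Combine the per-level correction errors in quadrature."""
    return float(np.sqrt(np.sum((a * np.asarray(n, float) ** -b) ** 2)))

grid = [16, 32, 64, 128, 256, 512, 1024, 2048, 4096]
best = None
for n in itertools.product(grid, repeat=3):          # brute-force over set sizes
    if total_error(n) <= target:
        c = float(np.dot(cost, n))
        if best is None or c < best[0]:
            best = (c, n)

c, n = best
print(f"cheapest sizes per level: {dict(zip(levels, n))}  "
      f"cost={c:.0f}  error={total_error(n):.2f} kcal/mol")
```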