Nature Machine Intelligence — Latest Articles

Predicting RNA 3D structure and conformers using a pre-trained secondary structure model and structure-aware attention
IF 23.8 | CAS Tier 1, Computer Science
Nature Machine Intelligence | Pub Date: 2026-04-21 | DOI: 10.1038/s42256-026-01223-x
Authors: Wenkai Wang, Zhenling Peng, Jianyi Yang
Abstract: Determining RNA three-dimensional (3D) structure and conformers remains a grand challenge in structural biology, primarily owing to the scarcity of experimental data, the intrinsic flexibility of RNA molecules, and the limitations of current experimental and computational methods. Here we propose trRosettaRNA2, a deep learning-based end-to-end approach to this problem. Considering the scarcity of RNA 3D structure data, trRosettaRNA2 integrates an auxiliary secondary structure (SS) prior module, pre-trained on extensive SS data, to generate informative base-pairing priors. This module also serves as an independent RNA SS prediction method, trRNA2-SS, and achieves state-of-the-art performance. To enable end-to-end prediction, trRosettaRNA2 uses SS-aware attention to generate RNA 3D structure and conformers (distinct 3D spatial arrangements of the same molecule resulting from its intrinsic flexibility). Rigorous benchmarks demonstrate that trRosettaRNA2 outperforms other RNA 3D structure prediction methods, despite using substantially fewer parameters and computational resources. Notably, its flexibility in leveraging diverse secondary structure inputs provides a pathway to generate accurate 3D structure and explore the RNA conformers. Based on trRosettaRNA2, our group, Yang-Server, was the top automated server for RNA structure prediction in the CASP16 blind test, surpassing AlphaFold 3. This performance highlights that trRosettaRNA2 represents a solid step forward for RNA structure prediction. Application to the ribonuclease P RNA demonstrates that trRosettaRNA2 successfully captures its structural heterogeneity even without requiring experimental data, showing its potential to predict RNA conformational ensembles.
Citations: 0

Linear attention goes global in molecular dynamics
IF 23.9 | CAS Tier 1, Computer Science
Nature Machine Intelligence | Pub Date: 2026-04-21 | DOI: 10.1038/s42256-026-01222-y
Authors: Sheng Gong, Wen Yan
Abstract: A new attention mechanism brings long-range awareness to machine learning force fields with linear cost and preservation of symmetry. The method offers a flexible alternative to existing long-range modules, including fragmentation-based interactions and physics-based long-range fixes.
Citations: 0

Towards a universal model for spin–orbit physics
IF 23.9 | CAS Tier 1, Computer Science
Nature Machine Intelligence | Pub Date: 2026-04-20 | DOI: 10.1038/s42256-026-01221-z
Authors: Atul C. Thakur, Shyue Ping Ong
Abstract: A new machine learning framework predicts the spin–orbit-coupled electronic structure across the periodic table, enabling high-throughput exploration of quantum materials.
Citations: 0

AI economics for the common good
IF 23.9 | CAS Tier 1, Computer Science
Nature Machine Intelligence | Pub Date: 2026-04-17 | DOI: 10.1038/s42256-026-01212-0
Authors: Francesco Fuso Nerini
Citations: 0

Brain-inspired warm-up training with random noise for uncertainty calibration
IF 23.9 | CAS Tier 1, Computer Science
Nature Machine Intelligence | Pub Date: 2026-04-09 | DOI: 10.1038/s42256-026-01215-x
Authors: Jeonghwan Cheon, Se-Bum Paik
Abstract: Uncertainty calibration, the alignment of predictive confidence with accuracy, is essential for the reliable deployment of machine learning systems in real-world applications. However, current models often fail to achieve this goal, generating responses that are overconfident, inaccurate or even fabricated. Here we show that the widely adopted initialization method in deep learning—long regarded as standard practice—is, in fact, a primary source of overconfidence. To address this problem, we introduce a neurodevelopment-inspired warm-up strategy that inherently resolves uncertainty-related issues without requiring pre- or post-processing. In our approach, networks are first briefly trained on random noise and random labels before being exposed to real data. This warm-up phase yields optimal calibration, ensuring that confidence remains well aligned with accuracy throughout subsequent training. Moreover, the resulting networks demonstrate high proficiency in the identification of ‘unknown’ inputs, providing a robust solution for uncertainty calibration in both in-distribution and out-of-distribution contexts. Cheon and Paik show that overconfidence in deep neural networks arises from standard initialization practices, and that brief warm-up training with random noise improves uncertainty calibration and meta-cognitive recognition of unknown inputs.
Open access PDF: https://www.nature.com/articles/s42256-026-01215-x.pdf
Citations: 0
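
The warm-up recipe described above — briefly training on random inputs paired with random labels before any real data — can be sketched in a few lines. The following is a minimal NumPy illustration of the idea, not the authors' implementation; the network, hyperparameters, and the `TinyMLP`/`noise_warmup` names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class TinyMLP:
    """Minimal 2-layer classifier trained with plain SGD (illustrative only)."""
    def __init__(self, d_in, d_hid, n_cls):
        self.W1 = rng.normal(0, 1.0 / np.sqrt(d_in), (d_in, d_hid))
        self.W2 = rng.normal(0, 1.0 / np.sqrt(d_hid), (d_hid, n_cls))

    def forward(self, X):
        self.h = np.tanh(X @ self.W1)
        return softmax(self.h @ self.W2)

    def step(self, X, y, lr=0.1):
        p = self.forward(X)
        p[np.arange(len(y)), y] -= 1          # dL/dlogits for cross-entropy
        g2 = self.h.T @ p / len(y)
        gh = (p @ self.W2.T) * (1 - self.h ** 2)
        g1 = X.T @ gh / len(y)
        self.W1 -= lr * g1
        self.W2 -= lr * g2

def noise_warmup(model, d_in, n_cls, steps=200, batch=32):
    """Warm-up phase: Gaussian-noise inputs paired with random labels."""
    for _ in range(steps):
        X = rng.normal(size=(batch, d_in))
        y = rng.integers(0, n_cls, size=batch)
        model.step(X, y)

model = TinyMLP(d_in=2, d_hid=16, n_cls=2)
noise_warmup(model, d_in=2, n_cls=2)
# After warm-up, confidence on fresh noise should sit near chance (~0.5):
# the network has "learned to be uncertain" before seeing real data.
conf = model.forward(rng.normal(size=(256, 2))).max(axis=1).mean()
```

After this phase, ordinary training on a real labelled dataset would proceed as usual; the claim tested in the paper is that calibration then stays much tighter than with standard initialization alone.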
Learning to be uncertain before learning from data
IF 23.9 | CAS Tier 1, Computer Science
Nature Machine Intelligence | Pub Date: 2026-04-09 | DOI: 10.1038/s42256-026-01205-z
Authors: Takuya Isomura
Abstract: Neural networks may be overconfident before they see real data. By briefly training on random noise, models can learn to be uncertain, leading to better calibration, improved identification of out-of-distribution inputs and thus more reliable predictions.
Citations: 0

Two-dimensional geometric template diffusion for boosting single-sequence protein structure prediction
IF 23.9 | CAS Tier 1, Computer Science
Nature Machine Intelligence | Pub Date: 2026-04-01 | DOI: 10.1038/s42256-026-01210-2
Authors: Xudong Wang, Tong Zhang, Zhen Cui, Xu Guo, Fuyun Wang, Yuanzhi Wang, Xing Cai, Wenming Zheng
Abstract: Protein structure prediction from a single sequence has drawn increasing attention due to the high computational costs associated with obtaining homologous information. Here we propose a two-dimensional geometric template diffusion method, named TDFold, to generate high-quality pairwise geometries (including pairwise distances and orientations). These are subsequently used for accurate and highly efficient three-dimensional protein structure prediction. Given a protein sequence, TDFold infers three-dimensional structure via a network architecture consisting of two stages: two-dimensional geometric template generation and sequence-geometry collaborative learning. TDFold presents three key advantages compared with existing protein language models (for example, ESMFold and OmegaFold) and homology-based methods (for example, AlphaFold2, AlphaFold3 and RoseTTAFold): better single-sequence-based prediction performance, lower resource consumption and higher efficiency in inference. This work demonstrates the model effectiveness on homology-insufficient datasets such as Orphan and Orphan25 and popular CASP benchmarks, introducing an alternative solution for single-sequence protein structure prediction. It also accelerates protein-related research, particularly for resource-limited universities and academic institutions. Wang et al. introduce TDFold, which reformulates 3D protein structure prediction as a 2D image-like diffusion task. Its geometric template diffusion framework offers greater accuracy, speed and efficiency than leading models.
Citations: 0

Predicting new research directions in materials science using large language models and concept graphs
IF 23.9 | CAS Tier 1, Computer Science
Nature Machine Intelligence | Pub Date: 2026-04-01 | DOI: 10.1038/s42256-026-01206-y
Authors: Thomas Marwitz, Alexander Colsmann, Ben Breitung, Christoph Brabec, Christoph Kirchlechner, Eva Blasco, Gabriel Cadilha Marques, Horst Hahn, Michael Hirtz, Pavel A. Levkin, Yolita M. Eggeler, Tobias Schlöder, Pascal Friederich
Abstract: Due to an exponential increase in published research articles, it is impossible for individual scientists to read all publications, even within their own research field. Here we investigate the use of large language models to extract the main concepts and semantic information from scientific abstracts in the domain of materials science to identify links that were not noticed by humans and to suggest inspiring near and/or mid-term future research directions. We show that large language models can extract concepts more efficiently than automated keyword extraction methods to build a concept graph as an abstraction of the scientific literature. A machine learning model is trained to predict emerging combinations of concepts, that is, new research ideas, based on historical data. We demonstrate that integrating semantic concept information leads to increased prediction performance. The applicability of our model is demonstrated in qualitative interviews with domain experts based on individualized model suggestions. We show that the model can inspire materials scientists in their creative thinking process by predicting innovative combinations of concepts that have not yet been investigated. Marwitz et al. demonstrate the use of large language models to build semantic concept graphs from materials science abstracts and train a machine learning model to predict emerging topic combinations from historical data. They show that the model enables experts to find suggestions that can inspire new research.
Open access PDF: https://www.nature.com/articles/s42256-026-01206-y.pdf
Citations: 0
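
The pipeline this abstract describes — extract concepts per paper, link co-occurring concepts into a graph, then score never-linked pairs as candidate research directions — can be illustrated with a toy sketch. The hard-coded papers, concepts, and the shared-neighbour (Jaccard) scorer below are stand-ins: the actual work uses LLM-based concept extraction and a trained prediction model, not this heuristic.

```python
from itertools import combinations
from collections import defaultdict

# Toy "abstracts", each reduced to a set of extracted concepts. In the paper,
# concept extraction is done by a large language model; here it is hard-coded.
papers = [
    {"perovskite", "solar cell", "stability"},
    {"perovskite", "thin film", "deposition"},
    {"solar cell", "thin film", "efficiency"},
    {"machine learning", "stability", "screening"},
]

# Concept graph: an edge links two concepts that co-occur in some abstract.
neighbors = defaultdict(set)
for concepts in papers:
    for a, b in combinations(sorted(concepts), 2):
        neighbors[a].add(b)
        neighbors[b].add(a)

def jaccard(a, b):
    """Shared-neighbour score for a candidate (currently unlinked) pair."""
    na, nb = neighbors[a], neighbors[b]
    return len(na & nb) / len(na | nb) if na | nb else 0.0

# Rank concept pairs that never co-occurred: high scores suggest plausible
# future research combinations, i.e. "emerging" edges in the concept graph.
candidates = [
    (a, b) for a, b in combinations(sorted(neighbors), 2)
    if b not in neighbors[a]
]
ranked = sorted(candidates, key=lambda p: -jaccard(*p))
```

Replacing `jaccard` with a learned model trained on historical snapshots of the graph is, at this level of abstraction, what turns the sketch into the paper's prediction task.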
Machine learning global atomic representations with Euclidean fast attention
IF 23.9 | CAS Tier 1, Computer Science
Nature Machine Intelligence | Pub Date: 2026-03-25 | DOI: 10.1038/s42256-026-01195-y
Authors: J. Thorben Frank, Stefan Chmiela, Klaus-Robert Müller, Oliver T. Unke
Abstract: Long-range correlations are essential across numerous machine learning tasks, especially for data embedded in Euclidean space, where the relative positions and orientations of distant components are often critical for accurate predictions. Self-attention offers a compelling mechanism for capturing these global effects, but its quadratic complexity presents a significant practical limitation. This problem is particularly pronounced in computational chemistry, where the stringent efficiency requirements of machine learning force fields (MLFFs) often preclude accurately modelling long-range interactions. Here, to address this, we introduce Euclidean fast attention (EFA), a linear-scaling attention-like mechanism designed for Euclidean data, which can be easily incorporated into existing model architectures. A core component of EFA is our proposed Euclidean rotary positional encoding, which enables efficient representation of spatial information while preserving essential physical symmetries. We empirically demonstrate that EFA effectively captures diverse long-range effects, enabling EFA-equipped MLFFs to describe challenging chemical interactions for which conventional MLFFs yield incorrect results. Frank et al. introduce Euclidean fast attention, a linear-scaling framework for 3D data. By leveraging Euclidean rotary encodings, the method overcomes the quadratic cost of standard attention to accurately capture long-range effects in physical systems.
Citations: 0
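
The core trick EFA shares with other linear attention methods — replacing softmax attention's N×N score matrix with a factorised kernel form so cost grows linearly in the number of positions — can be shown generically. The sketch below uses a standard elu+1 feature map from the linear-attention literature, not the paper's Euclidean rotary encoding, so it illustrates only the O(N) scaling, not the symmetry-preserving geometry.

```python
import numpy as np

rng = np.random.default_rng(1)

def linear_attention(Q, K, V):
    """O(N) attention via the kernel trick: phi(Q) @ (phi(K)^T V).

    phi is a positive feature map (elu(x) + 1 here), so the result equals
    softmax-free attention with kernel weights phi(q)·phi(k), computed
    without ever materialising the N x N attention matrix.
    """
    phi = lambda X: np.where(X > 0, X + 1.0, np.exp(X))  # elu(x) + 1 > 0
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                # (d, d_v): all key/value pairs summarised once
    Z = Qf @ Kf.sum(axis=0)      # per-query normaliser
    return (Qf @ KV) / Z[:, None]

N, d = 64, 8
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
out = linear_attention(Q, K, V)  # shape (64, 8), identical to the O(N^2) form
```

The `KV` summary is a fixed-size state independent of N, which is what makes global (all-to-all) awareness affordable inside a force field evaluated at every MD step.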
Reverse predictivity for bidirectional comparison of neural networks and biological brains
IF 23.9 | CAS Tier 1, Computer Science
Nature Machine Intelligence | Pub Date: 2026-03-25 | DOI: 10.1038/s42256-026-01204-0
Authors: Sabine Muzellec, Kohitij Kar
Abstract: A major goal in systems neuroscience is to build computational models that capture the primate brain’s internal representations. Standard evaluations of artificial neural networks (ANNs) emphasize forward predictivity—how well model features predict neural responses—without testing whether model representations are themselves predictable from neural activity. Here we develop a diagnostic metric, reverse predictivity, that quantifies how well macaque inferior temporal cortex responses predict ANN unit activations. Using this comparative framework, we reveal a striking asymmetry: models with high forward predictivity (~50% variance explained) often contain units unpredictable from neural activity, reflecting biologically inaccessible dimensions. In contrast, monkey-to-monkey mappings are symmetric, providing an empirical reference point and indicating that the asymmetry reflects genuine representational mismatch. Reverse predictivity enables the identification of ‘common’ ANN units that are shared with the inferior temporal cortex, are behaviourally relevant and generalize across species, and ‘unique’ units lacking such alignment. Influenced by feature dimensionality, training objectives and adversarial robustness, reverse predictivity serves as a conservative diagnostic and comparative tool for guiding next-generation ANNs towards both high task performance and greater biological plausibility. Muzellec and Kar use reverse predictivity to show that only a subset of artificial neural network (ANN) units align with primate brain responses. This reveals a substantial misalignment between ANNs and brains compared with the strong bidirectional alignment observed between two primate brains.
Citations: 0
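
The forward/reverse asymmetry the abstract describes can be reproduced on synthetic data: fit a ridge map in both directions between a "model" representation that carries extra private dimensions and a "neural" one that does not. Everything below (dimensions, noise levels, the `ridge_r2` helper) is an invented toy, not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

def ridge_r2(X, Y, lam=1.0):
    """Fit Y ~ X @ W with ridge regression; return mean R^2 over targets."""
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    resid = Y - X @ W
    ss_res = (resid ** 2).sum(axis=0)
    ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
    return float(1 - (ss_res / ss_tot).mean())

# Toy data: "neural" responses share a low-dimensional signal with the model
# units, but the model also carries dimensions absent from the brain data.
n, k = 500, 5
signal = rng.normal(size=(n, k))
neural = signal @ rng.normal(size=(k, 20)) + 0.1 * rng.normal(size=(n, 20))
model = np.hstack([
    signal @ rng.normal(size=(k, 20)),
    rng.normal(size=(n, 40)),          # biologically inaccessible dimensions
])

forward = ridge_r2(model, neural)   # model features -> neural responses: high
reverse = ridge_r2(neural, model)   # neural responses -> model units: dragged
                                    # down by the 40 private dimensions
```

The toy makes the diagnostic's logic concrete: forward predictivity alone cannot see the private dimensions, while the reverse direction penalises them.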