Latest articles from Nature Machine Intelligence

Deep learning at the forefront of detecting tipping points
Nature Machine Intelligence (IF 18.8, CAS Zone 1, Computer Science) | Pub Date: 2024-12-04 | DOI: 10.1038/s42256-024-00957-w
Smita Deb, Partha Sharathi Dutta
Abstract: A deep learning-based method shows promise in issuing early warnings of rate-induced tipping, of particular interest in anticipating effects due to anthropogenic climate change.
Vol. 6(12), pp. 1433-1434 | Citations: 0

AI in biomaterials discovery: generating self-assembling peptides with resource-efficient deep learning
Nature Machine Intelligence (IF 18.8, CAS Zone 1, Computer Science) | Pub Date: 2024-12-02 | DOI: 10.1038/s42256-024-00936-1
Tianang Leng, Cesar de la Fuente-Nunez
Abstract: Recurrent neural networks are efficient and capable agents for discovering new peptides with strong self-organizing capabilities.
Vol. 6(12), pp. 1429-1430 | Citations: 0

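As a rough illustration of the generative mechanism named above, and nothing more, the sketch below shows how an autoregressive LSTM would emit peptide sequences one amino-acid token at a time. The architecture, vocabulary and sampling loop are placeholders chosen for this sketch, not the authors' model; a real model would first be trained on known self-assembling peptides.

```python
# Sketch of autoregressive peptide generation with a recurrent network.
# The untrained LSTM, vocabulary and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
vocab = ["<start>", "<end>"] + list(AMINO_ACIDS)

class PeptideRNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(len(vocab), hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(vocab))

    def forward(self, tokens, state=None):
        h, state = self.lstm(self.embed(tokens), state)
        return self.head(h), state

@torch.no_grad()
def sample(model, max_len=20):
    token, state, seq = torch.tensor([[0]]), None, []    # 0 = <start>
    for _ in range(max_len):
        logits, state = model(token, state)
        token = torch.multinomial(torch.softmax(logits[0, -1], dim=-1), 1).view(1, 1)
        if token.item() == 1:                             # 1 = <end>
            break
        seq.append(vocab[token.item()])
    return "".join(seq)

model = PeptideRNN()   # in practice, trained on known self-assembling peptides before sampling
print(sample(model))
```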
A plea for caution and guidance about using AI in genomics
Nature Machine Intelligence (IF 18.8, CAS Zone 1, Computer Science) | Pub Date: 2024-11-29 | DOI: 10.1038/s42256-024-00947-y
Mohammad Hosseini, Christopher R. Donohue
Vol. 6(12), pp. 1409-1410 | Citations: 0

Deep learning for predicting rate-induced tipping
Nature Machine Intelligence (IF 18.8, CAS Zone 1, Computer Science) | Pub Date: 2024-11-28 | DOI: 10.1038/s42256-024-00937-0
Yu Huang, Sebastian Bathiany, Peter Ashwin, Niklas Boers
Abstract: Nonlinear dynamical systems exposed to changing forcing values can exhibit catastrophic transitions between distinct states. The phenomenon of critical slowing down can help anticipate such transitions if caused by a bifurcation and if the change in forcing is slow compared with the system’s internal timescale. However, in many real-world situations, these assumptions are not met and transitions can be triggered because the forcing exceeds a critical rate. For instance, the rapid pace of anthropogenic climate change compared with the internal timescales of key Earth system components, like polar ice sheets or the Atlantic Meridional Overturning Circulation, poses significant risk of rate-induced tipping. Moreover, random perturbations may cause some trajectories to cross an unstable boundary whereas others do not—even under the same forcing. Critical-slowing-down-based indicators generally cannot distinguish these cases of noise-induced tipping from no tipping. This severely limits our ability to assess the tipping risks and to predict individual trajectories. To address this, we make the first attempt to develop a deep learning framework predicting the transition probabilities of dynamical systems ahead of rate-induced transitions. Our method issues early warnings, as demonstrated on three prototypical systems for rate-induced tipping subjected to time-varying equilibrium drift and noise perturbations. Exploiting explainable artificial intelligence methods, our framework captures the fingerprints for the early detection of rate-induced tipping, even with long lead times. Our findings demonstrate the predictability of rate-induced and noise-induced tipping, advancing our ability to determine safe operating spaces for a broader class of dynamical systems than possible so far. Rate- and noise-induced transitions pose key tipping risks for ecosystems and climate subsystems, yet no predictive theory existed before. This study introduces deep learning as an effective prediction tool for these tipping events.
Vol. 6(12), pp. 1556-1565 | Open access PDF: https://www.nature.com/articles/s42256-024-00937-0.pdf | Citations: 0

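To make the prediction task in the abstract concrete, here is a minimal, self-contained sketch (not the authors' framework): it simulates noisy trajectories of a prototypical ramped-forcing system, labels which realizations tip, and fits a simple classifier that maps an early observation window to a transition probability. The quadratic normal form, noise level and logistic-regression model are illustrative assumptions; the paper uses a deep learning architecture and explainable-AI analysis.

```python
# Illustrative sketch only: toy rate-induced tipping data and an early-warning classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate(rate, noise=0.05, dt=0.01, steps=2000, lam_max=1.5):
    """Euler-Maruyama integration of dx/dt = -(x - lam)^2 + 0.25 with lam ramped at `rate`."""
    x = 0.5                                    # start on the stable branch x = lam + 0.5 (lam = 0)
    traj = np.empty(steps)
    for t in range(steps):
        lam = min(rate * t * dt, lam_max)      # ramp the forcing, then hold it fixed
        drift = -(x - lam) ** 2 + 0.25
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        x = max(x, -3.0)                       # clamp the post-tipping runaway
        traj[t] = x
    return traj

# Data set: features = first 300 samples (the early-warning window), label = tipped or not.
rates = rng.uniform(0.1, 1.0, size=500)        # spans forcing rates below and above the tipping threshold
trajs = np.array([simulate(r) for r in rates])
X, y = trajs[:, :300], (trajs[:, -1] < -1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print("tipping probability for one unseen trajectory:", clf.predict_proba(X_te[:1])[0, 1])
```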
Self-decoupling three-axis forces in a simple sensor
Nature Machine Intelligence (IF 18.8, CAS Zone 1, Computer Science) | Pub Date: 2024-11-27 | DOI: 10.1038/s42256-024-00941-4
Kuanming Yao, Qiuna Zhuang
Abstract: A self-decoupling tactile sensor dramatically reduces calibration time for three-dimensional force measurement, scaling from cubic (N³) to linear (3N). This advancement facilitates robotic tactile perception in human–machine interfaces.
Vol. 6(12), pp. 1431-1432 | Citations: 0

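The cubic-to-linear claim can be made tangible with a back-of-the-envelope comparison, assuming N means the number of calibration levels per force axis (an assumed reading, not stated in the abstract): a fully coupled three-axis sensor must be calibrated over every (Fx, Fy, Fz) combination, whereas a self-decoupled one only needs each axis swept independently.

```python
# Calibration-effort comparison implied by the N^3 vs 3N scaling in the abstract.
for n in (5, 10, 20):
    coupled, decoupled = n ** 3, 3 * n
    print(f"N={n:2d}: coupled {coupled:5d} calibration points, "
          f"self-decoupled {decoupled:3d} ({coupled / decoupled:.0f}x fewer)")
```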
Multimodal language and graph learning of adsorption configuration in catalysis
Nature Machine Intelligence (IF 18.8, CAS Zone 1, Computer Science) | Pub Date: 2024-11-27 | DOI: 10.1038/s42256-024-00930-7
Janghoon Ock, Srivathsan Badrinarayanan, Rishikesh Magar, Akshay Antony, Amir Barati Farimani
Abstract: Adsorption energy is a reactivity descriptor that must be accurately predicted for effective machine learning application in catalyst screening. This process involves finding the lowest energy among different adsorption configurations on a catalytic surface, which often have very similar energies. Although graph neural networks have shown great success in computing the energy of catalyst systems, they rely heavily on atomic spatial coordinates. By contrast, transformer-based language models can directly use human-readable text inputs, potentially bypassing the need for detailed atomic positions or topology; however, these language models often struggle with accurately predicting the energy of adsorption configurations. Our study improves the predictive language model by aligning its latent space with well-established graph neural networks through a self-supervised process called graph-assisted pretraining. This method reduces the mean absolute error of energy prediction for adsorption configurations by 7.4–9.8%, redirecting the model’s attention towards adsorption configuration. Building on this, we propose using generative large language models to create text inputs for the predictive model without relying on exact atomic positions. This demonstrates a potential use case of language models in energy prediction without detailed geometric information. Ock and colleagues explore predictive and generative language models for improving adsorption energy prediction in catalysis without relying on exact atomic positions. The method involves aligning a language model’s latent space with graph neural networks using graph-assisted pretraining.
Vol. 6(12), pp. 1501-1511 | Citations: 0

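The graph-assisted pretraining described above aligns the language model's latent space with that of a graph neural network. The sketch below shows one way such an alignment objective can look, as a symmetric contrastive (InfoNCE-style) loss over paired text and graph embeddings; the random stand-in embeddings, dimension, temperature and exact loss form are assumptions for illustration, not the paper's implementation.

```python
# Sketch of a latent-space alignment loss between text (LM) and graph (GNN) embeddings.
import torch
import torch.nn.functional as F

def alignment_loss(text_emb: torch.Tensor, graph_emb: torch.Tensor, tau: float = 0.07):
    """text_emb, graph_emb: (batch, dim) latent vectors for the same adsorption configurations."""
    t = F.normalize(text_emb, dim=-1)
    g = F.normalize(graph_emb, dim=-1)
    logits = t @ g.T / tau                    # scaled cosine-similarity matrix
    targets = torch.arange(len(t))            # i-th text should match i-th graph
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

# Toy usage with random tensors standing in for encoder outputs.
text_emb = torch.randn(8, 128, requires_grad=True)
graph_emb = torch.randn(8, 128)
loss = alignment_loss(text_emb, graph_emb)
loss.backward()
print(float(loss))
```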
Contextual feature extraction hierarchies converge in large language models and the brain
Nature Machine Intelligence (IF 18.8, CAS Zone 1, Computer Science) | Pub Date: 2024-11-26 | DOI: 10.1038/s42256-024-00925-4
Gavin Mischler, Yinghao Aaron Li, Stephan Bickel, Ashesh D. Mehta, Nima Mesgarani
Abstract: Recent advancements in artificial intelligence have sparked interest in the parallels between large language models (LLMs) and human neural processing, particularly in language comprehension. Although previous research has demonstrated similarities between LLM representations and neural responses, the computational principles driving this convergence—especially as LLMs evolve—remain elusive. Here we used intracranial electroencephalography recordings from neurosurgical patients listening to speech to investigate the alignment between high-performance LLMs and the language-processing mechanisms of the brain. We examined a diverse selection of LLMs with similar parameter sizes and found that as their performance on benchmark tasks improves, they not only become more brain-like, reflected in better neural response predictions from model embeddings, but they also align more closely with the hierarchical feature extraction pathways of the brain, using fewer layers for the same encoding. Additionally, we identified commonalities in the hierarchical processing mechanisms of high-performing LLMs, revealing their convergence towards similar language-processing strategies. Finally, we demonstrate the critical role of contextual information in both LLM performance and brain alignment. These findings reveal converging aspects of language processing in the brain and LLMs, offering new directions for developing models that better align with human cognitive processing. Why brain-like feature extraction emerges in large language models (LLMs) remains elusive. Mischler, Li and colleagues demonstrate that high-performing LLMs not only predict neural responses more accurately than other LLMs but also align more closely with the hierarchical language processing pathway in the brain, revealing parallels between these models and human cognitive mechanisms.
Vol. 6(12), pp. 1467-1477 | Citations: 0

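The brain-alignment results above rest on encoding models that predict neural responses from layer-wise LLM embeddings. The sketch below illustrates that style of layer-wise analysis with a cross-validated ridge regression on synthetic stand-in data; the array shapes, the regression choice and the fact that layer 7 drives the fake response are assumptions made purely for illustration.

```python
# Layer-wise encoding-model sketch: which "LLM layer" best predicts a neural response?
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_words, dim, n_layers = 500, 64, 12
layer_embeddings = rng.standard_normal((n_layers, n_words, dim))   # stand-in for per-word activations
# Synthetic "electrode" response driven by a few features of layer 7, plus noise.
neural_response = layer_embeddings[7, :, :5].sum(axis=1) + 0.5 * rng.standard_normal(n_words)

scores = []
for layer in range(n_layers):
    model = RidgeCV(alphas=np.logspace(-2, 3, 10))
    r2 = cross_val_score(model, layer_embeddings[layer], neural_response, cv=5).mean()
    scores.append(r2)

best = int(np.argmax(scores))
print(f"best-predicting layer: {best} (R^2 = {scores[best]:.2f})")
```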
Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research
Nature Machine Intelligence (IF 18.8, CAS Zone 1, Computer Science) | Pub Date: 2024-11-26 | DOI: 10.1038/s42256-024-00926-3
Artem A. Trotsyuk, Quinn Waeiss, Raina Talwar Bhatia, Brandon J. Aponte, Isabella M. L. Heffernan, Devika Madgavkar, Ryan Marshall Felder, Lisa Soleymani Lehmann, Megan J. Palmer, Hank Greely, Russell Wald, Lea Goetz, Markus Trengove, Robert Vandersluis, Herbert Lin, Mildred K. Cho, Russ B. Altman, Drew Endy, David A. Relman, Margaret Levi, Debra Satz, David Magnus
Abstract: The rapid advancement of artificial intelligence (AI) in biomedical research presents considerable potential for misuse, including authoritarian surveillance, data misuse, bioweapon development, increase in inequity and abuse of privacy. We propose a multi-pronged framework for researchers to mitigate these risks, looking first to existing ethical frameworks and regulatory measures researchers can adapt to their own work, next to off-the-shelf AI solutions, then to design-specific solutions researchers can build into their AI to mitigate misuse. When researchers remain unable to address the potential for harmful misuse, and the risks outweigh potential benefits, we recommend researchers consider a different approach to answering their research question, or a new research question if the risks remain too great. We apply this framework to three different domains of AI research where misuse is likely to be problematic: (1) AI for drug and chemical discovery; (2) generative models for synthetic data; (3) ambient intelligence. The wide adoption of AI in biomedical research raises concerns about misuse risks. Trotsyuk, Waeiss et al. propose a framework that provides a starting point for researchers to consider how risks specific to their work could be mitigated, using existing ethical frameworks, regulatory measures and off-the-shelf AI solutions.
Vol. 6(12), pp. 1435-1442 | Citations: 0

AI pioneers win 2024 Nobel prizes
Nature Machine Intelligence (IF 18.8, CAS Zone 1, Computer Science) | Pub Date: 2024-11-22 | DOI: 10.1038/s42256-024-00945-0
Abstract: The 2024 Nobel prizes in physics and chemistry highlight the interdisciplinary nature and impact of AI in science.
Vol. 6(11), p. 1271 | Open access PDF: https://www.nature.com/articles/s42256-024-00945-0.pdf | Citations: 0

Machine learning for practical quantum error mitigation
Nature Machine Intelligence (IF 18.8, CAS Zone 1, Computer Science) | Pub Date: 2024-11-22 | DOI: 10.1038/s42256-024-00927-2
Haoran Liao, Derek S. Wang, Iskandar Sitdikov, Ciro Salcedo, Alireza Seif, Zlatko K. Minev
Abstract: Quantum computers have progressed towards outperforming classical supercomputers, but quantum errors remain the primary obstacle. In the past few years, the field of quantum error mitigation has provided strategies for overcoming errors in near-term devices, enabling improved accuracy at the cost of additional run time. Through experiments on state-of-the-art quantum computers using up to 100 qubits, we demonstrate that without sacrificing accuracy, machine learning for quantum error mitigation (ML-QEM) drastically reduces the cost of mitigation. We benchmarked ML-QEM using a variety of machine learning models—linear regression, random forest, multilayer perceptron and graph neural networks—on diverse classes of quantum circuits, over increasingly complex device noise profiles, under interpolation and extrapolation, and in both numerics and experiments. These tests employed the popular digital zero-noise extrapolation method as an added reference. Finally, we propose a path towards scalable mitigation using ML-QEM to mimic traditional mitigation methods with superior runtime efficiency. Our results show that classical machine learning can extend the reach and practicality of quantum error mitigation by reducing its overhead and highlight its broader potential for practical quantum computations. Quantum error mitigation improves the accuracy of quantum computers at a computational overhead. Liao et al. demonstrate that classical machine learning models can deliver accuracy comparable to that of conventional techniques while reducing quantum computational costs.
Vol. 6(12), pp. 1478-1486 | Citations: 0

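As a toy illustration of the ML-QEM idea described above (learning a map from noisy expectation values to ideal ones so that little extra quantum runtime is needed), the sketch below trains a random forest on synthetic data generated from an invented exponential-decay noise model; the circuit feature, noise model and regressor choice are assumptions, not the paper's benchmark setup.

```python
# Toy ML-QEM sketch: regress ideal expectation values from noisy ones plus a circuit feature.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_circuits = 2000
depth = rng.integers(1, 50, n_circuits)                  # toy circuit feature: depth
ideal = rng.uniform(-1, 1, n_circuits)                   # ideal expectation values
noisy = ideal * np.exp(-0.03 * depth) + 0.02 * rng.standard_normal(n_circuits)  # invented decay model

X = np.column_stack([noisy, depth])
X_tr, X_te, y_tr, y_te = train_test_split(X, ideal, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
mitigated = model.predict(X_te)
print("unmitigated MAE:", np.abs(X_te[:, 0] - y_te).mean())
print("ML-mitigated MAE:", np.abs(mitigated - y_te).mean())
```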