Nature Machine Intelligence: Latest Articles

An end-to-end recurrent compressed sensing method to denoise, detect and demix calcium imaging data
IF 23.8, CAS Zone 1, Computer Science
Nature Machine Intelligence Pub Date : 2024-09-19 DOI: 10.1038/s42256-024-00892-w
Kangning Zhang, Sean Tang, Vivian Zhu, Majd Barchini, Weijian Yang
Abstract: Two-photon calcium imaging provides large-scale recordings of neuronal activities at cellular resolution. A robust, automated and high-speed pipeline to simultaneously segment the spatial footprints of neurons and extract their temporal activity traces while decontaminating them from background, noise and overlapping neurons is highly desirable to analyse calcium imaging data. Here we demonstrate DeepCaImX, an end-to-end deep learning method based on an iterative shrinkage-thresholding algorithm and a long short-term memory neural network to achieve the above goals altogether at a very high speed and without any manually tuned hyperparameter. DeepCaImX is a multi-task, multi-class and multi-label segmentation method composed of a compressed sensing-inspired neural network with a recurrent layer and fully connected layers. The neural network can simultaneously generate accurate neuronal footprints and extract clean neuronal activity traces from calcium imaging data. We trained the neural network with simulated datasets and benchmarked it against existing state-of-the-art methods with in vivo experimental data. DeepCaImX outperforms existing methods in the quality of segmentation and temporal trace extraction as well as processing speed. DeepCaImX is highly scalable and will benefit the analysis of mesoscale calcium imaging.
Citations: 0
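To make the architecture sketched in the abstract concrete, the snippet below unrolls a learned iterative shrinkage-thresholding (LISTA-style) block per frame and feeds the resulting sparse codes through an LSTM across frames. All layer sizes, names and the non-negative soft threshold are illustrative assumptions; this is not the published DeepCaImX implementation.

```python
# Minimal sketch (not the published DeepCaImX architecture): an unrolled
# ISTA block per frame plus an LSTM over time. Sizes are illustrative.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """Learned ISTA: x_{k+1} = soft_threshold(W y + S x_k, theta)."""
    def __init__(self, n_pixels, n_sources, n_iters=5):
        super().__init__()
        self.W = nn.Linear(n_pixels, n_sources, bias=False)
        self.S = nn.Linear(n_sources, n_sources, bias=False)
        self.theta = nn.Parameter(torch.full((n_sources,), 0.1))
        self.n_iters = n_iters

    def forward(self, y):                     # y: (batch, n_pixels)
        x = torch.zeros(y.size(0), self.S.in_features, device=y.device)
        for _ in range(self.n_iters):
            # one-sided (non-negative) soft threshold with a learned offset
            x = torch.relu(self.W(y) + self.S(x) - self.theta)
        return x                              # sparse source activations

class RecurrentSparseDecoder(nn.Module):
    """ISTA block per frame + LSTM across frames to denoise temporal traces."""
    def __init__(self, n_pixels, n_sources, hidden=64):
        super().__init__()
        self.ista = UnrolledISTA(n_pixels, n_sources)
        self.lstm = nn.LSTM(n_sources, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_sources)

    def forward(self, movie):                 # movie: (batch, time, n_pixels)
        b, t, p = movie.shape
        codes = self.ista(movie.reshape(b * t, p)).reshape(b, t, -1)
        out, _ = self.lstm(codes)
        return self.head(out)                 # denoised traces: (batch, time, n_sources)

# toy usage: 100 frames of a 256-pixel patch, 20 candidate sources
model = RecurrentSparseDecoder(n_pixels=256, n_sources=20)
traces = model(torch.randn(2, 100, 256))
print(traces.shape)                           # torch.Size([2, 100, 20])
```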
Pre-training with fractional denoising to enhance molecular property prediction
IF 23.8, CAS Zone 1, Computer Science
Nature Machine Intelligence Pub Date : 2024-09-18 DOI: 10.1038/s42256-024-00900-z
Yuyan Ni, Shikun Feng, Xin Hong, Yuancheng Sun, Wei-Ying Ma, Zhi-Ming Ma, Qiwei Ye, Yanyan Lan
Abstract: Deep learning methods have been considered promising for accelerating molecular screening in drug discovery and material design. Due to the limited availability of labelled data, various self-supervised molecular pre-training methods have been presented. Although many existing methods utilize common pre-training tasks in computer vision and natural language processing, they often overlook the fundamental physical principles governing molecules. In contrast, applying denoising in pre-training can be interpreted as an equivalent force learning, but the limited noise distribution introduces bias into the molecular distribution. To address this issue, we introduce a molecular pre-training framework called fractional denoising, which decouples noise design from the constraints imposed by force learning equivalence. In this way, the noise becomes customizable, allowing for incorporating chemical priors to substantially improve the molecular distribution modelling. Experiments demonstrate that our framework consistently outperforms existing methods, establishing state-of-the-art results across force prediction, quantum chemical properties and binding affinity tasks. The refined noise design enhances force accuracy and sampling coverage, which contribute to the creation of physically consistent molecular representations, ultimately leading to superior predictive performance.
Citations: 0
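A minimal sketch of the fractional-denoising idea: perturb atomic coordinates with a chemistry-informed noise component plus a small Gaussian fraction, and train the model to recover only the Gaussian fraction so the force-learning interpretation is preserved. The toy encoder, noise scales and the placeholder prior noise are assumptions, not the authors' implementation.

```python
# Illustrative sketch of fractional denoising pre-training (not the authors' code).
import torch
import torch.nn as nn

class CoordDenoiser(nn.Module):
    """Toy denoiser: predicts a per-atom noise vector from perturbed coordinates."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.SiLU(), nn.Linear(hidden, 3))

    def forward(self, coords):               # coords: (n_atoms, 3)
        return self.mlp(coords)

def fractional_denoising_loss(model, coords, sigma_gauss=0.04):
    # 1) chemistry-informed perturbation (placeholder: random jitter); in the
    #    paper this would be noise respecting chemical priors, e.g. torsions
    prior_noise = 0.2 * torch.randn_like(coords)
    # 2) small Gaussian fraction -- the only component the model must recover
    gauss_noise = sigma_gauss * torch.randn_like(coords)
    noisy = coords + prior_noise + gauss_noise
    pred = model(noisy)
    return ((pred - gauss_noise) ** 2).mean()

# toy usage on a random 12-atom "molecule"
model = CoordDenoiser()
loss = fractional_denoising_loss(model, torch.randn(12, 3))
loss.backward()
print(float(loss))
```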
Sparse learned kernels for interpretable and efficient medical time series processing
IF 23.8, CAS Zone 1, Computer Science
Nature Machine Intelligence Pub Date : 2024-09-18 DOI: 10.1038/s42256-024-00898-4
Sully F. Chen, Zhicheng Guo, Cheng Ding, Xiao Hu, Cynthia Rudin
Abstract: Rapid, reliable and accurate interpretation of medical time series signals is crucial for high-stakes clinical decision-making. Deep learning methods offered unprecedented performance in medical signal processing but at a cost: they were compute intensive and lacked interpretability. We propose sparse mixture of learned kernels (SMoLK), an interpretable architecture for medical time series processing. SMoLK learns a set of lightweight flexible kernels that form a single-layer sparse neural network, providing not only interpretability but also efficiency, robustness and generalization to unseen data distributions. We introduce parameter reduction techniques to reduce the size of SMoLK networks and maintain performance. We test SMoLK on two important tasks common to many consumer wearables: photoplethysmography artefact detection and atrial fibrillation detection from single-lead electrocardiograms. We find that SMoLK matches the performance of models orders of magnitude larger. It is particularly suited for real-time applications using low-power devices, and its interpretability benefits high-stakes situations.
Citations: 0
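The sketch below illustrates the single-layer idea: a bank of learnable 1D kernels convolved with the raw signal, pooled per kernel, and linearly mixed under an L1 penalty that encourages kernel sparsity. Kernel counts, sizes and the pooling choice are illustrative assumptions rather than the published SMoLK configuration.

```python
# Rough sketch of a sparse mixture of learned kernels for 1D physiological
# signals (an illustration of the idea, not the published SMoLK code).
import torch
import torch.nn as nn

class LearnedKernelMixture(nn.Module):
    def __init__(self, n_kernels=24, kernel_size=65, n_classes=1):
        super().__init__()
        # single layer of learnable 1D kernels applied directly to the raw signal
        self.kernels = nn.Conv1d(1, n_kernels, kernel_size, padding=kernel_size // 2)
        # linear mixing of pooled kernel responses; L1 on these weights drives sparsity
        self.mix = nn.Linear(n_kernels, n_classes)

    def forward(self, x):                     # x: (batch, 1, length)
        responses = torch.relu(self.kernels(x))
        pooled = responses.mean(dim=-1)       # one interpretable score per kernel
        return self.mix(pooled)

    def sparsity_penalty(self):
        return self.mix.weight.abs().sum()

# toy usage: 10-second signal at 64 Hz, binary artefact detection
model = LearnedKernelMixture()
logits = model(torch.randn(8, 1, 640))
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.randint(0, 2, (8, 1)).float()) + 1e-3 * model.sparsity_penalty()
loss.backward()
print(logits.shape)                           # torch.Size([8, 1])
```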
Realizing full-body control of humanoid robots
IF 23.8, CAS Zone 1, Computer Science
Nature Machine Intelligence Pub Date : 2024-09-11 DOI: 10.1038/s42256-024-00891-x
Guangliang Li, Randy Gomez
Abstract: Using deep reinforcement learning, flexible skills and behaviours emerge in humanoid robots, as demonstrated in two recent reports.
Citations: 0
Accelerating histopathology workflows with generative AI-based virtually multiplexed tumour profiling
IF 23.8, CAS Zone 1, Computer Science
Nature Machine Intelligence Pub Date : 2024-09-09 DOI: 10.1038/s42256-024-00889-5
Pushpak Pati, Sofia Karkampouna, Francesco Bonollo, Eva Compérat, Martina Radić, Martin Spahn, Adriano Martinelli, Martin Wartenberg, Marianna Kruithof-de Julio, Marianna Rapsomaniki
Abstract: Understanding the spatial heterogeneity of tumours and its links to disease initiation and progression is a cornerstone of cancer biology. Presently, histopathology workflows heavily rely on hematoxylin and eosin and serial immunohistochemistry staining, a cumbersome, tissue-exhaustive process that results in non-aligned tissue images. We propose the VirtualMultiplexer, a generative artificial intelligence toolkit that effectively synthesizes multiplexed immunohistochemistry images for several antibody markers (namely AR, NKX3.1, CD44, CD146, p53 and ERG) from only an input hematoxylin and eosin image. The VirtualMultiplexer captures biologically relevant staining patterns across tissue scales without requiring consecutive tissue sections, image registration or extensive expert annotations. Thorough qualitative and quantitative assessment indicates that the VirtualMultiplexer achieves rapid, robust and precise generation of virtually multiplexed imaging datasets of high staining quality that are indistinguishable from the real ones. The VirtualMultiplexer is successfully transferred across tissue scales and patient cohorts with no need for model fine-tuning. Crucially, the virtually multiplexed images enabled training a graph transformer that simultaneously learns from the joint spatial distribution of several proteins to predict clinically relevant endpoints. We observe that this multiplexed learning scheme was able to greatly improve clinical prediction, as corroborated across several downstream tasks, independent patient cohorts and cancer types. Our results showcase the clinical relevance of artificial intelligence-assisted multiplexed tumour imaging, accelerating histopathology workflows and cancer biology.
Citations: 0
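As a rough illustration of the virtual-staining interface, the sketch below maps an H&E RGB tile to one synthetic channel per antibody marker with a small encoder-decoder. The published VirtualMultiplexer is a generative adversarial framework trained on unpaired tiles; the layer sizes and the `VirtualStainer` module here are hypothetical.

```python
# Schematic sketch of image-to-image virtual staining: an H&E RGB tile in,
# one synthetic marker channel per antibody out. Illustration only.
import torch
import torch.nn as nn

MARKERS = ["AR", "NKX3.1", "CD44", "CD146", "p53", "ERG"]  # markers named in the abstract

class VirtualStainer(nn.Module):
    def __init__(self, n_markers=len(MARKERS), width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, n_markers, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, he_tile):               # he_tile: (batch, 3, H, W), values in [0, 1]
        return self.decoder(self.encoder(he_tile))  # (batch, n_markers, H, W)

# toy usage on a 256 x 256 tile
stainer = VirtualStainer()
virtual_ihc = stainer(torch.rand(1, 3, 256, 256))
print({m: tuple(virtual_ihc[:, i].shape) for i, m in enumerate(MARKERS)})
```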
Efficient and scalable reinforcement learning for large-scale network control
IF 23.8, CAS Zone 1, Computer Science
Nature Machine Intelligence Pub Date : 2024-09-03 DOI: 10.1038/s42256-024-00879-7
Chengdong Ma, Aming Li, Yali Du, Hao Dong, Yaodong Yang
Abstract: The primary challenge in the development of large-scale artificial intelligence (AI) systems lies in achieving scalable decision-making—extending the AI models while maintaining sufficient performance. Existing research indicates that distributed AI can improve scalability by decomposing complex tasks and distributing them across collaborative nodes. However, previous technologies suffered from compromised real-world applicability and scalability due to the massive requirement of communication and sampled data. Here we develop a model-based decentralized policy optimization framework, which can be efficiently deployed in multi-agent systems. By leveraging local observation through the agent-level topological decoupling of global dynamics, we prove that this decentralized mechanism achieves accurate estimations of global information. Importantly, we further introduce model learning to reinforce the optimal policy for monotonic improvement with a limited amount of sampled data. Empirical results on diverse scenarios show the superior scalability of our approach, particularly in real-world systems with hundreds of agents, thereby paving the way for scaling up AI systems.
Citations: 0
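The toy script below illustrates only the topological-decoupling idea: each agent acts from the states of its graph neighbours, so execution is fully decentralized. The ring topology, linear policies and dynamics are illustrative assumptions; the paper's model-based policy optimization and monotonic-improvement guarantees are not reproduced here.

```python
# Toy sketch of decentralized network control with neighbourhood-only observations
# (illustrating agent-level topological decoupling; not the authors' full method).
import numpy as np

rng = np.random.default_rng(0)
N = 6                                    # number of agents
adj = np.eye(N, dtype=bool)
for i in range(N - 1):                   # ring topology: each agent sees its neighbours
    adj[i, i + 1] = adj[i + 1, i] = True

# each agent's linear policy maps its local neighbourhood state to a scalar action
policies = [rng.normal(scale=0.1, size=int(adj[i].sum())) for i in range(N)]

def step(state, actions):
    """Toy networked dynamics: each node relaxes toward its neighbours plus control."""
    neighbour_mean = (adj @ state) / adj.sum(axis=1)
    return 0.9 * neighbour_mean + actions + 0.01 * rng.normal(size=N)

state = rng.normal(size=N)
for t in range(50):
    # decentralized execution: agent i only reads the states adj[i] allows it to see
    actions = np.array([policies[i] @ state[adj[i]] for i in range(N)])
    state = step(state, actions)
print("final mean |state|:", float(np.abs(state).mean()))
```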
A large-scale audit of dataset licensing and attribution in AI
IF 18.8, CAS Zone 1, Computer Science
Nature Machine Intelligence Pub Date : 2024-08-30 DOI: 10.1038/s42256-024-00878-8
Shayne Longpre, Robert Mahari, Anthony Chen, Naana Obeng-Marnu, Damien Sileo, William Brannon, Niklas Muennighoff, Nathan Khazam, Jad Kabbara, Kartik Perisetla, Xinyi (Alexis) Wu, Enrico Shippole, Kurt Bollacker, Tongshuang Wu, Luis Villa, Sandy Pentland, Sara Hooker
Abstract: The race to train language models on vast, diverse and inconsistently documented datasets raises pressing legal and ethical concerns. To improve data transparency and understanding, we convene a multi-disciplinary effort between legal and machine learning experts to systematically audit and trace more than 1,800 text datasets. We develop tools and standards to trace the lineage of these datasets, including their source, creators, licences and subsequent use. Our landscape analysis highlights sharp divides in the composition and focus of data licenced for commercial use. Important categories including low-resource languages, creative tasks and new synthetic data all tend to be restrictively licenced. We observe frequent miscategorization of licences on popular dataset hosting sites, with licence omission rates of more than 70% and error rates of more than 50%. This highlights a crisis in misattribution and informed use of popular datasets driving many recent breakthroughs. Our analysis of data sources also explains the application of copyright law and fair use to finetuning data. As a contribution to continuing improvements in dataset transparency and responsible use, we release our audit, with an interactive user interface, the Data Provenance Explorer, to enable practitioners to trace and filter on data provenance for the most popular finetuning data collections: www.dataprovenance.org. The Data Provenance Initiative audits over 1,800 text artificial intelligence (AI) datasets, analysing trends, permissions of use and global representation. It exposes frequent errors on several major data hosting sites and offers tools for transparent and informed use of AI training data.
Open access PDF: https://www.nature.com/articles/s42256-024-00878-8.pdf
Citations: 0
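As a minimal illustration of filtering finetuning datasets on provenance metadata, the sketch below keeps only records whose declared licence permits commercial use. The record fields and catalogue entries are hypothetical and do not reflect the actual schema of the Data Provenance Explorer.

```python
# Minimal illustration of filtering datasets by licence metadata. The fields
# below are hypothetical, not the schema used by www.dataprovenance.org.
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    source: str
    licence: str
    commercial_use_allowed: bool
    attribution_required: bool

catalogue = [
    DatasetRecord("corpus-a", "web crawl", "CC-BY-4.0", True, True),
    DatasetRecord("corpus-b", "model-generated", "non-commercial", False, True),
    DatasetRecord("corpus-c", "forum posts", "unknown", False, False),
]

def commercially_usable(records):
    """Keep only records whose declared licence permits commercial use."""
    return [r for r in records if r.commercial_use_allowed and r.licence != "unknown"]

for r in commercially_usable(catalogue):
    print(f"{r.name}: {r.licence} (attribution required: {r.attribution_required})")
```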
What is in your LLM-based framework?
IF 18.8, CAS Zone 1, Computer Science
Nature Machine Intelligence Pub Date : 2024-08-30 DOI: 10.1038/s42256-024-00896-6
Abstract: To maintain high standards in clarity and reproducibility, authors need to clearly mention and describe the use of GPT-4 and other large language models in their work.
Open access PDF: https://www.nature.com/articles/s42256-024-00896-6.pdf
Citations: 0
A step forward in tracing and documenting dataset provenance
IF 18.8, CAS Zone 1, Computer Science
Nature Machine Intelligence Pub Date : 2024-08-30 DOI: 10.1038/s42256-024-00884-w
Nicholas Vincent
Abstract: Training data are crucial for advancements in artificial intelligence, but many questions remain regarding the provenance of training datasets, license enforcement and creator consent. Mahari et al. provide a set of tools for tracing, documenting and sharing AI training data and highlight the importance for developers to engage with metadata of datasets.
Citations: 0
Learning integral operators via neural integral equations
IF 23.8, CAS Zone 1, Computer Science
Nature Machine Intelligence Pub Date : 2024-08-29 DOI: 10.1038/s42256-024-00886-8
Emanuele Zappala, Antonio Henrique de Oliveira Fonseca, Josue Ortega Caro, Andrew Henry Moberly, Michael James Higley, Jessica Cardin, David van Dijk
Abstract: Nonlinear operators with long-distance spatiotemporal dependencies are fundamental in modelling complex systems across sciences; yet, learning these non-local operators remains challenging in machine learning. Integral equations, which model such non-local systems, have wide-ranging applications in physics, chemistry, biology and engineering. We introduce the neural integral equation, a method for learning unknown integral operators from data using an integral equation solver. To improve scalability and model capacity, we also present the attentional neural integral equation, which replaces the integral with self-attention. Both models are grounded in the theory of second-kind integral equations, where the indeterminate appears both inside and outside the integral operator. We provide a theoretical analysis showing how self-attention can approximate integral operators under mild regularity assumptions, further deepening previously reported connections between transformers and integration, as well as deriving corresponding approximation results for integral operators. Through numerical benchmarks on synthetic and real-world data, including Lotka–Volterra, Navier–Stokes and Burgers’ equations, as well as brain dynamics and integral equations, we showcase the models’ capabilities and their ability to derive interpretable dynamics embeddings. Our experiments demonstrate that attentional neural integral equations outperform existing methods, especially for longer time intervals and higher-dimensional problems. Our work addresses a critical gap in machine learning for non-local operators and offers a powerful tool for studying unknown complex systems with long-range dependencies.
Citations: 0
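A conceptual sketch of a second-kind integral equation with a learnable kernel, y(t) = f(t) + ∫ K_θ(t, s) y(s) ds, solved by fixed-point iteration on a uniform grid. The quadrature rule, iteration count and kernel network are simplifying assumptions; the paper's solver and its attentional variant are more involved.

```python
# Conceptual sketch of a second-kind neural integral equation,
#     y(t) = f(t) + \int_0^T K_theta(t, s) y(s) ds,
# solved by fixed-point iteration on a time grid (illustration only).
import torch
import torch.nn as nn

class NeuralKernel(nn.Module):
    """Learnable kernel K_theta(t, s) evaluated on a grid."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, t, s):                  # t, s: (n,) grids
        tt, ss = torch.meshgrid(t, s, indexing="ij")
        return self.net(torch.stack([tt, ss], dim=-1)).squeeze(-1)  # (n, n)

def solve_second_kind(kernel, f, t, n_iters=30):
    """Fixed-point iteration y <- f + dt * K y; assumes dt * K is a contraction."""
    dt = (t[-1] - t[0]) / (len(t) - 1)
    K = kernel(t, t)                          # (n, n) kernel matrix on the grid
    y = f.clone()
    for _ in range(n_iters):
        y = f + dt * (K @ y)                  # quadrature approximation of the integral
    return y

# toy usage: solve on [0, 1] for a forcing term f(t) = sin(2*pi*t)
t = torch.linspace(0.0, 1.0, 64)
f = torch.sin(2 * torch.pi * t)
kernel = NeuralKernel()
y = solve_second_kind(kernel, f, t)
print(y.shape)                                # torch.Size([64])
```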