Nature Machine Intelligence: Latest Articles

Self-decoupling three-axis forces in a simple sensor
IF 18.8 | Q1 Computer Science
Nature Machine Intelligence Pub Date : 2024-11-27 DOI: 10.1038/s42256-024-00941-4
Kuanming Yao, Qiuna Zhuang
{"title":"Self-decoupling three-axis forces in a simple sensor","authors":"Kuanming Yao, Qiuna Zhuang","doi":"10.1038/s42256-024-00941-4","DOIUrl":"10.1038/s42256-024-00941-4","url":null,"abstract":"A self-decoupling tactile sensor dramatically reduces calibration time for three-dimensional force measurement, scaling from cubic (N³) to linear (3N). This advancement facilitates robotic tactile perception in human–machine interfaces.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 12","pages":"1431-1432"},"PeriodicalIF":18.8,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142718269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
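To make the scaling concrete, here is a minimal arithmetic sketch comparing calibration effort for a fully coupled three-axis sensor against a self-decoupled one. The grid size N = 10 is a hypothetical choice for illustration, not a value from the paper.

```python
# Illustrative only: calibration effort for a three-axis force sensor,
# assuming a hypothetical grid of N load levels per force axis.
N = 10

# Coupled sensor: every (Fx, Fy, Fz) combination must be measured jointly.
coupled = N ** 3        # 1000 calibration points

# Self-decoupled sensor: each axis is calibrated independently.
decoupled = 3 * N       # 30 calibration points

print(f"coupled: {coupled}, self-decoupled: {decoupled}")
```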
Multimodal language and graph learning of adsorption configuration in catalysis
IF 18.8 | Q1 Computer Science
Nature Machine Intelligence Pub Date : 2024-11-27 DOI: 10.1038/s42256-024-00930-7
Janghoon Ock, Srivathsan Badrinarayanan, Rishikesh Magar, Akshay Antony, Amir Barati Farimani
{"title":"Multimodal language and graph learning of adsorption configuration in catalysis","authors":"Janghoon Ock, Srivathsan Badrinarayanan, Rishikesh Magar, Akshay Antony, Amir Barati Farimani","doi":"10.1038/s42256-024-00930-7","DOIUrl":"10.1038/s42256-024-00930-7","url":null,"abstract":"Adsorption energy is a reactivity descriptor that must be accurately predicted for effective machine learning application in catalyst screening. This process involves finding the lowest energy among different adsorption configurations on a catalytic surface, which often have very similar energies. Although graph neural networks have shown great success in computing the energy of catalyst systems, they rely heavily on atomic spatial coordinates. By contrast, transformer-based language models can directly use human-readable text inputs, potentially bypassing the need for detailed atomic positions or topology; however, these language models often struggle with accurately predicting the energy of adsorption configurations. Our study improves the predictive language model by aligning its latent space with well-established graph neural networks through a self-supervised process called graph-assisted pretraining. This method reduces the mean absolute error of energy prediction for adsorption configurations by 7.4–9.8%, redirecting the model’s attention towards adsorption configuration. Building on this, we propose using generative large language models to create text inputs for the predictive model without relying on exact atomic positions. This demonstrates a potential use case of language models in energy prediction without detailed geometric information. Ock and colleagues explore predictive and generative language models for improving adsorption energy prediction in catalysis without relying on exact atomic positions. The method involves aligning a language model’s latent space with graph neural networks using graph-assisted pretraining.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 12","pages":"1501-1511"},"PeriodicalIF":18.8,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142718174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
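The entry above describes graph-assisted pretraining as aligning the language model's latent space with a graph neural network. A common way to implement such cross-modal alignment is a contrastive (CLIP-style) objective over paired embeddings; the sketch below assumes that formulation with stand-in tensors. The function name `alignment_loss` and the temperature value are illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn.functional as F

def alignment_loss(text_emb, graph_emb, temperature=0.07):
    """Contrastive loss pulling paired text/graph embeddings together.

    text_emb, graph_emb: (batch, dim) tensors, e.g. from a language model
    and a frozen graph neural network for the same adsorption configurations.
    """
    text_emb = F.normalize(text_emb, dim=-1)
    graph_emb = F.normalize(graph_emb, dim=-1)
    logits = text_emb @ graph_emb.t() / temperature   # pairwise similarities
    targets = torch.arange(text_emb.size(0))          # i-th text pairs with i-th graph
    return F.cross_entropy(logits, targets)

# Example with random stand-in embeddings:
loss = alignment_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```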
Contextual feature extraction hierarchies converge in large language models and the brain
IF 18.8 | Q1 Computer Science
Nature Machine Intelligence Pub Date : 2024-11-26 DOI: 10.1038/s42256-024-00925-4
Gavin Mischler, Yinghao Aaron Li, Stephan Bickel, Ashesh D. Mehta, Nima Mesgarani
{"title":"Contextual feature extraction hierarchies converge in large language models and the brain","authors":"Gavin Mischler, Yinghao Aaron Li, Stephan Bickel, Ashesh D. Mehta, Nima Mesgarani","doi":"10.1038/s42256-024-00925-4","DOIUrl":"10.1038/s42256-024-00925-4","url":null,"abstract":"Recent advancements in artificial intelligence have sparked interest in the parallels between large language models (LLMs) and human neural processing, particularly in language comprehension. Although previous research has demonstrated similarities between LLM representations and neural responses, the computational principles driving this convergence—especially as LLMs evolve—remain elusive. Here we used intracranial electroencephalography recordings from neurosurgical patients listening to speech to investigate the alignment between high-performance LLMs and the language-processing mechanisms of the brain. We examined a diverse selection of LLMs with similar parameter sizes and found that as their performance on benchmark tasks improves, they not only become more brain-like, reflected in better neural response predictions from model embeddings, but they also align more closely with the hierarchical feature extraction pathways of the brain, using fewer layers for the same encoding. Additionally, we identified commonalities in the hierarchical processing mechanisms of high-performing LLMs, revealing their convergence towards similar language-processing strategies. Finally, we demonstrate the critical role of contextual information in both LLM performance and brain alignment. These findings reveal converging aspects of language processing in the brain and LLMs, offering new directions for developing models that better align with human cognitive processing. Why brain-like feature extraction emerges in large language models (LLMs) remains elusive. Mischler, Li and colleagues demonstrate that high-performing LLMs not only predict neural responses more accurately than other LLMs but also align more closely with the hierarchical language processing pathway in the brain, revealing parallels between these models and human cognitive mechanisms.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 12","pages":"1467-1477"},"PeriodicalIF":18.8,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
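The "neural response predictions from model embeddings" mentioned above are typically obtained with linear encoding models fit from a layer's embeddings to electrode responses. The sketch below is a generic version of that analysis using scikit-learn and random stand-in arrays; the array shapes, the RidgeCV regularization grid and the correlation score are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Hypothetical, time-aligned data: LLM embeddings and neural responses.
X = np.random.randn(5000, 768)   # stand-in for one LLM layer's embeddings
y = np.random.randn(5000, 64)    # stand-in for 64 intracranial electrodes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# One regularized linear encoding model per layer; held-out correlation is a
# common "brain-likeness" score for that layer.
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, y_tr)
pred = model.predict(X_te)
scores = [np.corrcoef(pred[:, i], y_te[:, i])[0, 1] for i in range(y.shape[1])]
print(f"mean held-out correlation: {np.mean(scores):.3f}")
```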
Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research
IF 18.8 | Q1 Computer Science
Nature Machine Intelligence Pub Date : 2024-11-26 DOI: 10.1038/s42256-024-00926-3
Artem A. Trotsyuk, Quinn Waeiss, Raina Talwar Bhatia, Brandon J. Aponte, Isabella M. L. Heffernan, Devika Madgavkar, Ryan Marshall Felder, Lisa Soleymani Lehmann, Megan J. Palmer, Hank Greely, Russell Wald, Lea Goetz, Markus Trengove, Robert Vandersluis, Herbert Lin, Mildred K. Cho, Russ B. Altman, Drew Endy, David A. Relman, Margaret Levi, Debra Satz, David Magnus
{"title":"Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research","authors":"Artem A. Trotsyuk, Quinn Waeiss, Raina Talwar Bhatia, Brandon J. Aponte, Isabella M. L. Heffernan, Devika Madgavkar, Ryan Marshall Felder, Lisa Soleymani Lehmann, Megan J. Palmer, Hank Greely, Russell Wald, Lea Goetz, Markus Trengove, Robert Vandersluis, Herbert Lin, Mildred K. Cho, Russ B. Altman, Drew Endy, David A. Relman, Margaret Levi, Debra Satz, David Magnus","doi":"10.1038/s42256-024-00926-3","DOIUrl":"10.1038/s42256-024-00926-3","url":null,"abstract":"The rapid advancement of artificial intelligence (AI) in biomedical research presents considerable potential for misuse, including authoritarian surveillance, data misuse, bioweapon development, increase in inequity and abuse of privacy. We propose a multi-pronged framework for researchers to mitigate these risks, looking first to existing ethical frameworks and regulatory measures researchers can adapt to their own work, next to off-the-shelf AI solutions, then to design-specific solutions researchers can build into their AI to mitigate misuse. When researchers remain unable to address the potential for harmful misuse, and the risks outweigh potential benefits, we recommend researchers consider a different approach to answering their research question, or a new research question if the risks remain too great. We apply this framework to three different domains of AI research where misuse is likely to be problematic: (1) AI for drug and chemical discovery; (2) generative models for synthetic data; (3) ambient intelligence. The wide adoption of AI in biomedical research raises concerns about misuse risks. Trotsyuk, Waeiss et al. propose a framework that provides a starting point for researchers to consider how risks specific to their work could be mitigated, using existing ethical frameworks, regulatory measures and off-the-shelf AI solutions.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 12","pages":"1435-1442"},"PeriodicalIF":18.8,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142713152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AI pioneers win 2024 Nobel prizes
IF 18.8 | Q1 Computer Science
Nature Machine Intelligence Pub Date : 2024-11-22 DOI: 10.1038/s42256-024-00945-0
{"title":"AI pioneers win 2024 Nobel prizes","authors":"","doi":"10.1038/s42256-024-00945-0","DOIUrl":"10.1038/s42256-024-00945-0","url":null,"abstract":"The 2024 Nobel prizes in physics and chemistry highlight the interdisciplinary nature and impact of AI in science.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 11","pages":"1271-1271"},"PeriodicalIF":18.8,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.nature.com/articles/s42256-024-00945-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142690742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Machine learning for practical quantum error mitigation
IF 18.8 | Q1 Computer Science
Nature Machine Intelligence Pub Date : 2024-11-22 DOI: 10.1038/s42256-024-00927-2
Haoran Liao, Derek S. Wang, Iskandar Sitdikov, Ciro Salcedo, Alireza Seif, Zlatko K. Minev
{"title":"Machine learning for practical quantum error mitigation","authors":"Haoran Liao, Derek S. Wang, Iskandar Sitdikov, Ciro Salcedo, Alireza Seif, Zlatko K. Minev","doi":"10.1038/s42256-024-00927-2","DOIUrl":"10.1038/s42256-024-00927-2","url":null,"abstract":"Quantum computers have progressed towards outperforming classical supercomputers, but quantum errors remain the primary obstacle. In the past few years, the field of quantum error mitigation has provided strategies for overcoming errors in near-term devices, enabling improved accuracy at the cost of additional run time. Through experiments on state-of-the-art quantum computers using up to 100 qubits, we demonstrate that without sacrificing accuracy, machine learning for quantum error mitigation (ML-QEM) drastically reduces the cost of mitigation. We benchmarked ML-QEM using a variety of machine learning models—linear regression, random forest, multilayer perceptron and graph neural networks—on diverse classes of quantum circuits, over increasingly complex device noise profiles, under interpolation and extrapolation, and in both numerics and experiments. These tests employed the popular digital zero-noise extrapolation method as an added reference. Finally, we propose a path towards scalable mitigation using ML-QEM to mimic traditional mitigation methods with superior runtime efficiency. Our results show that classical machine learning can extend the reach and practicality of quantum error mitigation by reducing its overhead and highlight its broader potential for practical quantum computations. Quantum error mitigation improves the accuracy of quantum computers at a computational overhead. Liao et al. demonstrate that classical machine learning models can deliver accuracy comparable to that of conventional techniques while reducing quantum computational costs.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 12","pages":"1478-1486"},"PeriodicalIF":18.8,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142684374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
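The entry above lists random forests among the ML-QEM models. A generic version of the idea, training a regressor to map circuit features plus a noisy expectation value to a mitigated value, is sketched below on synthetic data; the feature set, dataset and target relation are invented for illustration and are not the authors' training setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic training set: features describing each circuit (e.g. depth,
# two-qubit gate counts) together with its noisy expectation value; the
# target stands in for the ideal (or reference-mitigated) expectation value.
rng = np.random.default_rng(0)
circuit_features = rng.normal(size=(2000, 8))
noisy_exp_vals = rng.uniform(-1, 1, size=(2000, 1))
X = np.hstack([circuit_features, noisy_exp_vals])
y = np.clip(noisy_exp_vals[:, 0] * 1.2 + 0.05 * rng.normal(size=2000), -1, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out circuits:", model.score(X_te, y_te))
```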
A soft skin with self-decoupled three-axis force-sensing taxels
IF 18.8 | Q1 Computer Science
Nature Machine Intelligence Pub Date : 2024-11-19 DOI: 10.1038/s42256-024-00904-9
Youcan Yan, Ahmed Zermane, Jia Pan, Abderrahmane Kheddar
{"title":"A soft skin with self-decoupled three-axis force-sensing taxels","authors":"Youcan Yan, Ahmed Zermane, Jia Pan, Abderrahmane Kheddar","doi":"10.1038/s42256-024-00904-9","DOIUrl":"10.1038/s42256-024-00904-9","url":null,"abstract":"Electronic skins integrating both normal and shear force per taxel have a wide range of applications across diverse fields, including robotics, haptics and health monitoring. Current multi-axis tactile sensors often present complexities in structure and fabrication or require an extensive calibration process, limiting their widespread applications. Here we report an electronic soft magnetic skin capable of self-decoupling three-axis forces at each taxel. We use a simple sensor structure with customizable sensitivity and measurement range, reducing the calibration complexity from known quadratic (N2) or cubic (N3) scales down to a linear (3N) scale. The three-axis self-decoupling property of the sensor is achieved by overlaying two sinusoidally magnetized flexible magnetic films with orthogonal magnetization patterns. Leveraging the self-decoupling feature and its simple structure, we demonstrate that our sensor can facilitate a diverse range of applications, such as measuring the three-dimensional force distribution in artificial knee joints, teaching robots by touch demonstration and monitoring the interaction forces between knee braces and human skin during various activities. Electronic skin with decoupled force feedback is essential in robotics. Yan et al. develop a soft magnetic skin capable of self-decoupling three-axis forces per taxel, reducing calibration complexity from quadratic or cubic scales to a linear scale.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 11","pages":"1284-1295"},"PeriodicalIF":18.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142670830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
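Because each taxel's three force components are decoupled, calibration can in principle be done one axis at a time. The sketch below illustrates that idea with synthetic numbers: a hypothetical linear field-to-force response per axis and interpolation-based lookup. The sensitivities and the `field_to_force` helper are invented for illustration and do not reproduce the paper's magnetics.

```python
import numpy as np

# Illustration only: per-axis calibration of a self-decoupled taxel, assuming
# each force component affects exactly one measured field component.
calib_force = np.linspace(0, 5, 10)                  # N load levels per axis
slopes = {"x": 0.8, "y": 0.9, "z": 1.1}              # hypothetical sensitivities
calib_field = {ax: s * calib_force for ax, s in slopes.items()}  # 3N points total

def field_to_force(b_measured):
    """Map a measured field change (bx, by, bz) to forces, axis by axis."""
    return tuple(np.interp(b, calib_field[ax], calib_force)
                 for ax, b in zip(("x", "y", "z"), b_measured))

print(field_to_force((1.2, 0.4, 2.0)))   # -> approx (1.5, 0.44, 1.82)
```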
Reshaping the discovery of self-assembling peptides with generative AI guided by hybrid deep learning
IF 18.8 | Q1 Computer Science
Nature Machine Intelligence Pub Date : 2024-11-19 DOI: 10.1038/s42256-024-00928-1
Marko Njirjak, Lucija Žužić, Marko Babić, Patrizia Janković, Erik Otović, Daniela Kalafatovic, Goran Mauša
{"title":"Reshaping the discovery of self-assembling peptides with generative AI guided by hybrid deep learning","authors":"Marko Njirjak, Lucija Žužić, Marko Babić, Patrizia Janković, Erik Otović, Daniela Kalafatovic, Goran Mauša","doi":"10.1038/s42256-024-00928-1","DOIUrl":"10.1038/s42256-024-00928-1","url":null,"abstract":"Supramolecular peptide-based materials have great potential for revolutionizing fields like nanotechnology and medicine. However, deciphering the intricate sequence-to-assembly pathway, essential for their real-life applications, remains a challenging endeavour. Their discovery relies primarily on empirical approaches that require substantial financial resources, impeding their disruptive potential. Consequently, despite the multitude of characterized self-assembling peptides and their demonstrated advantages, only a few peptide materials have found their way to the market. Machine learning trained on experimentally verified data presents a promising tool for quickly identifying sequences with a high propensity to self-assemble, thereby focusing resource expenditures on the most promising candidates. Here we introduce a framework that implements an accurate classifier in a metaheuristic-based generative model to navigate the search through the peptide sequence space of challenging size. For this purpose, we trained five recurrent neural networks among which the hybrid model that uses sequential information on aggregation propensity and specific physicochemical properties achieved a superior performance with 81.9% accuracy and 0.865 F1 score. Molecular dynamics simulations and experimental validation have confirmed the generative model to be 80–95% accurate in the discovery of self-assembling peptides, outperforming the current state-of-the-art models. The proposed modular framework efficiently complements human intuition in the exploration of self-assembling peptides and presents an important step in the development of intelligent laboratories for accelerated material discovery. A generative model guided by a machine-learning-based classifier capable of assessing unexplored regions of the peptide space in the search for new self-assembling sequences.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 12","pages":"1487-1500"},"PeriodicalIF":18.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142670829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
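The framework above places a trained classifier inside a metaheuristic search over sequence space. The sketch below shows the skeleton of such a loop, with a toy scoring heuristic standing in for the recurrent-network classifier; the `score` and `mutate` functions, sequence length and hill-climbing strategy are all illustrative assumptions rather than the authors' algorithm.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def score(peptide):
    """Stand-in for the trained self-assembly classifier (the paper uses
    recurrent networks); this toy heuristic just rewards aromatic residues."""
    return sum(peptide.count(a) for a in "FWY") / len(peptide)

def mutate(peptide):
    """Point-mutate one randomly chosen position."""
    i = random.randrange(len(peptide))
    return peptide[:i] + random.choice(AMINO_ACIDS) + peptide[i + 1:]

# Simple hill-climbing loop guided by the classifier score.
best = "".join(random.choice(AMINO_ACIDS) for _ in range(10))
for _ in range(200):
    candidate = mutate(best)
    if score(candidate) >= score(best):
        best = candidate
print(best, score(best))
```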
Efficient rare event sampling with unsupervised normalizing flows
IF 18.8 | Q1 Computer Science
Nature Machine Intelligence Pub Date : 2024-11-19 DOI: 10.1038/s42256-024-00918-3
Solomon Asghar, Qing-Xiang Pei, Giorgio Volpe, Ran Ni
{"title":"Efficient rare event sampling with unsupervised normalizing flows","authors":"Solomon Asghar, Qing-Xiang Pei, Giorgio Volpe, Ran Ni","doi":"10.1038/s42256-024-00918-3","DOIUrl":"10.1038/s42256-024-00918-3","url":null,"abstract":"From physics and biology to seismology and economics, the behaviour of countless systems is determined by impactful yet unlikely transitions between metastable states known as rare events, the study of which is essential for understanding and controlling the properties of these systems. Classical computational methods to sample rare events remain prohibitively inefficient and are bottlenecks for enhanced samplers that require prior data. Here we introduce a physics-informed machine learning framework, normalizing Flow enhanced Rare Event Sampler (FlowRES), which uses unsupervised normalizing flow neural networks to enhance Monte Carlo sampling of rare events by generating high-quality non-local Monte Carlo proposals. We validated FlowRES by sampling the transition path ensembles of equilibrium and non-equilibrium systems of Brownian particles, exploring increasingly complex potentials. Beyond eliminating the requirements for prior data, FlowRES features key advantages over established samplers: no collective variables need to be defined, efficiency remains constant even as events become increasingly rare and systems with multiple routes between states can be straightforwardly simulated. Sampling rare events is key to various fields of science, but current methods are inefficient. Asghar and colleagues propose a rare event sampler based on normalizing flow neural networks that requires no prior data or collective variables, works at and out of equilibrium and keeps efficiency constant as events become rarer.","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 11","pages":"1370-1381"},"PeriodicalIF":18.8,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.nature.com/articles/s42256-024-00918-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
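FlowRES uses a normalizing flow to generate non-local Monte Carlo proposals. The mechanism that makes such proposals valid is the standard Metropolis-Hastings acceptance rule for an independence proposal, sketched below on a one-dimensional double well with a fixed Gaussian standing in for the trained flow. The `GaussianProposal` class, potential and parameters are illustrative, and the actual method samples whole transition paths rather than single coordinates.

```python
import numpy as np

# Minimal sketch: Monte Carlo sampling with non-local independence proposals
# drawn from a learned generative model. A trained normalizing flow is
# stubbed out here by a fixed Gaussian with sample/log_prob methods.
class GaussianProposal:
    def __init__(self, mu=0.0, sigma=1.5):
        self.mu, self.sigma = mu, sigma
    def sample(self, rng):
        return rng.normal(self.mu, self.sigma)
    def log_prob(self, x):
        # log-density up to an additive constant (constants cancel in the ratio)
        return -0.5 * ((x - self.mu) / self.sigma) ** 2 - np.log(self.sigma)

def log_target(x):                      # double-well potential, beta = 1
    return -(x ** 2 - 1.0) ** 2

rng = np.random.default_rng(0)
proposal, x = GaussianProposal(), 1.0
samples = []
for _ in range(10000):
    x_new = proposal.sample(rng)
    # Metropolis-Hastings acceptance for an independence proposal.
    log_alpha = (log_target(x_new) - log_target(x)
                 + proposal.log_prob(x) - proposal.log_prob(x_new))
    if np.log(rng.uniform()) < log_alpha:
        x = x_new
    samples.append(x)
print("fraction of samples in left well:", np.mean(np.array(samples) < 0))
```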
Clinical large language models with misplaced focus
IF 18.8 | Q1 Computer Science
Nature Machine Intelligence Pub Date : 2024-11-18 DOI: 10.1038/s42256-024-00929-0
Zining Luo, Haowei Ma, Zhiwu Li, Yuquan Chen, Yixin Sun, Aimin Hu, Jiang Yu, Yang Qiao, Junxian Gu, Hongying Li, Xuxi Peng, Dunrui Wang, Ying Liu, Zhenglong Liu, Jiebin Xie, Zhen Jiang, Gang Tian
{"title":"Clinical large language models with misplaced focus","authors":"Zining Luo, Haowei Ma, Zhiwu Li, Yuquan Chen, Yixin Sun, Aimin Hu, Jiang Yu, Yang Qiao, Junxian Gu, Hongying Li, Xuxi Peng, Dunrui Wang, Ying Liu, Zhenglong Liu, Jiebin Xie, Zhen Jiang, Gang Tian","doi":"10.1038/s42256-024-00929-0","DOIUrl":"10.1038/s42256-024-00929-0","url":null,"abstract":"","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"6 12","pages":"1411-1412"},"PeriodicalIF":18.8,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142670260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0