Title: Zero-Shot Autonomous Microscopy for Scalable and Intelligent Characterization of 2D Materials
Authors: Jingyun Yang, Ruoyan Avery Yin, Chi Jiang, Yuepeng Hu, Xiaokai Zhu, Xingjian Hu, Sutharsika Kumar, Samantha K. Holmes, Xiao Wang, Xiaohua Zhai, Keran Rong, Yunyue Zhu, Tianyi Zhang, Zongyou Yin, Yuan Cao, Haoning Tang, Aaron D. Franklin, Jing Kong, Neil Zhenqiang Gong, Zhichu Ren*, and Haozhe Wang*
DOI: 10.1021/acsnano.5c09057
Journal: ACS Nano, 19 (40), 35493–35502 (Journal Article, JCR Q1, Chemistry, Multidisciplinary)
Publication date: 2025-10-02
URL: https://pubs.acs.org/doi/10.1021/acsnano.5c09057
Zero-Shot Autonomous Microscopy for Scalable and Intelligent Characterization of 2D Materials
Characterization of atomic-scale materials traditionally requires human experts with months to years of specialized training. Even for trained human operators, accurate and reliable characterization remains challenging when examining newly discovered materials such as two-dimensional (2D) structures. This bottleneck drives demand for fully autonomous experimentation systems capable of comprehending research objectives without requiring large training data sets. In this work, we present ATOMIC (Autonomous Technology for Optical Microscopy & Intelligent Characterization), an end-to-end framework that integrates foundation models to enable fully autonomous, zero-shot characterization of 2D materials. Our system integrates a vision foundation model (i.e., the Segment Anything Model), large language models (i.e., ChatGPT), unsupervised clustering, and topological analysis to automate microscope control, sample scanning, image segmentation, and intelligent analysis through prompt engineering, eliminating the need for additional training. When analyzing typical MoS2 samples, our approach achieves 99.7% segmentation accuracy for single-layer identification, which is equivalent to that of human experts. In addition, the integrated model is able to detect grain boundary slits that are challenging to identify with the human eye. Furthermore, the system retains robust accuracy under variable conditions, including defocus, color-temperature fluctuations, and exposure variations. It is applicable to a broad spectrum of common 2D materials, including graphene, MoS2, WSe2, and SnSe, regardless of whether they were fabricated via top-down or bottom-up methods. This work demonstrates the use of foundation models to achieve autonomous analysis, providing a scalable and data-efficient characterization paradigm that transforms the approach to nanoscale materials research.
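The zero-shot layer-identification step described above (segment flakes, then group regions by optical contrast without any labeled training data) can be sketched with an unsupervised clustering pass over per-region contrast features. The paper pairs the Segment Anything Model with unsupervised clustering; the sketch below stands in for that step with synthetic per-region contrast values and scikit-learn's KMeans, so all function names, contrast values, and class counts here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def assign_layer_labels(region_contrasts, n_classes=3, seed=0):
    """Group segmented regions by optical contrast (unsupervised, zero-shot).

    region_contrasts: shape (n_regions,), mean optical contrast of each
    segmented region relative to the bare substrate.
    Returns one integer per region: 0 = substrate, 1 = monolayer, 2 = bilayer.
    Cluster indices are reordered by center contrast, so labels track thickness.
    """
    x = np.asarray(region_contrasts, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit(x)
    # Map arbitrary cluster ids to thickness order: higher contrast -> more layers.
    order = np.argsort(km.cluster_centers_.ravel())
    relabel = np.empty(n_classes, dtype=int)
    relabel[order] = np.arange(n_classes)
    return relabel[km.labels_]

# Synthetic regions: substrate ~0.00, monolayer ~0.08, bilayer ~0.16 contrast
# (illustrative values only; real contrast depends on material and substrate).
rng = np.random.default_rng(0)
contrasts = np.concatenate([
    rng.normal(0.00, 0.01, 50),   # bare substrate regions
    rng.normal(0.08, 0.01, 30),   # monolayer regions
    rng.normal(0.16, 0.01, 20),   # bilayer regions
])
labels = assign_layer_labels(contrasts)
```

Because the clustering operates on relative contrast rather than absolute color, a normalization of this kind is one plausible reason such a pipeline can stay robust to exposure and color-temperature shifts, which rescale all regions together.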
Journal introduction:
ACS Nano, published monthly, serves as an international forum for comprehensive articles on nanoscience and nanotechnology research at the intersections of chemistry, biology, materials science, physics, and engineering. The journal fosters communication among scientists in these communities, facilitating collaboration, new research opportunities, and advancements through discoveries. ACS Nano covers synthesis, assembly, characterization, theory, and simulation of nanostructures, nanobiotechnology, nanofabrication, methods and tools for nanoscience and nanotechnology, and self- and directed-assembly. Alongside original research articles, it offers thorough reviews, perspectives on cutting-edge research, and discussions envisioning the future of nanoscience and nanotechnology.