{"title":"Deep learning-based multimodal analysis for transition-metal dichalcogenides","authors":"Shivani Bhawsar, Mengqi Fang, Abdus Salam Sarkar, Siwei Chen, Eui-Hyeok Yang","doi":"10.1557/s43577-024-00741-6","DOIUrl":null,"url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Abstract</h3><p>In this study, we present a novel approach to enable high-throughput characterization of transition-metal dichalcogenides (TMDs) across various layers, including mono-, bi-, tri-, four, and multilayers, utilizing a generative deep learning-based image-to-image translation method. Graphical features, including contrast, color, shapes, flake sizes, and their distributions, were extracted using color-based segmentation of optical images, and Raman and photoluminescence spectra of chemical vapor deposition-grown and mechanically exfoliated TMDs. The labeled images to identify and characterize TMDs were generated using the pix2pix conditional generative adversarial network (cGAN), trained only on a limited data set. Furthermore, our model demonstrated versatility by successfully characterizing TMD heterostructures, showing adaptability across diverse material compositions.</p><h3 data-test=\"abstract-sub-heading\">Graphical abstract</h3><h3 data-test=\"abstract-sub-heading\">Impact Statement</h3><p>Deep learning has been used to identify and characterize transition-metal dichalcogenides (TMDs). Although studies leveraging convolutional neural networks have shown promise in analyzing the optical, physical, and electronic properties of TMDs, they need extensive data sets and show limited generalization capabilities with smaller data sets. This work introduces a transformative approach—a generative deep learning (DL)-based image-to-image translation method—for high-throughput TMD characterization. Our method, employing a DL-based pix2pix cGAN network, transcends traditional limitations by offering insights into the graphical features, layer numbers, and distributions of TMDs, even with limited data sets. Notably, we demonstrate the scalability of our model through successful characterization of different heterostructures, showcasing its adaptability across diverse material compositions.</p>","PeriodicalId":18828,"journal":{"name":"Mrs Bulletin","volume":"44 1","pages":""},"PeriodicalIF":4.1000,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mrs Bulletin","FirstCategoryId":"88","ListUrlMain":"https://doi.org/10.1557/s43577-024-00741-6","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATERIALS SCIENCE, MULTIDISCIPLINARY","Score":null,"Total":0}
Abstract
In this study, we present a novel approach to enable high-throughput characterization of transition-metal dichalcogenides (TMDs) across various thicknesses, including monolayer, bilayer, trilayer, four-layer, and multilayer samples, utilizing a generative deep learning-based image-to-image translation method. Graphical features, including contrast, color, shape, flake size, and their distributions, were extracted using color-based segmentation of optical images, together with Raman and photoluminescence spectra of chemical vapor deposition-grown and mechanically exfoliated TMDs. Labeled images for identifying and characterizing TMDs were generated using the pix2pix conditional generative adversarial network (cGAN), trained on only a limited data set. Furthermore, our model demonstrated versatility by successfully characterizing TMD heterostructures, showing adaptability across diverse material compositions.
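As a rough illustration of the color-based segmentation step mentioned above, the sketch below thresholds an optical micrograph in HSV space and reports per-flake size and optical contrast. The file name, HSV window, minimum-area filter, and pixel-to-micron calibration are illustrative assumptions, not values from the paper, and the authors' layer-specific pipeline is not reproduced here.

```python
# Minimal sketch of color-based flake segmentation on an optical micrograph.
# All thresholds and the calibration constant below are assumed, not from the paper.
import cv2
import numpy as np

PIXELS_PER_UM = 10.0  # assumed microscope calibration (pixels per micron)

# Load the optical micrograph (BGR) and prepare HSV (for color thresholding)
# and grayscale (for contrast estimation) versions.
image = cv2.imread("tmd_optical_image.png")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(float)

# Assumed HSV window for one layer class on a SiO2/Si substrate; a real pipeline
# would use a separately calibrated window per layer number.
lower, upper = np.array([90, 40, 60]), np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Connected components separate individual flakes and give their pixel areas.
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
substrate_mean = gray[labels == 0].mean()  # label 0 is the background/substrate

for i in range(1, n_labels):
    area_px = stats[i, cv2.CC_STAT_AREA]
    if area_px < 50:  # assumed minimum area, drops segmentation noise
        continue
    area_um2 = area_px / PIXELS_PER_UM ** 2
    flake_mean = gray[labels == i].mean()
    contrast = (substrate_mean - flake_mean) / substrate_mean  # optical contrast
    print(f"flake {i}: area = {area_um2:.1f} um^2, contrast = {contrast:.3f}")
```

In practice, one calibrated color window per layer count would be needed, since the optical contrast of a TMD flake on a SiO2/Si substrate varies with layer number.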
Graphical abstract
Impact Statement
Deep learning has been used to identify and characterize transition-metal dichalcogenides (TMDs). Although studies leveraging convolutional neural networks have shown promise in analyzing the optical, physical, and electronic properties of TMDs, they need extensive data sets and show limited generalization capabilities with smaller data sets. This work introduces a transformative approach—a generative deep learning (DL)-based image-to-image translation method—for high-throughput TMD characterization. Our method, employing a DL-based pix2pix conditional generative adversarial network (cGAN), transcends traditional limitations by offering insights into the graphical features, layer numbers, and distributions of TMDs, even with limited data sets. Notably, we demonstrate the scalability of our model through successful characterization of different heterostructures, showcasing its adaptability across diverse material compositions.
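For readers unfamiliar with pix2pix, the following is a minimal, self-contained sketch of the conditional-GAN objective it uses (adversarial loss plus an L1 reconstruction term), with tiny stand-in networks and random placeholder tensors for an (optical image, labeled image) pair. It is not the architecture or training setup from this work.

```python
# Minimal sketch of the pix2pix-style cGAN objective: adversarial + lambda * L1.
# The tiny networks below are placeholders, not the paper's U-Net / PatchGAN models.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in generator: optical image -> labeled layer map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Stand-in discriminator: judges (input, output) pairs jointly."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))

G, D = TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1, lambda_l1 = nn.BCEWithLogitsLoss(), nn.L1Loss(), 100.0

# One illustrative training step on a random (optical, labeled) pair.
optical = torch.randn(1, 3, 64, 64)  # placeholder optical micrograph
label = torch.randn(1, 3, 64, 64)    # placeholder ground-truth labeled image

# Discriminator step: real pairs -> 1, generated pairs -> 0.
fake = G(optical)
d_real = D(optical, label)
d_fake = D(optical, fake.detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: fool the discriminator and stay close to the target (L1 term).
d_fake = D(optical, fake)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, label)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

The L1 term keeps the generated labeled image pixel-aligned with the ground truth while the adversarial term sharpens it; the weight of 100 follows the original pix2pix paper and is assumed here rather than taken from this work.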
About the Journal
MRS Bulletin is one of the most widely recognized and highly respected publications in advanced materials research. Each month, the Bulletin provides a comprehensive overview of a specific materials theme, along with industry and policy developments, and MRS and materials-community news and events. Written by leading experts, the overview articles are useful references for specialists, but are also presented at a level understandable to a broad scientific audience.