An SEM-based deep defect classification system for VSB mask writer that works with die-to-die and die-to-database inspection methods using multiple digital twins built with the state-of-the-art neural networks

A. Baranwal, Suhas Pillai, T. Nguyen, J. Yashima, Jim Dewitt, N. Nakayamada, A. Fujimura
{"title":"An SEM-based deep defect classification system for VSB mask writer that works with die-to-die and die-to-database inspection methods using multiple digital twins built with the state-of-the-art neural networks","authors":"A. Baranwal, Suhas Pillai, T. Nguyen, J. Yashima, Jim Dewitt, N. Nakayamada, A. Fujimura","doi":"10.1117/12.2601004","DOIUrl":null,"url":null,"abstract":"The two standard reticle defect inspection methods are die-to-die and die-to-database. The die-to-die inspection method compares images from the two dice on the same reticle to identify any defect. However, the die-to-database inspection method compares images from the reticle with the design data (CAD). The previous year, we built an SEM-based VSB writer classification system for die-to-die inspection that used state-of-the-art deep learning models to identify errors such as shape, position, and dose [1]. Using the deep neural networks and DL-based SEM digital twins [2], we showed better accuracy than the average human expert in classifying SEM-based defects. However, a limitation remained that the DL model wasn’t aware of chrome and glass regions, just from the input SEM. This information is helpful to make better decisions in classifying some typical errors achieving higher accuracy. In the current paper, we improve the accuracy of the existing classifier by enhancing the underlying deep learning model and supplementing it with the recognition of chrome and glass (exposed and unexposed) regions further. We make it possible with yet another DL-based SEM2CAD digital twin to automatically identify exposed/unexposed areas from the SEM and augment manual input by the expert to it. We feed this new information into the SEM classifier that currently takes a reference and error SEM image for more accurate results. In addition, we also built an SEM-based defect classification system for the die-to-database inspection to categorize various types of VSB mask writer defects, which requires defect SEM images and the reference CAD. Using several deep neural network models and digital twins, in this paper, we provide a production-grade system for the VSB writer’s SEM-based defect classification that works for both die-to-die and die-to-database inspection methods.","PeriodicalId":412383,"journal":{"name":"Photomask Technology 2021","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Photomask Technology 2021","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2601004","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

The two standard reticle defect inspection methods are die-to-die and die-to-database. Die-to-die inspection compares images from two dies on the same reticle to identify defects, whereas die-to-database inspection compares images from the reticle with the design data (CAD). In previous work, we built an SEM-based defect classification system for die-to-die inspection of the VSB mask writer that used state-of-the-art deep learning models to identify errors such as shape, position, and dose [1]. Using deep neural networks and DL-based SEM digital twins [2], that system classified SEM-based defects more accurately than the average human expert. A limitation remained, however: from the input SEM alone, the DL model was not aware of the chrome and glass (exposed and unexposed) regions, information that helps classify several typical error types more accurately. In the current paper, we improve the accuracy of the existing classifier by enhancing the underlying deep learning model and supplementing it with recognition of chrome and glass regions. We achieve this with another DL-based digital twin, SEM2CAD, which automatically identifies exposed and unexposed areas from the SEM image and augments the manual input provided by the expert. This new information is fed into the SEM classifier, which currently takes a reference SEM image and a defect SEM image, to produce more accurate results. In addition, we built an SEM-based defect classification system for die-to-database inspection that categorizes various types of VSB mask writer defects from defect SEM images and the reference CAD. Using several deep neural network models and digital twins, this paper presents a production-grade system for SEM-based defect classification of the VSB writer that works with both die-to-die and die-to-database inspection methods.
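To make the described architecture concrete, the following is a minimal sketch, not the authors' implementation, of how a die-to-die classifier could consume a reference SEM image, a defect SEM image, and an exposed/unexposed map produced by a SEM2CAD-style digital twin as stacked input channels. The network shape, class count, and all names (e.g. DefectClassifier) are illustrative assumptions; the paper's actual models are not reproduced here.

```python
# Hypothetical sketch: a small CNN that stacks reference SEM, defect SEM,
# and a chrome/glass (exposed/unexposed) mask into one 3-channel input.
import torch
import torch.nn as nn


class DefectClassifier(nn.Module):
    """Classifies VSB writer defect types (e.g. shape, position, dose errors)
    from stacked image channels. Architecture is illustrative only."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        # 3 input channels: reference SEM, defect SEM, exposed/unexposed mask
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, ref_sem, defect_sem, chrome_glass_mask):
        # Channel-wise stacking lets the classifier reason jointly over the
        # SEM pair and the region map supplied by the digital twin.
        x = torch.cat([ref_sem, defect_sem, chrome_glass_mask], dim=1)
        return self.head(self.features(x).flatten(1))


if __name__ == "__main__":
    model = DefectClassifier()
    ref = torch.rand(1, 1, 256, 256)      # reference SEM image
    defect = torch.rand(1, 1, 256, 256)   # defect SEM image
    mask = torch.rand(1, 1, 256, 256)     # map from the SEM2CAD-style twin
    logits = model(ref, defect, mask)
    print(logits.shape)  # torch.Size([1, 4])
```

For the die-to-database case described in the abstract, the same stacking idea would apply with the reference channel rendered from the CAD rather than taken from a second die; how the paper actually fuses CAD and SEM inputs is not specified here.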