Accurate prediction of EUV lithographic images using data-efficient generative networks

Abdalaziz Awad, Philipp Brendel, Dereje S. Woldeamanual, A. Rosskopf, A. Erdmann

Computational Optics 2021 · DOI: 10.1117/12.2597309 · Published 2021-09-12
We implement a data-efficient approach to train a conditional generative adversarial network (cGAN) to predict 3D mask model aerial images: the cGAN receives approximate 2D mask model images as inputs and is trained to reproduce the corresponding 3D mask model images as outputs. This approach exploits the similarity between the images obtained from the two computational models and the low cost of 2D mask model simulations, which allows the network to be trained on substantially less data than previously implemented approaches for accurately predicting 3D mask model images. We further demonstrate that the proposed method is more accurate than training the network with the mask pattern layouts as inputs.
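As a rough illustration of this setup, the sketch below pairs a small encoder-decoder generator with a PatchGAN-style discriminator that is conditioned on the 2D mask model image, in the spirit of pix2pix-type cGANs. All architecture details (layer counts, channel widths, activations) are our own assumptions for illustration; the abstract does not specify the authors' network.

```python
# Minimal sketch of the described setup: a cGAN that translates cheap
# 2D mask model aerial images into rigorous 3D mask model images.
# Layer choices are illustrative assumptions, not the authors' network.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Small encoder-decoder; input: 2D-model image, output: 3D-model image."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1),
            nn.BatchNorm2d(ch), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),  # aerial-image intensities normalized to [0, 1]
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the 2D-model input image."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            # 2 input channels: the conditioning 2D-model image + the candidate 3D-model image
            nn.Conv2d(2, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(ch * 2, 1, 4, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, x2d, candidate):
        return self.net(torch.cat([x2d, candidate], dim=1))
```

For example, feeding a batch of single-channel 64×64 2D-model images through the generator, G(torch.rand(8, 1, 64, 64)), returns same-sized predicted 3D-model images.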
Previous studies have shown that such a cGAN architecture performs well on general and complex image-to-image translation tasks. In this work, we demonstrate that adjusting the relative weighting of the generator and discriminator losses can significantly improve the accuracy of the network from a lithographic standpoint. Our initial tests indicate that training only the generator part of the cGAN can benefit accuracy while further reducing computational overhead. The accuracy of the network-generated 3D mask model images is demonstrated by low errors in typical lithographic process metrics, such as critical dimensions and local contrast. The network's predictions also yield substantially reduced errors compared to the 2D mask model, at the same low level of computational demand.
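The two training regimes the abstract contrasts can be sketched as follows, reusing the Generator and Discriminator classes from the snippet above. The pixel-wise L1 term is weighted against the adversarial term by a hypothetical factor lambda_img, and a flag switches to generator-only training; all hyperparameter values (lambda_img, learning rates, betas) are illustrative assumptions, not values from the paper.

```python
# Sketch of the two training regimes: weighted adversarial + image loss,
# or generator-only training (plain supervised regression). Hyperparameters
# are assumed for illustration.
import torch
import torch.nn.functional as F

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
lambda_img = 100.0            # relative weight of the image (L1) loss -- assumed value
train_generator_only = False  # True: drop the adversarial game entirely

def training_step(x2d, y3d):
    """One update on a batch of (2D-model input, 3D-model target) images."""
    fake = G(x2d)

    if not train_generator_only:
        # Discriminator update: push real pairs toward 1, generated pairs toward 0.
        opt_d.zero_grad()
        real_logits = D(x2d, y3d)
        fake_logits = D(x2d, fake.detach())
        loss_d = 0.5 * (
            F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
        )
        loss_d.backward()
        opt_d.step()

    # Generator update: match the 3D-model image, plus (optionally) fool the critic.
    opt_g.zero_grad()
    loss_img = F.l1_loss(fake, y3d)
    if train_generator_only:
        loss_g = loss_img  # reduces to pure supervised image regression
    else:
        fake_logits = D(x2d, fake)
        loss_adv = F.binary_cross_entropy_with_logits(
            fake_logits, torch.ones_like(fake_logits))
        loss_g = loss_adv + lambda_img * loss_img
    loss_g.backward()
    opt_g.step()
    return float(loss_g)
```

With train_generator_only = True the discriminator and its optimizer go unused, which is one plausible reading of the abstract's observation that training only the generator can preserve accuracy while cutting computational overhead; raising or lowering lambda_img is the kind of loss-weighting adjustment the abstract reports as lithographically significant.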