{"title":"无梯度神经拓扑优化","authors":"Gawel Kus, Miguel A. Bessa","doi":"arxiv-2403.04937","DOIUrl":null,"url":null,"abstract":"Gradient-free optimizers allow for tackling problems regardless of the\nsmoothness or differentiability of their objective function, but they require\nmany more iterations to converge when compared to gradient-based algorithms.\nThis has made them unviable for topology optimization due to the high\ncomputational cost per iteration and high dimensionality of these problems. We\npropose a pre-trained neural reparameterization strategy that leads to at least\none order of magnitude decrease in iteration count when optimizing the designs\nin latent space, as opposed to the conventional approach without latent\nreparameterization. We demonstrate this via extensive computational experiments\nin- and out-of-distribution with the training data. Although gradient-based\ntopology optimization is still more efficient for differentiable problems, such\nas compliance optimization of structures, we believe this work will open up a\nnew path for problems where gradient information is not readily available (e.g.\nfracture).","PeriodicalId":501061,"journal":{"name":"arXiv - CS - Numerical Analysis","volume":"20 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Gradient-free neural topology optimization\",\"authors\":\"Gawel Kus, Miguel A. Bessa\",\"doi\":\"arxiv-2403.04937\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Gradient-free optimizers allow for tackling problems regardless of the\\nsmoothness or differentiability of their objective function, but they require\\nmany more iterations to converge when compared to gradient-based algorithms.\\nThis has made them unviable for topology optimization due to the high\\ncomputational cost per iteration and high dimensionality of these problems. We\\npropose a pre-trained neural reparameterization strategy that leads to at least\\none order of magnitude decrease in iteration count when optimizing the designs\\nin latent space, as opposed to the conventional approach without latent\\nreparameterization. We demonstrate this via extensive computational experiments\\nin- and out-of-distribution with the training data. 
Although gradient-based\\ntopology optimization is still more efficient for differentiable problems, such\\nas compliance optimization of structures, we believe this work will open up a\\nnew path for problems where gradient information is not readily available (e.g.\\nfracture).\",\"PeriodicalId\":501061,\"journal\":{\"name\":\"arXiv - CS - Numerical Analysis\",\"volume\":\"20 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Numerical Analysis\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2403.04937\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Numerical Analysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2403.04937","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Gradient-free optimizers can tackle problems regardless of the smoothness or differentiability of their objective function, but they require many more iterations to converge than gradient-based algorithms. This has made them unviable for topology optimization, given the high computational cost per iteration and the high dimensionality of these problems. We propose a pre-trained neural reparameterization strategy that yields at least a one-order-of-magnitude decrease in iteration count when optimizing designs in latent space, compared to the conventional approach without latent reparameterization. We demonstrate this via extensive computational experiments, both in- and out-of-distribution with respect to the training data. Although gradient-based topology optimization is still more efficient for differentiable problems, such as compliance optimization of structures, we believe this work opens a new path for problems where gradient information is not readily available (e.g., fracture).
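
To make the idea concrete, below is a minimal, hypothetical sketch of the kind of latent-space, gradient-free optimization loop the abstract describes. Everything here is a stand-in and not taken from the paper: the random-projection "decoder" substitutes for the pre-trained neural network, the toy objective substitutes for a physics evaluation (e.g., compliance from a finite-element solver or a fracture criterion), and a simple (1+lambda) evolution strategy substitutes for whichever gradient-free optimizer the authors use. The point it illustrates is that the search runs over a small latent vector rather than the full density grid.

    import numpy as np

    # Hypothetical stand-ins: in the paper these would be a pre-trained
    # decoder mapping latent vectors to density fields, and a (possibly
    # non-differentiable) physics objective evaluated by an external solver.
    rng = np.random.default_rng(0)
    LATENT_DIM = 16          # latent space is far smaller than the design grid
    GRID = (32, 32)          # resolution of the decoded density field

    W = rng.standard_normal((np.prod(GRID), LATENT_DIM)) / np.sqrt(LATENT_DIM)

    def decode(z):
        """Placeholder decoder: latent vector -> density field in [0, 1]."""
        x = 1.0 / (1.0 + np.exp(-W @ z))     # sigmoid keeps densities bounded
        return x.reshape(GRID)

    def objective(density):
        """Placeholder black-box objective: any evaluation works here,
        since no gradient information is required."""
        target_volume = 0.4
        return np.abs(density.mean() - target_volume) + density.var()

    # Simple (1+lambda) evolution strategy in latent space: the search runs
    # over LATENT_DIM variables instead of prod(GRID) densities, which is
    # what makes a gradient-free optimizer affordable in this setting.
    z_best = rng.standard_normal(LATENT_DIM)
    f_best = objective(decode(z_best))
    sigma = 0.5
    for it in range(200):
        candidates = z_best + sigma * rng.standard_normal((8, LATENT_DIM))
        scores = [objective(decode(z)) for z in candidates]
        i = int(np.argmin(scores))
        if scores[i] < f_best:
            z_best, f_best = candidates[i], scores[i]
    print("best objective:", f_best)

In the conventional approach the same loop would mutate all prod(GRID) density variables directly, so the population size and iteration count needed for convergence grow with the grid resolution; restricting the search to the latent vector is what the abstract credits for the order-of-magnitude reduction in iterations.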