Evaluating Designer Learning and Performance in Interactive Deep Generative Design

Ashish M. Chaudhari, Daniel Selva
DOI: 10.1115/detc2022-90477
Published in: Volume 3B: 48th Design Automation Conference (DAC), 2022-08-14
Citations: 2

Abstract

Deep generative models have shown significant promise for improving performance in design space exploration (DSE), but they lack interpretability. One component of interpretability in DSE is helping designers learn how input design decisions influence multi-objective performance. This experimental study explores how human-machine collaboration influences both designer learning and design performance in deep learning-based DSE. A within-subjects experiment is implemented with 42 subjects, involving mechanical metamaterial design using a conditional variational auto-encoder. The independent variables in the experiment are two interactivity factors: (i) simulatability, e.g., manual design generation (high simulatability), manual feature-based design synthesis, and semi-automated feature-based synthesis (low simulatability); and (ii) semanticity of features, e.g., meaningful versus abstract latent features. We assess designer learning using item response theory and design performance using metrics such as distance to the utopia point and hypervolume improvement. The findings highlight a highly intertwined relationship between designer learning and design performance. Compared to manual design generation, the semi-automated synthesis generates designs closer to the utopia point, but it does not result in greater hypervolume improvement. Further, the subjects learn the effects of semantic features better than those of abstract features, but only when the design performance is sensitive to those semantic features. Potential cognitive constructs, such as cognitive load and the recognition heuristic, that may influence the interpretability of deep generative models are discussed.
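The two performance metrics named in the abstract — distance to the utopia point and hypervolume — are standard multi-objective indicators. The sketch below shows one common way to compute them for a two-objective minimization problem; it is a minimal illustration of the general metrics, not the paper's actual evaluation code, and the function names and the Euclidean/2D-area formulations are assumptions.

```python
import numpy as np

def distance_to_utopia(points, utopia):
    """Minimum Euclidean distance from a set of objective vectors to the
    utopia point (the ideal, usually unattainable, objective vector)."""
    points = np.asarray(points, dtype=float)
    return np.min(np.linalg.norm(points - np.asarray(utopia, dtype=float), axis=1))

def hypervolume_2d(points, ref):
    """Area dominated by a set of points relative to a reference point,
    assuming both objectives are minimized. Dominated and out-of-box
    points contribute nothing, so the input need not be a clean front."""
    pts = np.asarray(points, dtype=float)
    pts = pts[np.all(pts <= ref, axis=1)]   # keep points inside the reference box
    pts = pts[np.argsort(pts[:, 0])]        # sweep in increasing f1
    hv, prev_f2 = 0.0, float(ref[1])
    for f1, f2 in pts:
        if f2 < prev_f2:                    # non-dominated in the sweep order
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Example: two trade-off designs against reference point (2, 2)
front = [[0.5, 1.0], [1.0, 0.5]]
print(distance_to_utopia(front, [0.0, 0.0]))  # proximity to the ideal
print(hypervolume_2d(front, [2.0, 2.0]))      # dominated area
```

Hypervolume *improvement*, as used in the study's framing, would then be the difference in hypervolume between the design set after and before a session of exploration, with a fixed reference point.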