Theoretical Framework for Design for Dynamic User Preferences
Mojtaba Arezoomand, Elliott J. Rouse, J. Austin-Breneman
Volume 11A: 46th Design Automation Conference (DAC), August 17, 2020. DOI: 10.1115/detc2020-22460
A key assumption of new product development is that user requirements and related preferences do not vary on time scales comparable to the length of the development process. However, prior work has identified cases in which user preferences for product attributes vary with time. This study proposes a method, Design for Dynamic User Preferences, which adapts reinforcement learning (RL) algorithms to the design of physical systems whose functionality changes with user feedback. An illustrative case, the design of a variable-stiffness prosthetic ankle, is presented to evaluate the potential usefulness of the framework. Lifetime user satisfaction under static and dynamic design strategies is compared over simulated user preferences under a range of conditions. Results suggest that RL-based strategies outperform static strategies when user preferences are dynamic, despite starting with significantly less initial information. Among the RL methods, upper-confidence-bound policies led to higher user satisfaction on average. These results suggest that further investigation of RL-based design strategies is warranted for situations with potentially dynamic preferences.
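The abstract does not spell out the simulation, so the following is a minimal, hypothetical Python sketch of the kind of comparison it describes: a UCB1 bandit choosing among discrete ankle-stiffness settings for a simulated user whose preferred stiffness drifts over time, measured against a static design fixed at the user's initial preference. The stiffness levels, drift model, and satisfaction function are invented for illustration and are not the authors' actual setup.

    # Minimal sketch (not the paper's implementation): UCB1 vs. a static
    # design under a drifting simulated user preference. All values are
    # illustrative assumptions.
    import math
    import random

    STIFFNESS_LEVELS = [2.0, 4.0, 6.0, 8.0, 10.0]  # candidate settings (assumed units)
    HORIZON = 5000        # number of simulated use sessions
    DRIFT_RATE = 0.002    # per-session drift of the user's preferred stiffness

    def satisfaction(setting, preferred):
        """Simulated user feedback: higher when the setting is near the
        (unobserved) preferred stiffness, plus observation noise."""
        return -abs(setting - preferred) + random.gauss(0.0, 0.5)

    def run_ucb():
        counts = [0] * len(STIFFNESS_LEVELS)   # times each setting was tried
        means = [0.0] * len(STIFFNESS_LEVELS)  # running mean reward per setting
        preferred = 5.0                        # user's initial preferred stiffness
        total = 0.0
        for t in range(1, HORIZON + 1):
            if t <= len(STIFFNESS_LEVELS):
                arm = t - 1  # play each setting once before applying UCB
            else:
                # UCB1: mean reward plus an exploration bonus that shrinks
                # as a setting is sampled more often.
                arm = max(
                    range(len(STIFFNESS_LEVELS)),
                    key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]),
                )
            reward = satisfaction(STIFFNESS_LEVELS[arm], preferred)
            counts[arm] += 1
            means[arm] += (reward - means[arm]) / counts[arm]  # incremental mean
            total += reward
            preferred += DRIFT_RATE  # preference slowly drifts upward
        return total / HORIZON

    def run_static():
        # Static strategy: commit at design time to the setting closest to
        # the user's initial preference, then never adapt.
        preferred = 5.0
        fixed = min(STIFFNESS_LEVELS, key=lambda s: abs(s - preferred))
        total = 0.0
        for _ in range(HORIZON):
            total += satisfaction(fixed, preferred)
            preferred += DRIFT_RATE
        return total / HORIZON

    if __name__ == "__main__":
        random.seed(0)
        print(f"UCB mean satisfaction:    {run_ucb():.3f}")
        print(f"Static mean satisfaction: {run_static():.3f}")

Note that vanilla UCB1 assumes stationary rewards; for strongly drifting preferences, discounted or sliding-window UCB variants are the usual remedies, and the policies studied in the paper may differ from this sketch.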