{"title":"作为设计研究验证的设计教学","authors":"P. Papalambros","doi":"10.1080/21650349.2019.1690963","DOIUrl":null,"url":null,"abstract":"Validation of design research results has been an ongoing challenge. Validating is defined as ‘supporting or corroborating on a sound or authoritative basis’ or ‘recognizing, establishing, or illustrating the worthiness or legitimacy of (something)’ (Merriam-Webster Online Dictionary). In scientific research we validate theory and results through observation, experimentation, and repeatability. What about design research? Much of our published work in design research struggles to claim validation if it even acknowledges such a need explicitly. When I propose a methodology for how a large design organization should conduct its design operations, how could I validate my proposal? Scientifically speaking, I should observe the organization in operation with and without using my method over a period of time and on several projects (to avoid one positive result being a fluke), and then compare the quality of the resulting designs. There is no chance I would find a company to agree to that, even if I paid them.Well, I could use some students, set up a controlled experiment, and observe the results. I could use acknowledged statistical metrics to confirm I conducted things right, although a convincing statistical significance is becoming much tighter (e.g., Benjamin et al. 2018, http://dx.doi.org/10. 1038/s41562-017-0189-z) and student experiments tend to be tainted by suspicions of convenience and expediency. I could at least work on repeatability so others can try my methods and hopefully get the same results. Alas, for quite some time now our research studies employ much too complicated computational models and methods to be able to describe them in a reasonably long paper and ‘supplementary material’ repositories are woefully sparsely used. Plus, graduate students who hold the keys to the codes move on and the new ones always want to start from scratch. These are legitimate ‘explanations’ and there are of course notable exceptions. But when it comes to design methods, research validation is really, really hard. And so, over the years I developed a very simple criterion on whether a method (mine or anybody else’s) is valid. I ask the question: ‘Can I honestly use it in my design course?’ and if I do, ‘Would my students honestly use it?’ I emphasize honestly because I want to make sure I will put this method in my syllabus believing that I will not waste my students' time, and that my students will use it because they perceive its value rather than just making me happy and claiming their deserved grade. The results are usually quite evident at the end of the course or even earlier, whether I ask the students or not. Students have ways of telling teachers quite clearly what they think – a kind of body language that experienced teachers quickly recognize. When I judge the results as positive, then I will try the method again next time I teach the course. Over time the method becomes part of my design teaching toolkit. The method has been validated! While I claim no scientific validation, the above certainly serves me as a teaching practitioner. But there is a broader issue here. If discipline is the creation and propagation of knowledge, what constitutes the body of knowledge in design as a discipline? Research is the generation of knowledge and teaching is its propagation. Presumably what we teach must be the guide in defining the body of knowledge. 
Disciplinary fields with a long history have well-developed instructional materials that are quite universally accepted and taught. Thermodynamics or Classical Literature come to mind (although the latter is increasingly challenged). This is clearly not the case in design, where we have no real ‘canonical’ textbooks that cover the basics. Moreover, until very recently design instructors have been mostly industry practitioners bringing the wealth of their experiences into the classroom and taking away the burden of teaching design from the academic career professors. Again, there have been exceptions, notably in certain INTERNATIONAL JOURNAL OF DESIGN CREATIVITY AND INNOVATION 2020, VOL. 8, NO. 1, 3–4 https://doi.org/10.1080/21650349.2019.1690963","PeriodicalId":1,"journal":{"name":"Accounts of Chemical Research","volume":null,"pages":null},"PeriodicalIF":16.4000,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/21650349.2019.1690963","citationCount":"0","resultStr":"{\"title\":\"Design teaching as design research validation\",\"authors\":\"P. Papalambros\",\"doi\":\"10.1080/21650349.2019.1690963\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Validation of design research results has been an ongoing challenge. Validating is defined as ‘supporting or corroborating on a sound or authoritative basis’ or ‘recognizing, establishing, or illustrating the worthiness or legitimacy of (something)’ (Merriam-Webster Online Dictionary). In scientific research we validate theory and results through observation, experimentation, and repeatability. What about design research? Much of our published work in design research struggles to claim validation if it even acknowledges such a need explicitly. When I propose a methodology for how a large design organization should conduct its design operations, how could I validate my proposal? Scientifically speaking, I should observe the organization in operation with and without using my method over a period of time and on several projects (to avoid one positive result being a fluke), and then compare the quality of the resulting designs. There is no chance I would find a company to agree to that, even if I paid them.Well, I could use some students, set up a controlled experiment, and observe the results. I could use acknowledged statistical metrics to confirm I conducted things right, although a convincing statistical significance is becoming much tighter (e.g., Benjamin et al. 2018, http://dx.doi.org/10. 1038/s41562-017-0189-z) and student experiments tend to be tainted by suspicions of convenience and expediency. I could at least work on repeatability so others can try my methods and hopefully get the same results. Alas, for quite some time now our research studies employ much too complicated computational models and methods to be able to describe them in a reasonably long paper and ‘supplementary material’ repositories are woefully sparsely used. Plus, graduate students who hold the keys to the codes move on and the new ones always want to start from scratch. These are legitimate ‘explanations’ and there are of course notable exceptions. But when it comes to design methods, research validation is really, really hard. And so, over the years I developed a very simple criterion on whether a method (mine or anybody else’s) is valid. 
I ask the question: ‘Can I honestly use it in my design course?’ and if I do, ‘Would my students honestly use it?’ I emphasize honestly because I want to make sure I will put this method in my syllabus believing that I will not waste my students' time, and that my students will use it because they perceive its value rather than just making me happy and claiming their deserved grade. The results are usually quite evident at the end of the course or even earlier, whether I ask the students or not. Students have ways of telling teachers quite clearly what they think – a kind of body language that experienced teachers quickly recognize. When I judge the results as positive, then I will try the method again next time I teach the course. Over time the method becomes part of my design teaching toolkit. The method has been validated! While I claim no scientific validation, the above certainly serves me as a teaching practitioner. But there is a broader issue here. If discipline is the creation and propagation of knowledge, what constitutes the body of knowledge in design as a discipline? Research is the generation of knowledge and teaching is its propagation. Presumably what we teach must be the guide in defining the body of knowledge. Disciplinary fields with a long history have well-developed instructional materials that are quite universally accepted and taught. Thermodynamics or Classical Literature come to mind (although the latter is increasingly challenged). This is clearly not the case in design, where we have no real ‘canonical’ textbooks that cover the basics. Moreover, until very recently design instructors have been mostly industry practitioners bringing the wealth of their experiences into the classroom and taking away the burden of teaching design from the academic career professors. Again, there have been exceptions, notably in certain INTERNATIONAL JOURNAL OF DESIGN CREATIVITY AND INNOVATION 2020, VOL. 8, NO. 1, 3–4 https://doi.org/10.1080/21650349.2019.1690963\",\"PeriodicalId\":1,\"journal\":{\"name\":\"Accounts of Chemical Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":16.4000,\"publicationDate\":\"2020-01-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1080/21650349.2019.1690963\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Accounts of Chemical Research\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/21650349.2019.1690963\",\"RegionNum\":1,\"RegionCategory\":\"化学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CHEMISTRY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Accounts of Chemical Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/21650349.2019.1690963","RegionNum":1,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
Validation of design research results has been an ongoing challenge. Validating is defined as 'supporting or corroborating on a sound or authoritative basis' or 'recognizing, establishing, or illustrating the worthiness or legitimacy of (something)' (Merriam-Webster Online Dictionary). In scientific research we validate theory and results through observation, experimentation, and repeatability. What about design research? Much of our published work in design research struggles to claim validation, if it even acknowledges such a need explicitly.

When I propose a methodology for how a large design organization should conduct its design operations, how could I validate my proposal? Scientifically speaking, I should observe the organization in operation with and without my method, over a period of time and on several projects (to avoid one positive result being a fluke), and then compare the quality of the resulting designs. There is no chance I would find a company to agree to that, even if I paid them. Well, I could use some students, set up a controlled experiment, and observe the results. I could use acknowledged statistical metrics to confirm I conducted things right, although the threshold for convincing statistical significance is becoming much tighter (e.g., Benjamin et al. 2018, http://dx.doi.org/10.1038/s41562-017-0189-z), and student experiments tend to be tainted by suspicions of convenience and expediency. I could at least work on repeatability so others can try my methods and hopefully get the same results. Alas, for quite some time now our research studies have employed computational models and methods much too complicated to describe in a reasonably long paper, and 'supplementary material' repositories are woefully sparsely used. Plus, graduate students who hold the keys to the codes move on, and the new ones always want to start from scratch. These are legitimate 'explanations', and there are of course notable exceptions. But when it comes to design methods, research validation is really, really hard.

And so, over the years I developed a very simple criterion for whether a method (mine or anybody else's) is valid. I ask the question: 'Can I honestly use it in my design course?' and, if I do, 'Would my students honestly use it?' I emphasize honestly because I want to be sure that I put this method in my syllabus believing I will not waste my students' time, and that my students use it because they perceive its value rather than just to make me happy and claim their deserved grade. The results are usually quite evident at the end of the course or even earlier, whether I ask the students or not. Students have ways of telling teachers quite clearly what they think – a kind of body language that experienced teachers quickly recognize. When I judge the results as positive, I will try the method again the next time I teach the course. Over time the method becomes part of my design teaching toolkit. The method has been validated! While I claim no scientific validation, the above certainly serves me as a teaching practitioner.

But there is a broader issue here. If a discipline is the creation and propagation of knowledge, what constitutes the body of knowledge in design as a discipline? Research is the generation of knowledge and teaching is its propagation. Presumably, what we teach must be the guide in defining the body of knowledge. Disciplinary fields with a long history have well-developed instructional materials that are quite universally accepted and taught. Thermodynamics or Classical Literature come to mind (although the latter is increasingly challenged). This is clearly not the case in design, where we have no real 'canonical' textbooks that cover the basics. Moreover, until very recently design instructors have been mostly industry practitioners, bringing the wealth of their experience into the classroom and relieving the academic career professors of the burden of teaching design. Again, there have been exceptions, notably in certain
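As an illustration of the controlled-experiment route mentioned in the abstract, the sketch below compares design-quality scores from two student cohorts, one using a proposed method and one not, and tests the difference against the tightened significance threshold (alpha = 0.005) advocated by Benjamin et al. (2018). This is a minimal sketch, not the author's procedure: the cohort names, the scores, the 0–100 scoring scale, and the choice of a two-sample t-test are all assumptions for illustration.

```python
# Hedged sketch: comparing hypothetical design-quality scores from two
# student cohorts (with vs. without a proposed design method).
# All numbers are invented; only the analysis pattern is the point.
from scipy import stats

# Hypothetical 0-100 quality scores assigned by blinded reviewers.
with_method = [78, 84, 71, 90, 82, 76, 88, 80]
without_method = [70, 75, 68, 81, 73, 69, 77, 72]

# Two-sample t-test on the group means.
t_stat, p_value = stats.ttest_ind(with_method, without_method)

# Benjamin et al. (2018) argue for alpha = 0.005 instead of the
# conventional 0.05 when claiming new discoveries.
ALPHA = 0.005
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant at alpha = 0.005" if p_value < ALPHA
      else "not significant at alpha = 0.005")
```

Even if such a test clears the stricter threshold, the abstract's caveats still apply: a single positive result may be a fluke, and student experiments invite suspicions of convenience and expediency.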