Ergonomics in AI: Designing and Interacting With Machine Learning and AI

N. Lau, Michael Hildebrandt, M. Jeon
Ergonomics in Design: The Quarterly of Human Factors Applications
Published June 5, 2020. DOI: 10.1177/1064804620915238
Citations: 5
Machine learning and artificial intelligence (AI) enable new types of autonomous systems that are changing our personal and professional lives. While there are plenty of stories about machine learning delivering the promise of a better future, such as autonomous vehicles improving mobility, safety, and fuel efficiency, many examples point to grave risks, such as social media bots spreading false information and manipulating public opinion. As machine learning approaches ubiquity in industrial systems and consumer products, Human Factors must innovate to support users in coping with emerging autonomous capabilities. The rise of machine learning technologies poses serious questions that have been discussed in panels at recent Annual Meetings of the Human Factors and Ergonomics Society (Lau et al., 2018; Lau et al., 2019). How can we help users understand autonomous capabilities developed through supervised or unsupervised learning? What kinds of interactions could enhance cooperation between humans and machine learning algorithms? We must also take advantage of machine learning techniques to advance our own research and design science. How can we use machine learning to assess human states and capabilities? How should we help incorporate human sensing into machine learning algorithms? In this special issue, we embrace the broad spectrum of research and design efforts that investigate machine learning for improving the usability and safety of intelligent systems and consumer products. Our goal is to clarify the roles of Human Factors in contributing to a humanist perspective that considers the social, political, ethical, and cultural factors of incorporating AI into daily human–system interactions. At the same time, this special issue can only accommodate five articles, about one third of the submissions, after a rigorous peer-review process. 
So, what we have hoped to curate for readers is an intellectually stimulating, short exhibit of our discipline’s work in developing and applying next-generation AI. The first article in this special issue is a commentary by Hancock, who envisions work eventually being shared between self-evolving machines and humans. This vision challenges the Human Factors community to prepare for a future that requires designing interactions and user interfaces for machines whose behaviors we cannot fully anticipate and for work we do not yet know. The second article, by Zhang et al., speaks to the challenge of anticipating the consequences of machine learning when designing technology and making policy decisions. The authors use a speech recognition example to illustrate a violation of inclusivity in design. In their second example, they show how a loan policy aimed at supporting a disadvantaged group ultimately harms that group in the long run. These examples raise questions about how Human Factors professionals can engage in a data-driven design process and how to develop “explainable” AI that presents the true behavior of a machine learning model to the user. Anthropomorphism is a much talked-about concept for designing AI that humans can understand and interact with intuitively. Muller compares how deep neural networks and humans classify images, illustrating how their differences likely require appropriate interactivity to minimize the mismatch between how humans and AI perceive each other’s intelligence. The article highlights not only the need for interaction design but also the importance of understanding machine learning algorithms. The final two articles present empirical investigations of how ergonomic research can promote the appropriate use of machine learning tools. Wang et al. describe metrics of stability, robustness, and sensitivity that aid users in interpreting the prediction results of supervised learning algorithms. 
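To make the idea of such interpretability metrics concrete, here is a minimal, hypothetical sketch — not the actual metrics from Wang et al.’s article — that estimates the stability and sensitivity of a classifier’s prediction for a single input by perturbing its features with small Gaussian noise. The model, data, noise scale, and function name are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: two Gaussian blobs, one per class.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = LogisticRegression().fit(X, y)

def stability_and_sensitivity(model, x, noise_scale=0.1, n_samples=200):
    """Stability: fraction of perturbed inputs keeping the original label.
    Sensitivity: mean absolute change in the predicted class-1 probability."""
    base_label = model.predict([x])[0]
    base_prob = model.predict_proba([x])[0, 1]
    perturbed = x + rng.normal(0, noise_scale, (n_samples, len(x)))
    labels = model.predict(perturbed)
    probs = model.predict_proba(perturbed)[:, 1]
    stability = float(np.mean(labels == base_label))
    sensitivity = float(np.mean(np.abs(probs - base_prob)))
    return stability, sensitivity

# A point deep inside class 1 should be far more stable under perturbation
# than one near the decision boundary.
s_core, _ = stability_and_sensitivity(model, np.array([2.0, 2.0]))
s_edge, _ = stability_and_sensitivity(model, np.array([0.0, 0.0]))
print(f"core stability={s_core:.2f}, boundary stability={s_edge:.2f}")
```

Surfacing such scores alongside a prediction is one way a user interface could signal how much an individual model output should be trusted.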
The authors illustrate how effective visualization of those metrics can improve decision making. Gilbank et al. describe a qualitative study with ten medical professionals using a machine learning–driven toxicity prediction tool that draws on 10 years of historical data. The study presents medical professionals’ expectations of and perspectives on AI, along with user interface design considerations for promoting trust in, and ultimately use of, the machine learning system. We hope that these five articles contribute new thoughts and present exciting challenges. At the same time, we recognize that they represent only a small fraction of the Human Factors research and design issues related to machine learning and artificial intelligence. As we prepared this special issue, we realized how much progress is still needed to formulate design methods for machine learning. So, we agree with Hancock’s conclusion that the road ahead for ergonomics design of machine learning and artificial intelligence “promises to be a bumpy but exciting ride.”