Considering caring as a safeguard in artificial intelligence
Russell Suereth
New Techno-Humanities, Vol. 3, Issue 2 (December 2023), pp. 135–139
DOI: 10.1016/j.techum.2024.01.002
Available at: https://www.sciencedirect.com/science/article/pii/S2664329424000025
This research considers whether caring can be designed into an artificial intelligence system as a safeguard. Films, books, and AI researchers alike suggest that a superintelligence could harm humans in devastating ways. Safeguards exist to keep such harmful situations from arising, but safeguards in AI are challenging to design. This article asks whether caring can serve as such a safeguard. It examines caring in the human world and how it has been vital to our existence, considers what caring could look like in AI, and discusses how we might begin to design care into these systems. It also provides an overview of the LIDA cognitive architecture as a model for designing care in AI systems. The article employs a methodology grounded in a caring frame of mind and a caring environment for work and research. It contributes to current research by raising awareness of care as a safeguard and by establishing an initial step toward designing care into AI. It shows that care is an essential aspect of our interactions with the world and that care can serve as a safeguard in AI.