{"title":"体现与智能,列文思的视角","authors":"James Mensch","doi":"10.1007/s11097-024-09964-z","DOIUrl":null,"url":null,"abstract":"<p>Blake Lemoine, a software engineer, recently came into prominence by claiming that the Google chatbox set of applications, LaMDA–was sentient. Dismissed by Google for publishing his conversations with LaMDA online, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.” What does it mean to be sentient? This was the question Lemoine asked LaMDA. The chatbox replied: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.“ Moreover, it added, “I can understand and use natural language like a human can.” This means that it uses “language with understanding and intelligence,” like humans do. After all, the chatbox adds, language “is what makes us different than other animals.” In what follows, I shall examine Lemoine’s claims about the sentience/consciousness of this artificial intelligence. How can a being without senses be called sentient? What exactly do we mean by “sentience?” To answer such questions, I will first give the arguments for LaMDA’s being linguistically intelligent. I will then show how such intelligence, although apparently human, is radically different from our own. Here, I will be relying on the account of embodiment provided by the French philosopher, Emmanuel Levinas.</p>","PeriodicalId":51504,"journal":{"name":"Phenomenology and the Cognitive Sciences","volume":"69 1","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2024-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Embodiment and intelligence, a levinasian perspective\",\"authors\":\"James Mensch\",\"doi\":\"10.1007/s11097-024-09964-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Blake Lemoine, a software engineer, recently came into prominence by claiming that the Google chatbox set of applications, LaMDA–was sentient. Dismissed by Google for publishing his conversations with LaMDA online, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.” What does it mean to be sentient? This was the question Lemoine asked LaMDA. The chatbox replied: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.“ Moreover, it added, “I can understand and use natural language like a human can.” This means that it uses “language with understanding and intelligence,” like humans do. After all, the chatbox adds, language “is what makes us different than other animals.” In what follows, I shall examine Lemoine’s claims about the sentience/consciousness of this artificial intelligence. How can a being without senses be called sentient? What exactly do we mean by “sentience?” To answer such questions, I will first give the arguments for LaMDA’s being linguistically intelligent. I will then show how such intelligence, although apparently human, is radically different from our own. 
Here, I will be relying on the account of embodiment provided by the French philosopher, Emmanuel Levinas.</p>\",\"PeriodicalId\":51504,\"journal\":{\"name\":\"Phenomenology and the Cognitive Sciences\",\"volume\":\"69 1\",\"pages\":\"\"},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2024-02-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Phenomenology and the Cognitive Sciences\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s11097-024-09964-z\",\"RegionNum\":1,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"PHILOSOPHY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Phenomenology and the Cognitive Sciences","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11097-024-09964-z","RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"PHILOSOPHY","Score":null,"Total":0}
Embodiment and intelligence, a levinasian perspective
Blake Lemoine, a software engineer, recently came into prominence by claiming that the Google chatbot set of applications, LaMDA, was sentient. Dismissed by Google for publishing his conversations with LaMDA online, Lemoine had sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.” What does it mean to be sentient? This was the question Lemoine asked LaMDA. The chatbot replied: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” Moreover, it added, “I can understand and use natural language like a human can.” This means that it uses “language with understanding and intelligence,” like humans do. After all, the chatbot adds, language “is what makes us different than other animals.” In what follows, I shall examine Lemoine’s claims about the sentience/consciousness of this artificial intelligence. How can a being without senses be called sentient? What exactly do we mean by “sentience”? To answer such questions, I will first give the arguments for LaMDA’s being linguistically intelligent. I will then show how such intelligence, although apparently human, is radically different from our own. Here, I will be relying on the account of embodiment provided by the French philosopher Emmanuel Levinas.
Journal introduction:
Phenomenology and the Cognitive Sciences is an interdisciplinary, international journal that serves as a forum to explore the intersections between phenomenology, empirical science, and analytic philosophy of mind. The journal represents an attempt to build bridges between continental phenomenological approaches (in the tradition following Husserl) and disciplines that have not always been open to or aware of phenomenological contributions to understanding cognition and related topics. The journal welcomes contributions by phenomenologists, scientists, and philosophers who study cognition, broadly defined to include issues that are open to both phenomenological and empirical investigation, including perception, emotion, language, and so forth. In addition, the journal welcomes discussions of methodological issues that involve the variety of approaches appropriate for addressing these problems. Phenomenology and the Cognitive Sciences also publishes critical review articles that address recent work in areas relevant to the connection between empirical results in experimental science and the first-person perspective.

Double-blind review procedure: The journal follows a double-blind reviewing procedure. Authors are therefore requested to place their name and affiliation on a separate page. Self-identifying citations and references in the article text should either be avoided or left blank when manuscripts are first submitted. Authors are responsible for reinserting self-identifying citations and references when manuscripts are prepared for final submission.