The dangers of generative artificial intelligence
Luke Tredinnick, Claire Laybats
Business Information Review, 40(1), pp. 46-48. Published 2023-06-01. DOI: 10.1177/02663821231183756
2023 looks set to become the year that anxieties about the risks posed by artificial intelligence (AI) escape from their safe confines in techno-sociological debates into the wider public consciousness. Barely a week has gone by without a new warning about the threat of AI and the potentially dire consequences of emergent technology. In the last month alone, dozens of stories have appeared in the world’s press. Tech leaders and academics issued a statement warning that AI poses a risk of human extinction and should be treated as “a global priority alongside other societal-scale risks such as pandemics and nuclear war” (Centre for AI Safety, 2023). Professor Stuart Russell was reported as stating that “if we don’t control our own civilisation, we have no say in whether we continue to exist” (Taylor, 2023). An article in BMJ Global Health warned of the existential threat of AI (Federspiel et al., 2023). Geoffrey Hinton, widely described as the “godfather of AI”, warned of a “serious danger that we’ll get things smarter than us fairly soon and that these things might get bad motives and take control” (Allyn, 2023). An AI drone in a simulated trial was reported to have developed “highly unexpected strategies”, including “killing” its operator to allow it to complete its mission (Guardian, 2023). On top of this have been hundreds of opinion articles and other news items addressing the risk of AI. The average news junkie could be forgiven for thinking that the technological singularity, the longstanding fear of runaway AI-driven technological advancement, is only weeks or months away. This sudden panic about the future of AI is in large part a product of the success of large language models and emerging forms of generative AI, particularly in music and image creation.
There is something uncanny about the apparent human level of understanding displayed by the latest generative AI technologies, which can respond with remarkable prescience to often quite vague requests and generate apparently spontaneous and humanly meaningful outputs. Interacting with ChatGPT can give the impression of communicating with a conscious and self-aware machine. But this experience reveals more about what it means to be human than it does about the abilities of technology. We are predisposed to perceive motivation and understanding in the acts of others, and generative AI has reached the point where it can occasionally trick us into seeing motivations that are not there. Fortunately, the current threat of AI is vastly overstated, and the technological singularity remains a distant theoretical danger. We are not significantly closer to the emergence of Artificial General Intelligence, and however uncanny the experience of interacting with large language models may be, they remain resolutely dumb, lacking anything that can be interpreted as true understanding. But while the current generation of AI is not about to develop dangerous autonomous behaviours, it does present new challenges for regulation, law, and professional practice. These challenges include:
Business Information Review: Economics, Econometrics and Finance (miscellaneous)
CiteScore
2.50
Self-citation rate
0.00%
Articles published
22
Journal description:
Business Information Review (BIR) is concerned with information and knowledge management within organisations. To be successful, organisations need to gain maximum value from exploiting relevant information and knowledge. BIR deals with information strategies and operational good practice across the range of activities required to deliver this information dividend. The journal aims to highlight developments in the economic, social and technological landscapes that will impact the way organisations operate. BIR also provides insights into the factors that contribute to individual professional success.