Beyond ‘AI boosterism’

Karen Yeung
IPPR Progressive Review (Q4, Social Sciences)
DOI: 10.1111/newe.12400 · Published: 2 October 2024
Full text (PDF): https://onlinelibrary.wiley.com/doi/epdf/10.1111/newe.12400

Abstract


‘AI boosterism’ has characterised British industrial policy for digital and data-enabled technologies under successive Conservative administrations, intended to ‘turbocharge’ artificial intelligence (AI) sector growth. Although former prime minister Rishi Sunak believed that public trust in AI was essential, as evident in his initiatives championing AI safety (such as the AI Safety Summit at Bletchley Park in November 2023), he retained an unwavering belief that existing laws, complemented by voluntary cooperation between industry and government, would address AI's threats and harms via technical fixes.

Such ‘techno-solutionist’ fantasies have hitherto dominated digital sector policy, in which AI is viewed as the key to solving society's most intractable ills. This view is rooted in the pernicious fallacy that ‘regulation stifles innovation’ and that regulation must therefore be strenuously avoided if the British economy is to thrive. AI boosterism accepts at face value the bold marketing claims of software vendors, naively believing that if an AI system can perform a given function, it will necessarily deliver its promised benefits in real-world settings once implemented.[1] It also ignores the already-evident adverse impacts of AI systems, including ever-growing instances of ‘algorithmic injustice’, in which the use of automated systems has resulted in human rights violations, particularly when public authorities use them to (a) inform (or automate) decisions about whether individuals are entitled to benefits and services, or (b) subject individuals to unwanted investigation or detention on the basis that they have been computationally evaluated as ‘risky’.[2] Likewise, it conveniently ignores the systemic adverse impacts of algorithmic systems, including their ecological toll, the deepening concentration of economic power, and the erosion of democracy, as ever-more powerful tools are harnessed to propagate misinformation, exploitation and pervasive surveillance.[3] AI sector growth cannot be justified at all costs, and whether bigger implies ‘better’ demands consideration of ‘better for whom?’ and ‘with respect to what norms, goals and collective values?’.

To deliver on its stated desire to ‘make AI work for everyone’,[4] the new Labour government must change tack. It needs to abandon these false narratives and magical thinking and establish a regulatory governance framework that serves the public interest. In this article, I explain what this framework should consist of, beginning by clarifying what regulation is for and why it matters.

In constructing legal guardrails, the new government must focus on how and why digital systems can produce adverse impacts. Algorithmic systems can have capabilities far beyond those imaginable when most of our legal rules and frameworks were established. Legislators must now grapple with their unique risks, whether algorithms take a simple, rule-based form or rely on deep learning techniques, particularly when deployed in ways that have safety-critical or rights-critical consequences: in other words, when they have ‘high-stakes’ implications. This is precisely the focus of the collaborative analysis undertaken by an informal group of UK sector regulators, the Digital Regulation Cooperation Forum (DRCF).[6] Academic research demonstrates that the adverse impacts of algorithmic systems arise outside the remit of existing sectoral regulators, are often unintended, and many are opaque, particularly those resulting in violations of human rights and/or the corrosion of our social, political and cultural fabric.[7] Yet, contrary to the assumption in the UK's 2023 white paper on AI, expecting the DRCF to provide comprehensive and effective oversight simply by publishing a set of high-level, non-binding ‘motherhood and apple-pie’ principles concerning transparency and fairness, without additional legislative measures, information-gathering powers or resources, defies common sense.[8]

Establishing a trustworthy basis for algorithmic systems requires a comprehensive oversight regime that supports cross-sectoral coordination and provides a clear, stable framework of legally binding rules. Those rules should be monitored and enforced by an independent, skilled and properly resourced regulator, accountable to parliament and equipped with information-gathering and investigative powers to apply them in a fair, transparent and effective manner, without fear or favour.

The portrayal of legal regulation as the enemy of innovation, peddled by those in thrall to techno-solutionism, fails to acknowledge that contemporary pharmaceutical regulation, despite many shortcomings, enabled the development and rollout of safe, effective and affordable Covid-19 vaccines at unprecedented speed. This is a powerful, recent demonstration that effective regulatory oversight is legally, institutionally and politically possible in the service of the public good. In earlier decades, before clear legal frameworks and effective independent oversight and enforcement existed to ensure that new drugs were both safe and efficacious, snake-oil salesmen were commonplace. It was not until the devastation wrought by thalidomide that the legal and institutional reforms needed to ensure the efficacy and safety of medicines were put in place and taken seriously.

History indicates that if we wish to facilitate the development of socially beneficial yet powerful new technologies, we must establish a legitimate and effective regulatory framework to protect people and communities from harms and wrongs. A clear-eyed vision of an AI-enabled Britain would begin by recognising that we are only at the beginning of the AI ‘revolution’. Its promise to make our lives ‘better’ remains marketing rhetoric, without robust evidence demonstrating how, and how much, AI and other forms of digital automation actually serve the needs of people in specific contexts and circumstances, and at what cost.

Hence, our challenge is to learn how to build and collaborate effectively with machines in ways that enhance human flourishing, while taking account of their costs: not merely the direct cost of adoption, but also those less visible but perhaps more serious adverse impacts. Instead of AI boosterism and a misguided belief in magic bullets,[20] the UK needs a clear and effective regulatory governance framework that establishes and nourishes a trustworthy ecosystem, thereby fostering the development and sensitive implementation of automated systems that deliver real-world benefits to all its people.
