We, the Robots? Regulating Artificial Intelligence and the Limits of the Law by SIMON CHESTERMAN [Cambridge University Press, Cambridge, 2021, 310pp, ISBN: 978-1-31-651768-0, £29.99 (h/bk)]
{"title":"We, the Robots? Regulating Artificial Intelligence and the Limits of the Law by SIMON CHESTERMAN [Cambridge University Press, Cambridge, 2021, 310pp, ISBN: 978-1-31-651768-0, £29.99 (h/bk)]","authors":"Ryan Abbott","doi":"10.1017/s0020589322000410","DOIUrl":null,"url":null,"abstract":"Simon Chesterman has published a bold and ambitious book. It surveys the challenges posed by artificial intelligence (AI) and provides regulators a road map for how best to engage with those challenges to improve public welfare. AI regulation is an important and timely subject. Even in the short time since the book’s publication in 2021, AI has improved significantly in terms of its capabilities and adoption. Consider, for instance, the case of self-driving vehicles which Chesterman uses to illustrate liability issues—in 2022, the company Cruise launched the first commercial self-driving car service in San Francisco. Chesterman also examines AI generating creative works and copyright implications—again in 2022, commercially valuable AI-generated works are now being made at scale thanks to systems like DALL⋅E 2. The view that it is premature to be regulating mindful of AI now appears Luddite. Chesterman makes a good case for why AI is worthy of special regulatory consideration. While AI has been around for decades, and other frontier technologies may also not fit seamlessly into existing governance frameworks, Chesterman argues that modern AI is disruptive due mainly to its speed, autonomy and opacity. For example, historically court filings, while public documents, were kept ‘practically obscure’ due to an overwhelming number of court filings and high search costs. AI now allows just about anyone to search these filings in moments. This has major practical implications for privacy, even though the underlying public nature of court filings has not changed. As another example, facial recognition in public spaces by law enforcement is an ancient practice. 
But the ability of AI simultaneously to track every person in a public space and use that information to determine someone’s political affiliations (based on the locations they visit and the purchases they make) has similar—and worrying —privacy implications. Chesterman examines how existing laws deal with AI, and how these laws might change. While most of the English literature on AI regulation is rooted in American and European approaches, Chesterman’s book usefully engages with Asian, and particularly Chinese and Singaporean, regulatory efforts. He argues that the primary responsibility for regulating AI must fall to State governments, which can do so by leveraging responsibility, personality and transparency. For instance, States must ensure appropriate responsibility for the acts and omissions of AI, which can involve special product liability rules, insurance schemes and preventing the outsourcing of liability. Chesterman argues against legal personality for AI systems, but notes that it may be necessary in the future depending on how technology evolves. He also engages with the explainability and transparency of AI systems and decision-making, and how these can be supplemented with tools like audits 272 International and Comparative Law Quarterly","PeriodicalId":47350,"journal":{"name":"International & Comparative Law Quarterly","volume":"72 1","pages":"272 - 273"},"PeriodicalIF":1.6000,"publicationDate":"2022-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International & Comparative Law Quarterly","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1017/s0020589322000410","RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Abstract
Simon Chesterman has published a bold and ambitious book. It surveys the challenges posed by artificial intelligence (AI) and provides regulators with a road map for how best to engage with those challenges to improve public welfare. AI regulation is an important and timely subject. Even in the short time since the book's publication in 2021, AI has improved significantly in terms of its capabilities and adoption. Consider, for instance, the case of self-driving vehicles, which Chesterman uses to illustrate liability issues: in 2022, the company Cruise launched the first commercial self-driving car service in San Francisco. Chesterman also examines AI-generated creative works and their copyright implications; again in 2022, commercially valuable AI-generated works are now being made at scale thanks to systems like DALL·E 2. The view that it is premature to regulate AI now appears Luddite. Chesterman makes a good case for why AI is worthy of special regulatory consideration. While AI has been around for decades, and other frontier technologies may also not fit seamlessly into existing governance frameworks, Chesterman argues that modern AI is disruptive due mainly to its speed, autonomy and opacity. For example, court filings, while public documents, were historically kept 'practically obscure' by their overwhelming number and high search costs. AI now allows just about anyone to search these filings in moments. This has major practical implications for privacy, even though the underlying public nature of court filings has not changed. As another example, facial recognition in public spaces by law enforcement is an ancient practice. But the ability of AI simultaneously to track every person in a public space and use that information to determine someone's political affiliations (based on the locations they visit and the purchases they make) has similarly worrying privacy implications.
Chesterman examines how existing laws deal with AI, and how these laws might change. While most of the English-language literature on AI regulation is rooted in American and European approaches, Chesterman's book usefully engages with Asian, and particularly Chinese and Singaporean, regulatory efforts. He argues that the primary responsibility for regulating AI must fall to State governments, which can do so by leveraging responsibility, personality and transparency. For instance, States must ensure appropriate responsibility for the acts and omissions of AI, which can involve special product liability rules, insurance schemes and preventing the outsourcing of liability. Chesterman argues against legal personality for AI systems, but notes that it may be necessary in the future depending on how technology evolves. He also engages with the explainability and transparency of AI systems and decision-making, and how these can be supplemented with tools like audits.
Journal description:
The International & Comparative Law Quarterly (ICLQ) publishes papers on public and private international law, comparative law, human rights and European law, and is one of the world's leading journals covering all these areas. Since it was founded in 1952 the ICLQ has built a reputation for publishing innovative and original articles within the various fields, and also spanning them, exploring the connections between the subject areas. It offers both academics and practitioners wide topical coverage, without compromising rigorous editorial standards. The ICLQ attracts scholarship of the highest standard from around the world, which contributes to the maintenance of its truly international frame of reference. The 'Shorter Articles and Notes' section enables the discussion of contemporary legal issues and 'Book Reviews' highlight the most important new publications in these various fields. The ICLQ is the journal of the British Institute of International and Comparative Law, and is published by Cambridge University Press.