{"title":"Meta-Regulation: An ideal alternative to the primary responsibility as the regulatory model of generative AI in China","authors":"Huijuan Dong , Junkai Chen","doi":"10.1016/j.clsr.2024.106016","DOIUrl":null,"url":null,"abstract":"<div><p>Generative AI with stronger responsiveness and emergent abilities has triggered a global boom and is facing challenges such as data compliance risks during the pretraining process and risks of generating fake information, which has raised concerns among global regulatory authorities. The European Union, United States, United Kingdom, and other countries and regions are gradually establishing risk-based, scenario-based, and outcome-based governance models for generative AI. China recently introduced new regulations for the management of generative AI, which adopt a governance model focusing on generative AI service providers. It suggests that China is continuing the principle of primary responsibility in Internet governance, which encompasses legal responsibility, contractual obligations, and ethical responsibility. However, the governance model based on primary responsibility emphasizes the accountability of generative AI model service providers, with relatively limited regulation on other important entities such as users and large-scale dissemination platforms, which may not be conducive to achieving China's regulatory goals for the AI industry. In comparison, the Meta-Regulation model could be an ideal alternative for China. As a classic theory explaining the public-private relationship, the ‘Meta-Regulation’ aligns with the generative AI governance requirements. Based on the Meta-Regulation theory, the governance of generative AI in China should move towards a direction of emphasizing safety, transparency, collaborative governance, and accountability. 
In line with this, it is necessary to include users and large-scale dissemination platforms within the regulatory scope and establish overarching governance objectives that ensure the responsible distribution of duties among stakeholders, with regulatory authorities assuming ultimate oversight responsibility and technical coordination. At the level of specific improvement measures, it is possible to integrate the three stages of model development, usage, and content dissemination of generative AI. During the model development stage, generative AI providers have specific transparency obligations. In the usage stage, a self-regulatory system centered around platform autonomy should be constructed. In the content dissemination stage, the proactive notification obligations of the dissemination platforms should be clearly defined. Additionally, the enforcement of technical interoperability requirements is necessary, thereby promoting the orderly development of generative AI applications.</p></div>","PeriodicalId":51516,"journal":{"name":"Computer Law & Security Review","volume":"54 ","pages":"Article 106016"},"PeriodicalIF":3.3000,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Law & Security Review","FirstCategoryId":"90","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0267364924000827","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 0
Abstract
Generative AI, with its stronger responsiveness and emergent abilities, has triggered a global boom while facing challenges such as data-compliance risks during pretraining and the risk of generating fake information, raising concerns among regulatory authorities worldwide. The European Union, the United States, the United Kingdom, and other countries and regions are gradually establishing risk-based, scenario-based, and outcome-based governance models for generative AI. China recently introduced new regulations for the management of generative AI that adopt a governance model focused on generative AI service providers. This suggests that China is continuing the principle of primary responsibility in Internet governance, which encompasses legal responsibility, contractual obligations, and ethical responsibility. However, a governance model based on primary responsibility emphasizes the accountability of generative AI service providers while imposing relatively limited regulation on other important entities, such as users and large-scale dissemination platforms, which may hinder the achievement of China's regulatory goals for the AI industry. In comparison, the Meta-Regulation model could be an ideal alternative for China. As a classic theory explaining the public-private relationship, Meta-Regulation aligns with the governance requirements of generative AI. Based on Meta-Regulation theory, the governance of generative AI in China should move towards emphasizing safety, transparency, collaborative governance, and accountability. Accordingly, users and large-scale dissemination platforms should be brought within the regulatory scope, and overarching governance objectives should be established to ensure the responsible distribution of duties among stakeholders, with regulatory authorities assuming ultimate oversight responsibility and technical coordination.
At the level of specific improvement measures, the three stages of generative AI (model development, usage, and content dissemination) can be addressed in an integrated way. During the model development stage, generative AI providers bear specific transparency obligations. In the usage stage, a self-regulatory system centered on platform autonomy should be constructed. In the content dissemination stage, the proactive notification obligations of dissemination platforms should be clearly defined. Additionally, technical interoperability requirements should be enforced, thereby promoting the orderly development of generative AI applications.
Journal introduction:
CLSR publishes refereed academic and practitioner papers on topics such as Web 2.0, IT security, identity management, ID cards, RFID, interference with privacy, Internet law, telecoms regulation, online broadcasting, intellectual property, software law, e-commerce, outsourcing, data protection, EU policy, freedom of information, computer security, and many other topics. In addition, it provides regular updates on European Union developments and national news from more than 20 jurisdictions across Europe and the Pacific Rim. It seeks papers within the subject area that display good-quality legal analysis and new lines of legal thought or policy development, going beyond mere description of the subject area, however accurate that may be.