Introduction
In a society where ever more aspects of daily life are shaped by algorithms and machine learning, it is important to consider how these technologies should be regulated so that they do not degrade quality of life. As of October 2023, more than forty countries had established or were actively creating AI regulation (Whyman, 2023). While some legislative bodies are opting for a loose set of guidelines for AI developers, many others are creating strict legislation governing the development and use of AI. Among the latter are the European Union and China, both of which have chosen strict regulatory regimes over loose guidance. Both are poised to influence the establishment of AI regulation in other regions, but they differ greatly in how that regulation is approached.
The regulatory bodies of both the EU and China are following their established patterns in developing AI legislation. Predictably, China’s AI regulations focus on information control and the rigorous defence of government priorities. The European Union, for its part, regulates through the lens of the EU Charter of Fundamental Rights, and the AI Act is no exception, focusing heavily on the ethical impact of technology on the general population. Regardless of approach, both regulatory bodies believe that AI has the power to fundamentally change the course of society, and both have taken pains to keep legislative pace with technological development.
Beijing’s AI Regulatory Regime
Various state institutions, such as the Cyberspace Administration of China (CAC) and the Ministry of Science and Technology (MOST), working in conjunction with academics and bureaucrats, have been building the semblance of an AI regulatory regime in China since 2017, when the New Generation AI Development Plan was published. This plan laid out China’s timetable for developing AI governance regulations through 2030, as well as its hope of encouraging the domestic development of AI technology (China State Council, 2017). Since then, China’s AI regulatory regime has grown in leaps and bounds, making the country what many describe as a ‘first mover’ in AI governance (Sheehan, 2023; Roberts and Hine, 2023; MacCarthy, 2023). A ‘first mover’ is exactly what it sounds like: the first to establish a precedent that others may follow.
This ‘first mover advantage’, meaning the influence potentially gained from setting regulatory precedent, rests specifically on generative AI (Sheehan, 2023; Roberts and Hine, 2023). Unlike the EU approach, a horizontal umbrella of protections meant to cover every possible instance, the Chinese regulatory approach to AI is vertical. By regulating the tool itself and setting standards for those who develop and distribute generative AI, China hopes to mitigate the harms and enhance the benefits of AI through a much narrower lens (Sheehan, 2023). Such an approach often allows for faster regulation because it does not attempt to cover broad swathes of legislative territory at once, instead starting at the root and working outward.
The speed of regulation that China has achieved is the product of this vertical legislative focus and a willingness to build through successive iterations of regulation over time. Between 2017 and 2023, nine regulatory documents relating to AI governance were released (Sheehan, 2023). Each builds upon the last, making this an iterative regulatory structure. With each iteration, China’s AI regulations have grown in scope and finesse, enhancing the state’s ability to respond quickly to new developments in AI. If aspects were overlooked in a previously released document, officials can quickly pivot and publish a new iteration to fill the gap (Forbes, 2023). In addition, the officials and interested parties working on these documents have grown more competent with every passing year and, in turn, publish increasingly relevant and useful regulatory documentation (Sheehan, 2023).
Important Regulatory Documentation
Because Beijing’s AI regulatory regime builds upon itself, the most impactful document tends to be the most recent iteration. This does not dismiss the importance of the previous regulations; rather, the newest documentation calls back to them through reference. In June 2023, China’s State Council announced that it would prepare a new, comprehensive, horizontal regulatory measure to encompass the previous vertical iterations (Sheehan, 2023). That announcement makes the existing documentation all the more important to study in order to understand the likely direction of the forthcoming law. To this effect, there are three documents of note to keep in mind before China’s draft AI law goes to the National People’s Congress.
The first is the “Provisions on the Management of Algorithmic Recommendations in Internet Information Services,” published in December 2021 (Cyberspace Administration of China, 2021; Webster et al., 2021). These measures were prompted by concerns about algorithmic control over online content and its dissemination. For a government that takes great care to maintain an online environment emphasising Chinese values and suppressing immoral content, this was a natural priority. The provisions require platforms to intervene on specific topics of government concern, “uphold mainstream value orientations,” and “actively transmit positive energy” (Sheehan, 2023; MacCarthy, 2023). Unlike many algorithm-driven services at the time, the legislation also grants individual users the right to turn off algorithmic recommendation services, to delete the data used to generate recommendations, and to receive disclaimers about the impact recommendations may have on their interests (Sheehan, 2023; Cyberspace Administration of China, 2021; Webster et al., 2021). The most important aspect of this document, however, is the creation of the algorithm registry. The registry requires developers to disclose how their products are trained and deployed for public use, to disclose which datasets they use for training, and to complete a security self-assessment report (Sheehan, 2023; Cyberspace Administration of China, 2021; Webster et al., 2021).
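The registry’s disclosure requirements can be pictured as a structured filing. The following Python sketch is purely illustrative: all field names, the example service, and the completeness check are assumptions for exposition, not the CAC’s actual schema or filing process.

```python
# Illustrative sketch of the kinds of disclosures the algorithm registry
# requires of developers under the 2021 provisions: training and deployment
# methods, training datasets, and a security self-assessment.
# All field names and values are hypothetical, NOT the CAC's real schema.

registry_filing = {
    "service_name": "example-recommendation-service",  # hypothetical service
    "algorithm_type": "personalised recommendation",
    "training_method": "collaborative filtering",      # disclosed training method
    "deployment_context": "short-video feed ranking",  # disclosed deployment
    "training_datasets": [                             # datasets used for training
        "user-interaction-logs-2021",
        "licensed-content-catalogue",
    ],
    "security_self_assessment_completed": True,        # required self-assessment report
    "user_opt_out_supported": True,                    # users may disable recommendations
}

# In this sketch, a filing counts as complete only if every core
# disclosure named in the provisions is present and non-empty.
required = [
    "training_method",
    "training_datasets",
    "security_self_assessment_completed",
]
is_complete = all(registry_filing.get(key) for key in required)
print(is_complete)  # True
```

The point of the sketch is simply that the registry turns previously private engineering choices (methods, datasets, security posture) into items a developer must affirmatively disclose.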
Within the following year, the “Provisions on the Administration of Deep Synthesis Internet Information Services” were published (Cyberspace Administration of China, 2022; China Law Translate, 2023). These regulations focus on the dissemination of ‘synthetic content’, a term popularised by Tencent to replace the more politically charged ‘deepfake’ (Sheehan, 2023). Like algorithm providers, deep synthesis services are required to register their products with the same set of information disclosures. Also like the earlier provisions, synthetically generated content must conform to a similar set of information controls so as not to “disturb economic and social order” or confuse and mislead the public. To this end, real-name registration and consent to the editing of personal information are required (Sheehan, 2023; Cyberspace Administration of China, 2022; China Law Translate, 2023).
Finally, in the most recent iteration of China’s AI regulations, the “Measures for the Management of Generative Artificial Intelligence Services” were published in July 2023 and became enforceable in August of the same year (Cyberspace Administration of China, 2023; Huang et al., 2023). This iteration focuses on generative AI, including generated text, which the previous two documents had not directly addressed. Like the others, generative models must register their services and follow government information controls (Cyberspace Administration of China, 2023; Huang et al., 2023; Sheehan, 2023). This document also added requirements on the data used to train the AI: the April 2023 draft required developers to ensure the “truth, accuracy, objectivity, and diversity” of their training data. In the final version, this requirement was softened in recognition of how nearly impossible it would be to meet, and the measures were clarified to apply only to public-facing AI services (Sheehan, 2023; Roberts and Hine, 2023).
Brussels’ AI Regulatory Regime
Meanwhile, EU regulators were spurred into action on artificial intelligence governance in 2018. Emphasis on the need to regulate AI increased after a 2017 report by McKinsey & Company stated that the EU was falling behind its competitors in AI technology investment (McKinsey Global Institute, 2017). One such competitor was China, which had already published its intention to become a leader in AI development by 2030. Since the McKinsey report, EU regulation of AI has also been said to enjoy a ‘first mover’ advantage (Siegmann and Anderljung, 2022; Ulnicane, 2022). However, this advantage stems from very different sources than Beijing’s ‘first mover advantage’.
Between 2018 and 2023, the EU published seven documents relating to AI governance. In contrast to China’s iterative approach, most of these documents were not enforceable; they were meant for agenda setting rather than regulation. Although not regulatory in nature, they established the base values that would later appear in the EU AI Act, agreed in December 2023 (European Commission, 2023). The Act is horizontal in nature, aiming to regulate any and all instances of AI within the EU through this single document. Because of this horizontal scope, official regulation took considerably longer to produce, as every necessary piece of such a comprehensive document had to be considered.
Many have compared the significance of the AI Act to that of the EU’s GDPR, which became the model for data protection legislation in many countries. The comprehensiveness of the AI Act and the similarities of its development process to the GDPR’s mean that the EU, too, potentially holds a ‘first mover advantage’ in AI governance (Siegmann and Anderljung, 2022; Forbes, 2023). Additionally, the AI Act sets precedents for other nations on integrating human rights into AI governance (European Commission, 2018; Siegmann and Anderljung, 2022).
Important Regulatory Documentation
Shortly after the McKinsey report, the EU issued an April 2018 communication on ‘Artificial Intelligence for Europe’ (European Commission, 2018). This Commission communication established the regulatory agenda for what would become the AI Act, along with three goals to pursue before comprehensive regulation became feasible. The first was to build up the EU’s capacity in AI technology, in both the public and private sectors. The second was widespread preparation for the major socio-economic change that the expansion of AI would bring. The third, in distinct contrast to China’s regulatory framework, was that any regulation of AI should ensure that AI developed within the EU respects human rights (Ulnicane, 2022). This communication was followed by several documents further exploring the ethical implications of AI legislation and building on the 2018 goals (Ulnicane, 2022).

The main focus of EU AI regulatory documentation, however, is the AI Act itself, agreed in late 2023. The Act was designed to be horizontal, gathering all instances of AI regulation into one strict document. AI systems are sorted into four risk categories, ranging from ‘Unacceptable Risk’ to ‘Minimal Risk’, each with its own guidelines, regulatory practices, and fines (Ulnicane, 2022; European Commission, 2023).
The Title II category, which details AI practices considered an unacceptable risk and therefore prohibited, includes any AI that distorts behaviour by exploiting vulnerabilities, implements a ‘social score’ system for government use, or performs ‘real time’ remote biometric identification for government authorities in public spaces. These uses have very few exceptions that would be considered legal, and they are penalised heavily: the fine for deploying a prohibited system is 6% of global annual turnover or 30 million euros, whichever is higher. Title III follows, covering AI uses determined to be high-risk, such as non-public remote biometric identification, critical infrastructure management and operation, educational or employment decisions, access to public goods, law enforcement, cross-border travel, and judicial or democratic administration. These systems must complete a conformity assessment covering the risk management system, data requirements, technical documentation, record keeping, system transparency, human oversight, accuracy, robustness, cybersecurity, and post-market monitoring of each high-risk AI system. Violations of these requirements also carry a hefty fine of 4-6% of global turnover or 20-30 million euros, depending on the type of violation (Siegmann and Anderljung, 2022; European Commission, 2023).
The latter two categories are much less stringent, requiring only notification or voluntary compliance. Title IV, the Limited Risk category, covers AI systems that interact with ‘natural persons’, recognise emotions, categorise people based on biometrics, or generate or manipulate content to mimic reality. Because they are considered limited risk, these systems carry only an obligation to notify users that they are interacting with an AI system. If users are not notified, the developer can be fined up to 4% of global turnover or 20 million euros. Finally, the minimal risk category under Title IX includes all systems that do not fit a previous category. There are no fines or penalties here, given that the category is meant for non-threatening AI systems; such systems are only asked to comply with voluntary codes of conduct (Siegmann and Anderljung, 2022; European Commission, 2023).
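Across all tiers, the penalty structure described above reduces to a simple calculation: a percentage of global annual turnover or a fixed euro floor, whichever is higher. The sketch below encodes only the figures cited in this article (6%/€30M for prohibited uses, 4-6%/€20-30M for high-risk violations, 4%/€20M for notification failures, using the lower bounds for the high-risk range); it is a simplification for illustration, not a restatement of the Act’s actual penalty provisions.

```python
# Simplified sketch of the AI Act penalty tiers described in the text.
# A fine is the HIGHER of (a) a share of global annual turnover and
# (b) a fixed floor in euros. Figures follow those cited in the article;
# this is an illustration, not legal logic from the final Act.

PENALTY_TIERS = {
    "prohibited":   (0.06, 30_000_000),  # Title II: prohibited practices
    "high_risk":    (0.04, 20_000_000),  # Title III: lower bound of 4-6% / 20-30M
    "notification": (0.04, 20_000_000),  # Title IV: failure to notify users
}

def fine(tier: str, global_turnover_eur: float) -> float:
    """Return the applicable fine: max(percentage of turnover, fixed floor)."""
    pct, floor = PENALTY_TIERS[tier]
    return max(pct * global_turnover_eur, floor)

# A large firm (EUR 1 billion turnover) deploying a prohibited system
# is fined on the percentage, since 6% of turnover exceeds the floor:
print(fine("prohibited", 1_000_000_000))  # 60000000.0
# A small firm (EUR 10 million turnover) hits the fixed floor instead:
print(fine("prohibited", 10_000_000))     # 30000000
```

The max-of-two design means the floor bites for small firms while the percentage scales the penalty for large ones, which is why the text can describe the fines as heavy at both ends of the market.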
Patterns and Comparison in Regimes
In comparison, these AI regulatory regimes have vastly different characters while covering much of the same ground. Where China regulates AI through a narrow generative-AI lens built up over several iterations, the EU groups all AI systems into one comprehensive categorisation system with sweeping requirements for all. Additionally, the EU’s focus on limiting government power in some respects, through its attention to human rights, is diametrically opposed to the protection of government priorities in Chinese regulation. The EU must implement regulations across a diverse set of states, requiring rules that can adapt to each situation as necessary; with this looser form of control, the EU needs strong regulation that can be interpreted to fit the circumstance. China, although it also governs a large territory, aims for more centralised control and a firmer regulatory regime than would be possible in the EU. Tighter control and a narrow focus on regulating AI developers therefore make more sense there, especially considering the interaction between state and economy in China.
While these regulatory regimes differ wildly, both follow general patterns of regulatory practice. China’s regulatory documents focus on building guidelines over time through iterative legislation that protects government priorities and uses, reduces government-defined forms of social harm, and is meant to culminate in comprehensive regulation that completes the governance regime. The EU’s focus is on building one single comprehensive regulatory document that can set an example for many nations under one banner, with respect for human rights considerations and limitations on government.
The two regimes nonetheless share several similarities, even if these differ in practice. Both make extensive use of expert opinion, testimony, and practice: the EU does so publicly through published communications and white papers, while China does so in private consultation when drafting the regulatory documents themselves. Both recognise that, if not strictly regulated, AI could profoundly change society for the worse through disinformation and manipulative content, even if their definitions of ‘worse’ differ. Additionally, both regimes place the regulatory burden on AI developers, which in both cases has been heavily criticised. Where China has recently eased this burden, particularly around training-data requirements, the EU has not.
Future Predictions and Conclusions
Both systems of legislation discussed here are too new to draw conclusions about their impact just yet. However, a few predictions can be made from the published regulations. For both, it will quickly become apparent where the rules are too constrictive of AI development, and the easiest way to observe this over time will be AI investment under the new regimes. The EU AI Act has already drawn criticism over its rules on training data (Siegmann and Anderljung, 2022): much of the data used to train AI comes from the internet, which does not guarantee the accuracy the Act requires. If there is a significant slowdown in AI development investment in the EU, it could mean that these regulations are pinching AI developers too tightly.
Additionally, it will be interesting to see which AI regulatory regime gains more traction among other states. The EU has strong regulatory power, and others often trust and follow its example in legislation (Siegmann and Anderljung, 2022). However, its sweeping regulation may not be replicable by all states, meaning that Chinese-style AI regulation may find a strong foothold among states looking for narrower rules that are more realistic to enforce. While both regulatory structures hold a ‘first mover advantage’ for different reasons, it is hard to say which will more strongly influence the growing development of AI regulation.
ABOUT THE AUTHOR
Leanne Voshell is a researcher focused on technology policy and the developing regulatory environment between China, the EU, and the USA. She graduated from Leiden University with her Research Master’s in 2022. Currently, she is looking for her start as a policy analyst and writes for European Guanxi. She can be found on LinkedIn.
This article was edited by Sardor Allayarov and Alice Colantoni.
BIBLIOGRAPHY
China Law Translate, 2022. Provisions on the Management of Algorithmic Recommendations in Internet Information Services [Online], China Law Translate. Available from: https://www.chinalawtranslate.com/en/algorithms/ (Accessed 15 February 2024).
China Law Translate, 2023. Provisions on the Administration of Deep Synthesis Internet Information Services [Online], China Law Translate. Available from: https://www.chinalawtranslate.com/en/deep-synthesis/ (Accessed 15 February 2024).
China State Council, 2017. Notice of the Development Plan for the New Generation of Artificial Intelligence [Online], Central People’s Government of the People’s Republic of China. Available from: https://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm (Accessed 15 February 2024).
Cyberspace Administration of China, 2021. Internet Information Service Algorithm Recommendation Management Regulations [Online], Central People’s Government of the People’s Republic of China. Available from: https://www.gov.cn/zhengce/zhengceku/2022-01/04/content_5666429.htm (Accessed 15 February 2024).
Cyberspace Administration of China, 2023. Measures for the Administration of Generative Artificial Intelligence Services [Online], Office of the Central Cyberspace Administration of China. Available from: http://www.cac.gov.cn/2023-04/11/c_1682854275475410.htm (Accessed 15 February 2024).
Cyberspace Administration of China, 2022. Provisions on the In-depth Synthesis Management of Internet Information Services [Online], Central People’s Government of the People’s Republic of China. Available from: https://www.gov.cn/zhengce/zhengceku/2022-12/12/content_5731431.htm (Accessed 15 February 2024).
European Commission, 2018. Artificial Intelligence for Europe [Online], EUR-Lex. Available from: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN (Accessed 15 February 2024).
European Commission, 2024. The AI act explorer [Online], EU Artificial Intelligence Act. Available from: https://artificialintelligenceact.eu/ai-act-explorer/ (Accessed 15 February 2024).
Forbes EQ Brandvoice, 2023. How does China’s approach to AI regulation differ from the US and Eu? [Online], Forbes. Available from: https://www.forbes.com/sites/forbeseq/2023/07/18/how-does-chinas-approach-to-ai-regulation-differ-from-the-us-and-eu/ (Accessed 15 February 2024).
Webster, G. et al., 2021. Full translation: China’s ‘New Generation Artificial Intelligence Development Plan’ (2017) [Online], DigiChina. Available from: https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/ (Accessed 15 February 2024).
MacCarthy, M., 2023. The US and its allies should engage with China on AI law and policy [Online], Brookings. Available from: https://www.brookings.edu/articles/the-us-and-its-allies-should-engage-with-china-on-ai-law-and-policy/ (Accessed 15 February 2024).
McKinsey Global Institute, 2017. Digitization, AI, and the future of work: Imperatives for Europe [Online], McKinsey & Company. Available from: https://www.mckinsey.com/featured-insights/digital-disruption/whats-now-and-next-in-analytics-ai-and-automation (Accessed 15 February 2024).
Roberts, H. and Hine, E., 2023. The future of AI policy in China [Online], East Asia Forum. Available from: https://www.eastasiaforum.org/2023/09/27/the-future-of-ai-policy-in-china/ (Accessed 15 February 2024).
Roberts, H. et al., 2021. ‘The Chinese approach to Artificial Intelligence: An analysis of policy, ethics, and regulation’, Philosophical Studies Series, pp. 47–79. doi:10.1007/978-3-030-81907-1_5.
Sheehan, M., 2023. China’s AI regulations and How They Get Made [Online], Carnegie Endowment for International Peace. Available from: https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117 (Accessed 15 February 2024).
Siegmann, C. and Anderljung, M., 2022. The Brussels effect and Artificial Intelligence: How EU regulation will impact the global AI market, arXiv.org. Available from: https://arxiv.org/abs/2208.12645 (Accessed 15 February 2024).
Huang, S. et al., 2023. Translation: Measures for the Management of Generative Artificial Intelligence Services (draft for comment) – April 2023 [Online], DigiChina. Available from: https://digichina.stanford.edu/work/translation-measures-for-the-management-of-generative-artificial-intelligence-services-draft-for-comment-april-2023/ (Accessed 15 February 2024).
Ulnicane, I., 2022. ‘Artificial Intelligence in the European Union’, The Routledge Handbook of European Integrations, pp. 254–269. doi:10.4324/9780429262081-19.
Whyman, B., 2023. AI regulation is coming - what is the likely outcome? [Online], CSIS. Available from: https://www.csis.org/blogs/strategic-technologies-blog/ai-regulation-coming-what-likely-outcome (Accessed 15 February 2024).