
Artificial intelligence (AI) technology is set to play a key role in shaping the coming industrial revolution. The emergence of large (one might even say essential) enterprise-ready foundation models such as those offered by OpenAI (GPT-4) or Aleph Alpha (the Luminous series) illustrates this trend. While the potential unleashed by deploying such technologies is clear to see, they also introduce a variety of new risk vectors. To tackle this issue, new regulatory tools are emerging as the next generation of AI models enters the market. With the EU AI Act, the European Union is taking a leading role on the global stage. But what about other major powers such as the US and the People’s Republic of China? In this blog post, I would like to outline the differences and similarities between the approaches each one is taking.

If you were to compare the approaches taken to regulating AI and AI-based systems by the US, the People’s Republic of China and Europe, it might seem at first glance that there is a world of difference between them. However, one thing they have in common is that each of their strategies emerged around the same time, in 2016. My colleague Christian Hammer has already described the European approach in his blog post ‘The future is now! We are shaping tomorrow’s world today’. Here, I would like to turn your attention to the strategies being pursued by the US and the People’s Republic of China. European AI regulation is used only for comparison, to highlight the differences, so a basic knowledge of the subject is assumed; the blog post mentioned above is a good place to acquire it. Let us begin with the US, which has taken a more hands-off approach, and then move on to the PRC’s strategy, which is more rigorous and focuses on speed.

AI regulation in the US – do the right thing with no direct mandates

A handful of AI regulations have been drafted in the US so far, though there is no unifying federal strategy; instead, most activity takes place at the state level. Several of these state-level regulations stand out. Each has a single focus, such as AI decision-making, and is enacted by an individual state, for example to regulate recruitment and how bonuses are calculated (New York) or self-driving vehicles (California). There has also been a top-down attempt at AI regulation with the Algorithmic Accountability Act, which, however, failed to pass in both 2019 and 2022. In contrast, the US National Institute of Standards and Technology (NIST) published the AI Risk Management Framework (AI RMF) at the government’s behest. This document is not legally binding and is intended solely as a voluntary proposal for action. The US places its faith in the general willingness of market players to meet high quality standards, driven by companies’ desire to compete and by market demand. Ultimately, the US approach seems rather fragmented, although recent moves by the FTC (Federal Trade Commission) and the FDA (Food and Drug Administration) to crack down on ChatGPT are quite interesting.

AI regulation in the People’s Republic of China – an iterative approach to learning

In the People’s Republic of China, three targeted laws have already been promulgated or are set to be promulgated in the near future:

  • Regulation on recommendation algorithms (2021/2022): This relates to the monitoring of electronic information, especially in social media. Under the regulation, recommendation algorithms should respect the ethical and moral values established by the government. This includes users’ rights: users should be able to decide whether an algorithm is enabled or disabled and are entitled to transparency regarding the results such algorithms generate from their personal data.
  • Regulation on synthetic content (2022): This law also tackles the issue of the electronic monitoring of information. One of its key points is that synthesised content must be marked as such.
  • Proposal on the regulation of generative AI (2023): Inspired by the 2022 law, this is also targeted at actual performance and the copyright of synthesised content.

The laws promulgated so far primarily fall into the category of regulations on the use of AI in social media and data. Compared to the EU approach, they are being developed relatively quickly in response to current trends. However, the end goal of this iterative approach is to arrive at an AI law that is just as coherent as the EU’s.

Conclusion

In summary, the EU, the US and the PRC all want the same thing, namely a certain level of quality. However, the strategies they have opted for could not be more different. While the US gives companies a mechanism to tackle the issue of AI quality voluntarily by adopting accompanying guidelines, the EU favours an all-encompassing deductive regulation that addresses all the issues at once. The PRC, in contrast, is pursuing an inductive approach in which the best possible comprehensive AI regulation is drafted in a series of small iterative steps.

You can find more exciting topics from the adesso world in our blog articles published so far.

Author Lilian Do Khac

Lilian Do Khac works on the design and implementation of AI solutions for data-driven decision support, in which requirements for trustworthy AI play a significant role. She is active in this field not only from an IT implementation perspective, but also as a scientist.