
What Companies Need to Know About the EU’s AI Act

How ChatGPT’s release relates to the EU’s new AI Act and the regulation’s impact on business.

by Moritz Homann 2 min

    The release of ChatGPT made waves among both fans and critics of artificial intelligence. Enthusiastic initial use of the program quickly gave way to cautionary voices and, eventually, even a warning from leading AI entrepreneurs to slow down and agree on a new set of rules for the revolutionary technology.

    It has become clear that artificial intelligence must be regulated. Proposals for a voluntary AI code of conduct are now circulating and there have also been calls for an independent international authority to monitor AI.

    In December 2023, the EU reached a provisional agreement on its AI Act after a 37-hour negotiation session. Labelled “historic” by European Commissioner for the Internal Market Thierry Breton, the development is set to usher in the world’s first comprehensive law regulating artificial intelligence.

    What is the EU planning?

    Brussels has aimed to establish a responsible approach to artificial intelligence. EU regulations are intended to ensure that new trends and technologies are harnessed for the good of the public while personal rights are protected.

    To regulate the field of artificial intelligence, the Act distinguishes AI applications according to four risk tiers: minimal, limited, high and unacceptable.

    Unacceptable AI applications include real-time remote facial recognition systems and voice-activated, behaviour-manipulating applications. This category covers applications that can be used to monitor citizens and could therefore serve anti-democratic purposes.

    According to the plans to date, the EU classifies applications for biometric identification, the operation of critical infrastructure, education and training, border control and the legal sector as high-risk. The December 2023 agreement introduced some exceptions for biometric identification, allowing police to use the technology in the event of an unexpected terrorist threat, to prosecute serious crime or to search for victims.

    The limited-risk tier encompasses generative AI such as ChatGPT, which can produce images and text. If the EU’s plans succeed, ChatGPT will be subject to transparency rules: users must be informed when they are looking at an AI-generated image or text, while manufacturers will have to ensure that applications such as ChatGPT are not used to produce illegal content.

    The finer details of the Act have yet to be decided and it is unlikely to come into force until 2025 at the earliest.

    Protective standard or brake?

    Some AI experts believe that well-defined standards for the technology at a European level could become advantageous for EU member states because users would know what they are getting into.

    Nevertheless, more than 100 high-ranking European business representatives, including the CEOs of Siemens, Airbus and ARM, complained in a June 2023 letter that the planned law goes too far.

    Generative AI along the lines of ChatGPT, with its ability to generate images and text, has proven a thorn in Brussels’ regulatory side. There are concerns that companies might have to establish separate compliance departments just to satisfy the EU’s transparency requirements. The letter’s signatories argue that the effort and expense involved would jeopardise Europe’s competitiveness and force companies to move their operations abroad.

    European business representatives fear that regulation from Brussels will hinder the development of AI applications while largely unregulated competitors in the US press ahead unchecked. That is why business leaders have called for closer coordination between the EU and US to establish a transatlantic regulatory framework.

    In fact, the US government has already moved to regulate the industry: President Biden signed an executive order on “safe, secure, and trustworthy artificial intelligence” on October 30, 2023. Whether the EU and the US will actually coordinate more closely in regulating AI remains to be seen, but it is not inconceivable.

    Moritz Homann

    Managing Director Corporate Compliance – EQS Group | Moritz Homann is responsible for the department of Corporate Compliance products at EQS Group. In this function, he oversees the strategic development of digital workflow solutions tailored to meet the needs of Compliance Officers around the world.