
EU AI Act: What Companies Need to Know  

The EU AI Act has been in force since 1 August 2024. What does the new law mean for companies and what action is needed now?

by Moritz Homann 2 min

    The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence. It sets out clear rules for responsible AI use and puts strong protections in place for individual rights. The law applies to organizations of all sizes in the EU that develop or use AI, and to companies outside the EU whose systems are used within its borders. The rollout is phased: key requirements take effect from 2 August 2025, with full compliance required by 2 August 2027.

    Now is the time for companies to assess how the law affects them and prepare to act.


    The Four Risk Categories

    The EU AI Act follows a risk-based approach: the higher the risk an AI application poses, the stricter the requirements that apply. To this end, the Act distinguishes between four risk categories (a short classification sketch follows the list):

    • Unacceptable Risk: AI applications posing an unacceptable risk have been banned since 2 February 2025. These include systems for real-time facial recognition, behavioral manipulation, and social scoring. AI systems designed to monitor individuals, which could be exploited for anti-democratic purposes, also fall into this category.
    • High Risk: AI systems that could affect a person’s health, safety, or fundamental rights are classified as high-risk. These systems are subject to strict obligations, and organizations are required to take risk mitigation measures. Examples include AI used in critical infrastructure such as healthcare or transport. AI that profiles individuals also falls into this category, for instance recruitment tools that filter applicants automatically or systems in the financial sector that assess creditworthiness.
    • General-Purpose AI: This category includes generative AI such as ChatGPT or Midjourney. Applications like these are subject to transparency obligations: developers and deployers must label deepfakes as such and disclose that a text was generated by AI if it provides information on matters of public interest. Manufacturers must also ensure that such AI cannot be used to produce illegal content.
    • AI for Direct Human Interaction: Popular applications in this category include chatbots and virtual assistants. Providers must disclose to end-users that they are interacting with an AI and not with a human. If the AI also falls into the high-risk or general-purpose category, those obligations apply in addition.
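
    The categories are not mutually exclusive, and assigning each system to one or more of them is the first practical step. A minimal first-pass triage sketch in Python follows; the category names come from the Act, but the system attributes and the triage logic are simplified illustrations, not legal criteria:

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"            # banned since 2 February 2025
    HIGH = "high"                            # strict obligations apply
    GENERAL_PURPOSE = "general_purpose"      # transparency obligations
    HUMAN_INTERACTION = "human_interaction"  # disclosure to end-users


@dataclass
class AISystem:
    # Illustrative attributes for a first triage; a real assessment
    # must apply the criteria of the Act itself.
    name: str
    does_social_scoring: bool = False
    affects_health_safety_or_rights: bool = False
    is_generative: bool = False
    interacts_with_humans: bool = False


def triage(system: AISystem) -> set[RiskCategory]:
    """Rough first-pass classification. Categories can overlap:
    a generative chatbot is both general-purpose and human-interaction."""
    categories: set[RiskCategory] = set()
    if system.does_social_scoring:
        categories.add(RiskCategory.UNACCEPTABLE)
    if system.affects_health_safety_or_rights:
        categories.add(RiskCategory.HIGH)
    if system.is_generative:
        categories.add(RiskCategory.GENERAL_PURPOSE)
    if system.interacts_with_humans:
        categories.add(RiskCategory.HUMAN_INTERACTION)
    return categories


print(triage(AISystem("applicant screening", affects_health_safety_or_rights=True)))
```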

    What are the Requirements of the EU AI Act for High-Risk AI Systems?

    In principle, all AI systems are subject to documentation and transparency requirements. However, the EU AI Act requires high-risk AI systems to meet particularly strict requirements, including the following (a simple checklist sketch follows the list):

    • Risk assessment regarding health, safety and fundamental rights
    • Comprehensive technical documentation and a quality management system
    • Oversight of data used, event logging, mandatory human oversight and requirements for data accuracy and security
    • Transparency for users and/or data subjects
    • A declaration of conformity, CE marking and registration in an EU database
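
    One simple way to track these obligations internally is a per-system checklist. A minimal sketch, where the field names mirror the bullet points above and are illustrative rather than an official template:

```python
from dataclasses import dataclass, fields


@dataclass
class HighRiskChecklist:
    # One flag per obligation from the list above; names are illustrative.
    risk_assessment_done: bool = False         # health, safety, fundamental rights
    technical_documentation: bool = False
    quality_management_system: bool = False
    data_governance_and_logging: bool = False  # data oversight, event logging
    human_oversight_in_place: bool = False
    user_transparency: bool = False
    declaration_of_conformity: bool = False
    ce_marking_and_eu_registration: bool = False

    def open_items(self) -> list[str]:
        """Return the obligations that are still unmet."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]


checklist = HighRiskChecklist(risk_assessment_done=True)
print(checklist.open_items())  # everything except the risk assessment
```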

    How Companies Can Comply with the EU AI Act

    The first step for companies is to identify which AI systems they use. Next, these systems must be classified by risk level. Each category comes with specific legal obligations. Digital tools—like the EQS Privacy Manager—can help. They support efficient AI assessments, enable proactive risk management and ensure audit-proof documentation.
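
    As an illustration of this classification step, the toy mapping below pairs each risk category with a condensed summary of its obligations. The inventory entries are hypothetical, and a real mapping must be derived from the Act’s full text:

```python
# Condensed obligation summaries per category, paraphrased from this
# article; a real mapping must be derived from the Act's full text.
OBLIGATIONS = {
    "unacceptable": "prohibited since 2 February 2025: discontinue use",
    "high": "risk assessment, documentation, QMS, logging, human oversight, CE marking",
    "general_purpose": "transparency, labelling of AI-generated content",
    "human_interaction": "disclose to end-users that they are talking to an AI",
}

# A hypothetical inventory: system name -> risk categories assigned in step two.
inventory = {
    "applicant screening tool": ["high"],
    "customer support chatbot": ["general_purpose", "human_interaction"],
}

for system, categories in inventory.items():
    print(system)
    for category in categories:
        print(f"  [{category}] {OBLIGATIONS[category]}")
```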

    Under Article 4 of the AI Act, companies must also ensure that they have sufficient AI competency in their workforce. This includes offering e-learning and awareness training to promote a responsible approach to AI and to inform employees about the risks. In addition, they should publish an AI policy and communicate its guidelines clearly.

    Compliance with the EU AI Act is not a one-time task; it requires ongoing oversight. Companies are therefore advised to appoint an AI compliance officer to manage and monitor this process. Penalties for non-compliance are steep: depending on the violation, fines range from €7.5 million to €35 million, or from 1% to 7% of global annual turnover, whichever is higher.
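
    The “whichever is higher” logic behind the upper bound can be illustrated with a quick calculation. For the most serious violations, the cap is €35 million or 7% of worldwide annual turnover, whichever is higher:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound for the most serious violations (prohibited practices):
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)


# For a company with EUR 2 billion in turnover, the 7% rule exceeds the flat cap:
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```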

    Why the EU AI Act Matters

    The launch of ChatGPT sparked both global excitement and concern. While many people were eager to explore its potential, others quickly raised warnings about the risks. Calls for regulation followed, including a public appeal from leading AI entrepreneurs to pause development and set clear rules.

    After 37 hours of negotiations, a provisional agreement on the AI Act was reached in December 2023. EU Commissioner Thierry Breton called it a “historic” step. The final text was published on 12 July 2024 and came into force on 1 August 2024.


    Criticism from Business

    While the EU takes pride in leading the way on AI regulation, the business community has voiced strong concerns. In June 2023, over 100 top European executives, including the CEOs of Siemens, Airbus, and ARM, signed an open letter warning that the proposed law went too far.

    Their main objection was the strict regulation of generative AI. They feared companies would need entire compliance departments just to meet the transparency rules. The cost and effort, they argued, could hurt Europe’s competitiveness and push innovation abroad. A recent Deloitte survey of 500 managers supports this view: more than half of respondents said regulation is holding back AI innovation.

    Their concerns may be justified. Although the EU hoped others would follow its lead, the opposite may now be true. One of Donald Trump’s first actions after returning to office was to scrap Joe Biden’s earlier AI regulation. His new “Stargate” project aims to invest $500 billion in AI development, without regulatory limits.

    What Lies Ahead

    Some AI experts see the EU AI Act not as a burden, but as an opportunity. By building trust in AI among customers and partners, the law could give Germany and Europe a competitive edge. Companies that comply signal that they take social responsibility seriously, boosting their reputation. Put simply, an ethical approach to AI is essential for driving sustainable innovation and preventing misuse.

    We’re still at the start of the AI journey. No one can say exactly where it will lead. That’s why the EU has built flexibility into the law. The Act is designed to evolve alongside the technology.

    For companies, this means it’s worth investing in digital processes and a centralized platform for AI compliance now. The more agile your setup, the better equipped you’ll be to handle what comes next.

    Moritz Homann

    Managing Director Corporate Compliance – EQS Group | Moritz Homann is responsible for the Corporate Compliance products at EQS Group. In this role, he oversees the strategic development of digital workflow solutions tailored to the needs of compliance officers around the world.
