How to comply with the AI Act?

by Thomas Vini Pires

Artificial Intelligence (AI) is transforming our society, our economy, and our daily lives at an unprecedented pace. In response to this rapid evolution, the European Union adopted the AI Act on May 31, 2024, a European regulation aimed at governing the use of AI to ensure it is ethical, safe, and respectful of fundamental rights.

Consequently, all organizations, whether large or small, that use or develop AI systems in Europe must comply with the AI Act.


Why comply with the AI Act?

On the one hand, it ensures that your organization meets legal standards and avoids potential sanctions (which can reach 7% of total worldwide annual turnover).

On the other hand, it strengthens the trust of customers and partners, and enhances the company’s reputation, particularly regarding a disruptive topic such as AI.

Finally, responsible and ethical AI is essential to support sustainable innovation and prevent potential abuses.

Steps to comply with the AI Act

Although compliance with a new regulation can be complex, it generally follows a similar pattern:

  • Raise awareness and train teams on the new regulation and its principles

  • Appoint a lead responsible for overseeing compliance efforts

  • Map existing systems subject to the new regulation

  • Implement the obligations of the new regulation

  • Apply the underlying principles

Raise awareness among teams

The first step toward compliance with the AI Act is to raise awareness among all stakeholders within the organization. Teams must understand the implications of the AI Act, as well as the risks and opportunities linked to the use of AI in general, and the issues related to other regulations (GDPR, copyright, etc.).

Awareness initiatives may take various forms:

  • General awareness for all employees, including new hires, ideally through dedicated e-learning

  • More specific training for the teams most concerned, such as IT/IS teams responsible for AI development, as well as teams using AI tools (communications and marketing teams, etc.)

  • Regular internal communication based on policies or charters (for governance topics), and via more accessible channels to promote a culture of AI ethics

Appoint a lead

Compliance with the AI Act requires consistent management and supervision. It is therefore essential to appoint a lead or a dedicated compliance officer.

The role must have in-depth knowledge of the regulation, as well as related regulations (GDPR, etc.), and an interest in AI technologies (ML, NLP, neural networks, etc.).

They must also know and master the fundamental principles (the seven principles for responsible AI, presented below) to be able to promote an ethical AI culture within the organization.

The AI Act compliance lead must have the human (dedicated team, internal network of referents, etc.) and financial resources to carry out their mission.

Although the AI Act, unlike the GDPR with its Data Protection Officer (DPO), does not formally require appointing a dedicated role, the expected skills, the required resources and internal network, and the closeness of the subject matter may legitimately lead the DPO to take on this responsibility.


Map your Artificial Intelligence Systems

Most organizations did not wait for regulation to design or use AI systems—particularly generative AI models. The first substantive step will therefore be to map existing systems, that is, to identify and classify the AI systems already in use.

This step is particularly important for two reasons: first, to understand the scope of the work; second, to classify each AI system in order to apply the correct legal regime.

Indeed, the AI Act adopts a four-level risk-based approach:

  • Prohibited practices: AI systems that do not respect the fundamental values of the European Union (for example, contrary to democratic principles) and are strictly prohibited from being developed or used;

  • High-risk AI systems: Those that present a risk of harm to the health, safety, or fundamental rights of natural persons. For these systems, a substantial list of obligations must be respected (see next point);

  • General-purpose AI models: Generalist AI capable of competently performing a wide range of distinct tasks, including the AI models most frequently used by organizations (GPT, Copilot, etc.);

  • AI systems interacting directly with individuals: These mainly require compliance with specific rules on informing the individuals concerned.

Mapping will therefore allow each AI system developed or deployed to be classified, in order to apply the appropriate regime according to the level of risk.
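The inventory produced by this mapping step can be kept in any format, but each entry should at least record the system, an accountable owner, its intended purpose, and its risk tier. A minimal sketch of such a registry follows; the record fields and the example entries are hypothetical illustrations, not prescribed by the AI Act.

```python
from dataclasses import dataclass
from enum import Enum

# The four risk tiers described in the article (Regulation (EU) 2024/1689)
class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk AI system"
    GENERAL_PURPOSE = "general-purpose AI model"
    TRANSPARENCY = "AI system interacting with individuals"

@dataclass
class AISystemRecord:
    name: str
    owner: str       # accountable team or lead
    purpose: str     # intended use, as actually deployed
    tier: RiskTier   # classification that drives the legal regime

# Hypothetical inventory entries for illustration only
inventory = [
    AISystemRecord("CV screening assistant", "HR",
                   "rank job applications", RiskTier.HIGH_RISK),
    AISystemRecord("Marketing chatbot", "Marketing",
                   "answer customer questions", RiskTier.TRANSPARENCY),
]

# Group systems by tier so the right set of obligations can be applied to each
by_tier: dict[RiskTier, list[str]] = {}
for record in inventory:
    by_tier.setdefault(record.tier, []).append(record.name)
```

Grouping by tier makes the next step mechanical: every system in the high-risk bucket inherits the full list of obligations, while transparency-tier systems only need the information duties.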

Bring systems into compliance

Following mapping, the identified AI systems must be brought into compliance, particularly high-risk AI systems (Art. 6), which notably require:

  • A risk analysis evaluating potential harm to the health, safety, and fundamental rights of natural persons, allowing the implementation of necessary security measures (Art. 9);

  • Establishing technical documentation listing procedural and technical information about the AI system (Art. 11);

  • A set of measures concerning data governance (Art. 10), transparency (Art. 13), human oversight (Art. 14), security (Art. 15), logging (Art. 19), CE marking, EU conformity registration (Art. 48), and more.

Beyond measures specific to high-risk AI systems, the AI Act also requires implementing a quality management system (Art. 17), i.e., a documented framework composed of policies, procedures, and records demonstrating compliance with the obligations imposed by the AI Act.

The measures taken, as well as the supporting documentation, must be subject to regular review by external bodies or independent internal bodies to verify compliance and address potential gaps.
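In practice, these per-article obligations are often tracked as a checklist per high-risk system, so that gaps surface before an audit. The sketch below is one hypothetical way to do this; it covers only the articles cited above and is not an exhaustive statement of the regulation.

```python
# Hypothetical compliance checklist for one high-risk AI system.
# Article numbers are those cited in the text (Regulation (EU) 2024/1689).
HIGH_RISK_OBLIGATIONS = {
    "Risk management system": "Art. 9",
    "Data governance": "Art. 10",
    "Technical documentation": "Art. 11",
    "Transparency to deployers": "Art. 13",
    "Human oversight": "Art. 14",
    "Accuracy, robustness, cybersecurity": "Art. 15",
    "Quality management system": "Art. 17",
}

def open_items(completed: set[str]) -> list[str]:
    """Return the obligations not yet evidenced as complete."""
    return [f"{name} ({article})"
            for name, article in HIGH_RISK_OBLIGATIONS.items()
            if name not in completed]

# Example: only documentation and data governance are evidenced so far
remaining = open_items({"Technical documentation", "Data governance"})
```

A report of the `remaining` items per system gives the compliance lead a concrete backlog to prioritize and a ready-made artifact for the regular reviews mentioned above.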

Apply the fundamental principles for responsible AI

Beyond procedural obligations, the AI Act refers to substantive principles, which represent the fundamental values of so-called “responsible” AI.

They form the basis of all obligations under the AI Act and must be followed daily by compliance teams, operational teams working with AI, and ultimately all employees of the organization.

These seven principles originate from a report by the High-Level Expert Group on Artificial Intelligence (HLEG) in 2019:

  • Societal and environmental well-being: Protect against physical, mental, or environmental harm

  • Transparency and explainability: Allow users to understand AI functioning and to challenge its decisions

  • Privacy and data protection: Protect individuals’ privacy and ensure the security and confidentiality of their data

  • Technical robustness and safety: Design AI systems to be safe and secure and to minimize risks of failure or misuse

  • Accountability: Hold actors responsible for their actions and the outputs of their systems

  • Fairness: Prevent bias and discrimination, promote justice and fairness

  • Human autonomy and oversight: Respect human autonomy, dignity, and fundamental rights

Conclusion

Complying with the AI Act is a complex but essential process to ensure ethical and responsible use of AI.

By following these steps, organizations can not only avoid sanctions but also strengthen trust and public acceptance of their AI technologies.

Compliance with the AI Act is an opportunity to demonstrate the company’s commitment to ethical and responsible practices in the use of artificial intelligence.

Manage your AI Act Compliance with EQS Group

From mapping AI systems to implementing a by-design approach, centralize your obligations and stay ready for the AI Act.

Explore EQS Privacy Cockpit!
Thomas Vini Pires

Privacy & AI Solution Expert at EQS

With more than ten years of experience as a DPO within major international groups such as Orange, Adecco and Hermès, Thomas Pires is now Privacy & AI Solution Expert at EQS Group. He leverages his expertise to support the development of innovative software solutions dedicated to data governance, AI ethics, and risk management. Passionate about the intersection between technology and compliance, he regularly speaks on responsible digital transformation and the regulation of artificial intelligence.
