
The Potential of AI – Challenges & Opportunities for Compliance Leaders

The opportunities and threats AI presents to the compliance community took centre stage during the ECEC’s afternoon sessions.

by Niall McCarthy · 2 min read

    The rise of artificial intelligence has proven something of a double-edged sword for the compliance community. While the technology has presented a host of sophisticated and unpredictable new threats, it also offers compliance leaders major opportunities. This was a key topic at the European Compliance and Ethics Conference 2023. Dive straight into our recap of the lively afternoon sessions.


    Risk mitigation in the age of AI

    PwC Luxembourg’s Michael Weis, a veteran of leading FinCrime investigations, took to the stage to explain the capabilities, limitations and risk factors of artificial intelligence in financial crime. He highlighted the example of ChatGPT, which was able to identify potentially fraudulent transactions in an Excel file and carry out the necessary analysis for investigations, negating the need for the end user to be a coding whiz.
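
    To make that example concrete, the sketch below shows the kind of script an assistant like ChatGPT might generate for such a request: flagging transactions that deviate sharply from an account’s usual pattern. The file name, column names and the three-standard-deviation threshold are illustrative assumptions, not details from Michael’s demonstration.

```python
import pandas as pd

# Hypothetical input: a spreadsheet of transactions with "account" and
# "amount" columns (reading .xlsx files requires the openpyxl package).
df = pd.read_excel("transactions.xlsx")

# Per-account mean and standard deviation of transaction amounts
stats = df.groupby("account")["amount"].agg(["mean", "std"])
stats.columns = ["acct_mean", "acct_std"]
df = df.join(stats, on="account")

# Flag transactions more than three standard deviations away from the
# account's typical amount as candidates for manual review
df["z_score"] = (df["amount"] - df["acct_mean"]) / df["acct_std"]
suspicious = df[df["z_score"].abs() > 3]

print(suspicious[["account", "amount", "z_score"]])
```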

    While AI can lend support in areas such as document analysis, interviewing, reporting and anomaly detection, it can also be used to facilitate financial crime and is already being put to use for illicit activities, from malware generation to human impersonation and social engineering.

    Michael left us with three key lessons: the use of AI for financial crime prevention and detection requires human oversight; lawbreakers are becoming more AI-savvy; and, for now, the good folks have the upper hand.

    Will AI be a wake-up call for compliance officers?

    Anne Vogt, Head of Compliance and Data Protection at FREENOW, is not one for mincing words, pointing out AI’s duality – both a superhero and a dubious sidekick. From fighting climate change (while expending energy!) to worsening systemic and inherent biases, she highlighted its rollercoaster nature.

    Anne’s strategy? Embrace the good, tackle the bad head-on and let AI empower compliance officers, not replace them.

    As the regulatory burden on companies continues to grow, AI can be used to fill the gap left by a shortage of resources within the compliance space – automating mundane tasks, empowering data analytics, and enabling better decision-making – freeing compliance officers to focus on more strategic topics.

    While regulations such as the EU AI Act will prove significant in the near future, Anne advised that companies set their own guidelines around AI rather than waiting for the regulations to kick in. And when it comes to who is responsible for AI in a company, Anne stressed that even though responsibility should be cross-functional, compliance should have oversight of AI governance.

    The end goal should be to build a compliance function that is not (just) a rule-enforcer but a strategic ally. That means harnessing AI now, becoming familiar with the tools available and getting started by asking ChatGPT to write a simple policy.
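
    As a hedged illustration of that first step, the snippet below asks an LLM to draft a simple generative-AI usage policy via the OpenAI Python SDK rather than the chat interface. The model name, prompt wording and SDK version are assumptions for the sake of the example, and any draft would still need human review.

```python
from openai import OpenAI

# Assumes the OpenAI Python SDK (v1.x) is installed and an OPENAI_API_KEY
# environment variable is set; model name and prompt are illustrative.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": (
                "Draft a short, plain-language internal policy on the "
                "acceptable use of generative AI tools, covering data "
                "protection, confidentiality and human review of outputs."
            ),
        }
    ],
)

# The generated draft is a starting point only; compliance and legal
# should review it before anything is adopted.
print(response.choices[0].message.content)
```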

    Compl[AI]ance – how far can we go?

    Anne joined Christian Hunt, founder and CEO of Human Risk, KPMG Law’s Francois Heynike and Sandrine Richard from the French Compliance Society in a panel discussion about the ethics, regulations and benefits of AI in compliance. Christian got the ball rolling by asking GPT-4 what question he should ask the panel. “How can organisations ensure that their use of AI in compliance management adheres to evolving ethical standards and regulatory frameworks across different jurisdictions?” was the suggestion from the freshly anointed “ChatECEC”.

    Addressing ChatECEC’s ethical concerns, the panel agreed that the potential for AI misuse is massive. Francois offered up a lawyer’s perspective, opining that AI compliance is like a black box and that understanding the data source would be key to applying rules effectively.

    But it was far from all doom and gloom. Sandrine highlighted the evolution of the compliance officer role, thanks to AI. The focus then shifted to managing risks while embracing the AI opportunity. Anne stressed teamwork, brainstorming potential pitfalls, and preparing for the unforeseen.

    As the session wound down, the panellists moved towards offering practical advice. Real processes, risk mapping, cross-departmental communication – these would be vital for ensuring AI success.

    Fittingly, Christian left the last word to ChatECEC who doled out some sage advice: “Embrace AI as a tool to enhance, not replace, human oversight in compliance. Ensure transparent, ethical and accountable AI use by continuously engaging in dialogue with stakeholders, prioritising education on AI technologies, and being proactive in aligning AI systems with evolving ethical and legal standards”.

    ECEC 2023, like the years before, was not just a conference but a call to action – to embrace AI, navigate the risks and let compliance be the hero in this evolving saga.

    Niall McCarthy

    Niall is a Content Writer at the EQS Group. Originally from Ireland, he previously worked as a journalist, which included reporting on major corruption trends worldwide.
