The biggest risk of using LLMs is mistakenly believing output that is not accurate (AI hallucination)

Eren Erman | 21 April, 2026

On day one of the 6th Compliance in European Energy Sector conference, Oray B. Gungor provided a practical roadmap for integrating Large Language Models (LLMs) into compliance workflows, stressing throughout his presentation that there are trade-offs and risks that need to be managed.

He shared how he is building LLM-powered regulatory tools, such as UMM tracking and Position Limit Tracking, but noted specific risks flagged by IT, such as data leakage and confidentiality breaches (under GDPR and other privacy rules) should sensitive information like contracts be input into an LLM system.

As LLMs take on more of the information processing burden in Compliance, there is also a risk of gradual role erosion - professionals deferring to AI outputs on matters that still require human judgement and accountability, a point Oray returned to repeatedly.

While LLMs can accelerate first-pass analysis, they do not replace the compliance officer’s responsibility for the decision that follows.

Three things must remain entirely human:

  1. Judgement - applying expert human review (‘human in the loop’) to the specific situation;
  2. Context - understanding the full circumstances of the underlying topic; and
  3. Accountability - taking personal responsibility for the decision.

Over-reliance on AI output remains a significant risk. These models can produce confident, professionally worded outputs that nonetheless contain factual errors. This is where compliance will continue to play a vital “human in the loop” expert review role, ensuring that the output is accurate and, where required, updated before being distributed more broadly.
Compliance teams should adopt these operational controls to mitigate this risk:

  • Implement a formal verification step where a human expert signs off on every model output before it influences a decision;
  • Cross-reference the AI analysis against underlying regulatory facts, sources, and legal references for every substantive interpretation to ensure accuracy;
  • Use structured system prompts that require the model to declare uncertainty rather than generate false information; and
  • Treat every model output as a first draft that requires expert human validation.
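As a minimal sketch of how the last three controls could fit together in code: the system prompt below requires the model to declare uncertainty in structured form, and a gate function treats every output as a draft, escalating anything malformed, unsourced, or low-confidence to a human expert. The prompt wording, JSON field names, and confidence levels are illustrative assumptions, not a specific vendor's API.

```python
import json

# Assumed system prompt: forces structured output with an explicit
# confidence declaration instead of free-text assertions.
SYSTEM_PROMPT = (
    "You are a compliance research assistant. Respond ONLY in JSON with "
    "fields: 'analysis' (string), 'sources' (list of citations), and "
    "'confidence' ('high', 'medium', or 'low'). If you cannot ground a "
    "claim in a cited source, set 'confidence' to 'low' and say so in "
    "'analysis' rather than guessing."
)

def requires_human_review(raw_output: str) -> bool:
    """Treat every model output as a first draft; flag those that must
    not influence a decision without expert sign-off."""
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError:
        return True  # unstructured output: always escalate
    if data.get("confidence") != "high":
        return True  # model declared uncertainty
    if not data.get("sources"):
        return True  # nothing to cross-reference against regulatory sources
    return False  # still a draft, but eligible for lighter-touch review
```

For example, an output with no cited sources, a non-"high" confidence declaration, or plain prose instead of JSON would all return `True` and be routed to the formal verification step described above.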

Oray is currently testing several use cases in real-world scenarios. While many firms are still in the early stages of AI experimentation, his presentation is a reminder that Compliance can and should begin experimenting with AI, but must maintain a ‘human in the loop’ review for all output given the risks of AI hallucination. The rewards are worth the risks, and Oray stressed that the future of Compliance lies in using AI to increase speed, efficiency, and insight for specific compliance workflows.