
Deep Dive: Ofgem's Guidance on the Ethical Use of AI in the UK Energy Sector
Ofgem’s AI guidance sets expectations for ethical, risk-aligned AI use across the UK energy sector. It introduces a non-prescriptive framework anchored on four core outcomes - safety, security, fairness, and environmental sustainability - covering AI governance, risk management, and lifecycle oversight.
The guidance confirms Ofgem’s position that existing laws like REMIT, GDPR, and competition rules apply fully to AI. It elevates AI from a technical tool to a regulated activity, requiring proactive governance, defensibility, and board-level awareness - especially for high-risk uses like trading and forecasting.
Compliance teams must ensure AI systems are governed from design to decommissioning, with traceable oversight, fairness reviews, and robust risk controls. Black-box models, algorithmic pricing, and vendor tools are high-risk areas, with firms remaining fully liable for any resulting market or consumer harm.
Ofgem published its first dedicated guidance on the ethical use of artificial intelligence (AI) in the UK energy sector (click here). It follows on from Ofgem’s January 2025 high-level strategic approach to AI (click here), which described its plan to set out a robust method of regulating AI in the energy sector based on the UK government’s five AI principles.
Ofgem’s guidance offers a proportionate, outcomes-based framework designed to ensure that the adoption of AI supports innovation while safeguarding consumer welfare, operational integrity, and public trust.
At the heart of the guidance are four ethical outcomes that must be embedded across the AI lifecycle:
Ofgem’s aim is to give regulated entities and adjacent stakeholders a practical foundation for integrating ethical AI use within existing regulatory frameworks, including REMIT, the Competition Act 1998, data protection rules, and network reliability obligations.
It covers (i) governance measures and policies to ensure the effective oversight of AI, (ii) a risk approach to help stakeholders identify and manage risks associated with AI, and (iii) the competencies required for the ethical adoption of AI.
Jonathan Brearley, CEO of Ofgem, commented:
Delivering safe, secure, fair and environmentally sustainable AI outcomes requires stakeholders to comply with their regulatory obligations and to adopt good practice approaches. This includes governance and policies, risk and AI implementation throughout the AI life cycle including design, development, deployment, operations, monitoring, maintenance and decommissioning after use.
This guidance is not directed solely at energy licensees. Ofgem explicitly defines a broad target audience for compliance and engagement, comprising:
Ofgem’s approach avoids prescriptive rulemaking around AI. Instead, it anchors its good practice recommendations in global best practice, citing eight principles-based risk frameworks that stakeholders are advised to consider:
These publications provide practical frameworks with which to evaluate AI, and Ofgem urges stakeholders to tailor their implementation to the nature of the AI system, its intended use, and the potential downstream impacts.
To further assist firms, Ofgem supplements the core guidance with five technical appendices, each covering a specific area of regulatory relevance:
Together, these annexes give stakeholders a consolidated view of compliance expectations, supported by international norms and practical examples. Three examples of particular relevance to energy and commodity firms, which we review in further detail, are:
Ofgem’s Regulatory Position on AI
Ofgem’s position is that the existing regulatory framework is sufficient to govern AI use in its current form; however, this is not a static assumption. It reserves the right to strengthen its approach as the use of AI evolves and as risks to consumers, market competition, or grid reliability become more complex or acute.
For compliance officers, third party AI vendors, and IT leaders, this guidance necessitates a shift in how AI projects are evaluated, approved, and monitored. It introduces a heightened duty of care requiring proactive governance, competent oversight, demonstrable fairness, and robust incident preparedness. Internal AI policies will need to integrate these expectations explicitly, backed by board-level awareness, cross-functional accountability, and audit-ready documentation.
We provide a deep dive into each of the guidance’s core areas, including actionable takeaways firms can implement, as follows:
We also analyse the five supporting appendices as they provide further depth in translating the core areas into day-to-day compliance, assurance, and oversight activities and form the foundation underpinning Ofgem’s expectations.
[1] Governance and Policies
Overview
Ofgem emphasises that governance must be formal, documented, and proportionate to risk. Key expectations include:
Governance must also extend to supply chains. AI vendors are subject to the same ethical expectations, but accountability for failures remains with the regulated entity, not the developer.
Executive Summary
Ofgem places governance at the core of ethical AI adoption. Recognising the critical role that corporate leadership and policy frameworks play in shaping AI outcomes, the guidance calls for proportionate and scalable governance models that reflect both the risk posed and the operational context of AI systems.
Importantly, stakeholders must ensure that AI systems are not governed in isolation, but as components within broader organisational systems and subject to controls, audits, change management, and regulatory scrutiny.
Ofgem provides three layers of guidance in this area:
Additionally, stakeholders are expected to implement effective data governance, supply chain controls, and redress mechanisms for consumers adversely affected by AI-driven decisions.
Firms remain responsible for any breaches caused by AI systems, whether those systems are developed in-house or procured via third parties.
Taken together, these expectations form a de facto governance blueprint. While not legally binding, they do establish a regulatory benchmark. Failure to meet them may become grounds for enforcement under existing statutory duties, particularly if governance failures contribute to consumer harm, competition risks, or data misuse.
Key Themes
Strategic Governance. Ofgem begins by recommending that every organisation define its AI strategy, including anticipated use cases, risk appetites, and intended consumer outcomes.
*ISO 42001 is the first international standard for Artificial Intelligence Management Systems, and specifies requirements for organisations to establish, implement, maintain, and continually improve responsible and effective AI governance and operations.
Roles and Responsibilities. A strong theme throughout the guidance is the necessity for clear accountability and ownership.
Governance Across the AI Lifecycle. The governance framework must cover the entire AI lifecycle from model development to decommissioning.
Management Information and Assurance. Decision-makers must be equipped with the necessary information to manage uncertainty.
Supply Chain Oversight. Ofgem stresses that responsibility for AI does not end with procurement.
Implications for Policy and Practice
[2] Risk Management Requirements
Overview
AI introduces distinct risks across the energy value chain, such as:
Ofgem calls for proportionate, outcomes-focused risk controls, including:
Risk frameworks must be auditable and designed to prevent foreseeable harm. Importantly, Ofgem encourages risk tolerance thresholds, not risk elimination, reflecting AI’s probabilistic nature.
Executive Summary
AI poses a distinct class of risks ranging from training bias and data drift to model overreach, failure in edge cases, reduced human oversight, and cyber-exploitability.
Ofgem highlights that AI is not to be viewed in isolation but as part of a wider system, where governance and human intervention remain essential safeguards. Accordingly, risk governance must extend beyond the algorithm to include operational context, human oversight, and long-term adaptability.
The guidance outlines nine good practice principles for AI risk management, which form a comprehensive risk lifecycle framework:
The guidance explicitly recommends tools such as digital twins, scenario planning, and human-in-the-loop safeguards, each helping to manage the inherent unpredictability of AI in complex systems. Risks must also be documented and auditable, especially when connected to safety, data protection, or competition law obligations. Ofgem notes:
Depending on level of risk associated with the use of AI it may be beneficial for potential users of the technology to use risk matrix frameworks, for example Machine Learning Principles, on the NCSC website, and keep a record of the assessment to aid future use. Risk areas might include, but not be limited to, operational, legal and reputational risks.
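By way of illustration only, a risk assessment of this kind can be kept as a simple structured record so that it can be revisited at each lifecycle stage. The sketch below is a hypothetical Python example; the field names, scoring scale, and tolerance threshold are our own assumptions rather than anything prescribed by Ofgem or the NCSC.

```python
# Illustrative sketch only: a minimal AI risk-register entry of the kind Ofgem's
# guidance suggests keeping "to aid future use". Field names, the 1-5 scoring
# scale and the tolerance threshold are hypothetical assumptions.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIRiskAssessment:
    system_name: str
    use_case: str                      # e.g. "intraday load forecasting"
    lifecycle_stage: str               # design / development / deployment / ...
    risk_area: str                     # operational, legal or reputational
    likelihood: int                    # 1 (rare) .. 5 (almost certain)
    impact: int                        # 1 (negligible) .. 5 (severe)
    tolerance_threshold: int = 12      # score above which escalation is required
    mitigations: list[str] = field(default_factory=list)
    owner: str = "unassigned"
    assessed_on: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    @property
    def within_tolerance(self) -> bool:
        # Reflects Ofgem's framing of risk tolerance rather than risk elimination.
        return self.score <= self.tolerance_threshold


if __name__ == "__main__":
    entry = AIRiskAssessment(
        system_name="demand-forecaster-v2",
        use_case="day-ahead demand forecasting",
        lifecycle_stage="deployment",
        risk_area="operational",
        likelihood=3,
        impact=4,
        mitigations=["human review of outliers", "weekly backtesting"],
        owner="AI Risk Officer",
    )
    print(entry.score, entry.within_tolerance)
```

In practice the same record could equally live in a GRC tool or spreadsheet; the point is that likelihood, impact, mitigations, and ownership are captured in a consistent, auditable form.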
Key Themes
Proportionality in Risk Management. Ofgem’s guidance resists blanket controls in favour of a graduated model.
AI Risk Lifecycle. The guidance introduces a full-spectrum view of AI risk from ideation to decommissioning. Risk must be assessed at each stage:
Failure Mode and Maloperation. Ofgem places strong emphasis on understanding how and why AI might fail and what the consequences would be.
Human-AI Interaction and Cultural Risk. Beyond technical dimensions, Ofgem highlights cultural and behavioural risk. Humans can over-trust AI (automation bias) or distrust it (automation aversion), leading to poor oversight.
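One common way to operationalise human-in-the-loop safeguards and counter automation bias is to let a system act autonomously only when model confidence is high and the potential impact is low, escalating everything else to a human reviewer. The following Python sketch illustrates that routing pattern under assumed thresholds; it is not a mechanism specified in the guidance.

```python
# Hypothetical sketch of a human-in-the-loop gate: automated action is only taken
# when the model is confident and the decision is low-impact; everything else is
# escalated for human review. Thresholds and field names are illustrative.
from dataclasses import dataclass


@dataclass
class ModelOutput:
    decision: str                 # e.g. "curtail", "dispatch", "no_action"
    confidence: float             # model-reported probability in [0, 1]
    estimated_impact_gbp: float


def route(output: ModelOutput,
          min_confidence: float = 0.90,
          max_auto_impact_gbp: float = 50_000.0) -> str:
    """Return 'automate' or 'escalate_to_human' for a single model output."""
    if output.confidence < min_confidence:
        return "escalate_to_human"   # guards against over-trust in weak outputs
    if output.estimated_impact_gbp > max_auto_impact_gbp:
        return "escalate_to_human"   # high-impact decisions keep a human in the loop
    return "automate"


if __name__ == "__main__":
    print(route(ModelOutput("dispatch", confidence=0.97, estimated_impact_gbp=12_000)))
    print(route(ModelOutput("curtail", confidence=0.70, estimated_impact_gbp=5_000)))
```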
Competency and Risk Ownership. Risk frameworks are only as effective as those who manage them. Ofgem recommends that personnel involved in AI decision-making:
Where appropriate, organisations may consider appointing a designated AI Risk Officer to ensure that accountability is maintained throughout the project lifecycle.
Implications for Policy and Practice
[3] Competency and Capability Building
Overview
This section addresses the knowledge, skills, and institutional capabilities needed to support the safe, ethical, and effective deployment of AI across the energy sector. Ofgem makes clear that AI systems cannot be governed without the right people in place. Stakeholders are expected to:
This is not simply about upskilling AI specialists but about embedding AI fluency across the entire organisation.
Executive Summary
Ofgem’s ethical AI guidance recognises that governance and risk management mechanisms are only as strong as the people who design, implement, and oversee them. Accordingly, it devotes a full section to competencies, calling on firms to assess and enhance their internal capacity to manage AI systems across their full lifecycle. This includes technical fluency, policy awareness, behavioural safeguards, and real-world operational understanding.
Unlike traditional software systems which are rules-based and deterministic, AI systems are probabilistic and adaptive. This means they require more nuanced assurance, more frequent re-evaluation, and a wider base of informed human oversight. From a regulatory and operational perspective, this elevates the need for institutional AI literacy, cross-disciplinary collaboration, and scenario-based learning within energy companies.
Ofgem sets out three broad areas of good practice:
Three additional observations include:
Key Themes
Foundational AI Literacy for All Staff. AI should not be the domain of technical teams alone. Ofgem calls for organisations to establish a baseline level of AI understanding across all business functions, including customer service, compliance, legal, procurement, IT and executive leadership. This ensures that every employee can recognise AI-driven decisions, understand their implications, and escalate concerns when needed.
Role-Specific Competency Development. Different roles require different levels of AI expertise. The guidance encourages organisations to formalise these expectations through competency matrices, professional development plans, and certification programmes where appropriate. Ofgem stresses the importance of aligning training to role responsibilities. For example:
Empowering AI Risk Owners. Ofgem suggests that organisations appoint AI risk officers or equivalent roles with end-to-end responsibility for the safe and ethical deployment of AI technologies. These roles are not just technical. Rather, they must span operational, ethical, and legal and compliance domains. Where a single person cannot cover all dimensions, a multidisciplinary AI governance group may be formed instead. These individuals should have the authority to:
Knowledge Management and Continuous Learning. Given the pace of AI innovation, static knowledge is a liability. Ofgem recommends that firms establish dynamic knowledge management systems that capture:
The above knowledge should inform training updates, policy revisions, and strategic planning. Firms should also consider participation in industry AI forums, working groups, and regulatory sandboxes to remain at the forefront of ethical AI development.
Horizon Scanning and Innovation Readiness. Firms must continuously scan the horizon for changes in AI capabilities, standards, and risks. This includes staying abreast of:
Where gaps are identified, whether in personnel, tools, or governance, firms must act to close them through capability-building or risk mitigation.
Implications for Policy and Practice
[4] Sector-Specific Examples
Overview
Ofgem’s guidance moves beyond principles by providing real examples of how AI is being applied in the energy sector. These case studies serve two primary purposes: (1) they highlight real-world contexts in which AI is currently being used or considered, and (2) they expose specific risk factors, control mechanisms, and governance expectations.
While use cases range from customer interaction to autonomous grid management, three AI domains are especially critical to energy and commodity trading firms due to their heightened regulatory sensitivity, as follows:
These areas carry high stakes financially, operationally and reputationally, and are subject to overlapping compliance regimes including REMIT, GDPR, the Competition Act, and sectoral licence conditions.
Executive Summary
Key use cases explored in the guidance include:
In each case, Ofgem outlines specific risks, control requirements, and mitigation expectations. Three of particular interest to energy and commodity firms are as follows:
A] AI in Forecasting and Predictive Analytics
Energy companies are increasingly deploying AI for load forecasting, asset maintenance, outage prediction, and extreme weather modelling. These systems improve efficiency, enable proactive infrastructure management, and reduce manual effort. However, their use also introduces complex governance challenges.
Risks:
Ofgem’s Expectations:
B] AI in Pricing and Algo Trading
The use of AI in pricing and algorithmic trading introduces powerful capabilities but also heightened regulatory scrutiny. These tools can assist with bid optimisation, portfolio management, volatility modelling, and price forecasting. However, they operate within the boundaries of REMIT, MAR, and competition law where outcomes, not just intent, determine compliance.
Risks:
Ofgem’s Expectations:
Regulatory Warning: IT and compliance managers could face personal liability under the Competition Act if AI-enabled trading systems produce anti-competitive outcomes, even unintentionally. More broadly, firms remain responsible for misconduct triggered by vendors’ or partners’ models. Ofgem reminds firms that algorithmic price manipulation or information-sharing may violate the Competition Act 1998, attracting fines of up to 10% of global turnover.
C] Use of Black-Box Systems
Ofgem notes that the rise of black-box models, including deep neural networks, foundation models, and third-party LLMs, raises serious concerns around explainability, safety, and control. While such systems can deliver superior performance, they also reduce transparency, impair auditability, and undermine accountability.
Risks:
Ofgem’s Expectations:
Compliance Consideration: Black-box models cannot be ethically or legally deployed without compensatory governance. Firms must document the rationale for use, the limits of understanding, and the remediation mechanisms if things go wrong.
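As a purely illustrative example of such compensatory governance, a black-box model can be wrapped so that its outputs are range-checked, logged, and replaced by a documented fallback whenever they fall outside agreed bounds. The bounds, fallback, and function names in this Python sketch are our own assumptions, not controls mandated by the guidance.

```python
# Hypothetical sketch: a guardrail wrapper around an opaque ("black-box") price
# forecasting model. Out-of-bounds predictions are rejected, logged and replaced
# with a documented fallback, so the system stays auditable even when the model
# itself cannot be fully explained. Bounds and fallback are illustrative only.
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("blackbox_guardrail")


def guarded_forecast(model: Callable[[dict], float],
                     features: dict,
                     lower: float,
                     upper: float,
                     fallback: Callable[[dict], float]) -> float:
    prediction = model(features)
    if lower <= prediction <= upper:
        log.info("accepted prediction %.2f for %s", prediction, features)
        return prediction
    # Compensating control: never act on an implausible black-box output.
    log.warning("rejected prediction %.2f (bounds %.2f-%.2f); using fallback",
                prediction, lower, upper)
    return fallback(features)


if __name__ == "__main__":
    opaque_model = lambda f: 420.0                  # stand-in for a third-party model
    naive_fallback = lambda f: f["last_observed_price"]
    price = guarded_forecast(opaque_model, {"last_observed_price": 95.0},
                             lower=10.0, upper=300.0, fallback=naive_fallback)
    print(price)
```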
Implications for Policy and Practice
[5] Appendices Analysis
While Ofgem’s main guidance outlines ethical principles and governance expectations, the five supporting appendices provide the necessary depth to translate those principles into day-to-day compliance, assurance, and oversight activities.
These appendices are not optional reading. They form the technical and legal foundation underpinning Ofgem’s expectations. Each one addresses a core domain where AI introduces regulatory exposure.
Appendix 1 - Legal and Regulatory Obligations
Overview
Appendix 1 reinforces the message that AI deployment in the UK energy sector must operate within the bounds of existing legal and regulatory frameworks. Ofgem does not propose new AI-specific legislation at this stage. Instead, it requires that the introduction of AI does not dilute or displace licensees’ statutory duties and obligations.
The appendix clarifies that AI must be integrated in a way that supports, rather than undermines, the legal responsibilities of licensees, market participants, and other regulated persons.
The appendix serves three functions:
Key Themes and Regulatory Expectations
Firms remain bound by Standard Licence Conditions (SLCs), such as:
AI-driven decisions such as automated customer communications, billing adjustments, or outage triage must meet the same fairness, accessibility, and procedural standards as human-led decisions.
Compliance Implication: AI does not reduce legal responsibility. If AI results in non-compliance (e.g., discriminatory outcomes, failure to respond to consumer needs), the firm remains fully accountable.
AI systems used for energy forecasting, dispatch, or trading must comply with REMIT obligations related to:
Ofgem explicitly reminds market participants that the use of AI does not exempt them from REMIT registration or reporting duties.
Compliance Implication: AI-enhanced trading systems must be transparent, auditable, and capable of being interpreted by compliance teams. Black-box models that obscure trade intent or data provenance pose serious compliance risks.
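One way to evidence that auditability in practice is an append-only, structured log of every algorithmic decision, capturing the model version and an input snapshot so that compliance teams can reconstruct trade intent after the event. The record schema in the Python sketch below is a hypothetical assumption, not a format required by REMIT or by Ofgem.

```python
# Illustrative sketch of an append-only decision log for an AI-assisted trading
# system, aimed at the transparency/auditability expectation described above.
# The record schema is a hypothetical assumption, not a REMIT-mandated format.
import json
import hashlib
from datetime import datetime, timezone


def log_trading_decision(path: str,
                         model_version: str,
                         inputs: dict,
                         decision: dict,
                         human_override: bool = False) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the input snapshot so the log is compact and tamper-evident.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "inputs": inputs,
        "decision": decision,
        "human_override": human_override,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")   # append-only JSON-lines audit trail


if __name__ == "__main__":
    log_trading_decision(
        "trade_decisions.jsonl",
        model_version="bid-optimiser-1.4.2",
        inputs={"product": "UK baseload DA", "forecast_price": 96.5},
        decision={"action": "submit_bid", "volume_mw": 50, "price": 95.0},
    )
```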
Ofgem is clear: AI use can breach competition law, particularly if:
Stakeholders can face severe penalties under the Competition Act 1998, including:
Ofgem highlights that personal liability may apply to IT directors or senior managers overseeing AI systems that produce anti-competitive outcomes, even unintentionally.
Stakeholders, including IT Directors and IT Managers, can bear personal responsibility for a breach of competition law and will therefore need to ensure that their AI system does not produce anti-competitive effects.
Compliance Implication: Compliance teams must treat AI systems as potential sources of collusion or abuse risk. Documented competition law assessments, testing, and monitoring are essential, particularly for pricing, dispatch, or bidding algorithms.
Organisations designated as Operators of Essential Services (OES) under the NIS Regulations 2018 must ensure AI systems do not introduce new cyber vulnerabilities or compromise system availability.
Operators must demonstrate that AI-enabled systems meet standards for resilience, confidentiality, and integrity under UK cybersecurity law where AI is used for functions such as:
Compliance Implication: AI systems may need to undergo security assurance or technical audits as part of broader cyber governance programmes. This includes testing for adversarial attacks, shadow AI use, and failure recovery capabilities.
Ofgem reaffirms a foundational regulatory principle: accountability cannot be outsourced. Where licensees procure AI solutions from third-party vendors, they must still ensure:
Compliance Implication: Procurement and vendor management functions must adopt enhanced due diligence for AI vendors, including assessment of model explainability, data handling, redress mechanisms, and legal compliance.
Enforcement and Regulatory Consequences
Ofgem outlines its enforcement toolkit, which includes:
It emphasises that failure to integrate AI responsibly into regulated activities may trigger enforcement action even if the failure was technical or outsourced.
Ofgem notes that compliance with its ethical AI guidance is likely to be treated as a mitigating factor, and non-compliance as an aggravating factor, in enforcement decisions. Where breaches occur, it will assess whether appropriate governance, risk controls, and foresight were in place.
Implications for Legal, Compliance, and Policy Teams
Appendix 2 – AI Standards
Overview
This appendix lists applicable standards that support ethical AI deployment. Ofgem notes that while it does not regulate these standards directly, it expects stakeholders to use them as part of good practice governance and assurance. Adherence to these standards can help organisations demonstrate compliance with legal duties and alignment with Ofgem’s expectations.
Key Standards Highlighted
Stakeholders are referred to the UK’s AI Standards Hub (click here) for evolving norms. Notable standards include:
Compliance Implication: Firms should select and adopt AI standards that map onto their risk profile and use cases. Demonstrating alignment with ISO 42001 or the NIST framework may serve as a strong indicator of regulatory maturity.
Appendix 3 – AI Supply Chain Management
Overview
Appendix 3 of Ofgem’s guidance addresses a key regulatory concern: the extended responsibility of licensees and market participants for AI systems and services developed or operated by third parties.
It formalises the expectation that AI supply chains must be actively governed and subject to proportionate assurance, not simply contracted and assumed to be compliant. It applies equally to:
Where AI influences licensable activities, consumer outcomes, or market behaviours, the procuring entity bears the full weight of regulatory responsibility regardless of who built, trained, or maintained the system.
Key Themes
Compliance Implication: Procurement and compliance teams must jointly review the vendor’s development and deployment practices, especially where models are deployed in “black box” form or updated continuously without operator involvement.
Appendix 3 encourages firms to embed explicit contractual safeguards into all third-party agreements involving AI tools or services. These should include:
Compliance Implication: Traditional SLAs and procurement templates are insufficient. AI contracts must be customised to reflect algorithmic behaviour, bias risk, and data governance obligations.
The obligation to ensure ethical AI use does not end at contract signature. Appendix 3 highlights the importance of establishing:
Compliance Implication: Firms must treat externally sourced AI as a living system, not a static product. This requires new forms of operational collaboration between AI risk teams, procurement, and IT.
AI vendors must be treated as critical service providers, particularly where:
Compliance Implication: Third-party risk management (TPRM) frameworks must be extended to include AI-specific due diligence, onboarding protocols, and periodic reviews. Many existing TPRM policies focus on financial, cybersecurity, or operational resilience. AI introduces new categories of risk (e.g., bias, explainability, and human over-reliance) that must now be captured.
Governance and Enforcement Considerations
Failure to properly govern AI supply chains can expose licensees to:
Ofgem is clear in its message - a lack of internal technical expertise or visibility over third-party models is not a defence. Firms must develop the organisational capacity to interrogate and challenge external AI systems and be able to demonstrate that they have done so when asked by regulators.
Practical Recommendations
To comply with the expectations set out in Appendix 3, firms should:
Appendix 4 – Data Use and Management
Overview
This appendix reinforces the foundational role of data governance in AI ethics. Ofgem underscores that flawed or poorly managed data can cause AI systems to fail or generate biased, unsafe, or misleading outcomes.
Core Expectations
Compliance Implication: Data governance teams must expand their remit to cover AI-specific risks such as dataset drift, model poisoning, data minimisation, and privacy attacks. Clear documentation is essential for demonstrating legal compliance and risk mitigation.
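As an illustration of what monitoring for dataset drift might look like in practice, incoming feature data can be compared against the training baseline with a two-sample Kolmogorov-Smirnov test and any materially shifted features flagged for review. The Python sketch below assumes SciPy is available; the significance threshold and feature names are arbitrary illustrative choices.

```python
# Hypothetical drift check: compare live feature values against the training
# baseline with a two-sample Kolmogorov-Smirnov test and flag shifted features.
# The 0.01 significance threshold is an arbitrary illustrative choice.
import numpy as np
from scipy.stats import ks_2samp


def drifted_features(baseline: dict[str, np.ndarray],
                     live: dict[str, np.ndarray],
                     alpha: float = 0.01) -> list[str]:
    flagged = []
    for name, reference in baseline.items():
        result = ks_2samp(reference, live[name])
        if result.pvalue < alpha:          # distribution has shifted materially
            flagged.append(name)
    return flagged


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = {"demand_mw": rng.normal(500, 50, 5_000)}
    live = {"demand_mw": rng.normal(560, 50, 1_000)}   # simulated drift
    print(drifted_features(baseline, live))            # ['demand_mw']
```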
Appendix 5 – AI and Cyber Security
Overview
Appendix 5 addresses a critical and often underestimated dimension of AI ethics: the intersection between artificial intelligence and cybersecurity. Ofgem recognises that as energy stakeholders adopt increasingly complex, data-driven AI systems, they also expand the cyber-attack surface of operational and market-facing systems.
Unlike traditional IT infrastructure, AI introduces novel security risks that stem from its dynamic, data-dependent, and often opaque nature. These risks challenge legacy control frameworks and demand a more integrated, AI-aware approach to cyber resilience. Ofgem’s message is clear: AI cannot be ethical unless it is also cyber secure.
Key AI Threat Vectors Identified by Ofgem
Compliance Implication: Training data pipelines must be locked down with the same rigour as sensitive codebases. Data provenance, integrity, and lineage controls are essential.
Compliance Implication: Robust input validation, context-aware filters, and adversarial testing should be standard in model deployment. Security teams must understand the operational context and potential misuse vectors of deployed models.
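A minimal sketch of such boundary validation is shown below: requests are checked against an allow-listed schema and plausible physical ranges before they ever reach the model, so malformed or adversarial inputs are rejected at the perimeter. The field names and ranges are illustrative assumptions, not values drawn from the guidance.

```python
# Illustrative sketch of pre-model input validation: requests are checked against
# an allow-listed schema and plausible physical ranges before they reach the
# model, so malformed or adversarial inputs are rejected at the boundary.
# The schema and ranges are hypothetical assumptions.
ALLOWED_FIELDS = {
    "demand_mw": (0.0, 60_000.0),        # plausible GB demand range
    "wind_output_mw": (0.0, 30_000.0),
    "temperature_c": (-30.0, 45.0),
}


def validate_model_input(payload: dict) -> dict:
    unexpected = set(payload) - set(ALLOWED_FIELDS)
    if unexpected:
        raise ValueError(f"unexpected fields rejected: {sorted(unexpected)}")
    for name, (lo, hi) in ALLOWED_FIELDS.items():
        value = payload.get(name)
        if not isinstance(value, (int, float)):
            raise ValueError(f"{name} missing or non-numeric")
        if not lo <= float(value) <= hi:
            raise ValueError(f"{name}={value} outside plausible range [{lo}, {hi}]")
    return payload


if __name__ == "__main__":
    print(validate_model_input({"demand_mw": 32_000, "wind_output_mw": 9_500,
                                "temperature_c": 4.2}))
```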
IT Governance Implication: Firms must extend shadow IT policies to explicitly cover AI tools. AI governance policies should mandate registration, approval, and security review of all external AI components.
Compliance Implication: Where explainability is limited, compensating controls such as output monitoring, redundancy, and human-in-the-loop validation are required. Explainability is not just an ethical concern—it is a prerequisite for detection and containment.
Practical Recommendations
To comply with Appendix 5 and reduce cyber-AI exposure, firms should explore the following: