Ofgem’s AI Guidance: What Every Energy Stakeholder Needs to Know

What Is It About

Ofgem’s AI guidance sets expectations for ethical, risk-aligned AI use across the UK energy sector. It introduces a non-prescriptive framework anchored on four core outcomes - safety, security, fairness, and environmental sustainability - covering AI governance, risk management, and lifecycle oversight.

Why It's Important

The guidance confirms Ofgem’s position that existing laws like REMIT, GDPR, and competition rules apply fully to AI. It elevates AI from a technical tool to a regulated activity, requiring proactive governance, defensibility, and board-level awareness - especially for high-risk uses like trading and forecasting.

Key Takeaways

Compliance teams must ensure AI systems are governed from design to decommissioning, with traceable oversight, fairness reviews, and robust risk controls. Black-box models, algorithmic pricing, and vendor tools are high-risk areas, with firms remaining fully liable for any resulting market or consumer harm.

Introduction

Ofgem published its first dedicated guidance on the ethical use of artificial intelligence (AI) in the UK energy sector (click here). It follows on from Ofgem’s January 2025 high-level strategic approach to AI (click here) which described its plan to set-out a robust method of regulating AI in the energy sector based on the UK government’s five AI principles.

Ofgem’s guidance offers a proportionate, outcomes-based framework designed to ensure that the adoption of AI supports innovation while safeguarding consumer welfare, operational integrity, and public trust.

At the heart of the guidance are four ethical outcomes that must be embedded across the AI lifecycle:

  • Safety;
  • Security;
  • Fairness; and
  • Environmental sustainability.

Ofgem’s aim is to give regulated entities and adjacent stakeholders a practical foundation for integrating ethical AI use within existing regulatory frameworks, including REMIT, the Competition Act 1998, data protection rules, and network reliability obligations.

It covers (i) governance measures and policies to ensure the effective oversight of AI, (ii) a risk approach to help stakeholders identify and manage risks associated with AI, and (iii) the competencies required for the ethical adoption of AI.

Jonathan Brearley, CEO of Ofgem, commented:

Delivering safe, secure, fair and environmentally sustainable AI outcomes requires stakeholders to comply with their regulatory obligations and to adopt good practice approaches. This includes governance and policies, risk and AI implementation throughout the AI life cycle including design, development, deployment, operations, monitoring, maintenance and decommissioning after use.

This guidance is not directed solely at energy licensees. Ofgem explicitly defines a broad target audience for compliance and engagement, comprising:

  • Licensees;
  • Market participants;
  • Operators of essential services;
  • Dutyholders;
  • Technology companies;
  • AI developers;
  • Consumer groups;
  • Other regulators; and
  • Government departments and agencies.

Ofgem’s guidance approach avoids prescriptive rulemaking around AI. Instead, it anchors its good practice recommendations in global best practices, citing eight principles-based risk frameworks that stakeholders are advised to consider:

  1. National risk register 2023 on GOV.UK
  2. Considerations for developing artificial intelligence systems in nuclear applications, Office for Nuclear Regulation
  3. Assurance of machine learning for use in autonomous systems (AMLAS) tool v1 user guide, University of York
  4. ISO42001 AI management systems
  5. AI Risk Management Framework, National Institute of Standards and Technology, US Department of Commerce
  6. Microsoft Responsible AI Standard v2, General Requirements
  7. The AI Security Institute
  8. Department for Science, Innovation and Technology’s (DSIT) Call for views on Cyber Security of AI

These publications provide practical frameworks with which to evaluate AI while Ofgem urges stakeholders to tailor their implementation based on the nature of the AI system, the intended use, and the potential downstream impacts.

To further assist firms, Ofgem supplements the core guidance with five technical appendices, each covering a specific area of regulatory relevance:

  • Appendix 1 – Legal and regulatory obligations in the energy and AI context;
  • Appendix 2 – Applicable AI standards and their current state;
  • Appendix 3 – Supply chain governance and third-party risk;
  • Appendix 4 – Data use and data governance principles; and
  • Appendix 5 – AI-specific cybersecurity measures.

Together, these annexes give stakeholders access to a consolidated view of compliance expectations, supported by international norms and practical examples. Three specific examples relevant to energy and commodity firms which we review in further detail include:

  1. AI in forecasting and predictive analytics. Demand prediction, extreme weather modelling, outage risk forecasting, and infrastructure lifecycle planning.
  2. AI in pricing and algo trading. Automated risk modelling, price discovery, and portfolio optimisation.
  3. Use of black-box systems. Deployment of opaque or partially explainable AI models, including foundation models and large language models (LLMs).

Ofgem’s Regulatory Position on AI

Ofgem’s AI position is that the existing regulatory framework is sufficient to govern AI use in its current form, however this is not a static assumption. It reserves the right to strengthen its approach as the use of AI evolves and as risks to consumers, market competition, or grid reliability become more complex or acute.

For compliance officers, third party AI vendors, and IT leaders, this guidance necessitates a shift in how AI projects are evaluated, approved, and monitored. It introduces a heightened duty of care requiring proactive governance, competent oversight, demonstrable fairness, and robust incident preparedness. Internal AI policies will need to integrate these expectations explicitly, backed by board-level awareness, cross-functional accountability, and audit-ready documentation.

We review and provide a deep dive into each of the guidance’s core areas including actionable takeaways firms can implement as follows:

  1. Governance and Policies;
  2. Risk Management Requirements;
  3. Competency and Capability Building; and
  4. Sector-Specific Examples.

We also analyse the five supporting appendices as they provide further depth in translating the core areas into day-to-day compliance, assurance, and oversight activities and form the foundation underpinning Ofgem’s expectations.

Thanks for your interest in our content.
Enjoy the read!

Introduction

Ofgem published its first dedicated guidance on the ethical use of artificial intelligence (AI) in the UK energy sector (click here). It follows on from Ofgem’s January 2025 high-level strategic approach to AI (click here) which described its plan to set-out a robust method of regulating AI in the energy sector based on the UK government’s five AI principles.

Ofgem’s guidance offers a proportionate, outcomes-based framework designed to ensure that the adoption of AI supports innovation while safeguarding consumer welfare, operational integrity, and public trust.

At the heart of the guidance are four ethical outcomes that must be embedded across the AI lifecycle:

  • Safety;
  • Security;
  • Fairness; and
  • Environmental sustainability.

Ofgem’s aim is to give regulated entities and adjacent stakeholders a practical foundation for integrating ethical AI use within existing regulatory frameworks, including REMIT, the Competition Act 1998, data protection rules, and network reliability obligations.

It covers (i) governance measures and policies to ensure the effective oversight of AI, (ii) a risk approach to help stakeholders identify and manage risks associated with AI, and (iii) the competencies required for the ethical adoption of AI.

Jonathan Brearley, CEO of Ofgem, commented:

Delivering safe, secure, fair and environmentally sustainable AI outcomes requires stakeholders to comply with their regulatory obligations and to adopt good practice approaches. This includes governance and policies, risk and AI implementation throughout the AI life cycle including design, development, deployment, operations, monitoring, maintenance and decommissioning after use.

This guidance is not directed solely at energy licensees. Ofgem explicitly defines a broad target audience for compliance and engagement, comprising:

  • Licensees;
  • Market participants;
  • Operators of essential services;
  • Dutyholders;
  • Technology companies;
  • AI developers;
  • Consumer groups;
  • Other regulators; and
  • Government departments and agencies.

Ofgem’s guidance approach avoids prescriptive rulemaking around AI. Instead, it anchors its good practice recommendations in global best practices, citing eight principles-based risk frameworks that stakeholders are advised to consider:

  1. National risk register 2023 on GOV.UK
  2. Considerations for developing artificial intelligence systems in nuclear applications, Office for Nuclear Regulation
  3. Assurance of machine learning for use in autonomous systems (AMLAS) tool v1 user guide, University of York
  4. ISO42001 AI management systems
  5. AI Risk Management Framework, National Institute of Standards and Technology, US Department of Commerce
  6. Microsoft Responsible AI Standard v2, General Requirements
  7. The AI Security Institute
  8. Department for Science, Innovation and Technology’s (DSIT) Call for views on Cyber Security of AI

These publications provide practical frameworks with which to evaluate AI while Ofgem urges stakeholders to tailor their implementation based on the nature of the AI system, the intended use, and the potential downstream impacts.

To further assist firms, Ofgem supplements the core guidance with five technical appendices, each covering a specific area of regulatory relevance:

  • Appendix 1 – Legal and regulatory obligations in the energy and AI context;
  • Appendix 2 – Applicable AI standards and their current state;
  • Appendix 3 – Supply chain governance and third-party risk;
  • Appendix 4 – Data use and data governance principles; and
  • Appendix 5 – AI-specific cybersecurity measures.

Together, these annexes give stakeholders access to a consolidated view of compliance expectations, supported by international norms and practical examples. Three specific examples relevant to energy and commodity firms which we review in further detail include:

  1. AI in forecasting and predictive analytics. Demand prediction, extreme weather modelling, outage risk forecasting, and infrastructure lifecycle planning.
  2. AI in pricing and algo trading. Automated risk modelling, price discovery, and portfolio optimisation.
  3. Use of black-box systems. Deployment of opaque or partially explainable AI models, including foundation models and large language models (LLMs).

Ofgem’s Regulatory Position on AI

Ofgem’s AI position is that the existing regulatory framework is sufficient to govern AI use in its current form, however this is not a static assumption. It reserves the right to strengthen its approach as the use of AI evolves and as risks to consumers, market competition, or grid reliability become more complex or acute.

For compliance officers, third party AI vendors, and IT leaders, this guidance necessitates a shift in how AI projects are evaluated, approved, and monitored. It introduces a heightened duty of care requiring proactive governance, competent oversight, demonstrable fairness, and robust incident preparedness. Internal AI policies will need to integrate these expectations explicitly, backed by board-level awareness, cross-functional accountability, and audit-ready documentation.

We review and provide a deep dive into each of the guidance’s core areas including actionable takeaways firms can implement as follows:

  1. Governance and Policies;
  2. Risk Management Requirements;
  3. Competency and Capability Building; and
  4. Sector-Specific Examples.

We also analyse the five supporting appendices as they provide further depth in translating the core areas into day-to-day compliance, assurance, and oversight activities and form the foundation underpinning Ofgem’s expectations.

Compliance Considerations

[1] Governance and Policies

Overview

Ofgem emphasises that governance must be formal, documented, and proportionate to risk. Key expectations include:

  • Board-level oversight of AI initiatives;
  • Clear risk ownership structures;
  • Internal AI policies and procedures aligned to legal and ethical outcomes; and
  • Oversight across the entire AI lifecycle, including system design, validation, and decommissioning.
  • Governance required at both project and organisational levels while incorporating clear oversight, board accountability, and supply chain management.

Governance must also extend to supply chains. AI vendors are subject to the same ethical expectations, but accountability for failures remains with the regulated entity not the developer.

Executive Summary

Ofgem places governance at the core of ethical AI adoption. Recognising the critical role that corporate leadership and policy frameworks play in shaping AI outcomes, the guidance calls for proportionate and scalable governance models that reflect both the risk posed and the operational context of AI systems.

  • This approach is sensitive to organisational size and complexity: while large licensees may require board-level oversight committees, smaller entities may meet expectations through streamlined but principled oversight structures.
  • Key expectations include:
  1. The articulation of an AI strategy with defined goals and risk parameters;
  2. Delegation of risk ownership to qualified leaders; and
  3. Embedding of AI considerations into existing risk and compliance structures.

Importantly, stakeholders must ensure that AI systems are not governed in isolation, but as components within broader organisational systems and subject to controls, audits, change management, and regulatory scrutiny.

Ofgem provides three layers of guidance in this area:

  1. Organisational Governance: Requires board or senior management oversight, the presence of clearly defined roles and escalation pathways, and access to skilled personnel.
  2. Project-Level Governance: Necessitates documented justifications for AI use, detailed planning, and adherence to ethical design and deployment practices.
  3. Policy Governance: Encourages the development of comprehensive internal policies and procedures to guide responsible AI development, integration, and monitoring.

Additionally, stakeholders are expected to implement effective data governance, supply chain controls, and redress mechanisms for consumers adversely affected by AI-driven decisions.

Firms remain responsible for any breaches caused by AI systems, whether those systems are developed in-house or procured via third parties.

Taken together, these expectations form a de facto governance blueprint. While not legally binding, they do establish a regulatory benchmark. Failure to meet them may become grounds for enforcement under existing statutory duties particularly if governance failures contribute to consumer harm, competition risks, or data misuse.

Key Themes

Strategic Governance. Ofgem begins by recommending that every organisation define its AI strategy, including anticipated use cases, risk appetites, and intended consumer outcomes.

  • This strategy should be periodically reviewed in light of emerging risks or significant organisational change;
  • Ofgem suggests that the strategy be board-approved where appropriate and that it be operationalised through a set of internal standards, procedures, and controls; and
  • It aligns with ISO 42001*, which emphasises the need for an integrated AI management system rooted in strategic direction and continuous improvement.

*ISO 42001 is the first international standard for Artificial Intelligence Management Systems, and specifies requirements for organisations to establish, implement, maintain, and continually improve responsible and effective AI governance and operations.

Roles and Responsibilities. A strong theme throughout the guidance is the necessity for clear accountability and ownership.

  • Firms should assign responsibility for AI risks to designated individuals or committees;
  • Ofgem recommends cascading roles and responsibilities down the management chain, with a focus on ensuring that supply chain participants also understand and observe these duties; and
  • This structure should be accompanied by decision-making authority and the ability to escalate unresolved risks particularly those affecting fairness, safety, or compliance with legal duties such as data protection or equality law.

Governance Across the AI Lifecycle. The governance framework must cover the entire AI lifecycle from model development to decommissioning.

  • This includes oversight of training data, model validation, system monitoring, change control, and impact assessment; and
  • Ofgem emphasises the need for redress mechanisms, such as consumer complaint channels and real-time monitoring, to ensure that AI decisions can be corrected promptly and transparently.

Management Information and Assurance. Decision-makers must be equipped with the necessary information to manage uncertainty.

  • It includes key performance indicators (KPIs), risk audit results, operational feedback, and scenario analyses;
  • The guidance encourages using independent review structures, including internal audit, to ensure that governance arrangements remain aligned with organisational risk appetite and regulatory expectations; and
  • There’s also a strong emphasis on communication, both internally (to ensure staff understand policies and expectations) and externally (to maintain transparency with consumers and regulators).

Supply Chain Oversight. Ofgem stresses that responsibility for AI does not end with procurement.

  • Firms must extend their governance structures to include third-party developers and vendors;
  • It includes verifying model explainability, performance, and compliance with ethical standards; and
  • Outsourcing AI development or management does not outsource regulatory liability, firms are still liable for any harm done by AI to consumers or the market more broadly.

icon_target RegTrail Insights

Implications for Policy and Practice

  • Compliance leaders must treat AI not as an emerging technology but as a core compliance concern with board-level visibility. Internal risk governance frameworks should be reviewed and extended to cover AI-specific roles and workflows.
  • Boards and senior executives should consider adding AI oversight to their risk governance charters. This may require co-opting new expertise, commissioning training programmes, and establishing cross-functional AI ethics committees.
  • Legal and compliance teams should update existing internal policies to reflect the distinct characteristics of AI. These updates should include guidance on fairness assessments, change management, consumer redress, and documentation standards.
  • Procurement and vendor management functions should reassess third-party contracts to include AI risk clauses, assurance requirements, and performance reporting tied to the ethical outcomes laid out by Ofgem.
  • All stakeholders should document and review their governance arrangements against Ofgem’s “good practice” checklist. This includes embedding AI-specific governance into broader digitalisation, ESG, and compliance strategies.

[2] Risk Management Requirements

Overview

AI introduces distinct risks across the energy value chain, such as:

  • Model drift and failure in edge cases;
  • Black-box opacity making audit and explainability difficult;
  • Automation bias in human-AI workflows; and
  • Market distortion or unfair exclusion of consumers.

Ofgem calls for proportionate, outcomes-focused risk controls, including:

  • Robust, scenario-based risk assessments;
  • Use of digital twins, fallback mechanisms, and human-in-the-loop processes;
  • Clear performance thresholds, model drift detection, and retraining protocols; and
  • Lifecycle-based governance from pre-deployment to decommissioning.

Risk frameworks must be auditable and designed to prevent foreseeable harm. Importantly, Ofgem encourages risk tolerance thresholds, not risk elimination, reflecting AI’s probabilistic nature.

Executive Summary

AI poses a distinct class of risks ranging from training bias and data drift to model overreach, failure in edge cases, reduced human oversight, and cyber-exploitability.

Ofgem highlights that AI is not to be viewed in isolation but as part of a wider system, where governance and human intervention remain essential safeguards. Accordingly, risk governance must extend beyond the algorithm to include operational context, human oversight, and long-term adaptability.

The guidance outlines nine good practice principles for AI risk management, which form a comprehensive risk lifecycle framework:

  1. Ensure AI is the most appropriate technology;
  2. Conduct robust, evidence-led risk assessments;
  3. Apply good practice in specification and development;
  4. Understand the broader system in which AI operates;
  5. Identify and mitigate failure modes;
  6. Build confidence in AI performance through rigorous testing;
  7. Ensure access to competent personnel;
  8. Account for human-AI interaction risks; and
  9. Monitor and review AI systems continuously.

The guidance explicitly recommends tools such as digital twins, scenario planning, and human-in-the-loop safeguards each helping manage the inherent unpredictability of AI in complex systems. Risks must also be documented and auditable, especially when connected to safety, data protection, or competition law obligations. Ofgem notes:

Depending on level of risk associated with the use of AI it may be beneficial for potential users of the technology to use risk matrix frameworks, for example Machine Learning Principles, on the NCSC website, and keep a record of the assessment to aid future use. Risk areas might include, but not be limited to, operational, legal and reputational risks.

Key Themes

Proportionality in Risk Management. Ofgem’s guidance resists blanket controls in favour of a graduated model.

  • The depth and formality of a risk response should scale with the potential consequence of system failure.
  • For instance, a chatbot providing non-binding guidance may require lightweight controls, while an AI forecasting model used for energy dispatch or pricing must undergo robust assurance and simulation testing.

AI Risk Lifecycle. The guidance introduces a full-spectrum view of AI risk from ideation to decommissioning. Risk must be assessed at each stage:

  • Pre-adoption: Determine if AI is the best tool for the task. Consider alternatives like rule-based systems or deterministic algorithms that offer greater explainability.
  • Development: Apply robust data governance, assumption testing, and ethics reviews. Understand how training data biases may embed into the model.
  • Deployment: Validate performance through scenario testing, failure mode analysis, and sensitivity checks. Ensure fallback mechanisms are in place.
  • Monitoring: Establish drift detection, feedback loops, and regular audits. Plan for model updates, environmental changes, and organisational shifts.
  • Decommissioning: Ensure systems are retired cleanly, and that any residual dependencies (e.g. linked APIs or datasets) are severed securely.

Failure Mode and Maloperation. Ofgem places strong emphasis on understanding how and why AI might fail and what the consequences would be.

  • Stakeholders must conduct impact assessments that account for both technical failure and human over-reliance or under-trust in AI systems.
  • The guidance recommends specific technical practices, including:
  1. Functional safety layers;
  2. Independent verification channels;
  3. Alert thresholds for anomalous outputs;
  4. Digital twins for parallel validation; and
  5. Fallback to manual control in adverse scenarios.
  • Ofgem acknowledges that some failure is inevitable, but regulators and operators alike must strive for fail-safe, not failure-free, systems.

Human-AI Interaction and Cultural Risk. Beyond technical dimensions, Ofgem highlights cultural and behavioural risk. Humans can over-trust AI (automation bias) or distrust it (automation aversion), leading to poor oversight.

  • Risk management strategies must, therefore, include training, change management, and role clarity to ensure that human judgment and AI tools complement rather than conflict with each other.

Competency and Risk Ownership. Risk frameworks are only as effective as those who manage them. Ofgem recommends that personnel involved in AI decision-making:

  • Possess appropriate technical fluency and regulatory awareness;
  • Understand the operational consequences of failure; and
  • Are supported by escalation frameworks and peer challenge mechanisms.

Where appropriate, organisations may consider appointing a designated AI Risk Officer to ensure that accountability is maintained throughout the project lifecycle.

icon_target RegTrail Insights

Implications for Policy and Practice

  • Boards must integrate AI risk management into enterprise risk frameworks, alongside market, compliance, and operational risks.
  • Compliance teams should build AI-specific extensions to existing risk registers, including indicators for drift, model performance, data lineage, and vendor risk.
  • Risk and audit teams should use structured methods such as AMLAS, NIST AI RMF, or ISO 42001 for quantifying and reviewing AI exposures.
  • Legal and regulatory affairs functions must ensure that AI risk assessments are auditable and defensible particularly where AI outputs could affect consumer billing, service exclusions, or competition outcomes.
  • Operations leaders should establish live monitoring dashboards, red-teaming protocols, and fault simulation drills for high-impact AI systems.

[3] Competency and Capability Building

Overview

This section addresses the knowledge, skills, and institutional capabilities needed to support the safe, ethical, and effective deployment of AI across the energy sector. Ofgem makes clear that AI systems cannot be governed without the right people in place. Stakeholders are expected to:

  • Build foundational AI literacy across business units;
  • Train staff in role-specific AI responsibilities (development, risk, compliance, oversight);
  • Appoint AI officers or governance leads with authority and technical insight; and
  • Engage in horizon scanning and support communities of practice to accelerate learning.

It is not simply about upskilling AI-specific resources but also about embedding AI fluency across the entire organisation.

Executive Summary

Ofgem’s ethical AI guidance recognises that governance and risk management mechanisms are only as strong as the people who design, implement, and oversee them. Accordingly, it devotes a full section to competencies, calling on firms to assess and enhance their internal capacity to manage AI systems across their full lifecycle. This includes technical fluency, policy awareness, behavioural safeguards, and real-world operational understanding.

Unlike traditional software systems which are rules-based and deterministic, AI systems are probabilistic and adaptive. This means they require more nuanced assurance, more frequent re-evaluation, and a wider base of informed human oversight. From a regulatory and operational perspective, this elevates the need for institutional AI literacy, cross-disciplinary collaboration, and scenario-based learning within energy companies.

Ofgem sets out three broad areas of good practice:

  1. Training and knowledge development plans: These should define baseline AI literacy for all staff and include role-specific training aligned to each individual's function within the AI lifecycle (i.e. governance, development, operations, oversight).
  2. Qualified and empowered decision-makers: Roles overseeing AI such as AI officers or risk leads must have the appropriate expertise, tools, and authority to govern AI use responsibly. These roles should span across regulatory compliance, technical design, data stewardship, and cybersecurity.
  3. Knowledge management systems: Firms should maintain up-to-date procedures to track AI developments, share lessons learned, and prepare for changes in standards or threats. This includes supporting ongoing research, proofs-of-concept, and external collaborations.

Three additional observations include:

  • Resilience and Scalability. Ofgem also notes that competency planning must take account of resilience and scalability i.e., the ability to adapt to new threats, novel technologies, or changes in policy. The guidance is especially sensitive to the speed at which AI is evolving, warning that today’s skills may not suffice tomorrow. This is particularly critical where AI is used in high-risk contexts such as grid control, autonomous systems, consumer pricing, or personal data handling.
  • Horizon Scanning. Ofgem encourages the use of horizon scanning to monitor new use cases, emergent risks, and innovation opportunities. It also recommends the formation of communities of practice, whether internal or cross-industry, to accelerate capability development and build a shared ethical baseline across the sector.
  • Cybersecurity. Lastly, cybersecurity is singled out as a critical area of competency. Security teams must understand the unique vulnerabilities of AI systems including adversarial inputs, model poisoning, and shadow AI and incorporate these into their threat monitoring and incident response protocols.

Key Themes

Foundational AI Literacy for All Staff. AI should not be the domain of technical teams alone. Ofgem calls for organisations to establish a baseline level of AI understanding across all business functions, including customer service, compliance, legal, procurement, IT and executive leadership. This ensures that every employee can recognise AI-driven decisions, understand their implications, and escalate concerns when needed.

  • Training should include:
  1. Basic concepts of how AI works;
  2. Ethical considerations (e.g. bias, transparency);
  3. The firm’s internal AI policies; and
  4. Processes for identifying, reporting, or escalating AI-related risks.
  • This type of foundational literacy is essential for building a culture of responsible AI use particularly in organisations that outsource AI capabilities to vendors or operate across multiple jurisdictions.

Role-Specific Competency Development. Different roles require different levels of AI expertise. The guidance encourages organisations to formalise these expectations through competency matrices, professional development plans, and certification programmes where appropriate. Ofgem stresses the importance of aligning training to role responsibilities. For example:

  • Developers and data scientists need advanced training in model design, training data ethics, and explainability techniques.
  • Governance professionals need awareness of AI regulations, ISO standards, and licensing conditions.
  • Operations teams need to know how to detect anomalies in AI behaviour, initiate fallback procedures, and interpret model outputs.
  • Executive sponsors need to understand AI risk exposure in relation to business strategy and regulatory risk.

Empowering AI Risk Owners. Ofgem suggests that organisations appoint AI risk officers or equivalent roles with end-to-end responsibility for the safe and ethical deployment of AI technologies. These roles are not just technical. Rather, they must span operational, ethical, and legal and compliance domains. Where a single person cannot cover all dimensions, a multidisciplinary AI governance group may be formed instead. These individuals should have the authority to:

  • Set or approve AI adoption policies
  • Review and approve AI projects based on risk thresholds
  • Halt deployments where risk tolerances are exceeded
  • Ensure alignment with Ofgem’s good practice guidance and other relevant frameworks

Knowledge Management and Continuous Learning. Given the pace of AI innovation, static knowledge is a liability. Ofgem recommends that firms establish dynamic knowledge management systems that capture:

  • Emerging cyber threats;
  • Regulatory changes;
  • Lessons from internal and external incidents;
  • Findings from R&D or pilots; and
  • Vendor performance and assurance gaps.

The above knowledge should inform training updates, policy revisions, and strategic planning. Firms should also consider participation in industry AI forums, working groups, and regulatory sandboxes to remain at the forefront of ethical AI development.

Horizon Scanning and Innovation Readiness. Firms must continuously scan the horizon for changes in AI capabilities, standards, and risks. This includes staying abreast of:

  • Advances in model architectures (e.g. foundation models, reinforcement learning);
  • Novel threats such as hallucination or prompt injection;
  • Updates to NIST, ISO, and DSIT guidance; and
  • Regulatory proposals from Ofgem, CMA, FCA, or the EU AI Act.

Where gaps are identified, whether in personnel, tools, or governance, firms must act to close them through capability-building or risk mitigation.

icon_target RegTrail Insights

Implications for Policy and Practice

  • Human Resources and Learning and Development departments must develop structured training pathways for all staff, from frontline agents to C-level executives, tailored to their interaction with AI systems;
  • Compliance teams should ensure that AI risk owners are appointed, trained, and formally empowered;
  • Cybersecurity functions must integrate AI-specific threat vectors into their monitoring and incident response frameworks;
  • Knowledge management leaders should update procedures to ensure rapid dissemination of new AI risks, guidance, and lessons learned; and
  • Boards and executive committees should receive regular AI briefings, participate in strategic reviews, and incorporate AI topics into risk committee agendas.

[4] Sector-Specific Examples

Overview

Ofgem’s guidance moves beyond principles by providing real examples how AI is being applied in the energy sector. These case studies serve two primary purposes: (1) they highlight real-world contexts in which AI is currently being used or considered, and (2) they expose specific risk factors, control mechanisms, and governance expectations.

While use cases range from customer interaction to autonomous grid management, three AI domains are especially critical to energy and commodity trading firms due to their heightened regulatory sensitivity as follows:

  1. Forecasting and predictive analytics;
  2. Pricing and trading; and
  3. The use of black-box AI systems.

These areas carry high stakes financially, operationally and reputationally, and are subject to overlapping compliance regimes including REMIT, GDPR, the Competition Act, and sectoral licence conditions.

Executive Summary

Key use cases explored in the guidance include:

  1. AI in consumer interactions – Virtual assistants, case summarisation tools, and personalised communication systems.
  2. AI to identify excluded or vulnerable consumers – Detection of digital exclusion and promotion of equitable treatment.
  3. AI in forecasting and predictive analytics – Demand prediction, outage risk forecasting, and infrastructure lifecycle planning.
  4. AI in cyber-physical systems – Automated control of energy storage, smart grid operations, and drones for infrastructure inspection.
  5. AI in pricing and trading – Automated risk modelling, price discovery, and portfolio optimisation.
  6. Use of black-box systems – Deployment of opaque or partially explainable AI models, including foundation models and LLMs.

In each case, Ofgem outlines specific risks, control requirements, and mitigation expectations. Three are of particular interest to energy and commodity firms are as follows:

A] AI in Forecasting and Predictive Analytics

Energy companies are increasingly deploying AI for load forecasting, asset maintenance, outage prediction, and extreme weather modelling. These systems improve efficiency, enable proactive infrastructure management, and reduce manual effort. However, their use also introduces complex governance challenges.

Risks:

  • Model drift due to changing environmental or market conditions;
  • Opacity in decision-making, making human validation difficult;
  • Overconfidence in model predictions leading to under-preparation; and
  • Feedback loops in which AI forecasts influence the very systems they predict.

Ofgem’s Expectations:

  • AI forecasting models must be validated not just technically, but operationally. Can human operators interpret outputs? Are assumptions defensible? What happens if the model fails?
  • Ofgem recommends hybrid strategies that combine AI with traditional forecasting methods, particularly for critical applications such as grid balancing or storm response.
  • Performance must be tracked continuously with alert thresholds and fallback mechanisms defined in advance.

B] AI in Pricing and Algo Trading

The use of AI in pricing and algorithmic trading introduces powerful capabilities but also heightened regulatory scrutiny. These tools can assist with bid optimisation, portfolio management, volatility modelling, and price forecasting. However, they operate within the boundaries of REMIT, MAR, and competition law where outcomes, not just intent, determine compliance.

Risks:

  • Unintentional collusion between firms using similar algorithms or training data;
  • Exploitation of market volatility, creating unfair pricing advantages;
  • Audit gaps where algorithms evolve without traceable human oversight; and
  • Infringement of REMIT obligations, especially regarding information disclosure and market manipulation.

Ofgem’s Expectations:

  • Firms must be able to demonstrate the intent, logic, and oversight of AI trading models. Audit trails, version control, and decision logs are essential.
  • Model explainability is not optional in this context. If a model influences prices or positions in regulated markets, it must be intelligible to compliance and audit functions.
  • Regular review cycles, independent validation, and cross-team escalation processes are expected.
  • Regularly monitor for anomalous trading patterns and adverse effects.
  • Limit the use of opaque third-party tools where explainability is compromised

Regulatory Warning: IT and compliance managers could face personal liability under the Competition Act if AI-enabled trading systems produce anti-competitive outcomes even if unintentionally. More broadly, firms remain responsible for misconduct triggered by vendors’ or partners’ models. Ofgem reminds firms that algorithmic price manipulation or information-sharing may violate the Competition Act 1998, attracting fines of up to 10% of global turnover.

C] Use of Black-Box Systems

Ofgem notes that the rise of black-box models including deep neural networks, foundation models, and third-party LLMs raises serious concerns around explainability, safety, and control. While such systems can deliver superior performance, they also reduce transparency, impair auditability, and undermine accountability.

Risks:

  • Lack of transparency regarding how decisions are made;
  • Inability to identify or correct harmful outputs;
  • Difficulty establishing regulatory defensibility, especially under GDPR or licence conditions; and
  • Over-reliance by humans, assuming accuracy without understanding.

Ofgem’s Expectations:

  • Black-box AI may be used only where risk is proportionate, and controls are robust.
  • Empirical testing for bias and accuracy;
  • Stakeholders must conduct explainability assessments, even if using proxy or model-agnostic techniques.
  • Where explainability is inherently limited, Ofgem expects compensating controls:
  1. A formal risk tolerance review before deployment.
  2. Independent validation of outputs;
  3. Corroboration mechanisms, such as parallel systems or digital twins;
  4. Clear thresholds for when human intervention is required; and
  5. Transparent communication to consumers or market participants where AI may affect outcomes.

Compliance Consideration: Black-box models cannot be ethically or legally deployed without compensatory governance. Firms must document the rationale for use, the limits of understanding, and the remediation mechanisms if things go wrong.

icon_target RegTrail Insights

Implications for Policy and Practice

  • Product teams and data scientists should embed fairness and explainability checks directly into use case design and AI development;
  • Compliance teams must assess new use cases against relevant licence obligations, competition law, and fairness principles;
  • Board-level governance should incorporate scenario planning for AI failure modes and track use case performance over time;
  • Cybersecurity and operations leads must ensure wraparound controls, fallback procedures, and red-teaming for physical systems using AI; and
  • Market surveillance and trading compliance teams must rigorously audit AI trading models and ensure competitive neutrality.

[5] Appendices Analysis

While Ofgem’s main guidance outlines ethical principles and governance expectations, the five supporting appendices provide the necessary depth to translate those principles into day-to-day compliance, assurance, and oversight activities.

These appendices are not optional reading. They form the technical and legal foundation underpinning Ofgem’s expectations. Each one addresses a core domain where AI introduces regulatory exposure.

Appendix 1 - Legal and Regulatory Obligations

Overview

Appendix 1 reinforces the message that AI deployment in the UK energy sector must operate within the bounds of existing legal and regulatory frameworks. Ofgem does not propose new AI-specific legislation at this stage. Instead, it requires that the introduction of AI does not dilute or displace licensees’ statutory duties and obligations.

The appendix clarifies that AI must be integrated in a way that supports, rather than undermines, the legal responsibilities of licensees, market participants, and other regulated persons.

The appendix serves three functions:

  1. Clarifies the legal instruments that remain fully applicable to AI-enabled operations;
  2. Highlights regulatory areas where AI creates new exposure or complexity; and
  3. Reinforces that legal liability is not mitigated by outsourcing or technological novelty.

Key Themes and Regulatory Expectations

  1. AI Must Not Breach Existing Licence Conditions

Firms remain bound by Standard Licence Conditions (SLCs), such as:

  • SLC 0 and 0A (Supply licence): Treating Consumers Fairly;
  • SLC 4A: Operational Capability; and
  • SLC 10AA (Distribution licence): Protecting Consumers in Vulnerable Situations.

AI-driven decisions such as automated customer communications, billing adjustments, or outage triage must meet the same fairness, accessibility, and procedural standards as human-led decisions.

Compliance Implication: AI does not reduce legal responsibility. If AI results in non-compliance (e.g., discriminatory outcomes, failure to respond to consumer needs), the firm remains fully accountable.

  1. REMIT and Market Integrity

AI systems used for energy forecasting, dispatch, or trading must comply with REMIT obligations related to:

  1. Market manipulation;
  2. Timely and accurate disclosure of inside information; and
  3. Registration and transparency.

Ofgem explicitly reminds market participants that the use of AI does not exempt them from REMIT registration or reporting duties.

Compliance Implication: AI-enhanced trading systems must be transparent, auditable, and capable of being interpreted by compliance teams. Black-box models that obscure trade intent or data provenance pose serious compliance risks.

  1. Competition Law Compliance

Ofgem is clear: AI use can breach competition law, particularly if:

  • Algorithms are designed or evolve to fix prices, coordinate bids, or exchange sensitive commercial information; or
  • AI is used in a way that reinforces or facilitates abuse of dominance.

Stakeholders can face severe penalties under the Competition Act 1998, including:

  • Up to 10% of global turnover;
  • Director disqualification (up to 15 years);
  • Dawn raids and investigative powers by Ofgem as a concurrent competition authority.

Ofgem highlights that personal liability may apply to IT directors or senior managers overseeing AI systems that produce anti-competitive outcomes—even unintentionally.

Stakeholders, including IT Directors and IT Managers, can bear personal responsibility for a breach of competition law and will therefore need to ensure that their AI system does not produce anti-competitive effects.

Compliance Implication: Compliance teams must treat AI systems as potential sources of collusion or abuse risk. Documented competition law assessments, testing, and monitoring are essential particularly for pricing, dispatch, or bidding algorithms.

  1. Cybersecurity and Essential Services

Organisations designated as Operators of Essential Services (OES) under the NIS Regulations 2018 must ensure AI systems do not introduce new cyber vulnerabilities or compromise system availability.

Operators must demonstrate that AI-enabled systems meet standards for resilience, confidentiality, and integrity under UK cybersecurity law where AI is used for functions such as:

  • Real-time energy balancing;
  • Autonomous grid controls; or
  • Predictive maintenance across critical assets.

Compliance Implication: AI systems may need to undergo security assurance or technical audits as part of broader cyber governance programmes. This includes testing for adversarial attacks, shadow AI use, and failure recovery capabilities.

  1. Outsourcing and Third-Party AI

Ofgem reaffirms a foundational regulatory principle: accountability cannot be outsourced. Where licensees procure AI solutions from third-party vendors, they must still ensure:

  • Compliance with all regulatory obligations;
  • Alignment of vendor tools with ethical AI principles; and
  • Clear service-level agreements, risk transfer protocols, and audit rights.

Compliance Implication: Procurement and vendor management functions must adopt enhanced due diligence for AI vendors, including assessment of model explainability, data handling, redress mechanisms, and legal compliance.

Enforcement and Regulatory Consequences

Ofgem outlines its enforcement toolkit, which includes:

  • Financial penalties: Up to 10% of turnover for sectoral breaches;
  • Consumer redress orders;
  • Provisional or final enforcement orders; and
  • Investigations under the Competition Act 1998.

It emphasises that failure to integrate AI responsibly into regulated activities may trigger enforcement action even if the failure was technical or outsourced.

Ofgem notes that compliance with its ethical AI guidance will likely be used as a mitigating or aggravating factor in enforcement decisions. Where breaches occur, it will assess whether appropriate governance, risk controls, and foresight were in place.

icon_target RegTrail Insights

Implications for Legal, Compliance, and Policy Teams

  • Legal teams should map AI systems against all existing statutory obligations and licence conditions. Where ambiguity exists (e.g. AI-generated decisions vs. human decisions), internal policies should be clarified.
  • Compliance leads must develop controls to detect and prevent unintended non-compliance arising from automated systems, including competition, data protection, and fairness breaches.
  • Procurement teams must include AI compliance clauses and audit rights in all contracts with external vendors.
  • Boards and executive committees must be briefed on AI legal exposures and the potential for director-level accountability.

Appendix 2 – AI Standards

Overview

This appendix lists applicable standards that support ethical AI deployment. Ofgem notes that while it does not regulate these standards directly, it expects stakeholders to use them as part of good practice governance and assurance. Adherence to these standards can help organisations demonstrate compliance with legal duties and alignment with Ofgem’s expectations.

Key Standards Highlighted

Stakeholders are referred to the UK’s AI Standards Hub (click here) for evolving norms. Notable standards include:

  • ISO/IEC 42001: AI management systems - a comprehensive framework for risk, governance, transparency, and ethics in AI.
  • BSI (British Standards Institution) AI guidance: UK-specific standards addressing algorithmic bias, explainability, and data assurance.
  • IEEE and NIST frameworks: Including the NIST AI RMF, which outlines risk-based AI management practices.

Compliance Implication: Firms should select and adopt AI standards that map onto their risk profile and use cases. Demonstrating alignment with ISO 42001 or the NIST framework may serve as a strong indicator of regulatory maturity.

Appendix 3 – AI Supply Chain Management

Overview

Appendix 3 of Ofgem’s guidance addresses a key regulatory concern: the extended responsibility of licensees and market participants for AI systems and services developed or operated by third parties.

It formalises the expectation that AI supply chains must be actively governed and subject to proportionate assurance, not simply contracted and assumed to be compliant. It applies equally to:

  • Commercial AI vendors;
  • Open-source AI tools;
  • Consultants or developers creating bespoke models; and
  • Software-as-a-Service (SaaS) AI platforms integrated into regulated operations.

Where AI influences licensable activities, consumer outcomes, or market behaviours, the procuring entity bears the full weight of regulatory responsibility regardless of who built, trained, or maintained the system.

Key Themes

  1. Supply Chain Transparency and Oversight. Ofgem expects firms to have clear visibility into their AI supply chain. This includes understanding:
  • The provenance and architecture of the AI systems being procured;
  • The governance frameworks used by vendors during model development;
  • The nature of datasets used in training (e.g., source, demographic representativeness, and legal basis); and
  • The mechanisms for testing, validation, and explainability.

Compliance Implication: Procurement and compliance teams must jointly review the vendor’s development and deployment practices, especially where models are deployed in “black box” form or updated continuously without operator involvement.

  1. Contractual Controls for AI Risk Transfer

Appendix 3 encourages firms to embed explicit contractual safeguards into all third-party agreements involving AI tools or services. These should include:

  • Ethical standards clauses (aligned to Ofgem’s four outcomes);
  • Compliance with UK regulatory and data protection law;
  • Explainability and auditability requirements;
  • Obligations to notify material changes to the model or data sources;
  • Incident escalation and access to logs or decision traces; and
  • Right to conduct independent assurance reviews or technical audits.

Compliance Implication: Traditional SLAs and procurement templates are insufficient. AI contracts must be customised to reflect algorithmic behaviour, bias risk, and data governance obligations.

  1. Ongoing Monitoring and Performance Assurance

The obligation to ensure ethical AI use does not end at contract signature. Appendix 3 highlights the importance of establishing:

  • Continuous performance monitoring of AI models (e.g., drift detection, fairness metrics, outcome validation);
  • Model change tracking, including documentation of re-training events or architecture changes;
  • Defined ownership of oversight responsibility within the procuring organisation;
  • Vendor accountability structures that align with Ofgem’s four ethical outcomes.

Compliance Implication: Firms must treat externally sourced AI as a living system not a static product. This requires new forms of operational collaboration between AI risk teams, procurement, and IT.

  1. Third-Party Risk Management Integration

AI vendors must be treated as critical service providers, particularly where:

  • AI influences billing, eligibility, complaint triage, or service prioritisation;
  • AI plays a role in market-facing or trading operations; or
  • AI forms part of grid control or infrastructure automation.

Compliance Implication: Third-party risk management (TPRM) frameworks must be extended to include AI-specific due diligence, onboarding protocols, and periodic reviews. Many existing TPRM policies focus on financial, cybersecurity, or operational resilience. AI introduces new categories of risk (e.g., bias, explainability, and human over-reliance) that must now be captured.

Governance and Enforcement Considerations

Failure to properly govern AI supply chains can expose licensees to:

  • Enforcement under licence conditions, particularly if consumer harm results;
  • Competition investigations, if externally sourced algorithms distort pricing or trading outcomes;
  • Data protection violations, if vendors process personal data unlawfully or embed discriminatory patterns;
  • Cybersecurity risks, particularly with cloud-based AI tools handling critical systems or data.

Ofgem is clear in its message - a lack of internal technical expertise or visibility over third-party models is not a defence. Firms must develop the organisational capacity to interrogate and challenge external AI systems and be able to demonstrate that they have done so when asked by regulators.

icon_target RegTrail Insights

Practical Recommendations

To comply with the expectations set out in Appendix 3, firms should:

  • Map all AI systems used in licensable functions, including third-party components and services;
  • Implement a standard AI vendor due diligence process before onboarding, including model architecture, training data, explainability, and bias risk;
  • Update contracts and SLAs with explicit clauses covering ethical outcomes, regulatory obligations, model updates, and audit rights;
  • Assign ownership for third-party AI oversight to a designated role or committee (e.g. AI Risk Officer, Compliance Oversight Function);
  • Monitor system performance and conduct periodic reviews of third-party tools, particularly after any performance drift, public incident, or change in business context;
  • Establish exit and fallback plans for AI systems deemed non-compliant or high risk, including contingency arrangements for model replacement or manual override.

Appendix 4 – Data Use and Management

Overview

This appendix reinforces the foundational role of data governance in AI ethics. Ofgem underscores that flawed or poorly managed data can cause AI systems to fail or generate biased, unsafe, or misleading outcomes.

Core Expectations

  • Data quality controls must be in place to ensure that AI models are trained on accurate, up-to-date, and relevant datasets;
  • Consent and privacy obligations must be maintained under UK GDPR. It includes lawful basis for automated processing and profiling;
  • Fairness and representativeness: Training data must reflect the diversity of the energy consumer base, especially with respect to vulnerable or excluded groups; and
  • Traceability and auditability: Stakeholders must be able to track the origin, transformation, and application of data across AI pipelines.

Compliance Implication: Data governance teams must expand their remit to cover AI-specific risks such as dataset drift, model poisoning, data minimisation, and privacy attacks. Clear documentation is essential for demonstrating legal compliance and risk mitigation.

Appendix 5 – AI and Cyber Security

Overview

Appendix 5 addresses a critical and often underestimated dimension of AI ethics: the intersection between artificial intelligence and cybersecurity. Ofgem recognises that as energy stakeholders adopt increasingly complex, data-driven AI systems, they also expand the cyber-attack surface of operational and market-facing systems.

Unlike traditional IT infrastructure, AI introduces novel security risks that stem from its dynamic, data-dependent, and often opaque nature. These risks challenge legacy control frameworks and demand a more integrated, AI-aware approach to cyber resilience. Ofgem’s message is clear: ethical AI is not secure unless it is cyber secure.

Key AI Threat Vectors Identified by Ofgem

  1. Model Poisoning. Adversaries may manipulate training datasets to embed malicious behaviours into AI models particularly those trained on public or third-party data. This can cause models to produce biased, inaccurate, or adversarial outputs, even if they appear normal during validation.

Compliance Implication: Training data pipelines must be locked down with the same rigour as sensitive codebases. Data provenance, integrity, and lineage controls are essential.

  1. Adversarial Attacks and Prompt Injection. Certain classes of AI models especially those handling unstructured data or natural language inputs are vulnerable to adversarial inputs that subtly manipulate outputs. In generative models, prompt injection attacks can elicit unsafe or misleading results.

Compliance Implication: Robust input validation, context-aware filters, and adversarial testing should be standard in model deployment. Security teams must understand the operational context and potential misuse vectors of deployed models.

  1. Shadow AI and Unmanaged Tools. The increasing availability of AI tools (e.g., low-code ML platforms, foundation model APIs) creates the risk that staff or departments may adopt AI capabilities outside formal IT governance structures. These unsanctioned tools, sometimes referred to as “shadow AI”, can introduce undocumented models into production environments.

IT Governance Implication: Firms must extend shadow IT policies to explicitly cover AI tools. AI governance policies should mandate registration, approval, and security review of all external AI components.

  1. Explainability and Detection Gaps. Black box models with low explainability hinder security teams’ ability to detect misuse, manipulation, or unintended feedback loops. In cyber-physical environments (e.g., automated battery management or grid control), this lack of observability can pose significant systemic risk.

Compliance Implication: Where explainability is limited, compensating controls such as output monitoring, redundancy, and human-in-the-loop validation are required. Explainability is not just an ethical concern—it is a prerequisite for detection and containment.

icon_target RegTrail Insights

Practical Recommendations

To comply with Appendix 5 and reduce cyber-AI exposure, firms should explore the following:

  • Update cyber risk assessments to include AI model architecture, training data, and adversarial pathways;
  • Train cybersecurity and AI teams jointly, ensuring mutual understanding of technical and operational risks;
  • Require vendors to conduct and share adversarial test results for any externally sourced AI tools;
  • Establish a central AI asset register that tracks ownership, model lineage, performance history, and incident history;
  • Incorporate AI into business continuity and incident response plans, including scenario exercises involving model failure, drift, or attack.

Want to read more?