Regulatory
Regulation is increasingly defining how AI and digital technologies are built, deployed, and governed, but the global picture is far from uniform. While the EU continues to drive a rules-based model, countries like Japan are taking a principles-led, innovation-first approach. The UK sits somewhere in between, emphasising flexibility while still signalling tougher oversight ahead.
2026 will be a pivotal year. Policymakers are advancing new frameworks to address fast-moving risks in data, privacy, AI, and digital infrastructure. We’re seeing greater experimentation with regulatory sandboxes, sharper enforcement on data and content governance, and early proposals for laws tailored to emerging technologies. The direction of travel is clear: more scrutiny, more coordination, and more accountability.
For businesses, this fragmented landscape brings both opportunity and complexity. Staying competitive will require monitoring developments across jurisdictions, adapting governance structures, and anticipating compliance hurdles before they land.
The EU’s Digital Omnibus exemplifies this shift - aimed not at adding rules, but at making existing ones more coherent and usable. If successful, it could reshape how future digital policy is designed, prioritising clarity and practicality over expansion.
Regulatory/cross-border compliance
The EU’s Digital Omnibus package is a crucial step towards simplifying and streamlining the digital regulatory framework across AI, data access, privacy, and cybersecurity.
Whilst there is no AI-specific legislation proposed by the EU for 2026, the Omnibus package aims to reduce complexity, cut down on overlaps between different regulations including the EU AI Act, and make compliance easier and less costly.
On 19 November 2025, the European Commission published two proposals:
- “Digital Omnibus Regulation Proposal”: targeted amendments to data, privacy, and cybersecurity laws;
- “Digital Omnibus on AI Regulation Proposal”: simplification of aspects of the EU AI Act.
Key aspects of the Digital Omnibus Regulation Proposal include measures to modify cookie rules and address “consent fatigue” among consumers. It also proposes amendments to the definition of personal data in the EU’s General Data Protection Regulation (GDPR) and a single point for incident reporting under data protection and cybersecurity laws. There are also significant amendments to consolidate EU laws on data access and re-use.
The Digital Omnibus on AI Regulation Proposal outlines provisions to facilitate the use of personal data in the training of AI systems and a postponed entry into application of rules for high-risk systems under the EU AI Act.
Nearly all sectors will be affected by the Digital Omnibus Package, whether through data protection, data access, and cybersecurity laws or through the use of AI in their value chains. Stakeholders can give feedback on the proposals until 26 January. Additionally, they can share their views via a Digital Fitness Check (a stress-test of the rulebook) until 11 March 2026.
The future Proposal for a Digital Networks Act (DNA) is expected to review and likely replace the European Electronic Communications Code (EECC), which provides the regulatory framework for the telecommunications sector.
While the EECC allows for national variations, the Commission initially planned the DNA as a directly applicable regulation to achieve greater harmonisation, though opposition from some Member States may result in a directive instead. The Commission aims to strengthen the EU’s digital connectivity system and accelerate fibre and 5G/6G deployment.
In June 2025, the Commission launched a public consultation on the future DNA. The proposal is now scheduled to be published on 20 January 2026.
Recent statements from national governments, including Germany and Italy, reveal differing views on spectrum governance and cost-sharing, signalling complex negotiations ahead. Meanwhile, a debate continues in the background over whether content providers should contribute financially to internet service providers to offset infrastructure costs (dubbed “network fees” or “fair share”).
The DNA will influence telecoms infrastructure strategies and investment planning. Telecoms incumbents, challengers, and content providers should review the proposal once published, monitor negotiations, and engage early to inform policymakers about their business needs.
2026 will be the year of overrated regulatory simplification.
In November 2025, the European Commission launched a Digital Omnibus Package to simplify the regulatory framework for AI, privacy, data, and cybersecurity. Key topics:
- Wider exemptions and delayed AI Act implementation: The Commission has produced “targeted amendments” to prepare “its optimal implementation, also ahead of the full entry into force of its provisions” - that’s code for delays. These amendments will be subject to significant debate, but a one-year grace period for high-risk AI systems, delayed transparency requirements (i.e. labelling of AI content) and wider exemptions for high-risk systems are under consideration.
- Minor changes to the Data Act: a broader exemption from the obligation to share data protected by trade secrets, and a lighter regime for custom-made data processing services and SMEs.
- Harmonised breach reporting: There is a proposal for a single reporting point for all incidents, covering GDPR, NIS2, DORA, and CER. Various breach reporting elements might also change, including timing (from 72 hours to 96 hours) and the thresholds for notification to individuals.
We expect that many companies will use the Digital Omnibus Package as an opportunity to have their voices heard. This requires assessing the proposed changes for their business impact and developing corresponding proposals for regulatory improvements.
AI liability will remain governed by the EU AI Act and product safety rules for the foreseeable future, rather than being subject to new EU legislation.
This is despite a previous European Commission proposal to introduce a specific new law in the form of an AI Liability Directive (AILD).
EU negotiations on AI-specific liability rules have stalled. The European Commission’s 2022 Proposal for a Directive would have made it easier for victims to claim compensation for harm caused by AI systems, placing the onus on AI providers to prove that they had not caused the harm in question.
In October 2025, the Commission’s Proposal for an AI Liability Directive was officially withdrawn since “no foreseeable agreement” could be reached. Subsequently, the European Parliament’s Legal Affairs Committee voted in December 2025 against challenging the withdrawal decision. This has effectively closed the door to new legislative initiatives on this topic in the coming years.
Providers and deployers of AI systems will be relieved not to face a presumption of causality when defending AI liability cases. Instead, they should continue to manage risk under existing laws, such as product liability and national civil codes, while at the same time ensuring compliance with the EU AI Act.
Blockchain
In 2026, blockchain and distributed ledger technologies (DLT) will continue to move toward mainstream adoption and there will be increasing convergence with other technologies, like AI and IoT.
Blockchain and DLT adoption has accelerated over the past twelve months, expanding beyond its traditional base in financial services (where P2P models increasingly replace traditional systems) into cross-sector applications.
Use cases include energy grid balancing for regional communities, secure health data sharing, and supply-chain initiatives integrating commercial-settlement layers using crypto assets. This reflects a broader shift toward multi-technology ecosystems, even as regulation remains in flux globally - bringing not only compliance requirements for businesses operating across jurisdictions but also opportunities, such as in electronic identification and trust services.
To support this, the best-practices report for the third cohort of the European Blockchain Sandbox (supported by Bird & Bird) will be published in Q1 2026, highlighting regulatory issues and solutions and increasing legal certainty.
At the intersection of rapidly developing use cases, new business models and evolving horizontal and sector-specific regulation, additional regulatory issues will inevitably emerge. Blockchain/DLT solutions may themselves become critical enablers of more effective and efficient compliance and oversight - supporting traceability, auditability, and ultimately “compliance by design.”
Businesses should assess how combined technology solutions can drive innovation while streamlining compliance, leveraging regulatory tools and sandbox insights to strengthen future business propositions.
AI regulation (outside the EU)
In 2026, we are likely to see developments in the UK government’s plans to introduce regulatory sandboxes for AI. In its consultation document on the sandboxes, it said evidence from pilots could lead to regulatory reforms, enabling UK businesses to adopt trusted AI.
The government launched a consultation in late 2025, and some legislation will be needed over the coming year to provide the mandate, budget, and scope for the AI sandboxes. Setting up the regulatory sandboxes will take time.
A minister from the UK’s Department of Science, Innovation and Technology (DSIT) has said it’s premature to set a timeline for a UK AI Bill before the government has responded to its consultation on AI and copyright, which closed in February 2025. Findings are pending, but an interim report is expected by Christmas and the final report by March 2026. As a result, an “AI bill” seems unlikely in the next King’s Speech, though we’ll be watching closely.
There’s also uncertainty over whether any UK AI Bill would take a general or a narrower approach. In the King’s Speech in July 2024, the government proposed narrow legislation targeting LLM developers, but in July 2025 the science minister, Patrick Vallance, told Parliament that it was considering broader, cross-cutting legislation for AI.
Over the coming year or two, we may see incremental regulatory change on AI through amendments to existing laws. For example, the Consumer Protection Act 1987 is under review by the Law Commission. The Law Commission is also reviewing automated decision-making by the public sector, following growing concern in this area, which could lead to updated administrative law. In addition, the government has introduced amendments to the new Crime and Policing Bill to allow AI models to be tested for CSAM prior to their release. DSIT is considering whether the Online Safety Act 2023 is broad enough in scope to cover certain risks posed by AI or whether it needs amending.
With so many aspects unresolved, there’s a clear risk that AI usage by UK industry will be impacted by uncertainty. In-house lawyers must: (1) keep up to date on developments; and (2) decide how best to advise the business within the current environment. We expect the UK government to implement an incremental, narrow regulatory framework for AI in the UK but clarity might not come until late 2026.
Since 2022, the Chinese government has published a series of AI regulations, covering areas such as algorithmic recommendation, deep synthesis, and generative AI.
This year, the focus is shifting towards content labelling, data annotation, and data security, in particular biometric data. We have also seen penalties handed down by the CAC for violations of these regulations.
In the coming year, China is expected to continue strengthening AI governance by stepping up enforcement on content management, algorithm filings, and AI ethics, with more AI security standards to be drafted and published.
As Japan positions itself as a global AI leader, AI will increasingly become embedded as core infrastructure across the economy, government, and daily life.
Under Japan's soft law approach, expect a proliferation of guidelines, particularly sector-specific frameworks, to ensure AI safety, trustworthiness, and transparency.
The Japanese government has moved swiftly following the enactment of the AI Act in September 2025. Since the legislation took effect, the newly established AI Strategic Headquarters has begun drafting the AI Basic Plan alongside foundational guidelines for appropriate AI use and conducting studies examining potential misuse scenarios.
The wave of generative AI litigation has reached Japan. In late August 2025, three major newspaper publishers filed suits against Perplexity AI for copyright infringement, signalling that Japanese rights holders will actively defend their intellectual property against AI companies, even those based overseas.
AI will underpin national strategy and economic security, so businesses should prepare for new policies prioritising AI across each industry, as well as investment initiatives across both private and public sectors.
As Saudi Arabia continues to position itself as a regional AI hub, providing conditions that attract technology developers and investors, investment in data centre / AI infrastructure will increase.
In the last 12 months, we have seen global names in the AI technology and infrastructure space announcing AI-related investments in the Kingdom, from Alibaba and AWS to Nvidia and Tencent. Investments in local AI technology companies, e.g. PIF-backed HumAIn, have also attracted global attention.
Saudi Arabia’s telecoms regulator recently held a public consultation on a draft “Global AI Hub” law. Focused largely on establishing a “data embassy”-type framework, it aims to reassure foreign cloud service customers and operators that their own laws would apply to data held within the contemplated framework. The public consultation is likely to have resulted in the draft law being sent back to the drawing board, and further regulatory efforts in this space are expected.
Cloud service providers and cloud customers that service the Saudi market or are based there should familiarise themselves with the current Saudi regulatory landscape and keep an eye out for developments that may affect their operations/businesses. If a ‘Global AI Hub’ becomes law, this could have implications for lawful access by foreign authorities in respect of data held in data centres in the Kingdom.
The rapid development of agentic AI brings heightened risks to organisations using and developing agentic AI systems, and will require them to rethink and supplement their approach to AI governance.
In October 2025, Singapore’s Cyber Security Agency released a draft Addendum on Securing Agentic AI Systems for public consultation. The Addendum provides a lifecycle-based framework to help secure agentic AI systems, including assessing security risks, threat modelling, supply chain security, access controls, validation, continuous monitoring, and maintaining human-in-the-loop oversight.
Organisations deploying agentic AI must address the unique risks associated with this technology as an integral part of their broader AI governance framework. This involves identifying AI use cases, assessing potential risks, implementing mitigation mechanisms, and continuously monitoring AI systems to ensure responsible deployment. Recognising AI-related risks as a governance priority is essential - requiring board-level oversight and cross-functional coordination among IT, security, legal, compliance, and business teams.
Quantum regulation
The quantum revolution is coming, and investment is being directed heavily into the quantum space. In the last year, both direct and indirect investments have risen, and this is set to continue in 2026.
The EU fears it will fall behind in the quantum technology race, as happened with AI. It is expected to adopt the Quantum Act in mid-to-late 2026, which aims to deliver on the Quantum Europe Strategy, focusing on research and innovation, industrialisation, and the security and resilience of quantum supply chains. Strategic projects in quantum research and production will benefit from combined EU and national financing, coordinated EU procurement, and streamlined permitting procedures for critical quantum facilities.
Initially, the focus for businesses in the quantum space should be on achieving the next quantum technology breakthrough. They should also ensure their voices are heard in the consultation process and seize the opportunities for research and funding that will emerge from implementing the Quantum Act.