Artificial Intelligence

The global AI market is projected to reach $642 billion in 2029 (GlobalData). Whilst the promised productivity gains didn’t fully materialise in 2025, AI’s evolution is accelerating. Generative AI continues to dominate headlines, raising ongoing questions about data use, rights, and the status of AI-generated outputs.

Meanwhile, agentic AI is shifting us into a new era of artificial intelligence. While today’s models produce single-step responses, tomorrow’s agentic systems will plan, act and execute multi-stage workflows with minimal human input. This transition raises major legal and governance challenges. Autonomous code may fall outside copyright protection, and no regulatory framework yet fully addresses agentic behaviour.

The challenge lies in the virtually infinite web of real-world actions these agents could take. Constrain an agent too tightly and it’s ineffective; too loosely and the range of unintended real-world actions becomes vast, with early signs showing agents misinterpreting goals in subtle but significant ways. As AI moves from generating content to acting autonomously, it raises new questions about control, accountability, and trust.

Generative AI

Feyo Sickinghe Of Counsel, Netherlands

Dr. Nils Lölfing Senior Counsel, Germany

2026 will mark the inflection point where AI governance shifts from aspiration to strategic advantage. Organisations that embed adaptive governance models will outperform those stuck in static compliance or, worse, still experimenting without governance.

The past year exposed a critical spectrum: while a minority of businesses with robust AI governance derive significant value through improved product quality and competitive advantage, most are still experimenting with immature or non-existent governance programmes. Everyone talks about AI policies, but few can demonstrate how their systems actually behave. The gap between aspiration and operational evidence has become a real risk.

As shown by the EU AI Act, regulators worldwide are shifting from principles to proof, with Impact Assessments emerging as accountability anchors demanding documentation and traceability. Yet organisations are at vastly different starting points: some lack governance entirely, others treat it as a static compliance checkbox, while AI systems themselves remain inherently dynamic and evolving.

Regardless of maturity level, three actions are critical: establish operational proof through living Impact Assessments that evolve with your systems; adopt cross-functional governance structures; and treat governance as a continuous value driver, not a point-in-time exercise.

Businesses will be ahead in 2026 if they recognise that effective AI governance is dynamic, measurable, and operational: a strategic differentiator enabling scaled deployment while maintaining trust.

A sustainable remuneration framework that enables both creators and GenAI model providers to develop their respective activities has yet to emerge.

The ongoing tension between GenAI model providers and rightsholders continues to generate copyright claims and litigation in the EU.

The EU’s Copyright in the Digital Single Market Directive (“CDSM”, 2019/790) introduced specific exceptions for text and data mining (“TDM”), while the EU AI Act requires general-purpose AI model providers to implement policies complying with EU copyright law and TDM opt-outs. The scope of the TDM exception and the modalities by which rightsholders can opt out of it are highly debated and have given rise to litigation in the EU.

A key development in this area is the growing wave of AI-related copyright litigation emerging across the EU. Rightsholders are increasingly bringing claims against GenAI model providers, challenging the use of copyrighted works in training datasets and the outputs generated by AI systems. This surge in legal disputes comes at a critical juncture, as the CDSM Directive is scheduled for review in 2026. The upcoming review may present an opportunity to address uncertainties surrounding AI and copyright, potentially establishing clearer frameworks for the use of protected content in AI development.

Businesses should closely follow these upcoming debates on copyright and AI.

Anne-Sophie Lampe Partner, France

Pin-Ping Oh Partner, Singapore

More countries are moving toward introducing text and data mining (“TDM”) exceptions, which allow copyrighted works to be used for AI training without rightsholders’ permission, provided certain conditions are met.

However, recent developments show mixed outcomes. In October 2025, Australia rejected a TDM exception after strong pushback from the creative industry. Hong Kong has deferred its plans for similar reasons. The UK is consulting on broadening its existing TDM exception to an EU-style model with opt-out provisions.

A TDM exception could significantly affect business models. For AI companies, an exception could reduce costs by enabling use of copyrighted works without licence fees. For rightsholders such as publishers, an exception could mean revenue loss and a need to rethink monetisation strategies. Understanding these shifts will be critical for planning and risk management.

It is probably safe to say that the anticipated AI productivity gains, and consequent reductions in force, didn’t fully materialise in 2025.

Instead, we saw a number of businesses in the sector scale back restructuring programmes in recognition that the wholesale replacement of human employees across certain functions is further off than was previously thought.

We do not expect that this will deter businesses from continuing their efforts to embrace AI in 2026, particularly technology organisations that are developing innovative AI technologies at speed and lead the market relative to other sectors. Tools such as agentic AI are becoming increasingly sophisticated and, crucially, capable of integration into an organisation’s existing technology infrastructure, and therefore of delivering greater productivity.

We expect 2026 to be the year that AI is implemented and made available to the workforce in a more targeted fashion, with a focus on efficiency gains and role augmentation, rather than sweeping changes and headcount reductions. Workforces will likely become more streamlined and focused, rather than completely unrecognisable, with targeted headcount reduction and cost cutting expected to continue.

Furat Ashraf Partner, UK

Charles Hill Associate, UK

Agentic AI

Agentic AI is redefining software engineering. While today’s generative tools assist with code suggestions, the next stage will see AI systems capable of planning, generating, and testing code semi-autonomously – turning human developers from creators into supervisors.

Over the past year, major technology players have introduced early “coding agents” that can independently propose architectures, refactor legacy code, and debug within predefined environments. The developer’s role is shifting from line-by-line creation to overseeing quality assurance and governance.

There are no specific laws for agentic AI yet, so existing rules apply. But copyright questions are set to intensify as autonomously generated software will likely fall outside protection.

Contracts will also need updating: parties must clearly define if, how, and under whose supervision one or several AI agents may act within development pipelines, and how their outputs will be reviewed and governed.

Clients should extend their AI compliance frameworks beyond employees to the AI agents themselves: defining where these agents can operate, what tasks they can perform, and when human intervention is mandatory. Setting these “agent boundaries” now will be critical to staying compliant, auditable, and commercially secure as coding becomes more autonomous.
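
To make the idea of “agent boundaries” concrete, the sketch below shows one way they might be expressed in code. It is a minimal illustration only, assuming a hypothetical in-house agent framework; the tool names, task labels and approval triggers are invented for the example, not taken from any specific product.

    # Minimal sketch of an "agent boundary" policy. All names and
    # categories are hypothetical; real deployments need audited controls.
    from dataclasses import dataclass, field

    @dataclass
    class AgentBoundary:
        allowed_tools: set[str] = field(default_factory=set)         # where the agent can operate
        allowed_tasks: set[str] = field(default_factory=set)         # what tasks it can perform
        human_approval_tasks: set[str] = field(default_factory=set)  # when a person must sign off

        def check(self, tool: str, task: str) -> str:
            """Return 'allow', 'escalate' (human review) or 'deny'."""
            if tool not in self.allowed_tools or task not in self.allowed_tasks:
                return "deny"
            if task in self.human_approval_tasks:
                return "escalate"
            return "allow"

    # Example: a coding agent may refactor and run tests autonomously,
    # but merging to production always requires human intervention.
    policy = AgentBoundary(
        allowed_tools={"repo", "ci_pipeline"},
        allowed_tasks={"refactor", "run_tests", "merge_to_production"},
        human_approval_tasks={"merge_to_production"},
    )
    assert policy.check("repo", "refactor") == "allow"
    assert policy.check("ci_pipeline", "merge_to_production") == "escalate"
    assert policy.check("email_client", "send_message") == "deny"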

Dr. Simon Hembt Counsel, Germany

Oliver Belitz Counsel, Germany

In 2026, the regulatory governance of agentic AI will shift from ensuring high-level system compliance to conducting detailed risk analysis of the entire 'agentic stack'.

2025 was the year that agentic AI moved from buzzword to reality - not as a single product, but as complex frameworks that 'stack' multiple large language models and connect them to other systems via APIs, controlling them through automation workflows (known as orchestration).

The practical application of the EU AI Act to these stacks presents a new legal challenge. The Act's rigid definitions, such as "GPAI Model" and "GPAI System", weren’t designed for these new multi-part autonomous systems.

Applying these definitions to both the overall framework and its components is a major legal challenge. It requires in-depth technical and legal analysis, as well as a thorough understanding of the GPAI secondary legislation issued in 2025, particularly the new GPAI Guidelines and Codes of Practice.

Companies can no longer treat an agentic framework as one big system.

They must conduct 'agentic AI mapping' that answers the following questions:

  • What are the distinct components (LLMs, APIs and workflows)?
  • How is each component classified under the AI Act (e.g. system or model)?
  • What role does the company assume for each part (e.g. provider or deployer)?

Only then can businesses understand their specific compliance obligations and begin implementing them.
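
As a purely illustrative sketch of what such a mapping exercise might produce, the snippet below records one hypothetical stack as structured data. The components, classifications and roles are invented examples, not legal conclusions; classification under the AI Act requires case-by-case analysis.

    # Illustrative 'agentic AI mapping' of a hypothetical stack.
    # Classifications and roles are examples only, not legal advice.
    from dataclasses import dataclass

    @dataclass
    class StackComponent:
        name: str           # the distinct component
        kind: str           # "LLM", "API" or "workflow"
        ai_act_class: str   # e.g. "GPAI model", "AI system", "out of scope"
        company_role: str   # e.g. "provider" or "deployer"

    stack = [
        StackComponent("third-party LLM", "LLM", "GPAI model", "deployer"),
        StackComponent("CRM connector", "API", "out of scope", "deployer"),
        StackComponent("orchestration workflow", "workflow", "AI system", "provider"),
    ]

    for component in stack:
        print(f"{component.name}: {component.ai_act_class} (role: {component.company_role})")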

Agentic AI is pushing contracting into new territory - its ability to plan and act independently means a shift in contractual risk from defective outputs (e.g. buggy code) to unauthorised actions (e.g. wasting large amounts of money on a bad investment).

Over the past year, we’ve moved from static, prompt-bound systems to agents that can interpret abstract goals, generate plans, and execute them through real-world tools.

The key development is the potentially infinite web of possible “real-world” actions an agent could take. While users could define a finite set of permitted actions, limiting the system to only those actions would limit its utility.

However, the set of actions which could be taken but which the user would likely have prohibited is effectively unlimited - and we’re already seeing agents misinterpret goals in small but meaningful ways. This matters because the same agent that can manage your budget could, with similar logic, drain your account or expose sensitive data if its constraints are poorly set.

For clients, the implication is clear: contracts need to shift from documenting features to documenting boundaries, safeguards, and escalation paths.

That means specifying operational scope, tool access, monitoring rights, kill-switches, and limits on autonomous decision-making. It may also mean building in contractual mechanisms that give the parties the operational flexibility to evolve safety guardrails quickly.
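
To illustrate what a kill-switch and a limit on autonomous decision-making might look like operationally, here is a minimal sketch; the class, action names and spend cap are hypothetical assumptions made for the example.

    # Minimal sketch: a kill-switch and autonomous spend limit wrapped
    # around an agent's actions. All names and figures are hypothetical.
    class KillSwitchEngaged(Exception):
        pass

    class GuardedAgent:
        def __init__(self, spend_limit: float):
            self.spend_limit = spend_limit  # contractual cap on autonomous spend
            self.spent = 0.0
            self.halted = False             # the customer's kill-switch

        def kill(self) -> None:
            """Contractually mandated kill-switch: halt all further actions."""
            self.halted = True

        def act(self, action: str, cost: float = 0.0) -> None:
            if self.halted:
                raise KillSwitchEngaged("agent has been halted")
            if self.spent + cost > self.spend_limit:
                raise PermissionError(f"'{action}' exceeds the autonomous spend limit")
            self.spent += cost
            print(f"executed: {action} (total spend: {self.spent})")

    agent = GuardedAgent(spend_limit=1000.0)
    agent.act("rebalance campaign budget", cost=200.0)  # within agreed scope
    agent.kill()                                        # monitoring triggers the kill-switch
    # agent.act("buy ad inventory", cost=50.0)          # would now raise KillSwitchEngaged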

Will Bryson Partner, UK

Oliver Belitz Counsel, Germany

In 2026, contractual governance for agentic AI will pivot from just accepting the result of a service to scrutinising the process.

2025 saw the rise of ‘shadow implementation’ - where suppliers in tech, marketing, and BPO use AI Agents to code, draft reports, or manage campaigns. As this becomes mainstream, existing Statements of Work (SOWs) are often failing.

This creates critical new tensions depending on contract type:

In effort-based contracts (e.g. billable hours), the key question is how efficiency gains from agentic AI are shared between the parties.

In results-based contracts (e.g. delivering a software module), the agent’s autonomous "reasoning and actions" force the customer to focus on the process (e.g. to assess whether copyright may have arisen) - an area that was previously irrelevant when only the final 'work product' was owed.

Liability hinges on the SOW. If it doesn’t address AI, proving a breach is difficult.

  • Customers must demand transparency. The SOW must be updated to establish a fair commercial model for agent-driven efficiencies in hourly billing, and to mandate human review for results-based projects.
  • Providers must proactively define the autonomy of their service. A precise SOW is their best defence against claims that their 'autonomous employee' misunderstood the assignment.

Want to find out more?

  • AI legal services
  • Generative AI
  • EU AI Act guide
  • Contracting for AI
  • AI Regulatory Horizon Tracker
  • Global AI Governance Report
  • From reactive tools to digital colleagues: the rise of agentic AI
  • Taking the EU AI Act to practice: decoding the GPAI Code of Practice and the training
  • AI Act 2.0: The Commission’s regulatory remix proposal
  • Talent wars: the impact of artificial intelligence on human resource practices across Asia

Our AI experts

Toby Bond Partner, UK

Dr. Miriam Ballhausen Partner, Germany
