AI Regulation and The Course of 'Goldilocks Porridge'

AI regulation is stuck between too much and too little. Katharine Wooller on why the UK's patchwork approach is stifling financial services innovation.


Author: Katharine Wooller, Chief Strategist, Financial Services, Softcat plc

Getting AI regulation just right is a Goldilocks problem. Too early and you crush innovation. Too late and the damage is already done. Katharine Wooller, Chief Strategist at Softcat, on where financial services is sitting right now, and why the porridge is getting cold.

A patchwork of regulation

The EU AI Act recently celebrated its first birthday: its first phase, which banned “unacceptable risk” AI, came into force in February 2025. Whilst there have not yet been any publicly reported fines for non-compliance, the Act’s main high-risk system regime applies from 2 August 2026, so the big wave of enforcement is expected thereafter.

The UK doesn’t have a single horizontal equivalent, but rather seeks to regulate AI through existing laws and sector-specific regulations, that is to say UK GDPR / the Data Protection Act and the Data (Use and Access) Act (DUAA), whilst the FCA looks for compliance with existing rules, for example the Consumer Duty, SM&CR, DORA and PRA model risk management. Indeed, the FCA, in its published approach updated in February of this year, said it doesn’t plan to introduce extra AI regulations.

A nebulous problem

As is typical of any new technology, there has been much handwringing over the regulation of AI: how prescriptive should the rules be? Who should regulate the technology, and how?

This of course is a pattern we have seen in the industry with other incoming technologies; chronologically, good examples are peer-to-peer lending, payday loans and crypto, and I have no doubt we will soon see the same dialogue around quantum computing.

What is clear, however, is that we are lagging behind Europe, and that lack of regulatory clarity stifles innovation and investment. In my day job advising 2,000+ regulated firms on sourcing, modernising and optimising their technology, I see senior technology leaders, and boards, keen to take advantage of the efficiencies that AI can deliver but nervous of regulatory risk. Many adopt a “wait and see” approach which, whilst prudent, creates an ever-shrinking window to gain commercial advantage from being an early adopter of a bleeding-edge technology.

The price of regulatory tardiness

The damage that AI can potentially wreak is no longer theoretical. There are some headline-grabbing litigation events globally that are worth reflecting on: Getty Images v Stability AI in the UK High Court is believed to be the first UK generative AI copyright trial, and there are broader copyright disputes around AI training, for example New York Times v OpenAI and Microsoft.

It is also worth mentioning the legal issues around biometric data and privacy in AI. In the US, Clearview AI allegedly scraped billions of images from public websites and social media without the notices and consents required under BIPA, Illinois’ biometric data privacy law.

Recently, the Sunday Times reported on a pending US lawsuit in Florida alleging that a man died by suicide after developing a delusional romantic attachment to a chatbot, referring to it as “his wife”; at one point he allegedly tried to intercept a truck that he believed carried a body the chatbot could inhabit.

I see strong parallels with some of the early litigation against cryptocurrency, where, particularly in the US, we saw regulation via litigation rather than a clear rule set. For AI, as these cases conclude, there will be deep ramifications for how all industries ethically deploy AI and how we create guardrails for audit and assurance.

The price of being too early to the party

Regulators and governments find themselves in a difficult spot. Financial services needs innovation to provide choice and competition for consumers. The size of the economic prize for AI is huge: research by PwC published in 2025 suggested that AI adoption would contribute £79.3bn to the economy by 2035.

Balancing regulation with fostering innovation, whilst preventing harm, is a real challenge. Much noise has been made at government level around ramping up AI adoption, particularly in building the infrastructure needed for AI and increasing UK compute capacity.

I am yet to see much commentary around exactly how the regulatory landscape will support this, particularly when financial services is one of the main use cases for the technology. The government approach from the 2023 AI Regulation White Paper (and follow-up work) suggests that regulators apply five principles within their remits:

  • Safety, security & robustness
  • Transparency & explainability
  • Fairness
  • Accountability & governance
  • Contestability & redress

A survey, with a solid sample size, of whether firms feel that current regulation meets these criteria would be a really interesting exercise!

AI is like porridge

I do concede that working with such fast-moving and potent technology means a constantly shifting landscape. Regulation, like Goldilocks’ porridge, is exquisitely hard to get “just right”; time will tell how we balance innovation whilst preventing harm.