From deepfake scams to data leaks via shadow AI, organisations are facing new, rapidly evolving threats that demand urgent action and robust governance. Here, we take a look at some of the specific threats and vulnerabilities that AI tools can introduce.
A landmark incident hit Thames Valley Bank, a mid‑size digital bank in the UK, in July 2025. Its marketing team used an unsanctioned, public-facing generative AI model, often called “shadow AI”, to craft customer communications.
When staff uploaded spreadsheets containing customer names, risk profiles, and transactional data, that confidential information became embedded in the AI’s training data. Attackers later extracted personal data via prompt injection. Over 75,000 customers had sensitive financial information exposed. Regulators at the ICO and FCA launched a joint probe, marking this as the first major UK compliance crisis directly attributable to generative AI misuse.
This underscores how unauthorised tools adopted by staff without oversight, a.k.a. "shadow AI", can create catastrophic data leaks even without traditional hacking.
Across sectors in the UK, unapproved AI tool usage has surged. A study by the Society for Computers and Law reported that 81% of corporate legal departments surveyed use AI tools not sanctioned by their organisation, with 47% doing so without oversight.
UK firms face mounting legal exposure under UK‑GDPR, as confidential legal or merger documents can inadvertently be uploaded into free AI systems, risking their disclosure in future AI outputs.
IBM’s 2025 Cost of a Data Breach study found that 20% of real breaches involved shadow AI. Organisations lacking AI governance paid significantly more (£498,000 extra) per breach. Only 3% had proper access controls to stop attackers from exploiting AI models or plug‑ins.
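For illustration only, the sketch below shows the kind of lightweight pre‑submission check an organisation might place in front of external AI tools: prompts are scanned for obvious personal data and redacted before anything leaves the corporate boundary. The function names and patterns here are hypothetical and deliberately simplistic; a real deployment would use a dedicated DLP or AI‑gateway product rather than hand‑rolled regexes.

```python
import re

# Illustrative sketch only (hypothetical helper, not a production DLP tool):
# the patterns below are simplistic and would need tuning for real data.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_sort_code": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
    "account_number": re.compile(r"\b\d{8}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace likely PII with placeholders and report what was matched."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text, findings

if __name__ == "__main__":
    prompt = "Draft a letter to jane.doe@example.com about account 12345678."
    safe_prompt, found = redact_pii(prompt)
    print(f"Matched: {found}")   # e.g. ['email', 'account_number']
    print(safe_prompt)           # prompt with placeholders instead of PII
```

In practice such a filter would sit in a proxy or gateway between staff and any public AI service, logging what was blocked so security teams can see where shadow AI use is happening.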
UK businesses are being targeted with increasingly sophisticated AI‑powered phishing and deepfake scams.
The NCSC warns that generative AI tools enable cybercriminals to write highly convincing messages and clone voices for impersonation. In one London case, criminals used an AI‑cloned executive’s voice to trick a staff member into transferring thousands of pounds.
According to a 2025 report, 37% of breaches involved AI‑generated phishing and 35% involved deepfakes. These attacks are also faster to mount: crafting a convincing phishing campaign that once took around 16 hours can now be done in as little as five minutes, significantly amplifying attackers’ capabilities.
The head of the NCSC warned in early 2024 that AI is lowering barriers to entry for ransomware, enabling attackers to identify and exploit victims more efficiently.
The UK has already endured high‑impact ransomware incidents in public services, such as the NHS and cultural institutions, and AI is likely to turbocharge similar attacks in the future.
A government internal audit flagged that an AI tool used by the Ministry of Defence, developed with Textio and hosted via AWS, stored the names, roles, and email addresses of military personnel on US cloud infrastructure. Though classified as low risk, the deployment raised breach concerns, particularly about inadequate controls over AI use in the public sector.
Additionally, the DeepMind/NHS Streams scandal surfaced when DeepMind accessed data for over 1.6 million NHS patients without proper consent. While not strictly generative AI, it exemplifies how AI systems handling sensitive health data can cause breaches via questionable data-sharing practices.
SMEs are increasingly targeted: an AI‑driven breach costs a small or mid‑sized business £108,000 on average, and 46% of cyberattacks in 2025 targeted small businesses. Given that only around 31% of UK organisations have AI governance policies, and even fewer have implemented access controls to prevent AI misuse, this should come as no surprise.
Financial firms especially lag: though 90% use AI tools, only 18% have governance policies, and under 30% have safeguards for client data usage in AI systems.
AI‑related breaches in the UK are increasingly real and material, stemming not just from malicious external hacks, but from employee misuse of generative tools, deepfake phishing scams, and unchecked public-sector AI deployments.
The most significant events include the Thames Valley Bank breach (July 2025), widespread shadow AI incidents across legal and corporate settings, and rising AI‑assisted phishing and ransomware attacks. Together they underline the urgent need for robust AI governance, strict access controls, and staff training on the safe use of generative tools.