The AI risk gap: Why businesses lack coverage

What does the AI boom mean for business risks, with landmark regulations like the EU AI Act taking effect and most companies not confident in their current cover? Here’s what you need to know.

Emerging Risk Article 7 min Wed, May 7, 2025

Every generation sees new technologies with the potential to change the world, but only a few have a lasting impact. First, the internet revolutionized how we connect and work. Then came smartphones and cloud computing, reshaping daily life and business. Now, artificial intelligence (AI) has proven itself the latest game-changer. Far from a flash in the pan, a recent CFC survey found 79% of businesses already use AI in some capacity, with most planning to use the technology even more in the coming years.

Be it data analytics, research and development, price modelling, customer support or anything in between, AI is rapidly becoming core to business operations. However, this wholesale adoption also introduces new risks, heightened by a changing regulatory landscape with different jurisdictions taking different approaches.

What’s certain is that the need for AI cover is greater than ever. But we found only 32% of businesses are confident their current insurance policies adequately address AI risks. So, how can you be sure you have the right cover?

New tools, new rules: How AI use cases and risks are changing

Like the internet, smartphones and cloud computing, AI is impacting life in multiple ways. That flexibility shows in how it’s used across industries: driving efficiencies, lowering costs or delivering more accurate results, faster. According to the CFC survey, more than two in three businesses are harnessing AI for improved data and analytics, while over half are using AI in marketing and to enhance employee productivity.

But the scene isn’t standing still. In digital healthcare, AI was first used in clinical work to improve diagnostics. While it still has a role to play in diagnosing illness and disease, digital healthcare has recently seen a surge in AI use for back-office tasks like appointment booking and data entry. The risk profile has expanded from bodily injury to include claims like data breaches and intellectual property (IP) disputes.

This raises the question: what’s next for AI? In any industry, as AI innovation accelerates, we have to expect its use cases to evolve further, changing the risk landscape with them. One day, a fintech may find itself using AI in a way previously unforeseen. As insurers, we have to be ready to cover the risks this might introduce and spur innovation, playing our part in the AI revolution.

One size doesn’t fit all: Navigating global AI regulation

While AI adoption is accelerating, the regulatory landscape is evolving just as fast as governments race to respond—some with caution, others to drive innovation.

In the EU, the AI Act is the first comprehensive regulation of its kind, establishing a legal framework for AI development and use. The Act classifies AI systems based on the risks they pose, aiming to reduce uncertainty and promote safe, responsible development. Canada looked to follow suit with its own comprehensive Artificial Intelligence and Data Act (AIDA). However, this has since stalled, leaving individual provinces to advance their own AI regulations.

Meanwhile, the UK is adopting what’s perceived to be a more flexible, pro-innovation approach, relying on existing regulators and encouraging companies to innovate faster. The plan is to manage AI sector by sector, allowing regulations to be tailored to unique AI challenges and opportunities presented in different industries. Across the US, there’s no single national AI law. Regulation is currently shaped by a patchwork of federal initiatives, state-level legislation and evolving industry standards, while Australia is moving to introduce mandatory guardrails, particularly for AI used in high-risk settings.

For businesses—especially those operating internationally or trading across borders—understanding these regional differences is critical. It’s no surprise the CFC survey found compliance and regulatory issues to be a top concern around AI. Compliance can be difficult to achieve, particularly as what’s compliant in one jurisdiction may raise red flags in another. As the rules tighten, staying ahead of regulatory change is becoming as important as staying ahead of the technology itself.

Covered or caught out? The insurance gap in the AI era

Whether it’s a fintech startup using AI to power smarter decisions, a hospital streamlining diagnostics or an edtech platform using generative tools to personalize learning, AI is everywhere. And with it comes a whole new set of exposures, from IP disputes to data bias and regulatory breaches.

While businesses are leaning into AI, many are still unclear whether their current insurance policies offer adequate protection. As noted above, the CFC survey found only 32% of businesses feel confident their insurance covers AI risks effectively. Demand is growing for insurance that keeps pace with this fast-moving technology, yet we’re still seeing a lack of understanding of AI risks at a time when the majority of businesses already use it. So how can we close the protection gap?

At CFC, we believe innovation shouldn’t come with uncertainty. That’s why we build AI protection into our policies—both affirmatively, with clear wording in the policy, and silently, by covering exposures even when they’re not explicitly mentioned—across a wide range of industries including technology, finance, professions, healthcare, media and more. AI can seem complex. But with comprehensive cover in place, mitigating the risks can be simple.

Interested in AI? Sign up for exclusive content and early access to the latest updates you need to stay ahead.