In professional services, there’s no question that generative AI (GenAI) has value in making processes faster and more cost-effective. Tools like ChatGPT, Midjourney and Gemini can create sophisticated marketing content, automate routine tasks and improve data analytics, all at speed and scale. But is take-up of the technology outpacing our understanding of the risks involved?
A recent CFC survey found that while almost 80% of businesses already use AI to some extent, only 34% claim to have a strong understanding of the technology. It’s likely many of these users are unaware of how GenAI introduces risk, and the technology’s constant development makes it even harder to keep up. In this landscape, professional services firms face a pivotal question: how can they proceed with caution, taking a controlled approach to AI tools that captures the upside while mitigating the risk?
Here are four key considerations for any firm turning to GenAI.
The risk of using GenAI without a rulebook
When used well, GenAI can be a fast, efficient tool that empowers professional services firms to do more with less. But there’s growing concern that many users overestimate their ability to use the technology safely, and that trust in its potential could be blinding people to its risks.
Without clear guidance on what’s appropriate, what’s not and how to use GenAI responsibly, firms could be opening the door to serious issues, from misuse of confidential data to unwitting ethical breaches. Yet many organizations still lack any official policy or training for employees. This lack of structure leaves space for well-meaning employees to make costly mistakes, and for bad actors to push boundaries unchecked.
How accurate are today’s GenAI tools?
One of the biggest challenges with GenAI is that it can sound confident even when it’s completely wrong. “Hallucinations” occur when AI tools generate information that’s factually incorrect, and some of the latest models appear to struggle even more with accuracy.
Most AI tools come with clear disclaimers: don’t take outputs at face value, always check the facts. But in the rush to get things done, those warnings can be easy to overlook, leading businesses to unknowingly rely on flawed information.
Good prompting, with concrete examples and well-defined parameters, can help, but human oversight is still essential. That means training employees not just to use AI, but to question it. Accuracy shouldn’t be assumed, no matter how polished the answer looks.
Privacy pitfalls and the threat of cybercrime
Most AI tools warn users not to enter sensitive information into the prompt box, but that practice can be difficult to uphold. The recent “AI doll” craze is a good example: people uploaded personal photos and details so the model could render lifelike toy versions of themselves, inadvertently handing over private data and, in some cases, potentially infringing a major doll brand’s intellectual property (IP). No matter how sophisticated the tools become, human users will always make innocent mistakes, and mistakes create risk.
Then there are malicious actors. Cybercriminals can “jailbreak” a model or use prompt injection to bypass its guardrails and make it reveal information it should keep hidden. Even without an attacker, glitches can cause the AI to leak snippets of previous user sessions.
For professional services firms, the liability risk is clear. What if a well-meaning employee feeds client data to a model to get a better answer, only to discover they’ve breached confidentiality? To guard against this scenario, clear policies, regular training and continued oversight are a must.
Fast-changing regulations and compliance
From data protection to human rights, many existing laws already touch on AI use. But new, purpose-built frameworks are beginning to emerge. The EU has led the way with its AI Act, while the UK has taken a more pro-innovation approach. In the US, AI governance is developing through a patchwork of state-level laws, with federal bills still in progress.
Keeping up with multiple overlapping rules across jurisdictions is difficult, especially when there is no consistent definition of what constitutes an AI system. Without a global AI playbook, firms need to track the requirements in each market where they operate to ensure compliance. Active governance, tailored advice and comprehensive insurance will be key to navigating the unpredictable regulatory terrain ahead.
Safeguarding against AI risk in professional services
For professional services firms, AI doesn’t have to be an all-or-nothing proposition. By taking a thoughtful, measured approach, firms can adopt GenAI where it adds value, while ensuring staff are supported with clear policies, practical training and robust cyber security.
With exposures evolving constantly, comprehensive insurance has become vital to supporting responsible innovation while safeguarding against the risks that come with it. Be it privacy breaches, IP disputes or misleading outputs, a well-structured policy tailored to AI-related exposures can help firms navigate uncertainty with greater confidence.
Ready for the latest insights on how AI is transforming risk, from regulatory shifts to real-world business impact? Sign up now to receive exclusive content and early access to expert analysis.