
KPMG Canada: 81% of Companies Hit by AI-Powered Fraud Attacks

Incident Date: March 2026
Victim Type: Company
Attack Type: Voice Cloning
Financial Impact: Up to 5% of annual business profits per affected organization

Summary

A March 2026 KPMG Canada survey of 251 companies found that 81% had experienced AI-enabled fraud attacks in the prior 12 months, including deepfake audio and video, voice-cloned executive impersonation, and AI-generated phishing. Of those affected, 72% reported losing up to 5% of business profits to these attacks. Despite the widespread exposure, only 26% of organizations had a tested response plan covering AI-enabled threats.

Key Takeaways

  • 81% of 251 Canadian companies surveyed by KPMG experienced AI-enabled fraud attacks in the 12 months prior to March 2026.
  • 72% of affected organizations lost up to 5% of annual business profits to AI-powered fraud, including deepfake audio, video, and voice-cloned executive impersonation.
  • Only 26% of surveyed companies had a tested response plan covering AI-enabled attacks such as deepfakes and voice clones.
  • Voice-cloned executive impersonation and deepfake video were primary attack vectors, succeeding in part because organizations lacked real-time biometric verification methods.
  • The KPMG Canada survey covered 251 companies and was published on March 11, 2026, documenting AI fraud as a near-universal corporate threat in Canada.

Timeline

The Setup: 2024-2025

Once generative AI tools became widely accessible, fraudsters began deploying deepfake audio, synthetic video, and voice-cloning technology against Canadian businesses at scale, targeting organizations that lacked formal AI fraud response protocols.

The Attack: 12 months prior to March 2026

Across 251 surveyed Canadian companies, attackers used AI-generated voice clones to impersonate executives on calls, deployed deepfake video in authorization workflows, and sent AI-crafted phishing messages. Eighty-one percent of organizations reported at least one such incident.

The Impact: Across the same 12-month period

Seventy-two percent of affected companies reported losses of up to 5% of annual business profits from AI-powered fraud, a material financial drain spread across organizations of varying size and sector.

The Discovery: Findings published March 11, 2026

KPMG Canada published survey results drawn from 251 companies, showing that while attacks were nearly universal, detection and response capacity lagged far behind. Only 26% of organizations had a tested AI fraud response plan.

The Fallout: March 2026 and ongoing

The report prompted calls for Canadian businesses to adopt identity verification protocols, biometric authentication standards, and incident response frameworks built to handle AI-native threats including deepfakes and voice clones.

Attack Details

The KPMG Canada survey documented three primary AI-enabled fraud vectors used against Canadian companies in the 12 months before March 2026. The most technically involved was voice-cloned executive impersonation, where attackers synthesized the vocal patterns of senior leaders, typically CFOs or CEOs, and called finance or operations staff to request fund transfers, credential changes, or sensitive data. These calls were often timed to coincide with periods when the targeted executive was known to be unreachable for confirmation.

Deepfake audio and video attacks extended this threat into visual channels. In some cases, attackers used AI-generated video of executives in what appeared to be live or recorded authorization sessions, lending false legitimacy to fraudulent instructions. The combination of synthetic voice and video made traditional verbal or visual confirmation checks unreliable as standalone defenses.

AI-generated phishing was the third major vector. Unlike earlier template-based phishing, these messages were tailored using language models to mimic the writing style of known colleagues or vendors, referencing accurate organizational context to get past heuristic filters and human suspicion. The personalization increased click and compliance rates.

The vulnerability that made all three vectors effective was organizational, not purely technical. Only 26% of surveyed companies had a tested response plan covering AI-enabled attacks. The remaining 74% operated without validated protocols, meaning that even when individual employees suspected something was wrong, they often had no defined escalation path or verification procedure to act on that suspicion.
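
One practical starting point is to encode escalation rules directly, so that a suspicious request always maps to a concrete set of required checks rather than an improvised judgment call. The sketch below is a minimal illustration of that idea; the request categories and check names are hypothetical, not drawn from the KPMG report:

```python
# Minimal sketch of a verification policy table. Request categories and
# check names are hypothetical illustrations, not survey findings.
from dataclasses import dataclass, field

# Required out-of-band checks per request type. A high-risk request
# must clear every listed check before staff act on it.
VERIFICATION_POLICY = {
    "fund_transfer":     ["callback_known_number", "biometric_proof", "second_approver"],
    "credential_change": ["callback_known_number", "second_approver"],
    "data_export":       ["second_approver"],
}

@dataclass
class Request:
    kind: str
    checks_passed: set = field(default_factory=set)

def escalation_path(request: Request) -> list[str]:
    """Return the checks still outstanding; an empty list means proceed."""
    required = VERIFICATION_POLICY.get(request.kind, ["second_approver"])
    return [c for c in required if c not in request.checks_passed]

# Example: a voice call requesting a transfer, before any checks are done.
print(escalation_path(Request(kind="fund_transfer")))
# -> ['callback_known_number', 'biometric_proof', 'second_approver']
```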

Damage Assessment

Financial losses were material and broadly distributed. Among the 81% of companies that experienced AI-enabled attacks, 72% reported losing up to 5% of annual business profits within the 12-month window. For a mid-sized Canadian company generating CAD 50 million in annual profit, a 5% loss represents CAD 2.5 million in direct fraud exposure. Aggregated across the 251 surveyed companies, the collective financial impact plausibly runs into the tens of millions of dollars.
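
The arithmetic behind those figures is simple enough to sanity-check directly. The sketch below reproduces it from the survey percentages; the CAD 50 million profit figure is the illustrative example above, not survey data:

```python
# Back-of-envelope exposure math from the KPMG survey figures.
# The CAD 50M annual profit is illustrative, not a survey statistic.
surveyed       = 251
hit_rate       = 0.81   # share reporting at least one AI-enabled attack
loss_rate      = 0.72   # of those, share losing up to 5% of profits
max_loss_share = 0.05

annual_profit_cad = 50_000_000  # hypothetical mid-sized company

companies_hit       = surveyed * hit_rate        # ~203 companies
companies_with_loss = companies_hit * loss_rate  # ~146 companies
per_company_ceiling = annual_profit_cad * max_loss_share

print(f"Companies attacked:    {companies_hit:.0f}")
print(f"Companies with losses: {companies_with_loss:.0f}")
print(f"Loss ceiling/company:  CAD {per_company_ceiling:,.0f}")  # CAD 2,500,000
```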

Beyond direct financial losses, organizations faced operational disruption from incident investigation, remediation of compromised processes, and the cost of retroactively building controls that should have been in place earlier. Time spent by finance, legal, and security teams responding to incidents adds an unmeasured cost not captured in the profit-loss figures.

The survey also revealed a preparedness gap that compounds long-term risk. With 74% of organizations lacking tested AI fraud response plans, the same companies that absorbed losses in the reported period remain structurally exposed to repeated attacks. Without tested protocols, each new incident requires an improvised response, which raises both the probability of loss and the severity when fraud succeeds.

How The AI Defense Suite Tools Could Have Helped

The dominant attack vectors documented in the KPMG survey, voice-cloned executive calls and deepfake video impersonation, share a common dependency that Proof of Life is built to break: both succeed only because recipients have no reliable way to confirm that a voice or face belongs to a real, present human being. Proof of Life addresses this by letting individuals generate Proofies, biometric-verified selfies authenticated via Face ID or Touch ID, which confirm that a real person created the image at a specific moment. A Proofie carries a cryptographic timestamp and a What3Words location tag that cannot be retroactively faked. Any executive directing staff to take a high-value action could be required, as a matter of policy, to send a Proofie alongside that instruction, providing instant, unforgeable proof of presence.
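
The verification pattern underneath such a policy is a signed, timestamped payload checked for both integrity and freshness. The sketch below illustrates that general pattern with an HMAC and hypothetical field names; it is not Proof of Life's actual Proofie format, which is not documented here:

```python
# Generic sketch of verifying a signed, timestamped proof-of-presence
# payload. Field names, the shared-key HMAC scheme, and the freshness
# window are illustrative assumptions, not Proof of Life's real format.
import hmac, hashlib, json, time

SHARED_KEY = b"provisioned-per-device-out-of-band"  # placeholder key
MAX_AGE_SECONDS = 300  # reject proofs older than five minutes

def verify_proof(payload: bytes, signature: str) -> bool:
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False                       # altered or forged payload
    claims = json.loads(payload)
    age = time.time() - claims["timestamp"]
    return 0 <= age <= MAX_AGE_SECONDS     # reject stale or future-dated proofs

# A fresh, correctly signed payload verifies; a replayed old one would not.
claims = json.dumps({"user": "cfo@example.com",
                     "timestamp": time.time(),
                     "location": "///filled.count.soap"}).encode()
sig = hmac.new(SHARED_KEY, claims, hashlib.sha256).hexdigest()
print(verify_proof(claims, sig))  # True
```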

For the AI-generated phishing and business email compromise vectors identified in the survey, Agent Safe provides a complementary layer of protection. Agent Safe analyzes inbound messages, emails, URLs, and attachments for markers of AI generation, social engineering patterns, and sender reputation anomalies. In an environment where 81% of companies are actively receiving AI-crafted communications designed to impersonate known contacts, a triage layer that flags suspicious messages before employees act on them directly addresses the preparedness gap the survey identified.
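
The sketch below shows the kind of message-level signals such a triage layer can score, using deliberately crude, hypothetical heuristics; Agent Safe's actual analysis pipeline is not public and is certainly more sophisticated:

```python
# Toy triage heuristics for inbound messages. The signals and weights
# are hypothetical illustrations, not Agent Safe's real pipeline.
import re

URGENCY = re.compile(r"\b(urgent|immediately|wire|confidential)\b", re.I)

def triage_score(sender: str, reply_to: str, body: str, links: list[str]) -> int:
    """Crude risk score: higher means hold the message for human review."""
    score = 0
    if sender.split("@")[-1] != reply_to.split("@")[-1]:
        score += 2                            # reply-to domain mismatch
    score += 2 * len(URGENCY.findall(body))   # pressure language
    score += sum(1 for u in links if not u.startswith("https://"))
    return score

score = triage_score(
    sender="ceo@example.com",
    reply_to="ceo@examp1e-mail.com",          # look-alike reply-to domain
    body="Urgent: wire the funds immediately and keep this confidential.",
    links=["http://examp1e-mail.com/invoice"],
)
print(score)  # 2 (domain) + 8 (four urgency hits) + 1 (http link) = 11
```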

The finding that only 26% of organizations have a tested AI fraud response plan is itself an argument for deploying tools from the AI Defense Suite, available at aidefensesuite.com, as foundational infrastructure rather than waiting for a formal policy process to conclude. Proof of Life and Agent Safe can be adopted immediately, creating verifiable authentication checkpoints and message screening capabilities that function as operational controls even without a fully documented enterprise response plan.

Key Lessons

  • AI-enabled fraud is now a baseline operational risk for Canadian businesses, not an edge-case scenario, given that 81% of surveyed companies experienced attacks within a single year.
  • Voice and video verification cannot rely on the communication channel itself. A separate, biometrically authenticated confirmation method is required before acting on executive instructions involving money or access.
  • Having a tested response plan matters more than merely having one on paper. The 74% of organizations without validated protocols are exposed regardless of the quality of their written policies.
  • AI-generated phishing has outpaced heuristic filters. Message-level analysis tools that evaluate content patterns and sender reputation are necessary supplements to traditional email security.
  • Financial exposure up to 5% of annual profits makes AI fraud response a CFO-level priority, not solely an IT security concern.

Frequently Asked Questions

What did the KPMG Canada AI fraud survey find?

A March 2026 KPMG Canada survey of 251 companies found that 81% experienced AI-enabled fraud attacks in the prior 12 months, with 72% of those companies losing up to 5% of business profits. Attack types included deepfake audio and video, voice-cloned executive impersonation, and AI-generated phishing.

How much money did companies lose to AI fraud according to KPMG?

According to the KPMG Canada survey, 72% of affected companies lost up to 5% of annual business profits to AI-powered fraud attacks in a single 12-month period. The exact dollar figures varied by company size and were not individually disclosed.

How many companies have an AI fraud response plan?

Only 26% of the 251 companies surveyed by KPMG Canada in March 2026 had a tested response plan covering AI-enabled attacks such as deepfakes and voice clones, leaving 74% without validated protocols.

How could biometric verification help defend against voice-cloned executive fraud?

Tools like Proof of Life require executives to send a biometric-verified Proofie, authenticated via Face ID or Touch ID, before high-stakes instructions are acted upon. This creates a verification checkpoint that a voice-cloned call or deepfake video cannot replicate.

What types of AI fraud are Canadian businesses facing?

The KPMG Canada survey identified three primary AI fraud types targeting Canadian companies: voice-cloned executive impersonation calls, deepfake audio and video used to authorize fraudulent transactions, and AI-generated phishing messages crafted to mimic known contacts.
