AI has changed the look and feel of cyber risk—not by inventing new crimes, but by making old ones painfully believable.
Phishing emails no longer give themselves away with bad grammar. They read like your accountant wrote them—because sometimes your accountant did, after their mailbox was compromised. The result is more successful credential theft and account takeovers that feel legitimate from the very first click.
That realism now stretches across channels. When defaults aren’t locked down, messages arrive in Teams or Slack that appear to come from a colleague or supplier, and the cross‑channel “echo” builds false confidence. When urgency, authority and money enter the thread, people move fast and skip checks.
Deepfake mimicry pushes that believability even further.
With a single headshot from a team page and a few seconds of audio scraped from a public interview, it’s now possible to synthesise a convincing video or voicemail that appears to come from a CFO. Tools that once fumbled hands and lip‑sync can now generate full‑body motion from a reference clip and switch languages while keeping the same voice.
These capabilities have clear upside for training and localisation. But they also dramatically lower the cost for threat actors to stage persuasive fraud. The right defence starts with doubt: pause when a request is new, rushed or secretive, and verify through a separate, trusted channel that you initiate. Even small choices—like posting only low‑resolution staff photos—can raise the bar against cloning attempts without killing team pages.
Culture either amplifies or dampens risk.
In organisations where no one questions leadership, a spoofed “approve this payment now” message is genuinely dangerous. Healthy friction helps. Celebrate verification; don’t punish it.
Formalise that mindset with zero‑trust principles: verify the user, the device and the context before granting access. Extend phishing awareness beyond email to every workplace channel. Expect pretexting built from your public footprint—LinkedIn activity, conference mentions, website phone numbers. Attackers layer familiarity, then switch mediums to cement trust. Anticipate that pattern and slow it down with policy and process, such as mandatory callbacks to known numbers and dual approval for sensitive actions.
Adopting AI safely is as important as defending against it.
An AI readiness assessment can surface gaps you didn’t know you had: shadow use of free tools, missing guidance on customer data, or unclear ownership of model outputs. Policies must live in practice; an unenforced rule is theatre.
Choose vendors who can clearly explain how they isolate, retain and secure uploaded data. Ask direct questions about encryption, training data reuse and regional storage. Train teams quarterly using fresh, local examples—including voicemail deepfakes and internal chat lures.
People want to help and move quickly. Give them simple, repeatable checks so that instinct doesn’t leak credentials or cash. AI is a force multiplier on both sides—pair curiosity with caution, and you gain leverage without handing it to an attacker.
Adopting AI safely takes more than policy.
Want to learn more? Get your AI Readiness Starter Pack here and understand where AI is already impacting your business — and where controls need to catch up.