And we need AI to fight back
Fresh back from V2 Copenhagen 2025, where the halls were buzzing with demos of AI-powered tools to combat attackers, I couldn’t shake one thought: The era of AI vs AI has only just begun. Below are my distilled reflections for anyone watching this new cyber arms race unfold.
Generative AI and LLMs have supercharged phishing and fraud. Attackers are now operating at machine speed, churning out highly convincing emails, voices, and videos that weaponize human trust. We’re seeing deepfake job applicants (nearly 1 in 5 hiring managers have encountered deepfakes in interviews – theweek.com) and synthetic identities flooding channels that people instinctively trust. In short, what looks legitimate today can easily be fake.
For years, we assumed a professional-looking email or familiar face was genuine. Not anymore. AI-generated personas can mimic writing style, appearance, even tone – fooling our eyes and ears. Some deepfakes are indistinguishable to the human eye. Verifying authenticity by human inspection alone is a losing battle. We must augment our defences with technology that doesn’t blink.
IBM’s 2023 Cost of a Data Breach study pegs the average incident at $4.45 million. With NIS2 fines and the SEC’s disclosure rules, boards can’t afford to look away.
Fortunately, we can fight fire with fire. AI-powered verification is rising to meet this challenge. For example, new identity systems can combine an e-passport’s chip data with a live facial scan and machine learning analysis to authenticate someone beyond human means – e.g. mob.id. Advanced fraud detectors can flag fake images or voices that humans would miss. But we need to move fast – and every miss is expensive. The bad guys aren’t waiting, so neither can we.
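To make the idea concrete, here is a minimal sketch of how such a check combines its signals. The names, thresholds, and score functions are illustrative assumptions for this post, not the API of mob.id or any real product:

```python
# Hypothetical sketch of an AI-assisted identity check: verify the e-passport
# chip signature, match the chip photo against a live facial scan, and require
# a liveness score. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    chip_valid: bool    # e-passport chip signature verified
    face_match: float   # 0.0-1.0 similarity between chip photo and live scan
    liveness: float     # 0.0-1.0 confidence the scan is a live person

def verify_identity(result: VerificationResult,
                    match_threshold: float = 0.90,
                    liveness_threshold: float = 0.85) -> bool:
    """Accept only if all three signals pass; any single failure rejects."""
    return (result.chip_valid
            and result.face_match >= match_threshold
            and result.liveness >= liveness_threshold)
```

The point of the AND-of-three design: a deepfake that fools the face matcher still has to defeat the cryptographic chip check and the liveness test, which is exactly the "beyond human means" layering described above.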
Practical Steps for CISOs & CEOs
- Pick your low-hanging fruit first: Reinforce the Zero-Trust mantra “never trust, always verify” everywhere in your organization. Treat unexpected “urgent” requests (emails, calls, applicants) with healthy skepticism, and require a secondary verification channel for any high-risk transaction or new relationship – for example, a verbal safeword challenge you can weave into a conversation. Start educating employees that seeing is not believing in the age of AI-fueled fraud.
- Strengthen defence in depth: Deploy AI-driven defences to verify what humans can’t. Pilot an email security or identity verification tool that uses ML to detect phishing and deepfakes. For instance, test solutions that scan IDs’ chip data and perform facial liveness checks. Implement validation channels to support your business processes top to bottom.
- Think long-term: Build AI into your long-term security strategy. Incorporate AI-based verification at key trust points (customer onboarding, hiring, wire approvals). Invest in deepfake detection and synthetic fraud monitoring as a core competency, not a one-off project. Establish governance for AI/LLM use and regularly update incident response plans to account for AI-enhanced threats. By leveraging AI and human vigilance together, you can rebuild trust in a world where the fakes are coming at your business faster and harder every day.
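The “secondary verification channel” step above can be captured as a simple policy gate. The risk signals and action names below are assumptions for the sketch, not a standard; the rule itself is just Zero Trust applied to incoming requests:

```python
# Illustrative out-of-band verification policy. Which actions count as
# high-risk is an assumption here; tune the set to your own business.
HIGH_RISK_ACTIONS = {
    "wire_transfer", "payroll_change", "new_vendor", "credential_reset",
}

def requires_out_of_band_check(action: str,
                               is_urgent: bool,
                               requester_verified: bool) -> bool:
    """Trigger a second, independent channel (callback on a known number,
    agreed safeword challenge) for any high-risk action, any 'urgent'
    request, or any requester whose identity hasn't been verified."""
    return (action in HIGH_RISK_ACTIONS
            or is_urgent
            or not requester_verified)
```

Note that urgency *raises* the bar rather than lowering it: manufactured time pressure is the attacker’s favourite lever, so an “urgent” flag should route a request into more verification, not less.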
Bottom line: The threat landscape has changed – attackers are exploiting the very trust channels we rely on. To outpace them, we must verify authenticity with the same machine intelligence they use to exploit our trust. It’s time to harden our “human firewall” with AI backup, so that what appears real actually is real – securityboulevard.com – socure.com. The CEOs and CISOs who act now will lead the charge in making trust tougher to hack.
And remember: It’s not paranoia if they really are after your business…