The Broader Attack Landscape Accelerating Across All Platforms
The spike in iOS attacks sits within a larger trend. Overall injection attacks increased 741 percent year-over-year across all device types, according to iProov’s Security Operations Center (iSOC) and threat intelligence teams. The shift reveals a critical vulnerability: device type no longer guarantees protection against sophisticated biometric fraud.
Dr. Andrew Newell, chief scientific officer at iProov, attributes the escalation to generative AI capabilities. “Identity is becoming the new battleground in cybersecurity,” Newell stated. “Generative AI is allowing attackers to industrialize digital impersonation at scale. To defend against this, organizations must be able to establish genuine human presence in digital interactions to ensure trust and security.”
How Attackers Are Weaponizing AI-Generated Deepfakes
The threat operates through injection attacks, where criminals deploy AI-generated or manipulated biometric data to defeat facial recognition and liveness detection systems. These attacks undermine identity verification workflows, particularly Know Your Customer (KYC) compliance checks critical to financial services and regulated industries.
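One defense liveness systems commonly rely on against injected media is challenge-response binding: the server issues a fresh nonce per session, and the capture is cryptographically tied to that nonce so a pre-recorded or AI-generated frame cannot be replayed. The sketch below is illustrative only, not iProov's mechanism; the shared `device_key` and the HMAC scheme are simplifying assumptions.

```python
import hashlib
import hmac
import os


def issue_challenge() -> bytes:
    # Server issues a fresh random nonce per verification session.
    # An injected deepfake prepared in advance has never seen it.
    return os.urandom(16)


def sign_capture(frame: bytes, nonce: bytes, device_key: bytes) -> bytes:
    # A trusted capture device binds the live frame to the session nonce.
    return hmac.new(device_key, frame + nonce, hashlib.sha256).digest()


def verify_capture(frame: bytes, nonce: bytes, tag: bytes,
                   device_key: bytes) -> bool:
    # Server recomputes the binding and compares in constant time.
    expected = hmac.new(device_key, frame + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


key = b"shared-device-key"  # hypothetical provisioning secret
nonce = issue_challenge()

# Genuine capture: the frame was signed against the current nonce.
live = verify_capture(b"frame", nonce, sign_capture(b"frame", nonce, key), key)

# Injection attack: the attacker replays a capture bound to a stale nonce.
stale_tag = sign_capture(b"frame", os.urandom(16), key)
injected = verify_capture(b"frame", nonce, stale_tag, key)
```

Here `live` succeeds while `injected` fails, because the replayed capture cannot be bound to a nonce the attacker has never seen. Real deployments layer this kind of session binding with camera-feed attestation and passive liveness analysis.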
iProov’s report identifies a predictable pattern: criminal groups test new fraud techniques in Southeast Asia first, then scale and adapt them for other regions, especially Latin America. As attack sophistication grows, the report warns that hyper-realistic deepfakes will introduce “systemic operational risk” as AI motion tools become less constrained.
Standards and Defenses Taking Shape
To counter this threat landscape, iProov emphasizes alignment with emerging security frameworks. Organizations should adopt NIST SP 800-63-4, CEN/TS 18099, FIDO Face Verification Certification, and the emerging ISO/IEC 25456 standard. Together, these frameworks establish baseline requirements for the resilience of facial recognition and biometric verification systems.
The company has also flagged an emerging concern: the accountability vacuum surrounding autonomous AI agents used in identity verification workflows. Without clear ownership and transparency, organizations risk deploying systems that lack human oversight during critical authentication decisions.
Market Opportunity and Forward Momentum
The urgency is reflected in market growth projections. Face and voice biometric deepfake and injection attack detection technologies are building toward a nearly $5 billion market by 2027, according to the 2025 Deepfake Detection Market Report and Buyers Guide from Biometric Update and Goode Intelligence.
The 1,151 percent iOS surge underscores a hard reality: no platform is immune. Organizations relying on biometric authentication must assume attackers will target their specific user base and invest in continuous identity threat detection rather than static device-level defenses.