Jeff Weeks
Sr. Vice President and Chief Information Security Officer
Aug 08 2025
In today’s digital-first hiring environment, the threat isn’t just who you hire; it’s what you’re hiring. Generative AI is reshaping the threat landscape.
One of the most alarming trends? Cybercriminals and nation-state actors are now using artificial intelligence (AI) to fabricate entire job applicants — complete with deepfake videos, synthetic voices, social media profiles, and AI-enhanced resumes.
Their goal is simple: infiltrate organizations under the guise of legitimate employment to steal intellectual property, siphon sensitive data, or commit financial fraud.
The Threat Is Real and Growing
These real-life examples provide a sense of the scope and ramifications of fake job applicants.
- Palo Alto Networks demonstrated that a convincing AI-driven job applicant can be created in under 70 minutes by someone with minimal technical skills (HR Dive).
- According to a survey of 1,000 hiring managers in the U.S., conducted by Resume Genius, about 17% reported interviewing deepfake AI candidates equipped with lip-synced video and synthetic voices (VICE, Resume Genius).
- According to the same survey, “74% of hiring managers have encountered AI-generated content in applications, with nearly half seeing AI-crafted resumes and cover letters …” (Resume Genius).
- CNBC reporting and U.S. Justice Department data reveal that since May 2024, more than 300 U.S. companies have inadvertently employed fake IT workers tied to North Korea, funneling millions of dollars to the regime (VICE).
- These operations are sophisticated; some involve entire “laptop farms” managed remotely to simulate legitimate remote work (WIRED).
- In one case, a deepfake applicant froze when asked to touch their face, an anomaly that exposed the fraud (CBS News).
Best Practices to Guard Against AI-Driven Impostors
Traditional vetting methods — resume reviews, phone screens, and video interviews — are no longer enough. Here are some best practices for today’s hiring environment.
| Strategy | Action |
|---|---|
| Live Video Challenges | Ask candidates to perform spontaneous actions during interviews. |
| Biometric ID Validation | Use forensic tools to verify documents and facial data. |
| In-Person Final Rounds | Require on-site interviews for high-risk roles. |
| Deep Social Footprint Review | Look for consistent, long-standing online presence. |
| Reference Vetting | Confirm identities through corporate switchboards. |
| Staff Training | Teach recruiters how to spot deepfake anomalies. |
| AI Detection Tools | Deploy solutions that flag manipulated resumes and videos. |
| Contextual Questions | Ask localized questions to test authenticity. |
| Interview Recordings | With consent, record interviews for post-analysis. |
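Several of these checks can be tracked systematically during pre-screening. The minimal Python sketch below shows one hypothetical way a recruiting or security team might record a few of the signals from the table and surface review flags for a human recruiter. The field names, the three-year footprint threshold, and the flag wording are illustrative assumptions, not FNBO guidance or any vendor’s tool.

```python
# Hypothetical pre-screen checklist sketch; not FNBO's actual process or any
# vendor's API. Field names and the 3-year footprint threshold are assumptions.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class CandidateSignals:
    earliest_public_profile_year: Optional[int]  # oldest verifiable online presence, if any
    id_verified_in_person: bool                  # forensic/biometric ID check completed
    references_confirmed_via_switchboard: bool   # references reached via corporate main lines
    live_video_challenge_passed: bool            # spontaneous on-camera actions completed


def review_flags(c: CandidateSignals, min_footprint_years: int = 3) -> list[str]:
    """Return human-readable flags for a recruiter to review.

    This is a coarse heuristic meant to prompt extra scrutiny, not a fraud verdict.
    """
    flags = []
    current_year = date.today().year
    if c.earliest_public_profile_year is None:
        flags.append("No verifiable long-standing online presence found")
    elif current_year - c.earliest_public_profile_year < min_footprint_years:
        flags.append(f"Online presence is younger than {min_footprint_years} years")
    if not c.id_verified_in_person:
        flags.append("Identity not yet verified in person or with forensic ID tools")
    if not c.references_confirmed_via_switchboard:
        flags.append("References not confirmed through corporate switchboards")
    if not c.live_video_challenge_passed:
        flags.append("Live video challenge not completed or failed")
    return flags


if __name__ == "__main__":
    candidate = CandidateSignals(
        earliest_public_profile_year=2024,
        id_verified_in_person=False,
        references_confirmed_via_switchboard=True,
        live_video_challenge_passed=True,
    )
    for flag in review_flags(candidate):
        print("REVIEW:", flag)
```

The point of a checklist like this is to make gaps visible early; the flags are advisory, and the final judgment stays with trained recruiters and security staff.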
Final Thoughts
Generative AI offers transformative benefits to recruitment; unfortunately, it also empowers malicious actors. Organizations must adapt in kind.
By training HR teams, fortifying vetting processes, and embracing both technology and human judgment, companies can defend their reputation, proprietary information, and overall security posture.
About the Author
Jeff has been with First National Bank of Omaha for more than 26 years and is currently the Senior Vice President and Chief Information Security Officer. His executive leadership and oversight of the development, management, and execution of information security at FNBO enable the company to protect the private, personal information and assets of its clients, employees, and business partners.
The articles in this blog are for informational purposes only and not intended to provide specific advice or recommendations. When making decisions about your financial situation, consult a financial professional for advice. Articles are not regularly updated, and information may become outdated.