Author: Jeff Weeks, Senior Vice President and Chief Information Security Officer
- AI is reshaping cybersecurity for defenders and attackers, boosting speed, scale, and sophistication on both sides.
- Defensive AI helps teams detect threats faster, automate repetitive tasks, and focus human expertise where it matters most.
- Strong governance, including oversight, transparency, data protection, and red-teaming, is essential to managing AI risks and preventing misuse.
Artificial intelligence has rapidly become one of the most transformative forces in cybersecurity.
For defenders, AI promises speed, scale, and clarity in an environment defined by noise and complexity. For adversaries, however, those same capabilities enable automation and precision that fundamentally change the threat landscape.
The result is a true double-edged sword — one that demands disciplined governance as much as technical innovation.
How Defenders Are Using AI
On the defensive side, AI is being built into cybersecurity tools to help protect organizations. These tools can automatically look at huge amounts of security data from computers, networks, cloud systems, and user logins all at once. By connecting the dots across all that data, AI can spot suspicious activity or hidden threats that would be nearly impossible for a person to notice on their own.
Vendors are heavily promoting AI-powered cyber security operation center (CSOC) co-pilots and automated triage workflows. In practice, these tools can accelerate investigations by summarizing incidents, enriching alerts with additional context, recommending response actions, and even drafting incident reports. When implemented correctly, this reduces mean time to detect (MTTD) and mean time to respond (MTTR), allowing overstretched security teams to operate more efficiently without sacrificing rigor.
At its best, defensive AI augments human expertise rather than replacing it. Experienced analysts remain essential for judgment, escalation decisions, and business-context awareness, but AI can remove much of the mechanical overhead – such as alert triage, data correlation, and routine documentation – that slows them down.
How Attackers Are Scaling with AI
Unfortunately, attackers are benefiting from AI just as quickly — arguably faster. The same technologies that power defensive analytics also enable adversaries to automate and optimize their operations.
So-called “agentic” AI systems, which can plan and execute actions without constant human direction, represent a meaningful shift in attacker capability. These systems can autonomously chain together multiple phases of an attack lifecycle: reconnaissance, target selection, social-engineering content generation, phishing delivery, and even rudimentary exploitation. What once required a skilled operator now requires minimal human oversight, allowing cybercriminals to scale campaigns with unprecedented efficiency.
Trend Micro’s characterization of this phenomenon as “vibe crime” captures an important reality: AI-generated attacks no longer include obvious red flags. Phishing messages sound natural, adapt to the target’s tone, reference real-world events, and exploit psychological cues with alarming accuracy. The result is a dramatic erosion of traditional user-awareness defenses.
Governance Must Keep Pace with Capability
As organizations adopt AI-driven security tools, governance can no longer be an afterthought. In many respects, AI systems themselves have become critical assets and potential attack surfaces.
Several governance actions are essential:
- Protect training data. Models are only as trustworthy as the data they learn from. Organizations must safeguard training datasets and monitor for data-poisoning attempts that could bias outcomes or degrade detection accuracy over time.
- Maintain humans in the loop. High-impact decisions — such as account lockouts, system isolation, or customer-facing actions — should always require human review. AI can recommend, but accountability must remain with people.
- Ensure auditability and transparency. Model decisions, prompts, and actions should be logged in a way that supports regulatory scrutiny, internal audit, and post-incident review. Black box security, where the system’s decision-making is hidden, is no longer acceptable.
- Red-team your models. Just as organizations test applications and infrastructure, they must actively test AI systems for abuse paths, prompt manipulation, and unintended behaviors — before adversaries discover those weaknesses first. Red-teaming involves simulating an attacker to uncover and fix vulnerabilities.
From a risk-management perspective, AI governance belongs squarely at the intersection of cybersecurity, compliance, legal, and enterprise risk — not just within the CSOC.
What This Means for Consumers
For individuals, the implications are straightforward but sobering. Scams will become more convincing, more personalized, and harder to identify. Voice cloning, realistic video deepfakes, and AI-generated messages that mimic trusted institutions will continue to rise.
Despite all the technological change, the most reliable countermeasures remain surprisingly simple: strong verification habits and multi-factor authentication. Slowing down, independently validating requests, and using MFA wherever possible still neutralize most AI-enabled fraud attempts.
Final Thought
AI is an amplifier, for good or bad. For defenders, it can dramatically improve visibility and response — if governed responsibly. For attackers, it makes entry into hacking easier and increases realism and scale.
The organizations that succeed in this next phase of cybersecurity will be those that treat AI not just as a tool, but as a risk domain requiring the same discipline, controls, and oversight as any other critical technology.
About the Author
Jeff has been with First National Bank of Omaha for more than 26 years and is currently the Senior Vice President and Chief Information Security Officer. The executive leadership and oversight provided by Jeff in the development, management, and execution of information security for FNBO enables the company’s ability to posture and protect private, personal information, and assets of the company’s clients, employees, and business partners.