The Future of AI in Cybersecurity

TL;DR
OpenAI’s new model, o3, just discovered a critical zero-day vulnerability in the Linux kernel—on its own. This marks a watershed moment in cybersecurity, where AI isn't just helping analysts—it’s becoming one. While this unlocks powerful tools for defenders, it also means cyber attackers are using the same AI to scale malware, automate phishing, and create deepfakes. The future of cybersecurity will be defined by who leverages AI more effectively: the protectors or the perpetrators. Welcome to the next phase of digital defense.
Something historic happened 2 weeks ago…
A researcher used OpenAI’s o3 model—a powerful large language model (LLM)—to uncover a zero-day vulnerability in the Linux kernel. That’s not just a headline; it’s a turning point.
AI didn’t just assist—it discovered, explained, and proposed solutions for a complex security flaw in a core infrastructure component. And it did this without custom scripts or complex tools.
Let’s talk about what this really means for the future of cybersecurity—both the promise and the peril.

AI vs. the Linux Kernel
On May 20, 2025, a zero-day bug (CVE-2025-37899) was confirmed in the Linux kernel’s SMB server implementation (ksmbd). It’s technical, but here’s the essence:
The bug, a “use-after-free” vulnerability, allows attackers to hijack memory that was already freed—enabling them to potentially run code with system-level privileges.
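To make the bug class concrete, here is a deliberately simplified, hypothetical Python model of the race behind a use-after-free. Python is memory-safe, so nothing is literally freed; the sketch only models the logic of one connection tearing down session state while another is still using it, which in C leaves a dangling pointer behind.

```python
# Hypothetical, simplified model of a use-after-free-style race.
# Not the actual kernel code: it only illustrates the check-then-use
# window between a request handler and a concurrent logoff.
import threading
import time

class Session:
    def __init__(self, user):
        self.user = user  # in the kernel this would be heap-allocated state

def request_handler(session, results):
    if session.user is not None:   # check: user looks valid
        time.sleep(0.1)            # window in which another thread can run
        user = session.user        # use: the state may be gone by now
        results.append(user.upper() if user else "dangling reference hit")

def logoff_handler(session):
    # In C, a kfree() here would leave other threads holding a freed pointer.
    session.user = None

results = []
sess = Session("alice")
t1 = threading.Thread(target=request_handler, args=(sess, results))
t2 = threading.Thread(target=logoff_handler, args=(sess,))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # typically: ['dangling reference hit']
```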
What’s groundbreaking is how it was found: a researcher ran the o3 model on about 12,000 lines of code. The AI didn't just find the bug. It:
Identified a previous fix that would have failed
Diagnosed a rare race condition across concurrent sessions
Offered a full report that "felt human-written"
This wasn’t just an assist—it was a full-on discovery, the kind of work traditionally done by experts using symbolic execution, fuzzing, or months of manual review.
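To give a feel for the workflow, here is a minimal sketch of LLM-assisted code review using the OpenAI Python SDK. The file path, prompt wording, and model identifier are illustrative assumptions, not the researcher’s actual setup; the real experiment covered roughly 12,000 lines of kernel SMB code, which in practice would be chunked across several requests.

```python
# Minimal sketch of LLM-assisted vulnerability hunting (illustrative only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are a kernel security researcher. Review the following SMB server "
    "code for use-after-free bugs and race conditions across concurrent "
    "sessions. For each finding, name the affected function, describe the "
    "sequence of events that triggers it, and propose a fix.\n\n"
)

def review_for_memory_bugs(source_path: str, model: str = "o3") -> str:
    """Send one source file to the model and return its written findings."""
    with open(source_path, "r", encoding="utf-8") as f:
        code = f.read()

    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT + code}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical path into the kernel's SMB server sources.
    print(review_for_memory_bugs("fs/smb/server/smb2pdu.c"))
```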
Big Deal?
Until now, LLMs were known for chat interfaces, writing emails, or summarizing documents. This discovery shows they can operate like expert security researchers—able to understand timing-based flaws, complex memory issues, and logic bugs.
The key takeaway?
AI is becoming good at things we once thought only humans could do.
That opens the door to a new kind of cybersecurity research: AI-assisted discovery at scale.
But there’s a flip side.

The Double-Edged Sword: Defenders and Offenders
Just as AI is helping discover bugs, attackers are already using it to build malware, automate phishing, and manipulate trust in text, video, and voice.
Threat actors are using AI to:
Jailbreak ethical safeguards (WormGPT, GhostGPT)
Generate social engineering scams in multiple languages
Create real-time deepfakes for impersonation fraud
Write malicious code at a level that once required expert developers
This isn’t theory. FunkSec, a known ransomware group, claims that 20% of its operations are now AI-powered. Nation-state groups, particularly from Iran and China, are reportedly leveraging LLMs in every phase of their attack cycle—from reconnaissance to command-and-control systems.
We are no longer in a world where you need a sophisticated hacker team to run an attack. All you need is access to the right model—and a prompt.

So, What Does the Future Hold?
Opportunity:
AI-Powered Defense Tools
AI is revolutionizing how we detect threats—spotting behavioral anomalies, analyzing malware, and mapping attack patterns faster than any team could (see the sketch after this list).

Better Vulnerability Research
As demonstrated by o3, LLMs can now reason about logic, concurrency, and memory issues. They’re like having 100 junior researchers that don’t sleep or get bored.

Automated Threat Intel
From scanning millions of phishing domains to flagging shady file names and automating rule creation for defense tools—AI is becoming an analyst's best friend.
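As a concrete illustration of the “behavioral anomalies” point above, here is a minimal, hypothetical sketch using scikit-learn’s IsolationForest on made-up session features (megabytes sent, login hour, failed logins, hosts contacted). Real detection pipelines use far richer telemetry; this only shows the shape of the approach.

```python
# Toy behavioral anomaly detection with an Isolation Forest (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" sessions: [MB sent, login hour, failed logins, hosts contacted]
normal = np.column_stack([
    rng.normal(50, 10, 500),   # ~50 MB outbound per session
    rng.normal(10, 2, 500),    # logins clustered around mid-morning
    rng.poisson(0.2, 500),     # failed logins are rare
    rng.poisson(5, 500),       # a handful of internal hosts touched
])

# A couple of suspicious sessions: bulk exfiltration, 3 a.m. logins, host scanning
suspicious = np.array([
    [900.0, 3.0, 6.0, 40.0],
    [750.0, 2.0, 1.0, 55.0],
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# predict() returns +1 for inliers and -1 for flagged outliers
print(model.predict(suspicious))   # expected: [-1 -1]
print(model.predict(normal[:3]))   # expected: mostly [1 1 1]
```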
Risk:
Weaponized AI Models
Malicious actors are customizing LLMs specifically for cyber crime. GhostGPT and FraudGPT are sold on Telegram with jailbreaks pre-baked in.

Scale and Accessibility
Open-source models mean even low-skilled attackers can launch sophisticated campaigns.

Deepfake Explosion
AI-generated voices and videos are already undermining basic trust in audio and visual media. From fake CEOs authorizing wire transfers to fabricated police officers coercing victims, it’s all happening now.
If you're curious about where this goes next, we’ll keep breaking it down. That’s what we’re here for.
This is all for this week. If you have any specific questions about today’s issue, email me at [email protected]. For more info about us, check out our website. See you next week!