
Jen Easterly’s Xposure Keynote: How to Use KEV and AI to Stay Ahead of Cyber Threats

Published 23 Jul 2025
Last Modified 23 Jul 2025

I’ve spoken with plenty of cybersecurity leaders in my time, but hosting Jen Easterly at Pentera’s National Xposure 2025 summit felt different. As a former Director of CISA, she’s got that rare blend of battlefield grit, boardroom poise, and TED Talk charisma. With a conversation that traveled from Iraq’s frontlines to the development and launch of KEV and finally to the future of AI-powered cyber defense, she shared a vision of cybersecurity drawn from her experience that’s not just innovative, but actionable.

The Threat Landscape: It’s Evolving, Fast

Back when Jen was serving in Baghdad and tasked with disrupting bomb-making networks in real time, she thought she was drowning in data. That was before smartphones, before TikTok, before AI-generated phishing attacks. Now, over 300 million AI content requests are made every minute. Data volume, variety, and velocity have all exploded, and with it the attack surface.

There’s a cyberattack every 30 seconds. The average breach costs $5 million. Global cybercrime costs are on pace to hit $10.5 trillion. Yes, trillion – with a T. That’s more than the GDP of Japan, India, and Italy combined.

In this environment, defending against hypothetical threats isn’t feasible. You need to focus on what’s actually being exploited.

KEV: Use it to Cut the Noise

That’s where KEV comes in – the Known Exploited Vulnerabilities Catalog.
“When I arrived at CISA,” Jen said, “defenders were drowning in CVEs – 20,000 new ones a year, and barely 5% were ever exploited. It was chaos.” KEV was the antidote: a curated list of vulnerabilities confirmed to be under active attack. Compared with the more than 300,000 CVEs catalogued in the National Vulnerability Database, the KEV list has just over 1,300 entries – less than 0.5%. This dramatically narrows the field of vulnerabilities, enabling security teams to focus their efforts.

Each KEV entry has to meet three simple criteria:

  1. A valid CVE ID.
  2. Evidence of exploitation in the wild.
  3. A clear path to remediation.

But KEV has become more than just a government tool. It has become a global public good, now used by critical infrastructure, banks, hospitals, you name it – helping defenders across industries act before the next WannaCry or Log4Shell hits.

The KEV catalog is publicly available at CISA.gov. It’s an essential prioritization tool. Don’t treat it like another spreadsheet – operationalize it. Align it with your asset inventory, cross-reference it with SBOM data if you have it, and treat remediation timelines as mission-critical.
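That cross-referencing step can be sketched in a few lines of Python. This is a minimal illustration, not a product feature: it assumes you have already downloaded the KEV catalog’s JSON feed from CISA (field names such as `cveID` and `dueDate` follow the published schema), and the `inventory` structure – a hypothetical mapping of asset names to the CVE IDs your scanner found on them – is an assumption for the example.

```python
from datetime import date

def kev_matches(kev_entries, inventory):
    """Intersect KEV catalog entries with a local asset inventory.

    kev_entries: dicts from the KEV JSON feed's "vulnerabilities" array
                 (fields like "cveID" and "dueDate" per CISA's schema).
    inventory:   hypothetical mapping of asset name -> set of CVE IDs
                 reported by your vulnerability scanner.
    """
    kev_by_cve = {e["cveID"]: e for e in kev_entries}
    findings = []
    for asset, cves in inventory.items():
        for cve in cves & kev_by_cve.keys():
            due = kev_by_cve[cve].get("dueDate", "")
            findings.append({
                "asset": asset,
                "cve": cve,
                "due": due,
                # Past CISA's remediation due date = mission-critical.
                "overdue": bool(due) and due < date.today().isoformat(),
            })
    # Patch overdue KEV hits first, then by nearest remediation due date.
    return sorted(findings, key=lambda f: (not f["overdue"], f["due"]))
```

The output is the short list that matters: every asset exposed to a vulnerability someone is actively exploiting, ordered by how far behind the remediation clock you are.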

Jen’s takeaway: If you’re not patching KEV entries quickly, you’re leaving your business at risk.

AI: A Defender’s Dream or Hacker’s Playground?

Naturally, our conversation turned to AI.

Jen was candid: “Every tech creates its own failure mode. AI can be revolutionary for defenders, but it’s also supercharging our enemies.”

And let’s be blunt: sectors like finance, communications, and healthcare are ripe for AI-driven exploitation. Misinformation campaigns, AI-crafted deepfakes, and social engineering attacks aren’t theoretical – they’re happening. And in these industries, where public trust and even physical safety are on the line, they’re corrosive.

Jen pointed to how AI is being used to write malicious code faster, create hyper-targeted phishing emails, or chain together vulnerabilities at machine speed. But it also gives defenders incredible new tools:

  • AI-assisted secure code generation
  • Refactoring legacy systems into safer languages like Rust
  • Predictive modeling for threat prioritization
  • Real-time vulnerability enrichment
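To make “predictive modeling for threat prioritization” concrete, here is a toy scoring function. The weights and the shape of the formula are illustrative assumptions, not a standard – the point is the ordering logic Jen’s KEV argument implies: confirmed exploitation should dominate raw severity.

```python
def priority_score(cvss, in_kev, asset_criticality, epss=0.0):
    """Toy prioritization score; weights are illustrative, not a standard.

    cvss:              base severity, 0-10
    in_kev:            True if the CVE appears in CISA's KEV catalog
    asset_criticality: 0.0-1.0, importance of the affected asset
    epss:              optional exploit-likelihood estimate, 0.0-1.0
    """
    score = cvss / 10.0              # normalize severity
    score += 1.0 if in_kev else 0.0  # confirmed exploitation dominates
    score += 0.5 * epss              # predicted exploitation
    return score * (0.5 + asset_criticality)

# A KEV-listed medium-severity CVE on a critical asset outranks a
# non-KEV critical CVE on a low-value asset:
priority_score(6.5, True, 1.0)    # ~2.48
priority_score(9.8, False, 0.2)   # ~0.69
```

Real models (EPSS, vendor risk engines) are far more sophisticated, but any of them should be explainable down to this level: you should be able to say why one finding outranks another.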

Where does this leave us?

Black Box or White Box: Keeping AI Transparent

As organizations race to adopt AI-powered defenses, it becomes apparent that not all AI is created equal, and not every solution adds real value. The critical question every security leader should ask is this: How does this AI actually make us more secure?

If the answer is speed, accuracy, and smarter decision-making without added complexity or opaque behavior, you’re likely on the right track. But if the AI introduces “black box” logic, creates new integration debt, or becomes a shiny tool no one trusts or understands, you’ve simply widened your attack surface. AI should be operationalized with visibility, explainability, and measurable outcomes baked in. Not as an act of faith following a slick vendor demo.

In highly regulated industries, like healthcare, finance, and energy, securing AI means implementing role-based access, robust audit trails, private cloud deployments, and above all: building with the assumption that AI decisions will be scrutinized.

Looking Ahead: Great Things Start With Optimism

In cybersecurity, speed without direction is just chaos. That’s why operationalizing AI and KEV – although entirely different disciplines – isn’t just smart, it’s crucial. AI, when deployed with transparency, explainability, and discipline, becomes a force multiplier. And the KEV catalog? It brings priority to vulnerabilities that someone, somewhere, is already exploiting.

Looking ahead, the teams that win won’t just dabble in new innovations, they’ll operationalize them; they’ll patch faster, think like adversaries, and build systems resilient by design. The future belongs to the defenders who move as fast as the threats they face.

Listen to the full conversation here

Following her keynote, Jen opened the floor for questions from the audience. Here are some of her answers to participant questions at Xposure 2025:

What can be done to educate the human/end users about AI-created phishing attacks?

Start with storytelling: use real, compelling examples to show folks how generative AI makes phishing far more persuasive, personalized, and real-time. Then teach people how to slow down, verify, and use secure channels. Security awareness needs to move beyond slogans, PowerPoint, and once-a-month phishing tests; it should be immersive, contextual, and continuous.

What policies are most urgently needed to manage AI risks?

We need policies that enforce transparency, require red-teaming and external testing, mandate secure-by-design principles, and clarify liability. If software is eating the world, AI is devouring it faster – and without accountability, we’ll be left cleaning up avoidable disasters. Some great resources at https://www.cisa.gov/ai.

How do you recommend Threat Hunting teams evolve their tactics and tooling to detect and mitigate AI-driven threats, such as deepfakes or autonomous malware that are increasingly capable of bypassing traditional security controls?

Lots of work to do in this area, but some basics involve: integrating behavioral analytics and anomaly detection into core hunting processes; building custom YARA rules (sets of instructions that define the characteristics of a specific type of malware or threat) for AI-generated artifacts; expanding hunt techniques to include model drift, synthetic indicators, and adversarial prompts; and deploying defensive models to flag offensive AI usage. I’d also recommend checking out the work CISA has done with AI companies on building operational collaboration around AI security incidents.
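The “behavioral analytics and anomaly detection” basic can be reduced to a toy sketch: score each observation against a baseline and flag outliers. A simple z-score detector is shown below; real hunting pipelines use rolling baselines, seasonality, and per-entity profiles, but the core idea is the same. The event series here is invented for illustration.

```python
from statistics import mean, stdev

def anomalies(series, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean.

    A toy z-score detector: score every observation against the series
    baseline and return the indices of the outliers.
    """
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > threshold]

# Steady outbound-request counts with one burst (e.g. malware beaconing
# at machine speed):
anomalies([12, 14, 11, 13, 12, 240, 13, 12], threshold=2.0)  # → [5]
```

Against AI-driven threats the features change – synthetic-media fingerprints, prompt patterns, model-drift metrics – but the hunting loop of baseline, score, and triage does not.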
