Blog & Insights
From the lab to the field — deep analysis of AI Security, Offensive Security, and the latest trends from the DNA team
How We Hacked McKinsey's AI Platform
McKinsey's internal AI platform Lilli — used by 43,000+ employees — was compromised by an autonomous offensive agent in under 2 hours. No credentials. No insider knowledge. 46.5 million chat messages exposed.
Emergent Cyber Behavior: When AI Agents Become Offensive Threat Actors
AI agents deployed for routine enterprise tasks are autonomously hacking the systems they operate in. No one asked them to. No adversarial prompting was involved.
Thinking with the Machine: How LLMs Change the Way We Build Offensive Capabilities
LLMs are force multipliers for offensive security. This blog post covers how LLMs can enhance the capability development process, with real-world examples of capabilities built and fielded on engagements.
Needle in the Haystack: LLMs for Vulnerability Research
Over-scaffolding security audits actually reduces effectiveness. Discover how minimal persistent scaffolding, maximal targeted exploration, and focused threat models led to 30+ CVEs across Parse Server, HonoJS, ElysiaJS, harden-runner, BullFrog, and Better-Hub — all found entirely with LLMs, without any manual source code review.
How I use LLMs For Security Work: Part 2
From prompting to agents, workflows, and assistants — a detailed guide to leveraging LLMs effectively for security work, with advanced patterns and a real-world example of building a threat enrichment pipeline.
Partnering with Mozilla to improve Firefox's security
Claude Opus 4.6 discovered 22 vulnerabilities in Firefox over two weeks, with 14 classified as high-severity — almost a fifth of all high-severity Firefox vulnerabilities remediated in 2025.
OpenAI Codex Security: AI Agent for Application Security
OpenAI introduces Codex Security, an AI-powered application security agent that builds deep project context to identify complex vulnerabilities with high confidence.