Key Takeaways
- AI models like Anthropic’s Claude Mythos are uncovering high‑severity vulnerabilities across major operating systems and browsers, shrinking exploit windows and overwhelming traditional, manual patching programmes.
- At scale, legacy patching workflows break down. Relying on vulnerability scanners alone, linear ticket-based approvals, or endpoint tools with limited visibility creates delay, blind spots, and growing exposure when CVE volume and cadence spike.
- Continuous triage, risk-based prioritisation, ring-based deployment with rollback, and closed‑loop verification are the only way IT and security teams can keep pace as attackers and defenders both leverage AI to move faster.
On April 7, Anthropic announced that its Claude Mythos Preview model had autonomously identified thousands of high- and critical-severity zero-day vulnerabilities across every major operating system and every major web browser. Over 99% of them were unpatched the day of disclosure.
Two weeks later, on April 21, Mozilla said it had used the same model to find and patch 271 vulnerabilities in the latest Firefox release. Mozilla's own assessment: "So far we've found no category or complexity of vulnerability that humans can find that this model can't."
And 271 is only the first wave. Chrome, Edge, Windows, macOS, Linux and FreeBSD are next; the 17-year-old remote code execution flaw in FreeBSD that Anthropic's red team disclosed (CVE-2026-4747) is an early example of what's coming. Every vendor under Anthropic's Project Glasswing umbrella is positioned to ship fixes at a tempo the industry hasn't seen before. All of those fixes become public CVEs with patches available, which lands them in the same place: your environment.
The containment story also has a crack. On April 21, Bloomberg reported that a Discord-linked group gained unauthorised access to Mythos through a third-party vendor environment. Anthropic says the activity didn't extend beyond that vendor. Whether or not similar capability is already in attacker hands, the defensive runway is shorter than the April 7 announcement implied.
Mythos entered a world already trending this way. CrowdStrike's 2026 Global Threat Report documented an 89% year-over-year rise in AI-enabled attacks in 2025. That trend line predates Mythos.
Call this a patch apocalypse: the plain operational kind, where the volume and cadence of public CVEs with available patches are about to outrun how most IT and security teams currently work.
NIST is already feeling the effects of the patch apocalypse. In April, the agency announced a major shift in National Vulnerability Database (NVD) operations in response to a 263% surge in submissions. NIST will no longer provide detailed enrichment for every submitted vulnerability; instead, it will enrich only vulnerabilities that meet high-risk criteria, such as those in the CISA Known Exploited Vulnerabilities catalogue or those affecting critical government software. For the rest, NIST will rely on CVE Numbering Authorities (CNAs), like Ivanti, rather than performing its own independent assessment.
I've been hearing three versions of the same response from customers and peers since the announcement. All three are variations of a programme designed for a slower world.
“We have a vulnerability scanner”
Qualys, Rapid7 and Tenable do vulnerability discovery well. Scanners find, flag, score and list. Deployment, verification, reboot handling and rollback are outside their scope. That work still has to happen somewhere. In most programmes it happens in a separate tool, with a separate team, on a separate cadence.
With the exploit window now running in hours and the Glasswing queue about to double the backlog, a scanner that produces 587 critical vulnerabilities and hands the list to a human team is a liability. The practical move is to connect the scanner you already own to a remediation engine that can act on its findings automatically: an autonomous endpoint management (AEM) platform with ring-based deployment and rollback, plus vulnerability intelligence that provides risk-based context for remediation decisions, so the list shrinks without humans making every call.
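The routing logic behind that connection is simple enough to sketch. This is an illustrative Python sketch only, not any scanner's or AEM platform's real API; the field names, thresholds, and track labels are all assumptions:

```python
# Illustrative triage: route scanner findings into a remediation queue
# automatically, so humans only review the low-risk remainder.
# Field names and thresholds are assumptions, not a real scanner's schema.

def triage(findings, kev_catalogue):
    """Split scanner output into an auto-remediate queue and a review list."""
    auto, review = [], []
    for f in findings:
        exploited = f["cve"] in kev_catalogue
        if exploited or f["cvss"] >= 9.0:
            auto.append({**f, "track": "zero-day"})   # patch immediately
        elif f["cvss"] >= 7.0:
            auto.append({**f, "track": "priority"})   # next daily/weekly window
        else:
            review.append(f)                          # regular maintenance window
    return auto, review

findings = [
    {"cve": "CVE-2026-4747", "cvss": 8.1},  # in the KEV catalogue
    {"cve": "CVE-2026-0001", "cvss": 9.8},
    {"cve": "CVE-2026-0002", "cvss": 5.3},
]
auto, review = triage(findings, kev_catalogue={"CVE-2026-4747"})
```

The point of the sketch is the shape of the pipeline: findings flow straight into remediation tracks, and only the long tail waits for a human.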
“We drive approvals through our ticketing system”
Speaking of humans having to make decisions: long, linear approval processes slow remediation significantly. When was the last time you genuinely had to decide whether to deploy the latest OS or browser update?
Organisations already know they are going to deploy these updates. Often the approval process exists because of internal politics and misalignment on security outcomes rather than genuine uncertainty. The end result is a linear pipeline: the vulnerability scanner flags the issue, an analyst approves what everyone already knows needs to be done, tickets go out to business owners and sit in inboxes awaiting sign-off, and valuable time is wasted on a decision that was never really in question.
The market shift to Exposure Management approaches this process very differently, by defining an organisation's risk appetite up front and continuously monitoring its risk posture. The next time a Windows OS update releases, you already know you will deploy it, the schedule you will deploy it on, and the SLA and compliance metrics you will measure success by. What you really want to know is:
1. Do I need to move faster because the update includes known exploited vulnerabilities?
2. Or is the update impacting operations, so we need to slow down (good thing the autonomous endpoint management platform includes ring deployment with rollback)?
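Both questions reduce to a two-signal policy, which can be expressed in a few lines. A minimal illustrative sketch in Python; the function name, parameters, and the 5% failure threshold are assumptions, not any product's defaults:

```python
# Illustrative policy: given a pending OS/browser update, decide whether to
# accelerate, proceed on the agreed cadence, or pause and roll back.
# The 5% ring-failure threshold is an assumed example value.

def deployment_decision(includes_kev, ring_failure_rate, max_failure_rate=0.05):
    """Return the action for a pending update based on two signals."""
    if ring_failure_rate > max_failure_rate:
        return "pause-and-rollback"   # update is impacting operations
    if includes_kev:
        return "accelerate"           # known exploited vulnerabilities present
    return "standard-schedule"        # deploy on the pre-agreed schedule

# Example: a KEV-bearing update with a healthy test ring accelerates.
action = deployment_decision(includes_kev=True, ring_failure_rate=0.01)
```

The design choice worth noting is the ordering: operational impact overrides urgency, because shipping a breaking patch faster only multiplies the damage.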
“We have Intune”
Microsoft Intune has two scope limits that matter here.
First, it only manages devices enrolled with it. Unenrolled and unmanaged endpoints — servers, contractor laptops, shadow IT, neglected edge devices — sit outside its visibility entirely. During periods of increased vulnerability volume, those blind spots multiply faster than teams can handle manually.
Second, while Intune simplifies application deployment and updates, its third-party application coverage and prioritisation depth are narrower than most administrators realise. Intune can tell you what's out of date, but not what actually increases your exposure, which forces teams to patch everything reactively, or on guesswork, when time is scarce.
Most enterprise environments aren't exclusively Windows, fully enrolled, or running a small, homogenous app stack. When vulnerability disclosures spike, routine patching leaves gaps, and those gaps turn into systemic risk.
Keep Intune. Pair it with a discovery and remediation layer that finds the assets Intune can't see, prioritises the vulnerabilities that matter most, and applies patches with confidence across the applications Intune doesn’t cover.
What to do about it
Automation is the operating model. It has to be built into the workflow.
Practitioners have known the principle for a while. It shows up in three places:
- Continuous triage. Known exploited vulnerabilities can follow a zero-day response track, especially in less secure parts of the organisation such as end-user systems. Above that, define specific applications, such as browsers and communication apps, that get updated on a priority track checked weekly or even daily. Everything else can wait for the regular maintenance window to come around.
- Ring deployment with automated rollback. Test ring, early-adopter ring, broad production, mission-critical. The sequence is boring and it works for most maintenance. What's changed is that certain updates will need to compress to fit the exploit window instead of waiting for your monthly maintenance. The test ring has to be automated and instrumented — a human checklist can't move that fast.
- Closed-loop verification. The patch isn't deployed until it's verified installed on the endpoint, and the CVE isn't closed until a rescan confirms it. Most teams skip that step, which is why compliance evidence becomes a fire drill the week before the audit. That's why we shipped continuous compliance in our platform this week — so compliance evidence is produced continuously and automatically as patches deploy, with automation handling the prioritisation decisions most teams don't have bandwidth for.
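The third item is the one most often skipped, so it is worth making concrete. A hedged Python sketch of the closed loop, with the deploy and rescan calls stubbed out as stand-ins for real tooling (none of these names are a real product's API):

```python
# Illustrative closed loop: a CVE is "closed" only after a post-deployment
# rescan confirms the patch landed, never when the job is merely dispatched.
# deploy_patch and rescan are stand-ins for real scanner/AEM tooling.

def close_loop(endpoints, cve, deploy_patch, rescan):
    """Deploy to each endpoint, then close the CVE only if rescans are clean."""
    for ep in endpoints:
        deploy_patch(ep, cve)
    unresolved = [ep for ep in endpoints if cve in rescan(ep)]
    status = "closed" if not unresolved else "open"
    return status, unresolved

# Simulated tooling: one endpoint fails its install and still reports the CVE.
patched = set()
def deploy_patch(ep, cve):
    if ep != "laptop-7":          # simulate a failed install on one device
        patched.add((ep, cve))
def rescan(ep):
    return [] if (ep, "CVE-2026-4747") in patched else ["CVE-2026-4747"]

status, unresolved = close_loop(["srv-1", "laptop-7"], "CVE-2026-4747",
                                deploy_patch, rescan)
```

In this simulation the CVE stays open and the failed endpoint is surfaced for retry, which is exactly the evidence trail an audit asks for.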
Mozilla's 271 Firefox vulnerabilities are a preview. Every major software vendor under Glasswing is about to start fixing more vulnerabilities at an accelerated pace, and attackers with the same class of capability will be looking for exactly those openings whenever they gain access to a comparable model. The resulting AI arms race will have a direct effect on the number and frequency of updates organisations have to remediate. Automation is what carries a programme through. Teams still doing monthly-only patching are in for a rough stretch.
If you run an IT or security programme, the self-assessment is worth doing now. Take the last critical patch you pushed out and time it from CVE publication to verified install on the last endpoint. Better still, ask: if a zero-day dropped on a Friday, could you remediate it by Monday? If that number is measured in weeks, the patch apocalypse is going to find you.
FAQs
What is the “patch apocalypse?”
The patch apocalypse refers to the rapid increase in publicly disclosed vulnerabilities with available patches, driven by AI‑accelerated vulnerability discovery. The volume and speed of fixes are beginning to outpace how most IT and security teams can reasonably remediate them using traditional, human‑driven workflows.
What solutions can help with the “patch apocalypse”?
- An autonomous endpoint management (AEM) platform, with ring-based deployment and rollback, and vulnerability intelligence can provide risk-based context for efficient remediation decisions.
- A risk-based patch management approach incorporates real-world threat context to focus on vulnerabilities that are actively being exploited. It goes beyond traditional vendor severity ratings and CVSS scores to prioritise vulnerabilities based on their actual risk to an organisation.
What’s the risk of not adapting?
AI models can identify vulnerabilities at a scale and speed humans cannot match. As attackers gain access to similar AI model capabilities, they will target newly disclosed vulnerabilities faster. Organisations relying on manual, fragmented patching processes will see increasing exposure – not because patches don’t exist, but because they can’t deploy them fast enough.
Does solely having a vulnerability scanner solve patching challenges?
No. Vulnerability scanners are essential for discovery, but they don’t deploy patches, verify installs, manage rollbacks, or close the loop. At high CVE volumes, scanners that generate long critical lists without automation behind them can actually slow remediation.
Why are ticket-based approval processes a risk now?
Linear approval workflows were designed for slower patch cycles and don’t address today’s realities. When teams already know updates will be deployed, additional approvals add delay without reducing risk. In a fast-moving threat environment, time is often the limiting factor.