What Just Happened

Anthropic accidentally exposed nearly 3,000 unpublished internal assets in a publicly searchable data store on their own website. Inside that data store was a draft blog post announcing their most powerful AI model ever — Claude Mythos. A model they had not told anyone about. A model they described as posing unprecedented cybersecurity risks. A model so powerful and so expensive they were planning to release it to only a tiny group of early access customers. Fortune found it first. Then the entire AI industry lost its mind.

Dario Amodei - Anthropic CEO

🌎 Anthropic Leaked The Most Dangerous AI They've Ever Built

Here is the full story, and it only gets better as it goes.

A cybersecurity researcher named Roy Paz from LayerX Security and a Cambridge University researcher named Alexandre Pauwels independently found the exposed data. Fortune reviewed the documents and contacted Anthropic. Anthropic then quietly restricted access to the data store. By then it was too late. The internet had already seen everything.

What was in those documents is the real story.

Claude Mythos is a new tier of model above Opus entirely. Anthropic's current lineup goes Haiku, Sonnet, Opus — smallest to largest. Mythos sits above all of them in a new category they were internally calling Capybara. Larger than Opus. More intelligent than Opus. More expensive than anything they have shipped before.

The benchmark jumps are significant. Compared to Claude Opus 4.6 — their current best model — Mythos posts dramatically higher scores on software coding, academic reasoning, and cybersecurity. Their own draft blog post described it as the most powerful model they have ever built. Their spokesperson confirmed it, calling it a "step change" and "the most capable we've built to date."

The cybersecurity angle is where it gets scary. The leaked draft said Mythos is currently far ahead of any other AI model in cyber capabilities. It also warned that the model could allow attacks to scale faster than defenders can respond. Anthropic was so concerned about this that their entire rollout plan was built around giving early access specifically to cybersecurity defenders so they could prepare before the model became more widely available. They were trying to give the good guys a head start before the bad guys could use it.

The irony is historic. They built a model so advanced in cybersecurity that they were scared to release it publicly. Then they leaked it themselves through a basic configuration error in their own content management system. The most cybersecurity-capable AI ever built was exposed because someone forgot to check a privacy setting on a blog draft.

Anthropic called it human error. They are not wrong.

The rollout plan remains the same — a small group of early access customers in cybersecurity first. No public release date. The model is also described as very expensive to serve, meaning even when it does launch broadly it will not be cheap.

Why This Is A Bigger Deal Than A Leak

Every AI lab is racing to build more capable models. Anthropic just accidentally confirmed they have something so far ahead of the current frontier that they are genuinely scared to release it. They are not being cautious for marketing reasons. They are being cautious because their own testing showed the model could be used to cause real harm at a scale that current defenses cannot handle.

That is not a product announcement. That is a warning.

Software stocks dropped on the news. Cybersecurity stocks fell because the market realized a model that outpaces all current cyber defenses is about to exist in the world. Bitcoin slid to $66,000. The financial markets understood the implications faster than most people did.

The Vibe Check: Anthropic has spent years telling the world they are the safety-first AI lab. They just accidentally published proof that they are sitting on something that scares even them. That is not a bad look for their mission. That is actually the mission working exactly as intended. But it is a genuinely wild day to be following AI.

Limited Time Free Trial

Attio is the AI CRM for modern teams.

Connect your email and calendar, and Attio instantly builds your CRM. Every contact, every company, every conversation, all organized in one place.

Then Ask Attio anything:

  • Prep for meetings in seconds with full context from across your business

  • Know what’s happening across your entire pipeline instantly

  • Spot deals going sideways before they do

No more digging and no more data entry. Just answers.

Industry Impact
Also Today: The Leak Revealed A Secret CEO Summit 👀

Buried in the same exposed data store was something else nobody was supposed to see. Anthropic has a private invite-only summit planned for European business leaders at a UK country manor. Dario Amodei is attending personally to demo features to top executives.

No public announcement. No press. Just Dario in a country manor somewhere in England showing Claude Mythos to the CEOs of major European companies before anyone else gets access.

The enterprise sales playbook is working exactly as you would expect from a company approaching $19 billion in annualized revenue. They are not waiting for people to discover Claude. They are bringing Claude directly to the people who write the biggest checks.

The fact that this summit was in the same leaked data store as the Mythos documents suggests Anthropic was planning to announce both together — a new frontier model and a high touch enterprise rollout strategy to match. One leak. Two stories. Neither one was supposed to be public today.

This is Hilarious

Anthropic spent years telling the world they are the most safety conscious AI lab on the planet. They built a model so powerful they were scared to release it. They designed an entire rollout strategy around giving cybersecurity experts a head start before anyone else could get their hands on it. They wrote a whole blog post about the unprecedented risks it poses.

Then they left that blog post sitting in a publicly searchable data store.

The most safety-first company in AI just ran the least secure blog on the internet. We love to see it.

What's The Recap?

Anthropic accidentally confirmed they have the most powerful AI model ever built sitting in testing right now. It beats Opus 4.6 on every benchmark that matters. It is so good at cybersecurity that it scares them. It is so expensive they cannot release it widely yet. And none of us were supposed to know any of this today. The leak is embarrassing for them. The model is impressive for everyone. And the fact that a safety-first AI lab is the one pumping the brakes on its own most powerful creation is either the most reassuring thing you have heard all week or the most terrifying, depending on how you look at it. Either way, you needed to know.

Stay building. 🤖

Check Out Our Latest YouTube Video
