Anthropic Hits $380B — And Is Quietly Beating OpenAI in Enterprise Sales
Anthropic's February 2026 valuation surge to $380B is backed by a concrete edge: Ramp's March index shows Anthropic outpacing OpenAI in first-time enterprise purchases, fueled by Google TPU deals and aggressive cloud expansion.
Transcript
What do you actually do with a million tokens of context?
That's the question.
Anthropic says Opus 4.6 can swallow an entire codebase or a thousand-page legal filing in a single pass. But the real story isn't the number. It's what it unlocks, and where it still breaks down.
And here's the stat that caught my eye — Anthropic is quietly winning seventy percent of head-to-head first-time enterprise purchases against OpenAI. Seventy percent. And that million-token context window? Big reason why.
We're digging into all of it today. Stay with us.
That enterprise momentum is exactly what we're unpacking today. Hey, I'm Holden.
I'm Naomi. And welcome to AI Dose Daily.
So here's the story. Anthropic launched Opus 4.6 back in February with a one-million-token context window in beta. That is massive. We're going to break down what that concretely enables — for enterprise workflows, for agentic coding, for long-document analysis — and critically, where it still falls short.
Because the number sounds impressive, but the interesting part is what changes in practice when a model can hold that much information at once.
Exactly. So here's our roadmap. We'll cover the compute infrastructure that actually makes this possible, the enterprise adoption numbers — which are wild — how OpenAI is responding with its own agent platform, and what you should watch out for if you're building on any of these models right now.
Lots to get to. Let's dive in.
So to understand why a million-token context window is such a big deal, you've gotta zoom out. The AI landscape right now is defined by three overlapping races — compute, enterprise agents, and governance. This isn't about who has the smartest chatbot anymore. It's about infrastructure, distribution, and regulation.
It's an arms race measured in gigawatts now, not GPU clusters. [SFX: WOOSH]
Exactly. And Anthropic's compute backbone for Opus 4.6 — this is the engine under the hood — it's a Google deal worth tens of billions of dollars, giving them access to up to a million Google AI chips, expected to bring well over a gigawatt of capacity online this year. That raw compute is what makes serving million-token contexts at enterprise scale even feasible.
And investors are clearly buying the thesis. Anthropic hit a three-hundred-eighty-billion-dollar valuation in February, with Nvidia and Microsoft in the round.
That's right. AP reported it February twelfth.
Okay but Holden — help me here. Why does context length matter more than just having a bigger, smarter model?
Because a million-token window means you feed in an entire codebase, a full regulatory filing, months of customer interactions — no summarization, no chunking. The model sees everything at once. That's a fundamentally different capability than a smarter model that can only look at twenty pages at a time.
One pass. One prompt. Everything in view.
That's the promise. Now let's talk about what it actually enables in practice.
So we've got the infrastructure. We've got the compute. Now let's talk about what a million tokens actually *does* when you put it in front of an enterprise customer.
Yeah, and the numbers tell a story here. Ramp's AI Index dropped March 11th — 47.6% of businesses are now using AI. Nearly a quarter of them, 24.4%, are on Anthropic specifically.
But here's the number that should make OpenAI nervous. [SFX: IMPACT]
Anthropic is winning roughly 70% of head-to-head first-time business purchases against OpenAI. Seventy percent.
And long context is a huge piece of why. Think about what an enterprise actually needs. You've got entire contract libraries, compliance frameworks, quarterly financials — documents that used to require chunking and summarization. Now? One pass. One prompt. Everything in context at once.
And for agentic coding, this is a qualitative leap. With a million tokens, an agent can hold an entire code repository in context. It reads across files, understands dependencies, makes coordinated changes. Compare that to 128K or 200K windows where agents are working through a keyhole — they lose cross-file coherence constantly.
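[Editor's note: a minimal sketch of the repo-in-context idea described above — flattening every source file into one prompt string with file markers. The file extensions, marker format, and the 4-characters-per-token heuristic are illustrative assumptions, not anything Anthropic specifies.]

```python
# Sketch: pack a whole repository into a single prompt so an agent can
# see every file at once, instead of working "through a keyhole".
# Extensions, marker format, and token heuristic are all assumptions.
from pathlib import Path


def pack_repo(root: str, exts: tuple = (".py", ".md", ".toml")) -> str:
    """Concatenate all matching files under `root`, each prefixed with
    a marker line giving its path relative to the repo root."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f"=== {path.relative_to(root)} ===\n{path.read_text()}")
    return "\n\n".join(parts)


def rough_token_count(text: str) -> int:
    # Crude heuristic: ~4 characters per token for English text and code.
    return len(text) // 4
```

In practice you would check `rough_token_count(pack_repo(...))` against the window limit before sending, and fall back to retrieval if the repo doesn't fit.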
And for long-document analysis — legal discovery, patent review, medical records — a million tokens is roughly 750,000 words. That's ten full novels. Professionals who spend hours reading can get answers in seconds.
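[Editor's note: the 750,000-word figure above is a back-of-envelope conversion. This sketch makes the arithmetic explicit, assuming the common rule of thumb of roughly 0.75 English words per token and a 75,000-word novel — both approximations that vary by tokenizer and text.]

```python
# Back-of-envelope: how much text fits in a 1M-token window?
# Assumes ~0.75 words per token (varies by tokenizer and content)
# and a 75,000-word "full novel" as the yardstick.
CONTEXT_TOKENS = 1_000_000
WORDS_PER_TOKEN = 0.75
NOVEL_WORDS = 75_000

words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)
novels = words / NOVEL_WORDS
print(f"{words:,} words, about {novels:.0f} novels")
# prints: 750,000 words, about 10 novels
```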
But — and this is important — there are real bottlenecks. Latency scales with context length. Time-to-first-token on a full million-token input is *significantly* longer. Cost per query rises linearly or worse. And there's the "lost in the middle" problem — attention quality degrades for information buried deep in long contexts. That hasn't been fully solved.
Plus enterprise data pipelines aren't built to cleanly package documents into million-token payloads yet. The model can handle it. The plumbing can't always keep up.
Which brings us to OpenAI's answer. They launched Frontier on February 5th — an enterprise agent platform, not a context-length play. HP, Intuit, Oracle, State Farm, Thermo Fisher, Uber — all early adopters. Their bet is that the orchestration layer matters more than raw window size.
And OpenAI said it explicitly. The defining question for leaders is no longer what AI can do, but — quote — "how to turn capability into operational change."
So you've got two very different theories of what wins enterprise AI. Anthropic says: give the model everything, let it see the whole picture. OpenAI says: build the platform that makes agents useful inside your existing workflows.
And right now, on first-time purchases at least, Anthropic's theory is winning.
So we've laid out what the million-token window does and how OpenAI is countering with Frontier. But let's zoom out — what does this arms race actually mean?
Okay, here's the bullish case. OpenAI raised a hundred and ten billion dollars on February 27th at a seven-thirty billion valuation. Anthropic hit three-eighty billion in February. These are not speculative bets on a maybe. This is capital being deployed years in advance because the biggest investors on the planet believe long-context enterprise AI is durable infrastructure.
And the skeptical case?
The spending is staggering. We're talking gigawatts of power, tens of billions in chip deals, and the AP has explicitly flagged concerns — not just financial, but environmental. Local communities are pushing back on power costs and construction footprints around these data centers. And here's the thing — serving a million-token query is expensive. Can enterprises actually afford this at volume, or does it stay a premium feature for high-stakes use cases only?
That's a real question. But here's what I think people are sleeping on — the lock-in dynamics. [SFX: RISER]
OpenAI's February 27th AWS deal makes Amazon the exclusive third-party cloud distribution provider for Frontier. OpenAI is consuming roughly two gigawatts of Trainium capacity through that deal. Meanwhile, Anthropic has spread its bets — Google, Microsoft, Amazon, Nvidia. So if you're an enterprise picking a cloud ecosystem right now, you are increasingly choosing your AI model provider at the same time. And those switching costs? They're going up fast.
Which brings us to the real question — is context length a moat or a commodity?
Right now it's a differentiator. Anthropic has it, others are catching up. But when OpenAI or Google matches a million tokens — and they will — the advantage shifts to quality of attention, pricing, developer experience. The window size alone is temporary.
So the play for Anthropic is to use this head start to lock in enterprise relationships before the feature gap closes.
Exactly. And that seventy percent first-time purchase win rate suggests it's working — for now.
Okay, so let's bring this down to earth. If you're listening and you're actually making decisions right now, here's what matters.
Number one — if you're evaluating models for enterprise, do not trust the marketing number. A million tokens means nothing if the model loses coherence at token 600,000. Feed your actual documents, your real contracts, your real code — and check for degradation in the middle of the context. Test it yourself.
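[Editor's note: one way to run the mid-context degradation test described above is a "needle in a haystack" sweep — bury a known fact at varying depths in your own documents and check whether the model retrieves it. This is a generic sketch: `ask_model` is a placeholder for whatever API client you actually use.]

```python
# Probe for "lost in the middle": insert a known fact (the needle) at
# several depths inside filler text drawn from your real documents,
# then check whether the model's answer still contains it.
# `ask_model(context, question) -> str` is a hypothetical client stub.

def build_probe(filler_paragraphs: list, needle: str, depth: float) -> str:
    """Insert `needle` at a fractional depth (0.0 = start, 1.0 = end)."""
    idx = int(len(filler_paragraphs) * depth)
    parts = filler_paragraphs[:idx] + [needle] + filler_paragraphs[idx:]
    return "\n\n".join(parts)


def run_depth_sweep(filler, needle, question, expected, ask_model):
    """Return {depth: retrieved?} so mid-context failures stand out."""
    results = {}
    for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
        context = build_probe(filler, needle, depth)
        answer = ask_model(context=context, question=question)
        results[depth] = expected.lower() in answer.lower()
    return results
```

If retrieval holds at depth 0.0 and 1.0 but fails around 0.5, that is the classic mid-context degradation pattern; run the sweep at several total context lengths to find where it starts.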
Number two — if you're building agentic coding tools, this changes your architecture. You might not need RAG for intra-repo context anymore. The whole repo fits. But watch your latency and cost budgets, because a full million-token query is not cheap and it is not fast.
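[Editor's note: a quick way to sanity-check the cost budget mentioned above. The per-million-token prices below are made-up placeholders, not Anthropic's or OpenAI's actual rates — substitute your provider's real pricing.]

```python
# Illustrative cost comparison: one full-window query vs. a smaller
# retrieval-style query. Prices are hypothetical placeholders.
INPUT_PRICE_PER_MTOK = 15.00   # assumed $/million input tokens
OUTPUT_PRICE_PER_MTOK = 75.00  # assumed $/million output tokens


def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request under the assumed linear pricing."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_MTOK + \
           (output_tokens / 1e6) * OUTPUT_PRICE_PER_MTOK


full = query_cost(1_000_000, 2_000)   # whole repo in context
rag = query_cost(50_000, 2_000)       # retrieval-narrowed context
print(f"full-context ${full:.2f} vs retrieval ${rag:.2f} per query")
```

Multiply by queries per day and the architectural trade-off becomes concrete: at these placeholder rates, a thousand full-window queries a day is a very different line item than a thousand retrieval-backed ones.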
Number three — and this is the one people are sleeping on — your cloud provider choice is becoming your AI model choice. OpenAI's locked into AWS exclusivity, Anthropic's tight with Google. Factor that into procurement now, not six months from now.
And finally, watch whether that "beta" label on the million-token window lifts in Q2 and whether pricing drops enough for high-volume use. That's the real unlock.
Alright, before we go, let's rip through the rest of this week's headlines. Naomi, you ready?
Let's go.
OpenAI quietly acquired health-tech startup Torch back in January for about a hundred million dollars in equity — that's a clear signal they're pushing beyond chat and into specialized healthcare workflows.
Meanwhile, OpenAI's agreement with the Department of War, published February 28th and updated March 2nd, now explicitly bars domestic surveillance of U.S. persons and blocks agencies like the NSA from using the deal — defense AI with guardrails, at least on paper.
Nvidia and OpenAI have a partnership targeting at least ten gigawatts of AI data centers, with the first gigawatt on their new Vera Rubin systems expected in the second half of this year.
And the number that still makes my jaw drop — OpenAI's hundred-and-ten-billion-dollar financing round broke down as thirty billion from SoftBank, thirty billion from Nvidia, and fifty billion from Amazon, making it the single largest private capital raise in tech history.
Sam Altman summed up the Amazon side of that deal by saying, quote, "OpenAI and Amazon share a belief that AI should show up in ways that are practical and genuinely useful for people."
Practical and genuinely useful — at a fifty-billion-dollar price tag. No pressure.
That is gonna do it for this episode of AI Dose Daily.
If you found this useful, share it with someone who's trying to make sense of the AI landscape right now — trust me, they need it.
We'll be back tomorrow with more. Thanks for listening, everybody.
See you next time.
