
Best of the Week in AI: April 24–30, 2026 — The Week the Lock-In Era Ended

In the span of five days, the alliance map of the AI industry was redrawn. Google announced up to $40 billion in funding for Anthropic — its biggest external AI bet ever — while continuing to ship a frontier model of its own. DeepSeek dropped V4, an open-weights model with a 1 million-token context window that lands within a hair of frontier quality at roughly a tenth of the price. Microsoft and OpenAI dissolved the cloud exclusivity that had defined the last three years of AI infrastructure. And less than 24 hours after that, GPT-5.5 was live on AWS Bedrock.

If you’ve been operating on the mental model that AI is a Big Three of OpenAI, Anthropic, and Google — with each one nailed to a single cloud and a single chip vendor — this was the week that model broke.

The vertical AI stack — one lab, one cloud, one chip — just decoupled in public, and there’s no obvious way to put it back together.

Below: the four headlines, the pattern underneath them, and what it changes for anyone trying to pick which AI to use this quarter.

April 24 — Google’s $40B Bet on Anthropic

On Thursday, Google announced an investment of up to $40 billion in Anthropic. The structure is striking: $10 billion paid immediately at roughly a $350 billion Anthropic valuation, with another $30 billion contingent on performance milestones. Most of the value is compute — Google Cloud will provide 5 gigawatts of TPU-based capacity over the next five years, with room to scale further.

Read past the headline number and the strange thing is that Google is investing $40 billion in its own most direct frontier competitor. DeepMind ships Gemini 3.1 Pro. Anthropic ships Claude Opus 4.7. Both target the same enterprise budgets, the same agentic workflows, the same benchmark leaderboards. Google is now writing checks to both teams.

Why? Because exclusivity isn’t a winning bet anymore. Anthropic still holds its $5 billion Amazon investment with up to $100 billion of committed AWS spend. It signed CoreWeave capacity earlier in April. It’s reportedly preparing a public listing later in 2026. Google needs Anthropic’s training runs in TPU pods to validate its hardware roadmap against Nvidia. Anthropic needs every gigawatt it can find. Both sides win by not insisting on owning each other.

April 24 — DeepSeek V4 Lands on the Frontier (and Open-Sources Itself)

The same day Google was writing its $40B check, DeepSeek dropped a preview of V4 — under an MIT license, weights on Hugging Face — in two variants: V4-Flash, the lightweight low-cost model, and V4-Pro, the frontier-class flagship.

The pricing is what made everyone stop scrolling:

Model               Input ($/M tokens)   Output ($/M tokens)   License
DeepSeek V4-Flash   $0.14                $0.28                 MIT (open weights)
DeepSeek V4-Pro     $1.74                $3.48                 MIT (open weights)
Gemini 3.1 Pro      ~$5                  ~$30                  Closed
GPT-5.5             ~$2                  ~$12                  Closed

V4-Pro is now the cheapest frontier-class model on the market, by a margin large enough to change procurement decisions. V4-Flash undercuts even GPT-5.4 Nano. DeepSeek’s own claim is that V4-Pro “rivals top closed-source models” while leading every other open-weights model on math, STEM, coding, and agentic benchmarks.
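To see what the table means for a real bill, here's a minimal cost sketch using the per-million-token prices listed above. The workload numbers (500M input tokens, 50M output tokens) are purely illustrative, not from the article:

```python
# Per-million-token prices from the table above (USD): (input, output).
PRICES = {
    "DeepSeek V4-Flash": (0.14, 0.28),
    "DeepSeek V4-Pro":   (1.74, 3.48),
    "Gemini 3.1 Pro":    (5.00, 30.00),
    "GPT-5.5":           (2.00, 12.00),
}

def monthly_cost(model: str, input_mtok: float, output_mtok: float) -> float:
    """Cost in USD for a workload measured in millions of tokens."""
    in_price, out_price = PRICES[model]
    return input_mtok * in_price + output_mtok * out_price

# Hypothetical bulk-extraction month: 500M tokens in, 50M tokens out.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 500, 50):,.2f}")
```

On that hypothetical workload, V4-Flash comes out roughly an order of magnitude cheaper than either closed model — which is the procurement-changing margin the article is pointing at.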

Two structural details matter beyond the price tag. First, V4 is the highest-profile model to date that is explicitly optimised for Huawei Ascend chips alongside Nvidia — meaning it’s the first frontier-tier model that doesn’t fundamentally depend on the Nvidia supply chain. Second, the architecture cuts FLOPs to roughly 27% of V3.2 in long-context inference, with a KV cache shrunk to about 10% of the previous generation. Cheap is partly a pricing decision; in V4’s case it’s also a real efficiency story.
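The KV-cache claim is easy to sanity-check with back-of-envelope arithmetic. Everything in this sketch is hypothetical — the layer count, KV-head count, and head dimension are illustrative placeholders, not DeepSeek's actual architecture; only the ~10% figure comes from the release:

```python
def kv_cache_gb(tokens: int, layers: int, kv_heads: int,
                head_dim: int, bytes_per_val: int = 2) -> float:
    """Approximate KV-cache size in GB: 2 tensors (K and V) per layer,
    fp16/bf16 values by default."""
    return 2 * layers * kv_heads * head_dim * bytes_per_val * tokens / 1e9

# Hypothetical previous-generation config: 60 layers, 8 KV heads, head_dim 128.
baseline = kv_cache_gb(tokens=1_000_000, layers=60, kv_heads=8, head_dim=128)
print(f"baseline 1M-token KV cache: {baseline:.1f} GB")
print(f"at the claimed ~10% footprint: {baseline * 0.10:.1f} GB")
```

Under those made-up numbers, a 1M-token cache drops from hundreds of gigabytes to tens — the difference between a rack and a single node, which is why the efficiency story matters as much as the sticker price.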

For any organisation with sovereignty, regulatory, or air-gap constraints, this week’s release is the first time a near-frontier model has actually been runnable on your own metal without a major quality drop.

April 27 — Microsoft and OpenAI Officially Decouple

On Monday, Microsoft and OpenAI publicly restructured the contract that anchored both companies for the last three years. The headline changes:

- Azure's cloud exclusivity on OpenAI workloads is dissolved; OpenAI can run on any cloud.
- Microsoft's license to OpenAI's models becomes nonexclusive, running through 2032.

For three years, OpenAI’s enterprise sales motion dragged Azure along by default. Azure’s Copilot story was, structurally, a single-vendor lock-in. As of Monday, neither of those things is true. The most consequential phrase in the whole week of news was buried in a paragraph of contract language: nonexclusive license through 2032.

April 28 — OpenAI on AWS Bedrock, Less Than 24 Hours Later

The very next day, AWS launched a preview of OpenAI models on Bedrock, with GPT-5.5 available at launch.

The speed is the story. The fact that AWS could ship a Bedrock-OpenAI integration within 24 hours of the Microsoft exclusivity ending tells you exactly how long both companies had been waiting. The contracts were drafted. The pricing was negotiated. The deployment was queued. The only thing left was for Microsoft to sign.

What this means in practice: any team running on AWS — which is most teams — can now reach GPT-5.5 without rearchitecting around Azure. Combined with Anthropic’s existing Bedrock presence and Mistral’s, AWS just became the most fully-stocked frontier-AI shelf in the cloud market.
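"Without rearchitecting" is concrete: Bedrock's Converse API gives every hosted model the same request shape. A minimal sketch — note the model ID below is hypothetical (check your region's Bedrock model catalog for the real identifier):

```python
# Hypothetical Bedrock model ID for GPT-5.5 -- not a real identifier.
MODEL_ID = "openai.gpt-5-5-v1"

def build_converse_request(prompt: str, model_id: str = MODEL_ID,
                           max_tokens: int = 1024) -> dict:
    """Build kwargs for the bedrock-runtime converse() call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

# With AWS credentials configured, the call would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.converse(**build_converse_request("Summarize this contract."))
#   print(resp["output"]["message"]["content"][0]["text"])
```

Swapping providers then means changing `modelId`, not rewriting the integration — which is the practical payoff of the decoupling.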

The Pattern Underneath

Pull these four stories together and the same shape shows up in each: vertical integration is breaking horizontally.

A year ago, the conventional wisdom was that the AI industry would consolidate into a few inseparable lab-plus-cloud-plus-chip stacks. OpenAI + Microsoft + Nvidia. Anthropic + AWS + Nvidia. Google + Google + Google. Investors loved it. Pundits loved it. It was clean.

This week, all three of those stacks officially decoupled.

The AI industry isn’t consolidating into three vertical pillars anymore. It’s flattening into a marketplace where every model can talk to every cloud, where open weights credibly threaten closed models on price, and where the only durable advantage is being genuinely best at something specific.

Why This Makes Multi-Model the Default, Not the Hedge

If you’re a buyer, the most expensive habit you can have right now is loyalty.

The price gradient between the top closed model and the best open one used to be five to ten times. After this week, GPT-5.5 costs only about 15% more than V4-Pro on input and roughly three-and-a-half times as much on output — and on bulk extraction or classification workloads, it's now cheaper to run V4-Flash than to keep paying a frontier provider. Quality leadership rotates roughly every six weeks. The premium for picking right is shrinking; the cost of picking wrong is rising.

This is exactly the rational case for multi-model comparison — and it got sharper this week, not weaker. When the market consolidates, picking one is fine. When it diversifies, picking one is increasingly expensive. Anyone who locked themselves into a single annual contract before April is now overpaying or underperforming, sometimes both.

We saw the same logic at the task layer in our contract review story: each model caught a different flaw, and only the comparison surfaced them all. The same is now true at the procurement layer.

A Practical Routing Guide for the New Landscape

Here’s how the lineup looks after this week, with sensible defaults for common workloads. As always, treat it as a starting point and let your own comparisons override.

If you're doing…                               Start with             Cross-check with
High-volume extraction / classification        DeepSeek V4-Flash      Gemini 3.1 Pro
Frontier reasoning at lowest cost              DeepSeek V4-Pro        Claude Opus 4.7
Real codebase changes / SWE work               Claude Opus 4.7        GPT-5.5
Long-running terminal agents                   GPT-5.5 (Bedrock)      Opus 4.7 (Bedrock)
Multimodal / video / long PDF                  Gemini 3.1 Pro         GPT-5.5
Sovereign / on-prem / air-gapped deployments   DeepSeek V4-Pro        Mistral Large
Anything you'll publish or ship                Run all three majors   Trust the consensus

Notice how the lead changes hands from row to row — no single model tops the table. That, in one table, is the entire argument for not picking one.
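If you want to operationalize the table rather than eyeball it, it collapses into a small dispatcher. The model names come from the article; the task keys are illustrative:

```python
# The routing table above as a lookup: task -> (primary, cross_check).
ROUTES = {
    "extraction":      ("DeepSeek V4-Flash", "Gemini 3.1 Pro"),
    "cheap_reasoning": ("DeepSeek V4-Pro", "Claude Opus 4.7"),
    "swe":             ("Claude Opus 4.7", "GPT-5.5"),
    "terminal_agent":  ("GPT-5.5", "Claude Opus 4.7"),
    "multimodal":      ("Gemini 3.1 Pro", "GPT-5.5"),
    "on_prem":         ("DeepSeek V4-Pro", "Mistral Large"),
}

def route(task: str) -> tuple[str, str]:
    """Return (primary, cross_check) for a task; default to running
    all three majors and trusting the consensus."""
    return ROUTES.get(task, ("all three majors", "consensus"))

print(route("swe"))        # SWE work goes to Opus, cross-checked by GPT-5.5
print(route("publish"))    # anything unlisted falls through to the consensus row
```

The default branch encodes the last row of the table: anything you'll ship gets all three majors, and the consensus is what you trust.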

One More Thing — The Mythos Preview Quietly Reshuffling Coding

One quieter story this week is worth flagging. Anthropic's Claude Mythos Preview — available exclusively through Project Glasswing to roughly 50 partner organisations — has begun showing up on private SWE-bench Verified runs at scores above the public leader, Opus 4.7. Public reporting describes Mythos as "a step change" above Opus, with notable strength on computer-security tasks. Access is gated, the model is unreleased, the numbers are early — but if they hold when Mythos ships publicly, the coding crown changes hands again before the end of May.

For context on why that matters at all, our three-flagship breakdown from earlier this week walks through how thin the leads already are between Opus 4.7, GPT-5.5, and Gemini 3.1 Pro. Mythos arriving on top of that mix would put the coding leaderboard back into motion before most teams have even tested the current one.

The Bigger Picture

Two years ago, the “best AI” question was a vendor question. One year ago, it was a benchmark question. As of this week, it’s a routing question — and the routing surface just got significantly larger. More clouds, more chips, more open-weights options, less exclusivity, faster turnover, sharper price differences.

None of that is bad news for users. It’s the opposite. The decoupling of the vertical stacks means more leverage, more choice, and more reason to compare. The labs are no longer pretending they’re each best at everything. Their cloud partners are no longer pretending they’re each loyal to one lab. And the open-source contenders are no longer pretending they’re a generation behind.

The right question for May isn’t “which AI is best”. It’s “am I set up to use whichever one wins next week?”


Did one of these stories change how you’re routing AI work this quarter? We’d love to hear which one. Drop us a line.

The lock-in era is over. Compare every frontier model in one tab.

Try SNEOS Free