
Signal/Noise

2025-11-21

While everyone debates AI bubbles and state regulations, a fundamental shift is happening beneath the surface: the complete dissolution of boundaries between AI companies, creating a singular interconnected system where competition becomes collaboration, and individual corporate strategy becomes collective orchestration of the entire AI economy.

The Great AI Convergence: How Competition Became Collusion

The Microsoft-Nvidia-Anthropic deal announced this week isn’t just another partnership—it’s the latest evidence that the AI industry has evolved beyond competition into something resembling a single, distributed organism. Microsoft invests $5 billion in Anthropic while Anthropic commits to buying $30 billion in Microsoft compute. Nvidia invests in Anthropic while Anthropic commits to developing on Nvidia chips. It’s a perfect circle of mutual dependence that would make any antitrust lawyer’s head spin.

But this isn’t an anomaly—it’s the new normal. Alphabet owns DeepMind while also investing in Anthropic. Amazon backs Anthropic while competing with it through Bedrock. OpenAI partners with Microsoft while Microsoft hedges with Anthropic. Every major player is simultaneously competitor, customer, supplier, and investor to every other player.

The strategic brilliance is that this structure makes traditional antitrust enforcement nearly impossible. There’s no single monopoly to break up when everyone owns everyone else. Instead of one company controlling AI, we have something far more sophisticated: a distributed monopoly where the entire industry functions as a single entity with aligned incentives. When Satya Nadella says ‘we are increasingly going to be customers of each other,’ he’s describing the architecture of the post-competitive economy.

This convergence solves the fundamental problem of AI development: the astronomical costs require risk-sharing that transcends traditional corporate boundaries. No single company can afford the full stack alone, so they’ve collectively created a system where success means everyone wins—and failure means everyone loses together. It’s not market consolidation; it’s market transcendence.

The Insurance Revolt: When Risk Becomes Uninsurable

While tech executives paint rosy pictures of AI’s future, insurers—the people whose literal job is pricing risk—are running for the exits. The growing reluctance to provide AI coverage isn’t just about technical uncertainty; it’s about the recognition that AI represents a fundamentally new category of systemic risk that traditional insurance models can’t handle.

Unlike traditional technology risks, AI failures don’t scale linearly. A faulty software update might crash some systems. But an AI model making decisions across millions of transactions, healthcare diagnoses, or financial trades can create cascading failures that spread faster than any human response. The potential for multibillion-dollar claims isn’t hypothetical—it’s inevitable in a system where AI touches everything.

The insurance retreat reveals something crucial that the AI hype machine obscures: even sophisticated risk assessment professionals can’t confidently model AI’s potential for catastrophic failure. When Warren Buffett won’t insure something, that tells you more about its real risk profile than any venture capital valuation.

This creates a paradox for AI adoption. Companies need insurance to deploy AI at scale, but insurers won’t provide coverage for systems whose failure modes they can’t understand or price. The result is that AI deployment is happening with dramatically less risk mitigation than any comparable technology in history. We’re essentially flying blind at 30,000 feet, and the people who usually sell us parachutes have decided they’d rather stay on the ground.

The insurance industry’s caution should be a wake-up call, but instead it’s being ignored in favor of moving fast and breaking things. That strategy works fine until the things that break are critical infrastructure, financial systems, or human lives.

The Regulatory Preemption Gambit: Federal Power Grab Disguised as Innovation Policy

Trump’s draft executive order to override state AI regulations isn’t really about federal versus state authority—it’s about creating a regulatory vacuum that benefits the AI industry at the expense of democratic oversight. The proposed DOJ AI litigation task force would systematically challenge any state that dares to impose meaningful constraints on AI development, using interstate commerce and First Amendment arguments as weapons.

The genius of this strategy is that it frames opposition to AI regulation as support for innovation and constitutional rights. Who could be against free speech and economic growth? But look closer and you’ll see something more sinister: the order explicitly targets states that require AI systems to alter ‘truthful outputs’—essentially arguing that AI companies have a constitutional right to deploy systems that generate harmful content without accountability.

The threat to withhold federal broadband funding from non-compliant states reveals the real game. This isn’t about constitutional principles; it’s about using federal leverage to prevent any meaningful constraints on AI development. States like California and Colorado have tried to implement basic transparency requirements—publish how you train models, report safety measures—and even these modest steps are deemed unacceptable.

What’s particularly telling is the order’s call for a ‘minimally burdensome national standard.’ Translation: a federal framework so weak it provides political cover for inaction while preventing states from implementing stronger protections. It’s regulatory capture disguised as regulatory clarity.

The tech industry has learned that it’s easier to capture one federal regulator than fifty state regulators. By preempting state action without providing meaningful federal oversight, they create the best of all worlds: the appearance of regulation with none of the substance. It’s a masterclass in using federalism as a shield rather than a principle.

Questions

  • If the AI industry has evolved beyond traditional competition, should we be regulating it like a utility rather than a collection of competing firms?
  • What happens when the technologies powering our economy become too risky for the insurance industry to cover—and should that tell us something about deployment speed?
  • Is the push for federal preemption of AI regulation actually about preventing any regulation at all, and what does that mean for democratic oversight of transformative technology?
