Signal/Noise

2025-11-20

While everyone debates whether AI is a bubble, the real story is a massive power consolidation happening beneath the surface. The federal government is moving to crush state AI regulation, enterprises are building fortress-like local systems, and a new class of AI grifters is emerging to profit from regulatory confusion—all while the technology’s actual adoption remains stubbornly slow.

The Federal Steamroller Comes for State AI Laws

Trump’s leaked executive order to weaponize the DOJ against state AI laws isn’t just regulatory arbitrage—it’s the opening move in a winner-take-all battle over who controls the future of American technology. The draft order would create a DOJ task force specifically to challenge state laws as unconstitutional, withhold federal funding from non-compliant states, and preempt the patchwork of regulations that Silicon Valley claims is stifling innovation.

But look past the ‘innovation versus regulation’ framing. This is really about cementing the dominance of a handful of mega-tech companies before alternatives can emerge. States like California and Colorado have been the only entities with enough power to actually impose meaningful constraints on AI development—requiring transparency, safety protocols, and algorithmic accountability. Crush that, and you’re left with purely voluntary industry self-regulation.

The timing isn’t coincidental. We’re seeing simultaneous AI bubble concerns, slowing enterprise adoption, and growing skepticism about returns on massive AI investments. The last thing Big Tech needs is state-level regulation creating compliance costs that could tip already-marginal AI projects into the red. Better to use federal preemption to lock in the current oligopoly structure while the window remains open.

The real tell? Even GOP senators like Josh Hawley are questioning the anti-federalism angle, suggesting this isn’t about conservative principles but about protecting specific corporate interests. When you lose traditional Republicans on a states’ rights issue, you’re revealing the true beneficiaries.

The Great Enterprise AI Fortress-Building

While the headlines obsess over ChatGPT subscriptions and public AI tools, enterprises are quietly building something entirely different: local AI fortresses designed to keep their data completely isolated from the cloud giants. Reports from education and corporate IT leaders reveal the real AI adoption strategy for organizations that actually handle sensitive information.

Take Merced County Office of Education, which argues that running local LLMs with tools like Ollama and contracting with Cisco or Nvidia for enterprise support is more cost-effective and secure than paying for cloud subscriptions. They’re willing to accept 90% of the capabilities to maintain 100% of the data security. This isn’t technological Luddism—it’s rational risk management.
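For a sense of what this fortress approach looks like in practice, here is a minimal sketch of querying a locally hosted model through Ollama's HTTP API, which by default listens only on the machine itself. The endpoint and field names follow Ollama's documented `/api/generate` interface; the model name is just an example, and an actual Ollama daemon with a pulled model is assumed.

```python
import json
import urllib.request

# Ollama's default local endpoint; no data leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # /api/generate takes a model name and a prompt;
    # stream=False asks for a single JSON response instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama daemon and a pulled model, e.g. `ollama pull llama3`):
# print(ask_local("llama3", "Summarize FERPA obligations for a school district."))
```

The design choice is the point: the prompt, the model weights, and the response all stay on hardware the organization controls, which is exactly the tradeoff Merced County is describing.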

The broader pattern is telling: enterprises are moving fast on AI infrastructure but slowly on actual deployment. Half of CFOs expect AI to create new roles while nearly as many expect job cuts, yet only 12% feel prepared for these shifts. Companies are buying the picks and shovels without knowing what they’re mining for.

This creates a fascinating dynamic where the cloud AI providers’ revenue looks robust (hello, Nvidia’s $57 billion quarter) while actual workflow transformation remains limited. Enterprises are essentially stockpiling AI capability in local fortresses, waiting to see how the regulatory and competitive landscape shakes out before committing to specific use cases.

The winners here aren’t the flashy AI chatbot companies but the infrastructure players enabling this fortress-building: hardware providers, enterprise AI platforms, and cybersecurity companies that can guarantee data never leaves the building.

The AI Grift Economy Goes Mainstream

Rob Braxman’s elaborate privacy phone scam reveals something bigger than one bad actor—it shows how AI anxiety is creating a new class of sophisticated grifters who exploit the gap between technological fear and understanding. Braxman built an entire ecosystem around fake privacy solutions, selling phones that can’t make calls and offering ‘encrypted’ communications that transmit their keys in plaintext, all while positioning himself as the antidote to Big Tech surveillance.

But the real innovation here isn’t technical—it’s psychological. Braxman understood that people’s AI fears aren’t really about specific technical capabilities but about loss of control and agency. So he sold them the illusion of control: ‘private’ phones, ‘secure’ messaging, and explanations for why mainstream solutions couldn’t be trusted. The products didn’t have to work; they just had to feel like resistance to an overwhelming technological tide.

This grift economy extends far beyond individual scammers. Look at Xania Monet, the AI-generated pop star that signed a $3 million deal after hitting 17 million streams. The entire project exists because someone realized they could capture the novelty value of AI-generated content while actual human creativity becomes more valuable by contrast. It’s arbitraging the temporary fascination with AI against the enduring appeal of authentic human expression.

Meanwhile, legitimate AI safety concerns get drowned out by both the grifters selling fake solutions and the companies overselling AI capabilities. When judges complain about becoming ‘human filters’ for AI-generated legal arguments, and chatbots prove dangerous for teen mental health, the real issues get lost in the noise between scammers and boosters.

The through-line: AI’s greatest impact so far might be creating new opportunities for sophisticated deception rather than genuine productivity improvements.

Questions

  • If enterprises are stockpiling local AI infrastructure but not deploying it, what happens when the bubble finally pops and they’re stuck with expensive hardware they never actually used?
  • Will the federal preemption of state AI laws backfire by making AI development less trustworthy in the eyes of consumers and enterprises who relied on state oversight?
  • When the AI grift economy inevitably collapses, will it take legitimate AI safety research and development down with it?
