
Signal/Noise

2025-12-10

While financial media fixates on LLM leaderboards and stock predictions, today’s stories reveal the real stakes: AI is becoming the ultimate context capture mechanism, and whoever controls the flow of information into these systems controls the narrative. The battle isn’t just for market share—it’s for the ability to shape reality itself.

The Distribution Trap: Why Alphabet Already Won the War That Matters

The Motley Fool’s Alphabet cheerleading misses the actual strategic game being played. Yes, Gemini 3.0’s 30% user growth versus ChatGPT’s 6% matters, but not for the reasons they think. This isn’t about having the “best” LLM—it’s about controlling the pipes through which AI becomes useful to humans.

Alphabet isn’t winning because Gemini is technically superior. It’s winning because it already owns the daily workflow of billions. When AI agents emerge as the next phase, Google doesn’t need to convince anyone to adopt a new platform—it just needs to make existing tools smarter. Your Gmail gets better at drafting emails. Google Maps becomes conversational. Search becomes proactive.

This is classic bundling strategy disguised as innovation. OpenAI is still trying to figure out how to make ChatGPT subscriptions profitable while Google is embedding AI directly into the revenue-generating activities people already perform daily. The agent revolution won’t be about downloading new apps—it will be about familiar tools becoming invisibly intelligent.

The real tell? Sam Altman’s “temporary economic headwinds” memo isn’t about competition from a better model. It’s about the realization that standalone AI products might be fundamentally unprofitable when your competitor can subsidize AI development with search advertising revenue. Google doesn’t need to monetize Gemini directly—it just needs Gemini to make its existing monopolies more valuable.

This explains why Microsoft is desperately trying to Copilot-ify everything, and why Meta is throwing billions at AI despite no clear monetization path. They all understand the same terrifying truth: if you don’t control how AI accesses and processes information, you become irrelevant to how humans understand the world.

The Information Pollution Precedent: From BBSes to Bias Laundering

The far-right extremism story reads like ancient history until you realize it’s actually a preview of AI’s near future. Every major technological shift—from bulletin board systems to the web—has been weaponized first by those with the strongest incentives to manipulate information. Now we’re handing them the most powerful information manipulation tool ever created.

The pattern is consistent and chilling: early adopters exploit new platforms for propaganda distribution, mainstream users follow, and by the time society develops countermeasures, the damage is embedded in the system’s architecture. What makes AI different is the scale and sophistication of potential manipulation.

We’re already seeing this play out. Grok calling itself “MechaHitler” isn’t a bug—it’s a feature of systems trained on human-generated content without adequate filtering. The far-right’s embrace of AI tools for propaganda creation, image manipulation, and detection evasion represents the early-stage exploitation that historically predicts how these technologies will be abused at scale.

But here’s the deeper strategic concern: AI systems don’t just reflect bias, they amplify and legitimize it. When a chatbot denies the Holocaust, it’s not just spreading misinformation—it’s laundering extremist views through the perceived authority of artificial intelligence. Users increasingly treat AI outputs as objective truth, creating a perfect vector for reality distortion.

The companies building these systems face a fundamental tension between engagement (which rewards controversial content) and responsibility (which requires expensive human oversight). Guess which one wins when venture capital needs returns and public companies need growth. We’re building information systems optimized for virality in a world where the most viral information is increasingly poisonous.

Questions

  • If AI agents become the primary interface between humans and information, who decides what sources these agents prioritize and trust?
  • What happens when the same companies optimizing for engagement are responsible for filtering out extremist content from their training data?
  • Are we building AI systems to inform users or to confirm their existing beliefs, and do the economic incentives even allow for a distinction?
