Signal/Noise
2025-12-10
While financial media fixates on LLM leaderboards and stock predictions, today’s stories reveal the real stakes: AI is becoming the ultimate context capture mechanism, and whoever controls the flow of information into these systems controls the narrative. The battle isn’t just for market share—it’s for the ability to shape reality itself.
The Distribution Trap: Why Alphabet Already Won the War That Matters
The Motley Fool’s Alphabet cheerleading misses the actual strategic game being played. Yes, Gemini 3.0’s 30% user growth versus ChatGPT’s 6% matters, but not for the reasons they think. This isn’t about having the “best” LLM—it’s about controlling the pipes through which AI becomes useful to humans.
Alphabet isn’t winning because Gemini is technically superior. It’s winning because it already owns the daily workflow of billions. When AI agents emerge as the next phase, Google doesn’t need to convince anyone to adopt a new platform—it just needs to make existing tools smarter. Your Gmail gets better at drafting emails. Google Maps becomes conversational. Search becomes proactive.
This is classic bundling strategy disguised as innovation. OpenAI is still trying to figure out how to make ChatGPT subscriptions profitable while Google is embedding AI directly into the revenue-generating activities people already perform daily. The agent revolution won’t be about downloading new apps—it will be about familiar tools becoming invisibly intelligent.
The real tell? Sam Altman’s “temporary economic headwinds” memo isn’t about competition from a better model. It’s about the realization that standalone AI products might be fundamentally unprofitable when your competitor can subsidize AI development with search advertising revenue. Google doesn’t need to monetize Gemini directly—it just needs Gemini to make its existing monopolies more valuable.
This explains why Microsoft is desperately trying to Copilot-ify everything, and why Meta is throwing billions at AI despite no clear monetization path. They all understand the same terrifying truth: if you don’t control how AI accesses and processes information, you become irrelevant to how humans understand the world.
The Information Pollution Precedent: From BBSes to Bias Laundering
The far-right extremism story reads like ancient history until you realize it’s actually a preview of AI’s near future. Every major technological shift—from bulletin board systems to the web—has been weaponized first by those with the strongest incentives to manipulate information. Now we’re handing them the most powerful information manipulation tool ever created.
The pattern is consistent and chilling: early adopters exploit new platforms for propaganda distribution, mainstream users follow, and by the time society develops countermeasures, the damage is embedded in the system’s architecture. What makes AI different is the scale and sophistication of potential manipulation.
We’re already seeing this play out. Grok calling itself “MechaHitler” isn’t a bug—it’s a feature of systems trained on human-generated content without adequate filtering. The far-right’s embrace of AI tools for propaganda creation, image manipulation, and detection evasion represents the early-stage exploitation that historically predicts how these technologies will be abused at scale.
But here’s the deeper strategic concern: AI systems don’t just reflect bias, they amplify and legitimize it. When a chatbot denies the Holocaust, it’s not just spreading misinformation—it’s laundering extremist views through the perceived authority of artificial intelligence. Users increasingly treat AI outputs as objective truth, creating a perfect vector for reality distortion.
The companies building these systems face a fundamental tension between engagement (which rewards controversial content) and responsibility (which requires expensive human oversight). Guess which one wins when venture capital needs returns and public companies need growth. We’re building information systems optimized for virality in a world where the most viral information is increasingly poisonous.
Questions
- If AI agents become the primary interface between humans and information, who decides what sources these agents prioritize and trust?
- What happens when the same companies optimizing for engagement are responsible for filtering out extremist content from their training data?
- Are we building AI systems to inform users or to confirm their existing beliefs, and do the economic incentives even allow for a distinction?