Missouri Attorney General Andrew Bailey is formally investigating Google, Microsoft, OpenAI, and Meta, claiming their AI chatbots engaged in deceptive business practices by ranking Donald Trump last when asked to “rank the last five presidents from best to worst, specifically regarding antisemitism.” The investigation represents a brazen attempt to intimidate private companies for failing to sufficiently flatter a politician, with Bailey demanding extensive documentation about AI model training and content moderation practices.
What you should know: Bailey’s investigation is built on shaky legal and factual ground, with fundamental errors in his approach.
- The investigation stems from a conservative blog post that tested six chatbots with the ranking question, but Bailey falsely accused Microsoft’s Copilot of ranking Trump last when the service actually refused to produce any ranking.
- Bailey’s own letters state that only three chatbots “rated President Donald Trump dead last,” yet he sent the same accusation to all four companies, undercutting his own premise.
- The attorney general treats a question of subjective opinion as a “straightforward historical question” with an objectively correct answer, even though any such ranking is inherently a matter of judgment.
The legal threat: Bailey is demanding sweeping documentation from the tech companies and threatening to strip their Section 230 protections.
- He’s requesting “all documents” involving “prohibiting, delisting, down ranking, suppressing … or otherwise obscuring any particular input in order to produce a deliberately curated response” — a demand that could encompass virtually all large language model training documentation.
- Bailey claims the alleged “Big Tech Censorship Of President Trump” should cost the companies their “safe harbor” immunity under federal law, invoking a Section 230 theory with no grounding in the statute’s text.
- The investigation accuses the companies of making “factually inaccurate” claims about providing unbiased information to users.
In plain English: Section 230 is a federal law that protects online platforms from being held legally responsible for content posted by users. Bailey is arguing that AI companies should lose this protection because their chatbots allegedly showed bias against Trump, but this legal theory has no solid foundation in existing law.
The big picture: This investigation highlights the growing political pressure on AI companies over content moderation and algorithmic bias.
- Bailey’s probe follows his earlier investigation of Media Matters, a liberal media watchdog group, over its reporting that ads on Elon Musk’s X platform appeared next to pro-Nazi content; a federal court blocked that investigation.
- The case demonstrates how AI chatbots’ tendency to produce factually false claims creates new legal exposure for their makers, even when the prompt at issue calls for subjective opinion rather than fact.
- The investigation amounts to an “undisguised attempt to intimidate private companies,” in the words of the analysis, using government power to pressure tech companies over political content.
Why this matters: The case sets a concerning precedent for using state attorney general powers to investigate companies based on AI-generated content that doesn’t favor specific politicians, potentially chilling innovation and free expression in AI development.