The New York Times has built a specialized AI team of eight people, led by editorial director Zach Seward, to help reporters tackle complex investigations involving massive datasets that were previously impossible to analyze manually. The initiative represents one of the most structured approaches to AI integration in newsrooms, focusing on research and investigations rather than content generation.
What you should know: Seward’s team primarily uses AI for semantic search and data analysis to help reporters process enormous amounts of information under tight deadlines.
- The team includes four engineers, a product designer, and two editors who work directly with reporters on individual projects before creating repeatable processes for the broader newsroom.
- Their approach focuses on “knotty, huge, messy data sets” with “immediate deadlines” while building tools that can be reused by other journalists.
Key breakthrough tool: The team developed “Cheat Sheet,” an internal spreadsheet-based AI tool that emerged from a high-stakes election investigation.
- A reporter needed to analyze 500 hours of leaked Zoom recordings from an election interference group before Election Day; AI transcription turned the audio into 5 million words of text (a minimal sketch of this kind of batch transcription follows this list).
- The tool uses semantic search to find topics and concepts rather than specific keywords, since the group “wasn’t so dumb as to say, ‘We’re going to spread misinformation on the internet’… then you could Control F,” Seward explained.
- Cheat Sheet is now used by several dozen reporters and allows them to select which large language model to use, with guidance from Seward’s team.
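Transcription at this scale is typically automated with a speech-to-text model. Below is a minimal sketch using the open-source Whisper package; the Times hasn’t disclosed which transcription system it actually used, and the directory and file names here are placeholders.

```python
# Minimal batch-transcription sketch. Assumption: OpenAI's open-source
# "whisper" package; the Times has not said what tooling it used.
from pathlib import Path

import whisper

# Smaller models are faster; larger ones ("medium", "large") are more accurate.
model = whisper.load_model("base")

# "recordings" is a placeholder directory standing in for the leaked audio.
for audio in sorted(Path("recordings").glob("*.mp3")):
    result = model.transcribe(str(audio))
    # Write one plain-text transcript per recording for later searching.
    audio.with_suffix(".txt").write_text(result["text"])
```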
In plain English: Semantic search is like asking AI to find things based on meaning rather than exact word matches—similar to how you might search for “vacation spots” and find results about “holiday destinations” even though the exact words don’t match.
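As a concrete illustration, here is a minimal semantic-search sketch using the sentence-transformers library: transcript passages and a query are embedded as vectors, and passages are ranked by how close their meaning is to the query rather than by exact word matches. The model name and example passages are illustrative assumptions, not details from the Times’ tool.

```python
# Minimal semantic-search sketch (assumption: sentence-transformers;
# not the Times' actual stack). Ranks passages by meaning, not keywords.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

# Illustrative snippets; a real corpus would be chunked passages from
# the 5 million words of transcripts.
passages = [
    "We should get these claims circulating on social media before the vote.",
    "The budget meeting is rescheduled to Thursday afternoon.",
    "Volunteers will hand out flyers at the county fair.",
]
query = "plans to spread misinformation online"

# normalize_embeddings=True makes the dot product equal cosine similarity.
passage_vecs = model.encode(passages, normalize_embeddings=True)
query_vec = model.encode(query, normalize_embeddings=True)

scores = passage_vecs @ query_vec
for i in np.argsort(scores)[::-1]:
    print(f"{scores[i]:.3f}  {passages[i]}")
```

Note that the top-ranked passage shares no keywords with the query; the match is on meaning, which is exactly what Control-F would miss.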
Real-world applications: The AI tools have enabled investigations that would have been impossible through traditional reporting methods.
- One project involved analyzing an unorganized list of 10,000 names of people who registered for a tax cut in Puerto Rico, where “you can’t Google 10,000 names… but a computer can Google 10,000 names.”
- Even with imperfect accuracy, the AI helped sort the names into promising leads that reporters could then pursue with traditional reporting methods, as sketched below.
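To make that idea concrete, here is a hedged sketch of this kind of batch screening. The search_web() function is an invented placeholder for whatever licensed search API a newsroom might use (the article names none), and the names and keywords are illustrative stand-ins.

```python
# Hypothetical sketch of screening a long name list for promising leads.
# search_web() is a placeholder, NOT a real API; swap in an actual search
# client. Names and keywords below are illustrative only.
from typing import List, Tuple

KEYWORDS = {"puerto rico", "tax"}  # placeholder relevance signals

def search_web(query: str) -> List[str]:
    # A real implementation would call a licensed search API and return
    # result snippets. An empty list keeps the sketch runnable as-is.
    return []

def screen_names(names: List[str]) -> List[Tuple[str, int]]:
    leads = []
    for name in names:
        snippets = search_web(f'"{name}" Puerto Rico tax')
        # Crude relevance score: count snippets mentioning any keyword.
        hits = sum(any(k in s.lower() for k in KEYWORDS) for s in snippets)
        if hits:
            leads.append((name, hits))
    # Strongest matches first; every lead still needs human verification.
    return sorted(leads, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    names = ["Jane Doe", "John Roe"]  # stands in for the 10,000-name list
    for name, hits in screen_names(names):
        print(name, hits)
```

The design mirrors the article’s caveat: imperfect automated scoring is acceptable as a triage step because reporters verify each lead by hand.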
Newsroom integration strategy: Seward’s team has spoken to 1,700 of the 2,000 newsroom employees and maintains an open Slack channel for questions and use case sharing.
- The team hosts training sessions on AI tools for research and investigations, addressing what Seward calls “writer’s block in the chatbot,” where reporters struggle to identify specific applications.
- Communication ranges from basic questions like “how can I get Gemini?” to bureau chiefs sharing innovative use cases with colleagues worldwide.
Editorial guardrails: The Times maintains strict boundaries around AI use, with content generation explicitly prohibited.
- Reporters can use AI for drafting SEO copy and headlines around published articles, but not for writing the articles themselves.
- Seward emphasizes to staff: “Never trust output from an LLM. Treat it… with the same suspicion you would a source you just met and you don’t know if you could trust.”
What they’re saying: Seward acknowledges both the potential and risks of newsroom AI integration.
- “We’re not trying to be AI boosters. In fact, quite the opposite. I think there’s a lot of caution. A lot of time we spend cautioning people about uses of AI, both [in the] legal and editorial senses.”
- On competitive advantage: “No reporter is going to say no to a competitive advantage, which I think is the theme of what we’re trying to build for them.”
- “I definitely live in fear of an error that is in some way attributable to AI. To be clear, we also say in sessions with our newsroom we would never attribute an error to AI, meaning it’s always on us.”