AI gun detector mistakes Doritos for weapon, traumatizes Baltimore student

Armed police swarmed a 16-year-old student outside a Baltimore high school after an AI gun detection system mistakenly identified a crumpled bag of Doritos in his pocket as a firearm. The October 20 incident at Kenwood High School highlights growing concerns about the reliability of AI surveillance technology in educational settings, where false positives can lead to traumatic encounters with law enforcement.

What happened: Taki Allen was handcuffed at gunpoint by multiple officers after Omnilert’s AI system triggered a weapons alert based on surveillance footage.

  • “It was like eight cop cars that came pulling up for us,” Allen told WBAL-TV 11 News. “They started walking toward me with guns, talking about ‘Get on the ground,’ and I was like, ‘What?’”
  • Police later showed Allen the AI-captured image that prompted the response, revealing the system had flagged his snack as a weapon.
  • “It was mainly like, am I gonna die? Are they going to kill me?” Allen said. “They showed me the picture, said that looks like a gun, I said, ‘no, it’s chips.'”

The technology behind the mistake: Baltimore County Public Schools implemented Omnilert’s gun detection system last year to scan existing surveillance footage and alert police in real time.

  • The AI system analyzes video feeds and automatically notifies law enforcement when it believes it has detected a weapon.
  • Omnilert, the company that makes the detection software, acknowledged the incident was a “false positive” but maintained the system “functioned as intended,” claiming its purpose is to “prioritize safety and awareness through rapid human verification.” (A simplified, hypothetical sketch of that verification flow follows this list.)
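
For context on what “rapid human verification” typically means in products like this, here is a minimal, hypothetical sketch of a detect-verify-alert loop. It is not Omnilert’s code; the company’s pipeline is proprietary, and every name, threshold, and function below is invented purely for illustration.

```python
# Hypothetical illustration only -- Omnilert's actual system is proprietary.
# All class names, thresholds, and functions here are invented to show the
# general shape of a detect -> human-verify -> alert flow.

from dataclasses import dataclass


@dataclass
class Detection:
    camera_id: str
    confidence: float  # model's confidence that a weapon is visible
    frame_ref: str     # pointer to the flagged video frame


ALERT_THRESHOLD = 0.8  # invented value; real deployments tune this


def handle_detection(detection: Detection, human_confirms) -> bool:
    """Escalate a flagged frame to police only if a human reviewer confirms it."""
    if detection.confidence < ALERT_THRESHOLD:
        return False  # low-confidence hits are dropped or merely logged
    # The model only flags frames; a person is supposed to review the image
    # before law enforcement is notified.
    if human_confirms(detection.frame_ref):
        notify_police(detection.camera_id, detection.frame_ref)
        return True
    return False


def notify_police(camera_id: str, frame_ref: str) -> None:
    # Stand-in for the real-time alert the school district describes.
    print(f"ALERT: possible weapon on camera {camera_id}, frame {frame_ref}")
```

Note that the district’s description (the system “automatically notifies law enforcement”) and Omnilert’s (“rapid human verification”) place the human step in slightly different spots; the sketch follows Omnilert’s framing.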

School response falls short: Baltimore County Public Schools sent a letter to parents offering counseling services but provided no direct outreach to Allen himself.

  • “We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote.
  • Allen says no school official has personally apologized or spoken with him about the traumatic experience.
  • “They didn’t apologize. They just told me it was protocol,” he said. “I was expecting at least somebody to talk to me about it.”

Lasting impact on student: The false alarm has left Allen afraid to return to school and questioning his basic safety on campus.

  • “If I eat another bag of chips or drink something, I feel like they’re going to come again,” Allen said.
  • The incident underscores how AI surveillance errors can create lasting psychological effects on students, particularly when combined with aggressive police responses.

Why this matters: The case adds to mounting evidence that AI detection systems deployed in sensitive environments like schools can produce dangerous false positives with real-world consequences.

  • The incident comes amid a broader rollout of AI surveillance technology across educational institutions nationwide.
  • Similar AI reliability issues are emerging across sectors, from military decision-making tools to age verification systems that struggle with facial recognition accuracy.