Canada education report addressing AI safety ironically includes 15+ fake AI citations

A major education reform report for Newfoundland and Labrador, a Canadian province, contains at least 15 fabricated citations that experts suspect were generated by artificial intelligence, despite the document explicitly calling for ethical AI use in schools. The irony is particularly striking given that the 418-page report, which took 18 months to complete and serves as a 10-year roadmap for modernizing the province’s education system, includes recommendations for teaching students about AI ethics and responsible technology use.

What you should know: The fake citations include references to non-existent sources that bear hallmarks of AI-generated content.

  • One citation references a 2008 National Film Board movie called “Schoolyard Games” that doesn’t exist, according to a board spokesperson.
  • The fake citation appears to have been copied directly from a University of Victoria style guide that uses fictional examples to teach proper formatting—complete with a warning that “Many citations in this guide are fictitious.”
  • Aaron Tucker, a Memorial University assistant professor researching AI history, couldn’t locate numerous sources cited in the report despite searching academic databases and Google.

The big picture: This incident highlights a growing problem with AI language models fabricating academic citations that can easily slip past human review because they appear properly formatted and contextually appropriate.

  • AI models like those powering ChatGPT, Gemini, and Claude excel at producing believable fiction when they lack actual information, prioritizing plausible outputs over accurate ones.
  • Even AI models that can search the web for real sources can potentially fabricate citations, choose wrong ones, or mischaracterize them.

Why this matters: The presence of potentially AI-generated fake sources undermines the credibility of official government policy documents at a time when educational institutions are grappling with how to integrate AI responsibly.

  • “Errors happen. Made-up citations are a totally different thing where you essentially demolish the trustworthiness of the material,” Josh Lepawsky, former president of the Memorial University Faculty Association, told CBC.
  • The report’s 110 recommendations include a specific call for the provincial government to “provide learners and educators with essential AI knowledge, including ethics, data privacy, and responsible technology use.”

What they’re saying: Officials are investigating the citations while experts express concern about the document’s integrity.

  • “Around the references I cannot find, I can’t imagine another explanation,” Sarah Martin, a Memorial University political science professor who discovered multiple fabricated citations, told CBC. “This is a citation in a very important document for educational policy.”
  • Co-chair Karen Goodnough declined an interview request, writing: “We are investigating and checking references, so I cannot respond to this at the moment.”
  • The Department of Education acknowledged awareness of “a small number of potential errors in citations” and said the online report would be updated “in the coming days to rectify any errors.”

Key details: The report, titled “A Vision for the Future: Transforming and Modernizing Education,” was released August 28 and unveiled by co-chairs Anne Burke and Karen Goodnough alongside Education Minister Bernard Davis.

  • The document serves as a comprehensive roadmap for modernizing public schools and post-secondary institutions across the province.
  • Josh Lepawsky resigned from the report’s advisory board in January, citing a “deeply flawed process.”
