The CJF Hinton Award for Excellence in AI Safety Reporting

About the Award

This annual award recognizes exceptional journalism that critically examines the safety implications of artificial intelligence (AI), a transformative technology with far-reaching societal impacts. While AI offers tremendous potential for positive change, responsible development and implementation require careful attention to safety considerations. The goal of this award is to promote thoughtful, evidence-based reporting that raises awareness of the catastrophic risks* associated with AI, fosters public understanding, and encourages dialogue on creating a safer future for humanity. The Award is named for Nobel laureate Dr. Geoffrey Hinton, one of the pioneers of modern deep learning and one of its most prominent critics and cautionary voices.

The CJF Hinton Award for Excellence in AI Safety Reporting also seeks to highlight journalism that not only identifies challenges but also explores innovative solutions and pathways to mitigate risks, advancing the public conversation on how AI technologies can be developed and applied responsibly.

Award Details

Prize: $10,000
Eligibility: Open to Canadian journalists or teams whose work is published on a national media platform.
Media Formats Accepted:

  • Long-form Investigative Reporting or Series
  • Digital Media / Multimedia Storytelling 
  • Podcast
  • Audio Reporting (e.g., Radio Features)
  • Documentary (Audio or Video) 

Submissions will be evaluated on how the story/series accomplishes the following:

  • Provides nuanced, evidence-based reporting on AI safety challenges
  • Explores ethical dilemmas, societal risks, and the potential catastrophic risks of AI technologies
  • Investigates and highlights solutions that aim to mitigate risks and enhance AI safety
  • Highlights the safeguards needed to ensure a secure and safe future with AI
  • Demonstrates scientific accuracy and upholds journalistic integrity
  • Promotes public understanding of complex AI safety issues

Timeline for submissions: Submissions open in December 2025, alongside the CJF’s annual awards. The recipient will be announced at the annual CJF Awards ceremony in June 2026.

*Catastrophic Risks

According to the AI Safety Foundation and leading researchers, “catastrophic risks” include, but are not limited to:

  • Bioterrorism
  • Automated warfare
  • Cyberwarfare
  • Accidental release of dangerous AI
  • Uncontrollable AI: systems that acquire power to pursue their own goals

This award is supported by the AI Safety Foundation and by a generous gift from Richard Wernham and Julia West.

About the AI Safety Foundation

The AI Safety Foundation increases awareness and scientific understanding of the catastrophic risks of artificial intelligence by advancing education and research in order to ensure a safer future for humanity. One of the Foundation’s flagship initiatives is “The Hinton Lectures”, named after Nobel Laureate Geoffrey Hinton, an AISF board member. The lectures are held annually on the topic of AI safety and are delivered by world-leading experts in artificial intelligence. The Foundation is a registered charitable organization.

About Geoffrey Hinton

Geoffrey Hinton, 2024 Nobel Laureate in Physics and the “Godfather of AI,” is internationally renowned as a pioneer of deep learning, the approach underpinning modern artificial intelligence. With John J. Hopfield, he received the Nobel Prize in Physics “for foundational discoveries and inventions that enable machine learning with artificial neural networks,” including his invention of the Boltzmann machine using techniques from statistical physics. In 2018, he received the Association for Computing Machinery’s A.M. Turing Award, often called the “Nobel Prize in Computing,” alongside Yoshua Bengio and Yann LeCun “for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.” He is a board member of the AI Safety Foundation.