CJF Hinton Award for Excellence in AI Safety Reporting

CJF names shortlist for inaugural CJF Hinton Award for Excellence in AI Safety Reporting

Toronto, April 8, 2026 – The Canadian Journalism Foundation (CJF) is proud to announce its shortlist for the inaugural CJF Hinton Award for Excellence in AI Safety Reporting.

Launched in October 2025 in partnership with the AI Safety Foundation, the award is named for Nobel laureate Dr. Geoffrey Hinton, a pioneer of modern deep learning and a prominent voice alerting governments and the public to artificial intelligence’s (AI’s) potential threat to humankind. It recognizes exceptional journalism that critically examines the safety implications of AI, a transformative technology with far-reaching societal impacts. It also honours reporting that not only identifies challenges but explores innovative solutions and pathways to mitigate risks, advancing public conversation on how AI can be developed and applied responsibly. The goal of this award is to promote thoughtful, evidence-based reporting that raises awareness of catastrophic risks associated with AI, fosters public understanding, and encourages dialogue on creating a safer future for humanity. The winner receives $10,000.

“AI’s risks are no longer theoretical. They are immediate, complex and deeply consequential. This year’s shortlisted entries confront some of the most urgent safety challenges of our time, including non-consensual deepfakes and coordinated disinformation, with rigour, originality and real-world impact. This is exactly the kind of reporting this award was created to recognize,” says Natalie Turvey, CJF president and executive director.

The three finalists for this year’s award and the stories or series shortlisted are:

Nam Kiwanuka and the team at TVO’s The Thread with Nam Kiwanuka (Chantale Dahmer, Ali Zaidi and Diego A. Garcia) for How AI is turning your image into an explicit deepfake. The episode examines non-consensual deepfakes targeting girls, featuring Toronto students’ experiences, expert insights, legal gaps, policing challenges and advocacy for protections and digital rights reform. “The terrorization of women at scale with near-zero accountability was palpable. The situation it presented felt like a civilizational step backwards from the progress we have made towards gender equality,” remarks jury member Matthew Lee. “The piece made clear that the proliferation of AI tools capable of generating nonconsensual intimate imagery cannot be adequately addressed through legal prohibition or platform moderation alone.”


Craig Silverman, co-founder of Indicator, with co-founder Alexios Mantzarlis, and co-authors Santiago Lakatos and Benjamin Shultz for three articles from Indicator’s body of reporting on AI nudifiers. These tools of abuse allow anyone to turn a single photo of a person into a realistic pornographic image or video without the subject’s consent, and are used by millions of people around the world to abuse women and girls. Jury member Cillian Crosson calls the series “impressive accountability journalism,” noting “Their reporting has already produced concrete results — Google revoked SSO for 23 sites and Meta removed thousands of ads promoting nudifying sites.”


Rory White of Canada’s National Observer, with additional contributions from Managing Editor David McKie, for a three-part investigative series. The series uncovered a group that used artificial intelligence to create persuasive climate disinformation targeting local politicians across Canada, explored cross-governmental knowledge sharing as a solution, and introduced a purpose-built tool that makes coordinated messaging detectable across more than 500 local governments. That tool enabled a real-time intervention in Cochrane, Alberta, where councillors voted to remain in the program. Jury chair Nikita Roy calls the series “a strong solutions-oriented investigation that not only exposes but demonstrates a real-world intervention.”

This award is supported by the AI Safety Foundation and by a generous gift from philanthropists Richard Wernham and Julia West.


The CJF Hinton Award for Excellence in AI Safety Reporting

This annual award recognizes exceptional journalism that critically examines the safety implications of artificial intelligence (AI)—a transformative technology with far-reaching societal impacts. While AI offers tremendous potential for positive change, responsible development and implementation requires careful attention to safety considerations. The goal of this award is to promote thoughtful, evidence-based reporting that raises awareness of catastrophic risks* associated with AI, fosters public understanding, and encourages dialogue on creating a safer future for humanity. The award is named for Nobel laureate Dr. Geoffrey Hinton, one of the pioneers of modern deep learning, who is also one of its most prominent critics and cautionary voices.

The CJF Hinton Award for Excellence in AI Safety Reporting also seeks to highlight journalism that not only identifies challenges but also explores innovative solutions and pathways to mitigate risks – advancing public conversation on how AI technologies can be developed and applied responsibly. 

Award Details

Prize: $10,000
Eligibility: Open to Canadian journalists or teams whose work is published on a national media platform.
Media Formats Accepted:

  • Long-form Investigative Reporting or Series
  • Digital Media / Multimedia Storytelling 
  • Podcast
  • Audio Reporting (e.g., Radio Features)
  • Documentary (Audio or Video) 

Submissions will be evaluated on how the story/series accomplishes the following:

  • Provides nuanced, evidence-based reporting on AI safety challenges
  • Explores ethical dilemmas, societal risks, and the potential catastrophic risks of AI technologies
  • Investigates and highlights solutions that aim to mitigate risks and enhance AI safety
  • Highlights the safeguards needed for AI to ensure a secure and safe future
  • Demonstrates scientific accuracy and upholds journalistic integrity
  • Promotes public understanding of complex AI safety issues

The submission period for the CJF Hinton Award for Excellence in AI Safety Reporting is closed.

*Catastrophic Risks

According to the AI Safety Foundation and leading researchers, “catastrophic risks” include, but are not limited to:

  • Bioterrorism
  • Automated warfare
  • Cyberwarfare
  • Accidental release of dangerous AI
  • Uncontrollable AI: systems that acquire power to pursue their own goals
  • Disinformation

This award is supported by the AI Safety Foundation and by a generous gift from philanthropists Richard Wernham and Julia West.

About the AI Safety Foundation

The AI Safety Foundation increases awareness and scientific understanding of the catastrophic risks of artificial intelligence by advancing education and research in order to ensure a safer future for humanity. One of the flagship initiatives of the Foundation is “The Hinton Lectures”, named after Nobel laureate Geoffrey Hinton, an AISF board member. The lectures are held annually on the topic of AI safety, delivered by world-leading experts in artificial intelligence. The Foundation is a registered charitable organization.

About Geoffrey Hinton

Geoffrey Hinton, 2024 Nobel Laureate in Physics and the “Godfather of AI,” is internationally renowned as a pioneer of deep learning, a foundational approach to artificial intelligence. With John J. Hopfield, he received the Nobel Prize in Physics “for foundational discoveries and inventions that enable machine learning with artificial neural networks,” including his invention of the Boltzmann machine using statistical physics techniques. In 2018, he received the Association for Computing Machinery’s A.M. Turing Award, often called the “Nobel Prize in Computing,” alongside Yoshua Bengio and Yann LeCun “for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.” He is a board member of the AI Safety Foundation.