Canada Invests in AI Safety

9 June 2025

The Canadian Artificial Intelligence Safety Institute (CAISI) is set to fund research projects addressing critical AI safety concerns. These projects will focus on misinformation, generative AI risks, and the safety of autonomous systems. The initial ten projects will each receive $100,000, with one initiative led by AI pioneer Yoshua Bengio examining the decision-making processes of large language models.

CAISI, launched last year, is part of a global network of safety institutes created in response to calls for AI regulation. However, there's a growing global trend towards prioritising AI adoption over safety. Despite this shift, Canada aims to leverage its research capabilities to maintain a focus on AI safety. The Canadian government plans to emphasise AI's economic potential, especially as it prepares to host the upcoming G7 summit.

CAISI's research program aims to build a community of AI safety researchers and ensure that safety remains a key consideration in AI development and deployment. The program will focus on current and emerging AI risks, such as deepfakes, privacy and security, and bias, with the goal of fostering greater public trust in AI systems.

Source: cp24.com

Tags: AI, artificial intelligence, intelligence, AI safety, Canada, research, AI ethics
Related:
  • Microsoft Ranks AI Model Safety
  • Bengio unveils LawZero AI lab
  • LawZero: Safer AI Research
  • Vanguard, UofT: AI Labs