The Canadian Artificial Intelligence Safety Institute (CAISI) is set to fund research projects addressing critical AI safety concerns. These projects will focus on misinformation, generative AI risks, and the safety of autonomous systems. The initial ten projects will each receive $100,000, with one initiative led by AI pioneer Yoshua Bengio examining the decision-making processes of large language models.
CAISI, launched last year, is part of a global network of safety institutes created in response to calls for AI regulation. However, a growing global trend is prioritising AI adoption over safety. Despite this shift, Canada aims to leverage its research capabilities to maintain a focus on AI safety, even as the Canadian government plans to emphasise AI's economic potential ahead of the upcoming G7 summit it will host.
CAISI's research program aims to build a community of AI safety researchers and ensure that safety remains a key consideration in AI development and deployment. The program will focus on current and emerging AI risks, such as deepfakes, privacy and security concerns, and bias, with the goal of fostering greater public trust in AI systems.