Despite advances in artificial intelligence, AI systems still struggle with puzzles that humans can solve quickly. This limitation highlights the gap between today's AI and artificial general intelligence (AGI): AGI requires the ability to generalise and adapt to new situations from minimal information, a skill that remains difficult for current systems.
One test used to evaluate AI's ability to generalise is the Abstraction and Reasoning Corpus (ARC), in which solvers must deduce a hidden rule from a few coloured-grid examples and apply it to new grids. The ARC Prize Foundation has developed newer tests, including ARC-AGI-3, designed specifically to evaluate AI agents in video games. These tests show that while AI excels at tasks requiring accumulated human expertise, it often fails to match human intuition, adaptability, and shifts of perspective when problem-solving. Its struggles with these puzzles indicate that current AI models remain far from true AGI.
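The kind of reasoning an ARC task demands can be sketched with a deliberately simple toy example: given one example input/output grid pair, infer the hidden rule and apply it to a new grid. The grids, the rule (a per-colour substitution), and the function names below are invented for illustration; real ARC tasks involve far richer transformations than this.

```python
def infer_colour_map(example_in, example_out):
    """Infer a cell-wise colour substitution from one example pair."""
    mapping = {}
    for row_in, row_out in zip(example_in, example_out):
        for a, b in zip(row_in, row_out):
            # Each colour must map consistently, or this rule doesn't fit.
            if mapping.setdefault(a, b) != b:
                raise ValueError("rule is not a simple colour map")
    return mapping

def apply_colour_map(grid, mapping):
    """Apply the inferred substitution to every cell of a grid."""
    return [[mapping.get(c, c) for c in row] for row in grid]

# One demonstration pair: colour 1 becomes 3, colour 2 becomes 4.
example_in  = [[1, 1, 0],
               [0, 2, 2]]
example_out = [[3, 3, 0],
               [0, 4, 4]]

rule = infer_colour_map(example_in, example_out)  # {1: 3, 0: 0, 2: 4}
test_in = [[2, 1],
           [1, 0]]
print(apply_colour_map(test_in, rule))  # [[4, 3], [3, 0]]
```

A human glances at the example pair and spots the recolouring instantly; the point of ARC is that the space of possible hidden rules is vast, so hard-coding one candidate rule like this does not generalise, and that generalisation is precisely where current AI systems fall short.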
AI is designed to process data and follow explicit rules, but it often lacks the human capacity to consider intent, step back to see the bigger picture, and learn from experience. Humans quickly make connections through cultural knowledge, wordplay, and shared experience, while AI models struggle with creative leaps and unexpected links. This difference underscores the importance of incorporating human playfulness and creativity into AI design to help bridge the gap between AI and human-level intelligence.