A recent study by METR challenges the assumption that AI coding tools automatically boost productivity for experienced developers. The study involved 16 developers working on real-world tasks in open-source repositories. Surprisingly, using AI tools actually increased task completion time by 19%, contradicting the developers' initial expectation of a 24% reduction with AI assistance. Even after the study, developers still believed AI had sped them up by 20%.
In the study, developers primarily worked with Cursor Pro and Claude 3.5/3.7 Sonnet, though they were free to use any AI tools they chose, or none at all. The researchers caution that the findings are a snapshot in time, influenced by factors such as the developers' deep familiarity with their repositories and the maturity of the projects. While AI coding tools have improved rapidly, the research highlights the need for caution about promised productivity gains.
The study suggests that developers may spend more time prompting AI, waiting for outputs, and reviewing AI-generated code than actively coding. It serves as a reminder that the impact of AI on software development requires ongoing evaluation. Other investigations have also indicated that AI tools can introduce errors and security vulnerabilities into generated code.