Acko.net: LLMs Produce Forgeries

5 March 2026

What happened

Large language models (LLMs) generate "forgeries" of human output rather than authentic creations, according to Steven Wittens' blog post "The L in 'LLM' Stands for Lying." Wittens argues that this imitative nature leads to "slop-coded" pull requests in open-source projects, forcing maintainers to close contributions, and to new employees injecting "run-of-the-mill mediocrity" by offloading their work to bots. He contends that the results of LLM-driven software development "look, feel and function mostly the same as they ever did: barely."

Why it matters

This perspective challenges the assumed productivity benefits of LLMs for platform engineers and open-source project maintainers. It implies increased code-review burdens and a potential degradation of codebase quality, which may require security architects to impose stricter controls on AI-generated contributions. Procurement teams should also scrutinise "AI-powered" tooling claims, since the article suggests current LLM output may hinder genuine skill development and introduce hidden technical debt. The underlying mechanism, in Wittens' view, is the LLM's imitative nature, which constrains the authenticity and quality of the code it generates.

Source: acko.net
