On March 20, English Wikipedia editors voted 40-2 to ban articles generated by large language models. The ban is comprehensive — editors cannot use AI to produce whole articles, sections, or substantial new content.
Two exceptions survive: LLMs can help refine existing writing if accuracy is manually verified, and they can assist with translation if the editor is bilingual and checks the output. The ban currently covers English Wikipedia only.
The rationale is straightforward: hallucinated AI text enters the encyclopedia, gets scraped by AI companies for training data, and re-enters future models, a feedback loop that degrades both Wikipedia and the AI models trained on it. Editors call this "model collapse" and consider it an existential threat to knowledge integrity.
The vote reflects a broader tension. AI companies depend heavily on Wikipedia as training data, but Wikipedia's volunteer editors now see AI-generated content as pollution. The near-unanimous margin suggests the question isn't controversial within the editing community.
The decision is notable for its pragmatism. Rather than banning AI tools entirely, Wikipedia drew a clear line: AI can help humans write better, but it cannot replace human authorship. It's a sensible framework that other knowledge platforms may follow.