Andrej Karpathy has proposed a new pattern for LLM-powered knowledge management: instead of re-deriving answers from raw documents every time, have an LLM build and maintain a persistent, interlinked wiki.
The approach addresses a fundamental inefficiency in current RAG systems: every query forces the model to re-process source documents from scratch. By maintaining a living wiki, the LLM instead accumulates structured knowledge incrementally, and that knowledge can be traversed and updated rather than recomputed.
The wiki pattern creates interlinked pages where concepts reference each other, much like Wikipedia. When new information arrives, the LLM updates relevant pages rather than starting from zero.
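The core of the pattern is a small data structure: pages keyed by title, with `[[WikiLink]]` references forming the graph, plus an update step that merges new information into an existing page instead of rebuilding it. Karpathy has not published a reference implementation, so the sketch below is a minimal, hypothetical in-memory version; the `revise` callback stands in for the LLM call that would actually merge new text into a page.

```python
import re

# [[WikiLink]] syntax, as on Wikipedia, defines the edges of the graph.
LINK_RE = re.compile(r"\[\[([^\]]+)\]\]")

class Wiki:
    """Hypothetical sketch of an LLM-maintained wiki (not Karpathy's code)."""

    def __init__(self):
        self.pages = {}  # title -> page text

    def links(self, title):
        """Titles referenced by a page via [[...]] links."""
        return LINK_RE.findall(self.pages.get(title, ""))

    def upsert(self, title, new_info, revise=None):
        """Update a page in place rather than starting from zero.

        `revise(old_text, new_info)` is a placeholder for an LLM call
        that merges the new information into the existing page; the
        default fallback simply appends.
        """
        old = self.pages.get(title, "")
        merged = revise(old, new_info) if revise else (old + "\n" + new_info).strip()
        self.pages[title] = merged

    def traverse(self, start, depth=1):
        """Collect pages reachable from `start` by following links."""
        seen, frontier = {start}, [start]
        for _ in range(depth):
            frontier = [t for f in frontier
                        for t in self.links(f) if t not in seen]
            seen.update(frontier)
        return {t: self.pages.get(t, "") for t in sorted(seen)}

wiki = Wiki()
wiki.upsert("RAG", "Retrieval-augmented generation; see [[Vector Search]].")
wiki.upsert("Vector Search", "Nearest-neighbour lookup over embeddings.")
print(wiki.links("RAG"))                 # ['Vector Search']
print(list(wiki.traverse("RAG")))        # ['RAG', 'Vector Search']
```

In a real system the `revise` callback would prompt the model with the old page and the new information, and traversal would feed linked pages into the context window instead of raw source documents.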
Karpathy argues this is closer to how human experts work: they maintain mental models that are refined over time, rather than re-reading every textbook for each question. The pattern is gaining traction in the developer community as a practical alternative to purely retrieval-based approaches.