As Andrej Karpathy puts it, LLMs are like a new kind of operating system. The LLM is like the CPU and its context window is like the RAM, serving as the model’s working memory. Just like RAM, the context window has limited capacity to hold the various sources of context. And just as an operating system curates what fits into a CPU’s RAM, we can think of “context engineering” as playing a similar role. Karpathy summarizes this well:
[Context engineering is the] “delicate art and science of filling the context window with just the right information for the next step.”
What are the types of context that we need to manage when building LLM applications? Context engineering is an umbrella that applies across a few different context types:
Instructions – prompts, memories, few-shot examples, tool descriptions, etc.
Knowledge – facts, memories, etc.
Tools – feedback from tool calls
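To make the "curating what fits into RAM" analogy concrete, here is a minimal sketch of assembling these context types into a window under a token budget. All names, the priority ordering, and the 4-characters-per-token heuristic are illustrative assumptions, not part of any specific framework:

```python
# Illustrative sketch: assembling context sources into a bounded window.
# The function names and the ~4-chars-per-token heuristic are assumptions.

def rough_token_count(text: str) -> int:
    # Crude approximation: roughly 4 characters per token.
    return len(text) // 4

def assemble_context(instructions, knowledge, tool_feedback, budget=8000):
    """Fill the window highest-priority first, dropping whatever
    no longer fits -- the OS-curating-RAM analogy in code."""
    sections = [
        ("instructions", instructions),   # prompts, few-shot examples, tool descriptions
        ("knowledge", knowledge),         # retrieved facts, memories
        ("tools", tool_feedback),         # feedback from prior tool calls
    ]
    window, used = [], 0
    for label, items in sections:
        for item in items:
            cost = rough_token_count(item)
            if used + cost > budget:
                return "\n".join(window)  # budget exhausted: stop filling
            window.append(f"[{label}] {item}")
            used += cost
    return "\n".join(window)
```

With a generous budget everything fits; shrink the budget and the lowest-priority sections (here, tool feedback) are the first to be dropped.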