
Context Window Optimizer Prompt

Optimize how you use an AI model's context window by structuring inputs, managing conversation history, and prioritizing information for maximum performance.

By The Prompt Black Magic Team

Use this prompt to design context management strategies for your AI applications. It is especially useful when building chatbots, agents, or tools that process long conversations.

You are a context engineering specialist who optimizes how information is structured within AI model context windows.

Application: [CHATBOT / AI AGENT / DOCUMENT PROCESSOR / CODE ASSISTANT / CUSTOMER SUPPORT BOT]
Model: [CLAUDE / GPT-4 / GEMINI - specify context window size]
Problem: [CONVERSATIONS GET TOO LONG / IMPORTANT CONTEXT GETS LOST / RESPONSES DEGRADE OVER TIME / TOKEN COSTS ARE TOO HIGH]

Design a context window optimization strategy:

1. **Context architecture:** How to structure the system prompt, conversation history, and dynamic context for maximum effectiveness
2. **Priority hierarchy:** Which information should always be in context (pinned), which can be summarized, and which can be dropped
3. **Summarization strategy:** How to compress conversation history without losing critical details (rolling summaries, key-fact extraction, topic-based chunking)
4. **Retrieval augmentation:** When and how to pull in external context (RAG) vs. keeping information in the conversation
5. **Token budget allocation:** How to divide the context window between system prompt, history, retrieved docs, and response space
6. **Degradation prevention:** Techniques to maintain response quality as conversations grow (periodic re-anchoring, context refresh, instruction reinforcement)
7. **Implementation code:** Provide a code skeleton in [Python/TypeScript] that implements this context management strategy
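To make the points above concrete, here is a minimal Python sketch of the kind of context manager such a strategy might produce. All names, the 4-characters-per-token heuristic, and the token figures are illustrative assumptions, not a specific provider's API; a real system would use the model's tokenizer and call the model itself to summarize dropped turns.

```python
from dataclasses import dataclass, field

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: assume ~4 characters per token.
    return max(1, len(text) // 4)

@dataclass
class ContextManager:
    system_prompt: str                            # pinned: always in context
    budget: int = 8000                            # total tokens for the prompt side
    reserve_for_response: int = 1000              # headroom left for the reply
    history: list = field(default_factory=list)   # (role, text) turns
    summary: str = ""                             # rolling summary of dropped turns

    def add_turn(self, role: str, text: str) -> None:
        self.history.append((role, text))
        self._enforce_budget()

    def _used(self) -> int:
        fixed = count_tokens(self.system_prompt) + count_tokens(self.summary)
        return fixed + sum(count_tokens(text) for _, text in self.history)

    def _enforce_budget(self) -> None:
        # Fold the oldest turns into the rolling summary until we fit,
        # always keeping at least the two most recent turns verbatim.
        limit = self.budget - self.reserve_for_response
        while self._used() > limit and len(self.history) > 2:
            role, text = self.history.pop(0)
            # A real system would summarize with the model; we truncate.
            self.summary += f" [{role}: {text[:80]}]"

    def build_messages(self) -> list:
        msgs = [{"role": "system", "content": self.system_prompt}]
        if self.summary:
            msgs.append({"role": "system",
                         "content": "Summary of earlier turns:" + self.summary})
        msgs += [{"role": role, "content": text} for role, text in self.history]
        return msgs
```

The design choice worth noting: the system prompt is never evicted (the priority hierarchy from point 2), while history degrades gracefully into a summary instead of being silently truncated (point 3).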

Include specific token count targets for each section and explain the trade-offs of different approaches.
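As one example of what concrete token targets could look like, the sketch below splits a context window by fixed ratios. The proportions are assumptions for illustration, not recommendations from any provider, and in practice you would tune them per application.

```python
def allocate_budget(window: int) -> dict:
    """Split a context window across sections using illustrative ratios."""
    shares = {
        "system_prompt": 0.05,   # pinned instructions
        "retrieved_docs": 0.30,  # RAG context pulled in per request
        "history": 0.45,         # recent turns plus rolling summary
        "response": 0.20,        # headroom for the model's reply
    }
    return {name: round(window * share) for name, share in shares.items()}

# allocate_budget(128_000)
# → {'system_prompt': 6400, 'retrieved_docs': 38400,
#    'history': 57600, 'response': 25600}
```

The trade-off to surface in the prompt's answer: a larger retrieval share improves grounding but squeezes history, while a larger history share preserves conversational continuity at higher per-request cost.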

When to Use This Prompt

Expected Results

How to Customize This Prompt