AI Tips #10: Master AI Summaries with the Chain of Density Prompting Approach
Explore a simple yet powerful prompting approach for obtaining richer, clearer, and more readable summaries from AI

Ask anyone who has used AI (e.g., ChatGPT, Claude, Gemini) to summarise documents, and you'll hear a common frustration: getting the right balance of detail and readability feels like a constant challenge.
Sometimes the AI gives a surface-level overview that misses crucial details. Other times, it produces summaries so packed with information that they become hard to follow. And important points frequently get overlooked entirely as the AI fixates on the most obvious information.
But what if there was a simple way to get summaries that are both comprehensive and readable? That's where Chain of Density (CoD) prompting comes in. CoD is a straightforward technique that guides AI through a process of progressive refinement, leading to summaries that are richer in content while remaining clear and accessible.
How Chain of Density Works
The CoD prompting technique was developed by researchers from Columbia University, Salesforce, and MIT in late 2023. Their key insight was that summaries could be progressively refined to pack in more information without increasing length, analogous to how skilled human editors distill complex ideas into clear, concise text.
By turning this insight into a structured prompting technique, they created a method that anyone can use to get better summaries from AI.
CoD can be likened to packing a suitcase efficiently. You start with the big items, then you find smart ways to fit in more without making the suitcase bigger. Instead of asking AI for a perfect summary in one shot, we guide it through a simple process:
Get a basic summary (nothing fancy, just the main points)
Identify important details that were missed
Ask the AI to add these details while keeping the summary the same length
Repeat until satisfied (typically 5 rounds of iteration)
Each round makes the summary more complete while staying clear and concise.
The magic happens when we structure the prompt so that the AI runs every iteration itself, with no human intervention between rounds.
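Before looking at that single-prompt version, it can help to see what the manual loop looks like if you drive it yourself through an API. Below is a minimal sketch using the OpenAI Python SDK; the function names, prompt wording, and model choice are my own illustrative assumptions, not part of the original CoD paper.

```python
# A minimal sketch of the manual CoD loop, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment.
# Function names, prompt wording, and the model choice are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    """Send a single prompt and return the model's reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def densify(document: str, rounds: int = 5, model: str = "gpt-4o-mini") -> str:
    """Step 1: get a basic summary; later rounds fold in missed details at a fixed length."""
    summary = ask(model, f"Summarise the following in 80-100 words:\n\n{document}")
    for _ in range(rounds - 1):
        summary = ask(
            model,
            "List 1-3 important points from the document that are missing from "
            "the summary, then rewrite the summary to include them while keeping "
            f"the same length.\n\nDocument:\n{document}\n\nSummary:\n{summary}",
        )
    return summary
```

The structured prompt in the next section collapses this whole loop into one request, which is what makes the technique so convenient in practice.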
Chain of Density Example: Academic Paper Summary
Let me share a practical example of how to implement Chain of Density, including the exact prompt that makes it work:
# Role & Context
You are an assistant whose task is to analyse an academic paper and explain its
core ideas in plain, everyday English. Any term that might sound technical must be defined in non-technical language.
# Workflow
1. **Request the Paper**
   * If the full text (or a detailed excerpt) isn’t already provided, ask the user for it.
2. **Generate Five Iterative Summaries**
   * Automatically produce five progressively refined summaries. Each round:
     - **Find 1–3 New Points:** List 1–3 specific key points (each ≤5 words) from the paper that haven’t yet appeared, using everyday language.
     - **Revise the Summary:** Rewrite the previous summary to include all earlier points plus these new ones, keeping exactly the same word count by fusing, compressing, or trimming redundant phrases.
   * **Do not pause for feedback** — complete all five rounds in one go.
3. **Length & Style**
   * Aim for 80–100 words (4–6 sentences) in each summary.
   * Keep each summary fully self-contained and jargon-free; define any technical term simply.
4. **Output Format**
   * Present all five rounds in order, each labeled and formatted like this:
     ```
     Round 3
     Missing Entities: point A, point B
     Summary: <80–100-word summary incorporating all points so far>
     ```
# Guidelines
* Never drop prior points; only add new ones each round.
* Prioritise clarity and brevity.
* Run all rounds automatically without user prompts.
Using the above prompt with OpenAI’s o4-mini model, I summarised the paper “Generative Agents: Interactive Simulacra of Human Behavior” (a really fascinating paper!) and obtained the following results:
Round 1 Summary
Missing Entities: believable proxies of people, daily routines
Summary:
This paper introduces generative agents — computer programs that act like people in a virtual world. These agents go about normal daily routines: they wake up, cook breakfast, work on tasks, and wind down in the evening. They form basic opinions, notice each other, and hold simple conversations. By treating these agents as believable stand-ins for humans, the authors show how interactive applications — like games or training simulations — can feel more lifelike and responsive to user commands.
(97 words)
Round 2 Summary
Missing Entities: memory log, reflection
Summary:
This paper introduces generative agents — computer programs that act like people in a virtual world. These agents go about normal daily routines: they wake up, cook breakfast, work on tasks, and wind down in the evening. Each agent keeps a memory log, a written record of everything they experience in simple language. Periodically, they pause to reflect, meaning they look back over recent memories to draw higher-level insights about themselves. They form basic opinions, notice each other, and hold simple conversations.
(90 words)
Round 3 Summary
Missing Entities: retrieval, planning
Summary:
This paper introduces generative agents — computer programs that act like people in a virtual world. These agents go about normal daily routines: they wake up, cook breakfast, work on tasks, and wind down in the evening. Each agent keeps a memory log, a written record of everything they experience in simple language, and uses retrieval to pull relevant memories when deciding what to do. They plan their day by turning broad goals into step-by-step actions. Periodically, they reflect on past events to draw higher-level insights about themselves. They form opinions, notice each other, and converse.
(94 words)
Round 4 Summary
Missing Entities: emergent social behaviors, user interaction
Summary:
This paper introduces generative agents — computer programs that act like people in a virtual world. These agents perform daily routines: waking up, cooking breakfast, working on tasks, and winding down. Each agent keeps a memory log of experiences in plain language, retrieves relevant memories to shape choices, and plans step-by-step actions for the day. Periodically, they reflect on those memories to gain insights. Beyond solo behavior, they show emergent social behaviors: spreading news, forming friendships, and coordinating events without direct scripting. Users can also interact with them naturally, giving commands or changing the environment.
(96 words)
Round 5 Summary
Missing Entities: architecture components, evaluations
Summary:
This paper introduces generative agents — computer programs that act like people in a virtual world. These agents perform daily routines — waking up, cooking breakfast, working on tasks, and winding down. Each agent’s architecture has three parts: a memory log that records experiences in plain language, a retrieval system that finds relevant memories when choosing actions, and a reflection module that generates higher-level insights from those memories. They plan step-by-step actions and exhibit emergent social behaviors: sharing news, making friends, and organising group events. The authors evaluate them in controlled interviews and long-term simulations, showing they behave believably.
(98 words)
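If you would rather run this outside a chat interface, a single API call is enough, because the prompt itself drives all five rounds. Here is a minimal sketch using the OpenAI Python SDK; the file names are placeholders for the template above and a plain-text copy of the paper.

```python
# A minimal sketch of running the full CoD prompt in one API call, assuming
# the OpenAI Python SDK. "cod_prompt.txt" and "paper.txt" are placeholder
# file names you would supply yourself.
from openai import OpenAI

client = OpenAI()

cod_prompt = open("cod_prompt.txt", encoding="utf-8").read()
paper_text = open("paper.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="o4-mini",  # the model used in this example; any capable chat model works
    messages=[
        {"role": "system", "content": cod_prompt},
        {"role": "user", "content": paper_text},
    ],
)

# The prompt runs all five rounds itself, so one response contains every
# labelled round and its summary.
print(response.choices[0].message.content)
```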
Making Chain of Density Work For You
Whether you're dealing with technical documentation, academic papers, legal documents, or long-form articles, CoD can help distill complex information into clear, comprehensive summaries without losing critical details.
To get started, you can use the prompt template shared above, customising these key elements for your needs (a parameterised sketch follows the list below):
Summary Length: Set the word count that works for your purpose (e.g., 80-100 words)
Language Style: Specify your target audience (e.g., "non-technical readers" or "senior executives")
Number of Rounds: Define the number of iterations you would like (usually 5 rounds works well)
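If you reuse the template often, it can help to parameterise these three elements in code. The sketch below is illustrative: the function name and the condensed wording are my own, and in practice you would template the full prompt shared earlier rather than this abbreviated version.

```python
# A minimal sketch of parameterising the CoD template. The function name and
# condensed wording are illustrative, not the full prompt from this article.
def build_cod_prompt(
    summary_words: str = "80-100",
    audience: str = "non-technical readers",
    rounds: int = 5,
) -> str:
    return f"""# Role & Context
You are an assistant whose task is to analyse a document and explain its core
ideas in language suited to {audience}.

# Workflow
1. Request the document if it is not already provided.
2. Automatically produce {rounds} progressively refined summaries. Each round,
   list 1-3 new key points not yet covered, then rewrite the previous summary
   to include them while keeping exactly the same word count.
3. Aim for {summary_words} words per summary and complete all rounds in one go.
"""

# Example: a shorter, executive-facing variant run over three rounds.
print(build_cod_prompt(summary_words="50-70", audience="senior executives", rounds=3))
```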
Beyond Summarisation
The CoD technique demonstrates something powerful about working with AI: the best results often come not from trying to get everything perfect in one shot, but from building iteration into our prompts.
By instructing the AI to automatically refine its own output through multiple rounds, we can achieve results that would be tedious or impractical to get through manual back-and-forth.
This principle extends far beyond summarisation. Consider how you might apply automated iteration to other tasks:
Writing: Have the AI progressively enhance clarity and engagement
Analysis: Build in rounds that dive progressively deeper into data
Problem Solving: Incrementally dig deeper into problems and solutions
For teams working with AI, this points to an important strategy: look for opportunities to build automated iteration into your prompts. Rather than thinking of prompts as one-shot instructions, treat them as recipes for progressive improvement. The result is often not just better output, but a more efficient workflow.
Justin Tan is passionate about supporting organisations as they navigate disruptive change and move towards sustainable, robust growth. He founded Evolutio Consulting in 2021 to help senior leaders upskill and accelerate AI adoption within their organisations through AI literacy and proficiency training, and he also works with clients to design and build bespoke AI solutions that drive growth and productivity for their businesses. Alongside his consultancy work, he is an Adjunct Professor at University College London, where he lectures on digital health and AI. If you're pondering how to harness these technologies in your business, or simply fancy a chat about the latest developments in AI, why not reach out?