You had a breakthrough conversation with an AI last week. A real one — the kind where the pieces clicked, a framework emerged, a decision got made, and for twenty minutes you could see the whole architecture of the thing you're building.
Then you closed the tab.
Three days later, you opened a new chat and started from scratch. The context was gone. The framework was a faint memory. The decision you made? You're not even sure of the exact reasoning anymore. So you re-explain, re-derive, and re-discover — or worse, you diverge from what you'd already figured out without knowing it.
This is the default state of AI-assisted work in 2026. And it's not a bug in any particular tool. It's a missing layer in the stack.
# The Two Laws of Meaning
After a year of building tools for AI-assisted thinking, I've arrived at two principles that I believe are as fundamental to knowledge work as the laws of thermodynamics are to physics:
Law 1: Recursion compounds. When you feed an idea back through a system — refine a framework against new evidence, test a decision against a different context, let an extraction seed generate new connections — the meaning gets denser. Not bigger. Denser. More precise. More connected. More useful.
Law 2: Summarization collapses. When you compress meaning to save time — bullet-point a nuanced conversation, flatten a framework into a one-liner, lose the reasoning behind a decision — you don't preserve meaning at reduced resolution. You destroy the signal that made it meaningful in the first place.
These two laws are in direct tension with how most AI tools work today. Every "conversation summary" feature, every "here are the key takeaways" button, every knowledge management system that reduces your 10,000-word breakthrough session to five bullet points is operating under Law 2. It feels productive in the moment. But it's burning your intellectual capital.
# What Recursion Actually Looks Like
Consider what happens when you run the same source material through multiple extraction passes instead of summarizing it once.
First pass: a conversation transcript goes through nine categories of extraction — decisions made, frameworks developed, golden passages worth preserving verbatim, definitions coined, exemplars of quality, actions to take, questions left open, offers of actionable next steps, and emergence patterns that are forming but not yet named.
The output isn't a summary. It's a structured map of meaning, where every extracted element points back to the exact words in the original conversation that generated it.
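To make the shape of that map concrete, here is a minimal sketch in Python of what one extracted element might look like. The nine category names come straight from the list above; the class and field names are a simplified illustration, not the production schema:

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    """The nine extraction categories named above."""
    DECISION = "decision"
    FRAMEWORK = "framework"
    GOLDEN_PASSAGE = "golden_passage"
    DEFINITION = "definition"
    EXEMPLAR = "exemplar"
    ACTION = "action"
    OPEN_QUESTION = "open_question"
    OFFER = "offer"
    EMERGENCE = "emergence"

@dataclass
class Extraction:
    """One element of meaning, pointing back to the exact source words."""
    category: Category
    verbatim: str          # the original language, preserved exactly
    label: str             # a short handle for navigation, never a replacement
    source_id: str         # which conversation or document this came from
    char_start: int        # provenance: character offsets into the source
    char_end: int
    speaker: str | None = None
    timestamp: str | None = None   # ISO-8601, when the source has one
```

Note the asymmetry: `label` exists for navigation, `verbatim` is the payload. A summary would keep the first field and throw away the second.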
Second pass: you bring that structured map into a new conversation. The AI doesn't have your old context. But it has something better — your meaning, organized by category, with provenance. When it suggests an approach, it can reference the decision you made last week and the reasoning behind it. When it notices a pattern forming across multiple conversations, it can name it as emergence.
Third pass: the emergence patterns from pass two become explicit frameworks in pass three. The open questions from pass one get answered and create new decisions. The golden passages accumulate into a vocabulary — your organizational language, documented and traceable.
This is recursion. The meaning gets denser with each pass. Nothing is lost. Everything connects.
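In code, the compounding loop is almost embarrassingly small. A sketch, reusing the `Extraction` type from above and assuming some `extract_fn` that wraps an LLM call with the nine-category prompt (that wrapper is the hypothetical part):

```python
from typing import Callable

# extract_fn takes a raw source plus the meaning accumulated so far,
# and returns new extractions. In practice it wraps an LLM call.
ExtractFn = Callable[[str, list[Extraction]], list[Extraction]]

def compound(sources: list[str], extract_fn: ExtractFn) -> list[Extraction]:
    """Run extraction over each source, feeding accumulated meaning back
    in as context. Later passes can cite earlier decisions, answer
    earlier open questions, and name patterns as emergence."""
    meaning: list[Extraction] = []
    for source in sources:
        # Recursion in the plain sense: output becomes input.
        meaning.extend(extract_fn(source, meaning))
    return meaning
```

The summarization version of this function would replace `meaning` with a shrinking string. This version only ever adds structure; nothing is collapsed on the way through.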
# Why This Matters Now: The Agent Problem
If this were just about personal note-taking, it would be a nice-to-have. But we've entered a moment where the stakes are much higher.
Companies are deploying AI agents at scale. An agent monitors customer support. Another analyzes the sales pipeline. Another tracks engineering velocity. Each agent is powerful individually. But collectively, they're flying blind — because they have no shared understanding of what the organization knows, has decided, or is becoming.
The support agent doesn't know the engineering team decided to deprecate the feature customers are complaining about. The sales agent doesn't know the support team is seeing a churn pattern. The engineering agent doesn't know the sales team promised a feature that conflicts with last month's roadmap decision.
The missing piece isn't more data. These agents have access to all the data. The missing piece is structured meaning — decisions that are identified as decisions, frameworks that are identified as frameworks, emergence patterns that are tracked across conversations over time. A semantic layer that agents can read from and write to.
Without that layer, agents hallucinate. Not because they're bad at language — because they're building on summaries instead of provenanced, categorized, structured meaning.
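What does that layer look like from an agent's side? A minimal sketch, reusing the types above; the class and its methods are illustrative, not an existing API:

```python
class SemanticLayer:
    """A shared store of structured meaning that every agent reads from
    and writes to, instead of re-deriving context from raw data."""

    def __init__(self) -> None:
        self._store: list[Extraction] = []

    def write(self, extraction: Extraction) -> None:
        # e.g. the engineering agent records the deprecation decision,
        # verbatim reasoning and provenance included.
        self._store.append(extraction)

    def read(self, category: Category, topic: str) -> list[Extraction]:
        # e.g. the support agent asks for every decision touching "deprecat".
        return [e for e in self._store
                if e.category is category and topic.lower() in e.verbatim.lower()]
```

In the scenario above, one call to `read(Category.DECISION, "deprecat")` before drafting the customer reply is the difference between a support agent that contradicts engineering and one that cites it.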
# Aggregation Is Not Intelligence
There's a pattern emerging right now where companies build internal tools that aggregate all their data into one place — every Slack message, every Notion edit, every email thread — and then point an AI at the pile. "Here's everything. Now answer my questions."
This is aggregation. It's genuinely useful. But it's not intelligence.
Ask an aggregator "what happened yesterday" and you get a chronological list of events. Ask a meaning-aware system the same question and you get: three decisions were locked (with the reasoning that led to each), one emergence pattern is forming (with evidence from three separate conversations), and two open questions remain unresolved (with the context needed to resolve them).
Same data. Different layer. The difference is extraction — the act of identifying what kind of meaning lives in the raw material, and preserving it with enough provenance that an agent can trust it as a source of truth rather than treating it as another document to summarize.
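Here is the same contrast as code, assuming the ISO-8601 timestamps from the earlier sketch. The aggregator answer is a sort; the meaning-aware answer is a group-by over categories identified at extraction time:

```python
from datetime import date

def what_happened(store: list[Extraction], day: date) -> dict[str, list[Extraction]]:
    """An aggregator would return the day's events in order. A
    meaning-aware store answers by kind: decisions locked, emergence
    patterns forming, questions still open."""
    todays = [e for e in store
              if e.timestamp and date.fromisoformat(e.timestamp[:10]) == day]
    return {
        "decisions": [e for e in todays if e.category is Category.DECISION],
        "emerging":  [e for e in todays if e.category is Category.EMERGENCE],
        "open":      [e for e in todays if e.category is Category.OPEN_QUESTION],
    }
```

The filtering is trivial; the work was done upstream, at extraction, when each element was tagged with what kind of meaning it is.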
# The Method
The practice I've been developing — and the tool I'm building to systematize it — rests on a few principles:
Verbatim is sacred. Extractions always point back to the original language. Provenance means character positions, line numbers, speaker attribution, timestamps. When an agent references a decision, it can show you the exact words that formed it. This is the antidote to hallucination; a minimal version of the check is sketched after these principles.
Nine categories, not one. A single conversation might contain decisions, frameworks, golden passages, definitions, and emergence patterns simultaneously. Treating all of that as "notes" or "summary" loses the structural information that makes it useful. Each category has different properties, different lifetimes, and different relationships to other meaning.
Your keys, your compute, your meaning. The extraction methodology is the product. The compute that runs it should be yours — your LLM API keys, your model choice, your data flowing through your own provider under your own agreements. The meaning you extract should be portable — markdown with provenance, downloadable always, never locked in a proprietary format.
Surface before compute. Show the user their content immediately, then process. Intelligence should enhance what you see, not replace your ability to see it.
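The "verbatim is sacred" principle is also the easiest to make mechanical. Under the same illustrative types as before, an extraction is trusted only if its quoted text actually sits at the recorded offsets in the original source:

```python
def verify(extraction: Extraction, sources: dict[str, str]) -> bool:
    """Return True only if the extraction's verbatim text matches the
    source at its recorded character offsets. Anything that fails is
    treated as a claim, not a source of truth."""
    source = sources.get(extraction.source_id)
    if source is None:
        return False
    return source[extraction.char_start:extraction.char_end] == extraction.verbatim
```

An agent that can only cite extractions passing `verify` cannot misquote the record, whatever else it gets wrong.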
# The Recursive Era
On a recent episode of the All In Podcast, "recursive" was the word used to describe the defining characteristic of the next generation of AI systems. Agents that check each other's work. Systems that run nightly passes to improve. Outputs that become inputs for deeper processing.
This is the transition from linear AI (prompt → response → done) to recursive AI (prompt → response → feed back → refine → compound). And it requires infrastructure that most organizations don't have yet — a place where meaning lives, grows, and remains trustworthy across sessions, across agents, across time.
Summarization collapses. Recursion compounds.
The choice between them isn't a feature preference. It's an architectural decision that determines whether your organization's AI investment creates lasting intelligence or generates an ever-growing pile of context that nobody — human or agent — can navigate.
# What's Next
I've been building this for a year. Not as an enterprise product (though the enterprise implications are real) — as a practice. A daily methodology for preserving and compounding the meaning generated in AI-assisted work.
The tool is called Multiplist. It runs nine categories of extraction across any conversation or document, preserves verbatim provenance, and feeds structured meaning back into future sessions. I use it every day, across every domain I work in.
Today I'm opening the beta to a small group. If you've felt the frustration of AI amnesia — if you've lost frameworks, forgotten decisions, re-derived insights you know you already had — I built this for you.
Reach out to me on LinkedIn and I'll send you an invite.
Summarization collapses. Recursion compounds. Let's build on what we've already thought.
Amy Blaschke is the founder of Multiplist. She writes about recursive methodology, neurodivergent-first design, and what happens when you stop letting AI forget.