An AI Workflow to Slow Down & Reflect in the Age of Inference-Speed
There's a lot going on in the AI space, specifically around agentic coding, or agentic engineering, or whatever new term pops up in a couple of weeks. Every day I wake up to at least ~20 Twitter threads, articles, or videos on topics like "Guide to 10x your coding," "Ship like a team of five," and "Get 10x better results from Claude Code," techniques like running 5 Claudes in parallel, and stories about "Shipping at Inference-Speed."
It's all crazy & exciting but frankly, too overwhelming at the same time. It feels like the world is moving at such speed that I'm barely able to catch up.
I mean, as exciting as "Shipping at Inference-Speed" sounds, what does it really mean for an engineer like me, who's still trying to get better at a tech stack I've already spent years in, who's still learning the craft of engineering, who still loves to handcraft interfaces?
I've always been confident about my wits as an engineer. I lowkey miss the times when we used to figure things out by reading docs, asking questions in Discord communities, or digging through StackOverflow. There was a sense of joy & pride when solutions weren't a prompt away. But lately, I've been feeling like my skills are prolly falling behind the 10x engineering hype as I scroll through Twitter or LinkedIn.
I know what people will say: "Problem solving isn't gone. Writing code was never the goal, it was just the means." Sure, AI presents new problems to solve. But maybe I'm not fully ready for that world yet.
Not sure about others but I personally feel like I need to slow down!
So this post is not about a sophisticated workflow that will 10x or 100x your agentic-engineering process, or about generating code at inference speed. It's about a very simple workflow that helps you "slow down & reflect". It might seem counterproductive at first, but over time the value compounds.
Some Context About the Problem
The other day, I was in a chat session with my agent, debugging a weird build issue in my monorepo setup using Turbopack. I was consistently getting build errors during server deployment on Render. Since agents are biased toward their pre-training data, I suspect mine had little framework- or platform-specific context, and it didn't pause to ask clarifying questions either.
So I provided the relevant docs, the error logs from Render, and other useful context. We tried everything. Changed build commands. Modified package.json. Adjusted workspace configurations. Tweaked tsconfig.json. Each attempt meant committing to git, waiting for the build to complete, and hitting a new error.
It took a solid 3-4 hours, but eventually it worked. Like magic!
But I had no idea what actually fixed it. I'd touched multiple files. Changed several lines of code. Not a lot individually, but enough that I couldn't tell which change mattered.
I couldn't explain:
- What the actual problem was
- What we tried that failed (and why)
- What finally worked (and why)
- What I'd do differently next time
All of those insights lived inside a chat session that's hard to review later and effectively inaccessible.
But this isn't just about this particular debugging session. The same problem applies to any session where you are building a new feature or working on something important with an agent. Architectural decisions, coding patterns it followed, commands or scripts it ran to test things, everything you didn't know before just feels like magic.
Why This Matters
Currently, all of us are spending more time in the chat window than in the editor. I personally use AI coding assistants every day. Antigravity, mostly. And OpenCode too. Wait, I didn't mention "Claude Code"? Before you get mad at me, let me tell everyone that it's way, way, way better than anything else, but at this point I'm just too financially broke to afford it. These tools are incredible at getting things done. But then I realized:
I wasn't learning at the speed at which I was generating code.
And as an engineer, I don't want my AI to replace my thinking. I want to learn & work with the agent, rather than just consume its output. I believe every agent chat session contains wisdom worth preserving.
When an AI debugs a tricky problem or implements a feature, it discovers valuable insights:
- Patterns that worked
- Approaches that failed
- The right sequence of tool calls
- Context that mattered
- Dead ends to avoid
Then the session ends. All that knowledge evaporates. I, as an engineer, learn nothing. The AI agent learns nothing.
2 months later, I face a similar issue. I start from zero again.
So I want to solve this, at least for myself.
What I Tried First
First, I added the section below to my agents.md file:
```
## Learning & Documentation

When debugging, building features, or making architecture decisions, create a learning doc in `docs/learnings/` at the end of the session.

These docs should be:

- Succinct and to the point, not comprehensive
- Written like notes, not like AI-generated docs
- Focused on _what the problem actually was_, _what we tried_, _what actually fixed it_, and _why_
- Include inline comments or a small "Why?" note whenever something non-obvious is happening in the code

Name files concisely, the shorter the better, as long as it's clear what it's about.

If something feels "magical" or ambiguous during the session, flag it in the doc. During long debug loops, you can keep a running "attempt log" in the chat as we go, so we can distill it into the final learning doc even if the session is long. The point is: nothing should feel like magic after we're done.
```
Based on what I knew about agents.md, I thought the agent would read this file at the start of every conversation. And it'd just... do it.
I had this in place for 3 days. The AI never created any docs. Not at all.
Turns out, passive instructions are useless. The AI reads agents.md, nods politely, then immediately forgets about it the moment it starts solving your actual problem. There's no trigger. No reminder. It's just context that gets buried under 50 other things.
I needed something I could actually invoke. Something explicit.
What Actually Worked
Then I learned about agent workflows. Most AI coding assistants support them. They're just markdown files in a .agent/workflows/ directory.
Here's the idea: you write a workflow file with instructions. Then you invoke it with a slash command. The AI reads it and follows the steps.
So I created /document-session.
Now, at the end of any session, I type /document-session. The AI creates a structured doc capturing what we learned. That's it.
No hoping it remembers. No passive instructions. Just a command and a result.
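Concretely, the whole setup is just a markdown file sitting in the repo. Here's roughly the layout I'm assuming (the exact directory name, and how the file name maps to the slash command, can differ between assistants):

```
my-project/
├── .agent/
│   └── workflows/
│       └── document-session.md   # invoked in chat as /document-session
└── docs/
    ├── learnings/
    └── decisions/
```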
How It Works
The workflow does two things:
First, it figures out what kind of session this was.
Was this a new feature implementation, or an architectural decision? Like "Why did we choose Fastify over Next.js API Routes?"
Or was it a learning/discovery from a debugging session? Like "How did we fix this better-auth session validation issue?"
Then, it creates the right kind of document.
Decisions go in docs/decisions/. They're timeless. No date prefix. Just the decision name.
Learnings go in docs/learnings/. They're date-prefixed so I can see when I learned something.
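To make the naming convention concrete, the two folders end up looking something like this (all file names and dates here are made up for illustration):

```
docs/
├── decisions/
│   ├── fastify-over-nextjs-api-routes.md
│   └── monorepo-build-tooling.md
└── learnings/
    ├── 2025-01-12-render-monorepo-build-failure.md
    └── 2025-02-03-better-auth-session-validation.md
```

Date-prefixed learnings sort chronologically on their own, while decision names stay stable even as the code around them changes.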
Each type has a template.
For learnings:
- What was the problem?
- What did we try that failed?
- What actually worked?
- Why did it work?
- What should I remember next time?
For decisions:
- What were we trying to solve?
- What options did we consider?
- What did we choose and why?
- What are the trade-offs?
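If it helps to visualize, a learnings doc produced from that template is just a small markdown file along these lines (a generic skeleton, not one of my actual docs):

```
# <short, searchable title>

## What was the problem?
One or two sentences on what was actually broken, in plain words.

## What did we try that failed?
- Attempt 1, and why it didn't work
- Attempt 2, and why it didn't work

## What actually worked?
The specific change that fixed it.

## Why did it work?
The underlying reason, so nothing feels like magic later.

## What should I remember next time?
- The one-line takeaway worth grepping for later
```

Decision docs follow the same shape, just with the four decision questions as headings.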
The AI fills in the template. I review it. If I don't understand something, I ask. We iterate until it makes sense. Then I commit it.
Now that knowledge is searchable. I can grep for it. I can reference it. I can share it.
Here's What It Looks Like
Here's a real learnings doc that this workflow generated during one of my Better Auth debugging sessions:
The key insights:
- Better Auth uses cookie-based sessions, not Bearer tokens
- The `__Secure-` cookie prefix requires HTTPS (won't work on localhost)
- The database only stores the token ID, not the full token with signature
Next time I integrate Better Auth with a backend framework, I'll search docs/learnings/ and find this. I won't repeat the same debugging loop.
This wouldn't have been possible with chat history.
What I've Learned Using This
I've been doing this for a week or so and here's what I realized:
Writing things down exposes what you don't understand.
When the AI creates the doc, I have to review it. That's when I realize: "Wait, I don't actually know why that worked."
Without the workflow, I would just move on. Code works, ship it.
Failed attempts are more valuable than the solution.
The template forces me to document what didn't work. That's the context that makes the fix make sense.
"We tried X, it failed because Y" is more valuable than "The solution is Z."
I actually go through these docs when necessary.
I've referenced past learnings at least 2-3 times already. When I'm debugging something, I search docs/learnings/ first.
I've never once searched my chat history. It's too noisy. Too unstructured.
It's not just for me, I hope.
If I ever work with other engineers, they can read these docs. They'll understand why I made certain choices. What gotchas to watch for. What I've already tried.
That's institutional knowledge. Right now, it's just me in the codebase. But it won't always be.
How This Relates to Other Approaches
If you've read Addy Osmani's excellent piece on Self-Improving Coding Agents, you will recognize a similar insight.
Addy talks about keeping a progress.txt file: a live log that agents append to after each iteration. It's their memory between sessions. When an agent restarts, it reads progress.txt to remember what it tried before.
His approach solves agent memory. This one solves human learning.
- Addy's progress.txt: agent memory between iterations (machine-readable)
- My /document-session: human learning between sessions (human-readable)
Both solve the same root problem: without memory, you repeat mistakes.
For agents, that means they repeat failed attempts. For humans, it means we don't learn from our sessions.
The insight is the same: write it down, or lose it.
If you're running autonomous agent loops overnight (like Addy describes), you'll want both: a progress.txt for the agent and maybe /document-session docs for yourself. That way, the agent remembers what it did, and you understand why it worked.
Try It Yourself
This workflow is pretty simple. It's just a template and a trigger. But that's enough to turn evaporating knowledge into something you can actually use.
You don't need my exact workflow. Adapt it to your needs.
The core idea:
- Create a .agent/workflows/ directory
- Add a markdown file with a template
- Invoke it with a slash command at the end of sessions
Here's a minimal version:
```
---
description: Document learnings from this session
---

# Document Session

At the end of a session, create a learning doc.

## Template

- What was the problem?
- What did we try?
- What worked?
- Why?
- What should we remember?
```
That's it. The AI will follow the template when you invoke it with a slash command like /document-session.
Here's my full workflow document that you can use:
Start Small
You don't need a perfect system. You don't need comprehensive documentation.
You just need to preserve the insights that would otherwise evaporate.
Try it after your next debugging session. Type /document-session (or whatever you call it). See what happens.
Maybe it'll help you slow down too.
Note: This should work with most AI coding assistants that support .agent/workflows/, but I've personally only tried it with Antigravity. It should work with OpenCode and maybe Claude Code too. For Cursor, I assume you can do something similar with rules. Check your assistant's docs for specifics.
Let's Talk About It
I shared this on Hacker News because I'm genuinely curious about how others are handling this.
Are you feeling the same pressure to keep up? Have you found your own ways to slow down and actually learn from AI sessions? Or maybe you think this whole thing is unnecessary?
I'd love to hear your thoughts, critiques, or alternative approaches. The HN discussion is a great place to share what's working (or not working) for you.