The Danger of Agentic AI — And It’s Not What You Think
What comes to mind when you hear “the danger of AI”? Rogue systems? Job displacement? Alignment problems? Or maybe the skeptic’s version: AI that writes mountains of unnecessary code, creating more mess than value?
Those are valid concerns. But after months of building AI workflows, I’ve found a different danger — one that’s closer to home.
Let me set the scene. When you work with agentic AI properly, you’re not just prompting a chatbot. You’re managing a team. Multiple teams, actually. You can have entire DevSecOps pipelines where agents review each other’s work, loop until quality gates pass, and only surface results when they meet your standards. The “unnecessary code” problem? That’s a framing problem. When you’ve set up proper review agents and defined what “good” looks like, “less is more” becomes achievable — the agents iterate until it’s right.
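To make that concrete, here is a minimal sketch of such a loop in Python. The generate and review functions are hypothetical stand-ins for real agent calls, and the threshold is an arbitrary example of a quality gate; treat it as a shape, not an implementation.

```python
# A minimal sketch of an agent review loop with a quality gate.
# generate() and review() are hypothetical stand-ins for calls to
# real coding and review agents; the numbers are arbitrary examples.

MAX_ROUNDS = 5      # hard stop so the loop cannot run forever
QUALITY_BAR = 0.9   # your definition of "good", encoded as a threshold

def generate(task, feedback=None):
    # A real pipeline would call a coding agent here, passing along
    # the reviewer's feedback from the previous round.
    return f"draft for {task!r} (addressed: {feedback})"

def review(result):
    # A real pipeline would call a separate review agent that scores
    # the work and explains what still needs fixing.
    return 0.95, "looks good"

def run_pipeline(task):
    feedback = None
    for _ in range(MAX_ROUNDS):
        result = generate(task, feedback)
        score, feedback = review(result)
        if score >= QUALITY_BAR:
            return result  # only surface work that passes the gate
    return None  # escalate to a human instead of shipping a near-miss

print(run_pipeline("add input validation"))
```

The important design choice is the hard iteration cap: it turns “loop until quality gates pass” into a bounded process instead of an open-ended one.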
Steve Yegge recently described this trajectory in his post “Welcome to Gas Town”1, where he runs 20 to 30 AI agents simultaneously, orchestrated like an industrial coding factory. You become a Product Manager; your orchestrator becomes an “Idea Compiler.” As Yegge puts it: “Gas Town solves the MAKER problem2 (20-disc Hanoi towers) trivially with a million-step wisp.”
To the skeptics — this field moves fast. Don’t get stuck in beliefs that are already outdated.
This is genuinely powerful. Your agents are your teammates. You’re the manager of AI teams — or, in Yegge’s framing, an entire town.
The Agentic Doom Loop
Here’s how it starts. You build an agent to do the thing you already intended to do. It works. It delivers exactly what you wanted, in the style you wanted, because you’ve trained it that way. You’re done. Then the finished work gives you a new idea. Or you see a small improvement you can make fast. Or you spot the next task that would be easy now that the agent can handle it. You take it. The work gets better. The ideas keep coming.
This isn’t just about improving agents. It applies to any work that finishes fast and sparks the next idea.
And it feels great.
Gene Kim and Steve Yegge (yes, the same Yegge behind Gas Town) describe this mechanism in their book on vibe coding3. They frame it as the “good FAAFO” — Fast, Ambitious, Autonomous, Fun, and Optionality. As they put it: “Vibe coding turns your keyboard into a slot machine… Each little payout delivers a tiny dopamine hit.” They see this as a feature. I’m not so sure it’s only good — more on that later.
You’re doing work. Meaningful, technical, genuinely interesting work. The feedback is immediate: you change something, you see the result, and the work gets better. The loop tightens. One more tweak. One more improvement. Then the next thing.
Hours pass. You’re still in the loop.
This is the Agentic Doom Loop. It’s the dopamine loop that keeps you iterating on work itself, whether you’re drifting from value or staying on track.
This Seems Familiar
If you’re a parent, you’ve seen this. Your child loses hours to a game. You set boundaries: computer time separate from phone time. They adapt. Can’t play? They watch streamers play instead. YouTube walkthroughs. Twitch gameplay. The medium changes, the loop continues. They’re not trying to outsmart you — they’re following a pull they don’t fully understand. Watch that pattern. Now watch what happens when knowledge workers get access to tools that deliver the same kind of feedback loop.
Why This Is a Business Risk
In Lean manufacturing, there’s a concept called the “8 wastes” — categories of activity that consume resources without creating value4. Two of them map directly to our doom loop:
Overprocessing — doing more work than the customer (or the goal) requires. Polishing something beyond what’s needed.
Inventory — work that’s been done but not yet delivered. Features sitting in a branch. Improvements that haven’t been tested in production.
Agile has similar concepts. “Gold plating” — adding unrequested features or polish. “YAGNI” — You Aren’t Gonna Need It. “Premature optimization” — the classic engineering trap of perfecting something before you know if it’s the right thing.
The doom loop is all of these at once. You’re overprocessing. You’re building inventory. You’re optimizing prematurely. And it feels like flow state. But in reality, you’re drifting away from value.
But even with guardrails, the loop still delivers the dopamine hit. That’s a different risk — compulsion. The loop doesn’t disappear just because you’re on the right track.
This is where it stops being a productivity problem. Agentic AI is a superpower. It lets individuals and small teams do things that previously required entire departments. That’s the promise, and it’s real.
But superpowers can be addictive. And when your employees have access to tools this engaging, you’re not just managing output — you’re managing attention. The risk isn’t that people will slack off. The risk is that they’ll work intensely on the wrong layer of the problem, or stay locked in even when the work is correct. They’ll optimize whatever is easiest to improve instead of shipping products. They’ll refine workflows instead of testing them with real users. They’ll build beautiful systems that never see production.
This is a new category of waste. And as agentic AI adoption scales, so does the risk.
Safeguards
I don’t have this figured out. But here’s what I’m experimenting with as individual tactics, plus one framework worth adopting at the organizational level:
Timeboxing. The simplest intervention. Pomodoro timers. Calendar blocks. Anything that forces a stop and asks: “Is this still the highest-value thing?” (A minimal timer sketch follows this list.)
Cycle completion before iteration. Borrow from Scrum: you must ship a working increment before you’re allowed to improve it. No refining agents that haven’t run a real workflow end-to-end.
Metrics that track value, not activity. It’s easy to measure commits, or hours spent, or number of improvements. Harder to measure: did this actually move the goal forward? But that’s the metric that matters.
An accountability partner. Someone who asks the uncomfortable question: “Have you shipped anything this week, or just improved things?” This could be a colleague, a manager, or even a scheduled self-review.
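To show how low the bar is, here is a minimal timeboxing sketch in Python, assuming a plain terminal session. The task name, interval length, and check-in question are arbitrary examples, not prescriptions.

```python
# A minimal timeboxing sketch for a terminal session.
import time

def timebox(task, minutes=25):
    """Run one focused block, then force the value check-in."""
    print(f"Timebox started: {task!r} for {minutes} min")
    time.sleep(minutes * 60)  # the work block
    # The interruption is the point, not the timer:
    answer = input("Is this still the highest-value thing? [y/n] ")
    if answer.strip().lower() != "y":
        print("Stop. Ship or switch before the next block.")

timebox("refining the review agent")
```

The code is trivial on purpose. The safeguard is the forced interruption, not the tooling.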
Kim and Yegge address this in their book. They introduce the Three Developer Loops5 — inner, middle, and outer — a framework for managing AI collaboration across different timescales, each with strategies for prevention, detection, and correction. It’s a fantastic read, and a good starting point for building your own guardrails.
The Manager’s Responsibility
Getting stuck on the wrong path to a goal — that’s a productivity problem. But that’s not the only danger I’m worried about.
The real danger is when someone stops taking breaks. When they skip lunch to stay in the loop. When they decline a coffee with a colleague because they want that next dopamine hit. You’ve seen kids completely immersed in games for hours: engaged, not misbehaving, yet you’re not convinced it’s healthy. It’s the same pattern: physically present but mentally somewhere else, optimizing, tweaking, improving, while health and relationships quietly erode.
I recently listened to an episode of Sveriges Radio’s “Kropp & Själ” (“Body & Soul”) about workplace alcohol abuse6. What struck me wasn’t the addiction itself; it was how invisible it was. One man in the episode, Johan, functioned for decades. The symptoms were diffuse. Nobody intervened until it was a crisis.
In that episode, alcohol expert Anna Sjöström describes addiction development using a zone model. Green zone: no problem, doesn’t affect others. Yellow zone: risky use — you’re overdoing it, but it hasn’t caused visible problems yet. Orange zone: harmful use. Red zone: full dependency, crisis, intervention required.
Most workplace interventions happen in the red zone. Sjöström’s point is that we should be talking about it already in the green zone — normalizing the conversation before there’s a problem to solve.
I’m not saying agentic AI compulsion is the same as alcoholism. The closest parallel might be gaming addiction — people who get so absorbed they forget to eat, forget to sleep, forget to connect with others. Except this version is harder to spot, because it looks like exceptional work performance. Your manager might even praise you for it.
That’s what makes it vicious. You get fast dopamine hits from your own results. And then you get another hit when your manager says you’re doing a great job. The loop reinforces itself from multiple directions.
Here in Sweden, this falls under arbetsmiljölagen, the Work Environment Act: employers are legally responsible for the psychological work environment. I suspect most organizations aren’t prepared to think about this yet. But they should be.
I predict we’ll see this pattern emerge faster than we expect. AI adoption is accelerating. The tools are genuinely engaging. And the line between “productive flow” and “compulsive behavior” is blurry enough that most organizations won’t notice until someone burns out.
The conversation needs to start now — in the green zone — before we’re dealing with interventions.
These are my observations — not clinical expertise, just curiosity and reading about how humans respond to feedback loops and compulsive patterns. If you’ve seen this pattern, or found your own safeguards, I’d love to hear about it.
I used AI to help me articulate these thoughts. The research and references are my own.