AI doesn’t reduce your cognitive load. It concentrates it.
I’ll be honest: these past few weeks have been some of the most exhilarating of my career.
I’ve been running a small orchestration of agents across multiple projects simultaneously. It’s been remarkable: watching something move from idea to production-ready faster than I ever thought possible. Seeing real progress compound in real time. Every time I check in, there’s more done than I expected. The natural reflex is to push further. What else can we ship? What’s the next project? The train is moving, so keep it moving.
But here’s the thing I’ve also noticed: I’ve been more tired at the end of the day. Harder to wind down at night. I’ve been on my phone in bed (my agents are a message away, any time), which I know isn’t good. I can feel the edges of something. Not burnout exactly. More like: I can see where this road goes.
Scott Hanselman said something in his interview with Steve Yegge that I immediately recognized: “I started to feel a couple of days ago that like I was wasting time because I was sitting and there was no agent running at the moment. And then I caught myself and I said that’s not a healthy thought.” I’ve felt that. The anxiety of not having enough work queued for the agents. The frantic energy of needing to keep the momentum alive. The sense that stopping is somehow failure.
The evidence
This isn’t just my story. The data points the same way, and it’s not flattering.
An HBR study of 1,488 workers on AI adoption found that heavy AI users reported more mental fatigue, not less. Cognitive effort didn’t disappear; it shifted from execution to review and correction. Reviewing AI output is harder than doing routine work. It requires holding the full problem in your head while evaluating someone else’s answer. It’s expensive in a different way, and there’s no obvious recovery window.
The same study found that AI tools pushed workers toward a faster pace and a broader scope. The time savings never materialized as rest or focus. People just did more, under more pressure, at higher speed.
Steve Yegge described managing twenty-plus agent swarms and experiencing what he calls a “nap-strike”:
“High-end vibe coding is fucking with our sleep cycles… I have a pillow and blanket on the floor next to my workstation. I’ll just dive in and be knocked out for 90 minutes, once or often twice a day. At lunch, they surprised me by telling me that vibe coding at scale has messed up their sleep. They get blasted by the nap-strike almost daily, and are looking into installing nap pods in their shared workspace.”

Ajit Banerjee, Ryan Snodgrass, and Geoffrey Huntley all reported the same thing independently. Agent orchestration at scale is cognitively punishing. Not the kind of tired you push through. The kind where your brain decides it’s done, and ninety minutes on the floor is the only thing that resets it.
What’s actually happening (the Bezos Mode problem)
Here’s the mechanism. It has a name.
Yegge calls it Bezos Mode: the state where AI acts as a filter, routing only escalated, unresolvable problems to you. At Amazon, Bezos dealt only with issues no one else could resolve. Sounds ideal, right? Now imagine your entire day is those issues, back to back, without the easy stuff to break up the rhythm. That’s not a lighter cognitive load. That’s a brutal one.
AI handles the easy problems: the boilerplate, the test scaffolding, the first-draft documentation, the well-specified ticket. That frees up time… in theory.
But what’s left in your queue when the easy problems are gone? Only the hard ones.
Hard problems don’t queue. They stack. Every one requires full engagement. Every one demands judgment. You’re not clearing work faster; you’re clearing clearable work faster while the genuinely difficult decisions accumulate. And you’re accumulating them alongside the new job of managing whatever the agents produced.
The HBR study found that workers with high AI oversight expended 14% more mental effort, reported 12% more mental fatigue, and experienced 19% more information overload than those with low oversight. Offloading repetitive tasks to AI does help; it predicts lower burnout. But the oversight tax more than eats the savings.
Why “more agents = more capacity” is wrong
This is the mistake organizations are making right now.
More agents means more surface area to manage, not more throughput. Every agent produces output that requires a human decision: accept, reject, guide, rework. Five agents means five times the decision surface, not five times the output. Those decisions aren’t trivial. They require context, judgment, and the same deep engagement you were hoping to offload.
You come back from a meeting to a queue of agent outputs, each one waiting on your call. It’s a recipe for decision fatigue.
More agents don’t give you more capacity; they give you more to manage. The cost lands on whoever’s orchestrating. Usually your best engineers. The ones you can least afford to burn.
What managing through it looks like
My engineers who are deep in AI right now are, mostly, having the time of their lives. They’re shipping more than they ever have, at work and on personal projects. They’re learning languages they’ve never touched by asking AI to build things in them and studying the output. The energy is real. I’m not going to pretend otherwise.
But I’m checking in on them. Not because anything’s wrong, but because I’ve read enough, and felt enough of this myself, to know that the high and the crash aren’t always that far apart.
Most of them can already feel the edges of it. They know the pace might not hold. We’ve all read the articles. Nobody’s saying it out loud yet, but it’s there.
A CTO I know told me about an engineer who pulled him aside recently and said, essentially: I’m sorry if I’ve seemed off. I’ve been going so hard on AI-assisted work that I think it’s starting to follow me home. I need to find a way to balance this better.
That story stayed with me. Because that engineer knew. They caught themselves. But they needed someone to say it to.
That’s what I think managing through this looks like right now: not a framework, not a policy. Just paying attention. Asking the question before the engineer has to apologize for the answer.
Knowing when the high becomes a crash
Back to me, in bed, phone in hand, checking in with my agents.
I know that’s not great. The exhilaration is real and the exhaustion is real, and at some point those two things are going to meet. I’m not ready to stop. The progress is too good, the momentum too valuable. But I can see the shape of what happens if I don’t eventually put the phone down.
Even as I write this, I am thinking about how my agents are idle. I feel like I am failing to maximize something I’m supposed to be maximizing.
Brain fry is real. So is the high. The only question is whether you see the collision coming.
More agents mean more work, not less. Sleep, stay hydrated, and take care of yourselves… crashing out isn’t going to do you, your team, or your family any favors.
Sources: HBR “Brain Fry” study (Bedard et al., March 2026, N=1,488) · Stevey’s Birthday Blog (Yegge, Jan 2026) · Hanselman interviews Yegge (YouTube, ~11:06) · GitHub Copilot 55% faster (GitHub + Accenture, 2024)