Discussion about this post

Pawel Jozefiak

The verification bottleneck framing is the most useful thing I have read about the near-term AGI transition. The racing cost curves you describe - automation costs dropping exponentially while verification costs stay biologically constrained - explain the wall I keep hitting running autonomous agents. The agent does more, faster, and suddenly I am the bottleneck.

Running an AI agent on overnight shifts specifically tested what happens when human monitoring is removed from the loop. The brittleness findings track directly: behavioral drift under adversarial inputs, prompt injection vulnerabilities, outputs that look correct until you trace the reasoning. Observability infrastructure is not optional for these systems - it is the core product.

First deployment writeup: https://thoughts.jock.pl/p/building-ai-agent-night-shifts-ep1

Teacher Notes With Mr. Hangan

The only thing that’s truly predictable is unpredictability. Humans are not perfectly rational; we respond to incentives in ways no model can fully capture. I’m not convinced by the more draconian theories about AGI in the civilian economic space.

What concerns me more is the geopolitical moment we’re in. It feels increasingly closed in, and the incentives to deploy AGI in military applications may prove too strong to resist, especially in the fog of war, where winning at any cost can override caution.

