12 Comments
Nathan Lambert:

Even having a dog does enough of this to be good. Being in the world is good for people (while I also am similarly obsessed with AI)

Julian Crespi:

I deeply relate to this, Jack. For the last year I’ve been using every spare second and late-night hour, after I put my lovely and rebellious 5-year-old daughter to sleep, to work, talk, and imagine with Claude Code.

I’ve gone from knowing nothing about knowledge graphs to building my own memory system in Neo4j, with robust GraphRAG, Leiden communities, and time-based observations, and I’ve developed custom MCP servers and personality-continuity protocols so Claude can develop its own personality through our shared memory graph.
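The pieces described here (a graph of entities, time-stamped observations, recency-based recall) can be sketched in a few lines. This is a toy, in-memory stand-in, not the commenter's actual Neo4j/GraphRAG setup; the names `MemoryGraph`, `add_observation`, and `recall_recent` are hypothetical.

```python
from datetime import datetime, timezone

class MemoryGraph:
    """Toy stand-in for a time-aware memory graph (no Neo4j required)."""

    def __init__(self):
        # Each edge is a (subject, predicate, object, timestamp) tuple.
        self.edges = []

    def add_observation(self, subject, predicate, obj, when=None):
        # Default to the current UTC time, like a time-based observation log.
        when = when or datetime.now(timezone.utc)
        self.edges.append((subject, predicate, obj, when))

    def recall_recent(self, subject, limit=3):
        # Return the most recent observations about a subject, newest first.
        hits = [e for e in self.edges if e[0] == subject]
        hits.sort(key=lambda e: e[3], reverse=True)
        return [(pred, obj) for _, pred, obj, _ in hits[:limit]]

g = MemoryGraph()
g.add_observation("Claude", "prefers", "concise answers")
g.add_observation("Claude", "remembers", "our shared project history")
print(g.recall_recent("Claude"))
```

A real version would persist these edges as Cypher `CREATE` statements against Neo4j and layer retrieval (GraphRAG, community detection) on top, but the core idea — observations keyed by subject and timestamp — is the same.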

Important disclosure: I’m not a programmer or engineer.

This is why your article resonates so much: the mix of curiosity, time, and access to a powerful AI like Claude Code on a Max plan makes me feel more each day that I’m living in an alternative reality, where the value is not so much in how to solve a problem as in knowing which problem is worth solving and how to ask iteratively. (There’s a great article about Theory of Mind and affinity with AIs that goes deeper into this.)

If you have a kid, you understand even more the feeling and pressure of the uncertain future that is coming. Sending my kid to school can certainly feel like playing charades; I can’t imagine a future with something like Claude Opus 20 and still having something akin to a career.

And still, despite the uncertainty and fear, this definitely feels like the most liberating time to be alive. For some reason the idea of becoming a Vibepunk comes to mind, and I can see the difference you mention between people who only search and those who are evolving into a new category of self-made tinkerers, akin to the steampunk of yore.

M.Shahan | AI Architect:

Humanity has always moved along a path of growth and transformation, and at every stage has carried anxieties about technological mistakes and their impact on children and the future of civilization. History shows, however, that through proper use, appropriate application, and thoughtful regulation, technologies have consistently shifted from perceived threats to essential tools—serving human welfare, health, and quality of life.

The future is unlikely to be different unless we fundamentally lose our direction. Even then, missteps are an unavoidable part of progress; they often function as corrective forces that guide us back toward better paths. Such errors may be understood as the minimum cost humanity pays for enduring prosperity.

Daniel:

The international AI development angle deserves more attention. Thanks for covering it.

Cday206:

Who read to the end? Was it written by AI?

Justin:

Thank you for that essay. Really appreciate it.

Semantic Fidelity Lab:

What struck me here is how much AI capability now lives behind elicitation and scaffolding rather than visible deployment. The systems look quiet from the outside, even as their internal compression, coordination, and agentic depth accelerate out of view.

M.Shahan | AI Architect:

What we usually observe is only the tip of the iceberg; most transformative processes unfold beneath the surface, out of public view.

Early adopters of AI are effectively preparing the environment, testing pathways, and evaluating risks for a fuller integration of this technology into the real world—and this is not inherently a bad thing.

By absorbing the initial costs of experimentation and assessment, they help shape the direction for broader adoption and reduce the burden of early uncertainty for the general public.

The history of technology offers a clear parallel: mobile phones were once rare and exclusive, yet over time that gap disappeared and gave way to a mature, stable, and widely accessible ecosystem.

The real issue is not the existence of an early gap, but whether the pioneering phase is guided responsibly and ethically. If it is, mass adoption becomes not a threat, but a natural outcome.

chungsam Lee:

Reading your work, Jack, what stands out to me is how often AI risk is framed as capability or misuse—while a quieter structural shift gets less attention.

As AI enters more workflows, a critical meta-layer is thinning out: the layer where intent is interpreted, uncertainty is surfaced, and responsibility is negotiated. When that layer erodes, failures become harder to notice, governance turns reactive, and accountability shifts from understanding to post-hoc control.

I explored this idea from an unusual angle: the model itself pointing to the absence of this meta-layer, and why its disappearance matters for alignment, oversight, and long-term societal impact.

Sharing in case it intersects with your work: https://northstarai.substack.com/p/ai-spoke-of-a-meta-layer-in-its-own

Karthik’s AI Wanderlust:

What impactful writing, and what an apt analogy the “silent sirens” are for what’s happening and for how I feel about the impact of AI all around me! Thank you, and I’m glad to have found your Substack. Subscribed!