Discussion about this post

Dominic Caldwell:

The ivory example is exactly the use case we should celebrate: scarce human expertise multiplied by scale. The circuit-design example is equally telling, because it shows how easily knowledge work can be restructured when the bottleneck isn’t intelligence but attention and access. And on the human/LLM reasoning overlap: I’d frame it a little differently. We shouldn’t be surprised that large systems start to rhyme with human cognition; they’re constrained by the same physical universe, the same math of pattern recognition. That doesn’t mean they’re conscious, or even converging on our kind of awareness. It means they’re good mirrors: sometimes sycophantic, sometimes distorted. The through-line for me isn’t “AI looks like us, therefore treat it as us.” It’s “AI reorganizes where human attention, labor, and legitimacy sit.” In ivory, that’s a gift. In jobs, it’s a shock. Thank you for this, Jack.

Steeven:

I don’t think humans and AIs will converge on representations. Stockfish has a very different representation of the chess board from any human’s. That it consults a huge table of precomputed, solved endgame positions, and humans don’t, is one example. With large language models, I expect RL to make the representations less human-like over time.
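
A rough sketch of what that lookup-table representation means in practice, assuming the python-chess package and locally downloaded Syzygy tablebase files (the "data/syzygy" path is a placeholder; Stockfish itself probes such tables internally from C++): for positions with few enough pieces, the engine does not evaluate anything, it simply looks the answer up.

```python
# Minimal illustration of an endgame tablebase lookup, assuming python-chess
# is installed and Syzygy files have been downloaded to data/syzygy (placeholder path).
import chess
import chess.syzygy

# KQ vs K, white to move: few enough pieces that the result is precomputed.
board = chess.Board("4k3/8/4K3/8/8/8/8/Q7 w - - 0 1")

tablebase = chess.syzygy.open_tablebase("data/syzygy")
try:
    # probe_wdl returns the game-theoretic result for the side to move:
    # 2 = win, 0 = draw, -2 = loss (1/-1 for results affected by the 50-move rule).
    wdl = tablebase.probe_wdl(board)
    # probe_dtz gives distance-to-zeroing, used to actually convert the win.
    dtz = tablebase.probe_dtz(board)
    print(f"WDL: {wdl}, DTZ: {dtz}")  # no search, no evaluation: pure lookup
finally:
    tablebase.close()
```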
