Discussion about this post

Bernard:

You mentioned the p(Doom) debate. I'm concerned that this debate may focus too much on the risk of extinction with AGI, without discussing the risk of extinction without AGI. For a proper risk assessment, that probability should also be estimated. I see the current p(Doom) as very high, assuming we make no changes to our current course. We are indeed making changes, but not fast enough. In this risk framing, AGI lowers the total risk overall, even if AGI itself carries a small extinction risk of its own.

It's a plausible story to me that we entered a potential extinction event a few hundred years ago, when the Industrial Revolution began. Our capability to affect the world has been expanding much faster than our ability to understand and control the consequences of our changes. If this divergence continues, we will crash. AI and other new tools give us the chance to make effective changes at the needed speed and chart a safe course. The small AGI risk is worth taking given the crisis we face.

Rohit Krishnan:

This was an excellent, excellent retrospective on GPT-2 and the difficulties of arbitrarily "creating a power floor" in AI regulation.

The best idea is still to increase our knowledge, monitor the models, run evals, and understand how they work; then, come the right time, we will know enough to solve the problems they might cause!

