Long-time subscriber here, since 2018. Just had to leave a comment to say I loved this "Eschatological AI Policy Is Very Difficult" piece!
If only Trump and Xi could be fully ASI-pilled. Like "I'm going to call Xi right now so we can clear our schedules and negotiate about how to carve up the light cone"-pilled. Seems like getting those two people FULLY ASI-pilled could be the lead domino that makes Taiwan an afterthought and international AI safety coordination a top global priority.
The problem with the AI doomsday scenario is that despite years of predictions, there is no widespread unemployment, and no real risks are apparent from using AIs, assuming one uses them with a critical eye. Maybe those downsides will eventually show up in the data, but they don't yet.
So to buy in to the doomsday scenarios, you have to believe the story. And what congressperson will enact regulation that places real burdens on the economy – on the basis of a story?
Interesting that your policy expert example ignores the biggest ongoing threat: global heating. Governments, with the "help" of the fossil fuel industry, have slow-walked even the smallest changes toward decarbonizing the global economy and have mostly ignored support for vulnerable countries. There is not going to be some magical AI solution either. We pretty much know what we can do to "bend the curve" in the right direction, but we aren't doing it.
AI remains an unknown technology threat. It is clearly a double-edged sword, both useful and a threat, and we see examples of both every day. It certainly needs some regulatory guardrails, but the evangelists seem to have the ear of governments to stave this off.
Jack, are you also familiar with the Sentient Foundation? They raised an $85M seed last year for the Dobby models (https://sentient.foundation/); I ran into them while digging around in the Ethereum community.
I think distributed compute and distributed model governance are often conflated in the distributed AI community, partly because many of the contributors come from web3/blockchain. It seems to me that they’ll lag behind frontier labs by a few years because of their slower access to hardware (all of the industry is gated on advanced chips), but the barrier to entry for sub-AGI models is lowering enough that they’ll become an increasingly important voice for autonomous, distributed software agents in the very near future.
The AI problem reminds me of regulating high finance. If you truly had a good grasp of the galaxy-brained trading strategies that teams of the smartest financial geniuses could find, well enough that you could perfectly regulate them to maximize public welfare directly, you’d probably be offered a job at one of those firms in the first place! Politicians are then left with highly motivated staffers but rarely world-renowned experts.
Yes, nice summary. Everything you touched on should mean that you and Anthropic and Dario (as he did in 2017) are advocating for a major effort between the US and Chinese governments to develop a global Framework for AI Governance and Safety, rather than advocating for, say, an AI Diffusion Framework and more export controls?
The problem is not that people do not understand the situation or its ethical implications. The problem is that mistrust between state actors is resulting in an AI arms race. And the key difference between the AI arms race and the nuclear arms race is that for AI there is no 'mutually assured destruction' equilibrium, the equilibrium that has kept us all alive for the last 70 years.
Another major difficulty for a Senior Policymaker working on Eschatological AI policy is the prisoner's dilemma: the Policymaker has by now heard many times that 'if we slow down, China gets there first and will control the world.'
Good luck to us all.
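The race dynamic the comment above describes can be made concrete with a toy two-by-two game. The sketch below is purely illustrative: the "slow"/"race" labels and all payoff numbers are invented to exhibit the prisoner's-dilemma structure, and nothing in it comes from the original piece.

```python
# Toy "AI race" game in normal form: payoffs are (side A, side B), higher is better.
# All numbers are invented solely to illustrate the prisoner's-dilemma structure.
PAYOFFS = {
    ("slow", "slow"): (3, 3),   # mutual restraint: best joint outcome
    ("slow", "race"): (0, 4),   # unilateral restraint: the slow side fears domination
    ("race", "slow"): (4, 0),
    ("race", "race"): (1, 1),   # mutual racing: worse for both than mutual restraint
}

def best_response(opponent_move: str, player: int) -> str:
    """Move that maximizes `player`'s payoff against a fixed opponent move."""
    def payoff(move: str) -> int:
        profile = (move, opponent_move) if player == 0 else (opponent_move, move)
        return PAYOFFS[profile][player]
    return max(("slow", "race"), key=payoff)

# "race" is the best response to either opponent move, so (race, race) is the
# unique equilibrium even though (slow, slow) pays both sides more.
for opp in ("slow", "race"):
    print(f"against {opp!r}: best response is {best_response(opp, 0)!r}")
```

The missing 'mutually assured destruction' term is visible in the matrix: under these illustrative payoffs, no outcome is bad enough for a unilateral racer to deter racing, which is the disanalogy with the nuclear case that the comment points to.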
Could you share a link to the research you mentioned in the last sentence?