Given the overall optimism in Dario Amodei's recent manifesto, I was struck by how he/we have no idea how the benefits of a "powerful AI" will be distributed. Will these be shared broadly for the benefit of all, or available only to a relative few? Will this indeed create abundance, or will it lead to vast unemployment? That these questions cannot be answered, and that such powerful AI could be only a couple of years away, suggests a brewing crisis, or cascading crises, that is going largely undiscussed by leaders, unless the discussion is happening behind closed doors. The Intelligence Rising paper you cite seems to underscore the need for coordination: "Outcomes leading to positive futures almost always require coordination between actors who by default have strong incentives to compete — this applies both to companies and to nations."
The Omni-MATH paper link is incorrect; please fix it. The correct link is https://arxiv.org/abs/2410.07985
I’m confused by the sub-heading ‘Will UX innovations become as important as research innovations?’ I’m on the free plan, so I might be missing that content. Regardless, it’s a point I am currently thinking about a lot and have written about recently; link below.
https://open.substack.com/pub/olliec/p/post-5?r=2a55up&utm_medium=ios
I just read this post by Hayden Belfield (it's a classic EA Forum post) that is relevant to the race dynamics of contemporary AI development: "Are you really in a race?" https://forum.effectivealtruism.org/s/dg852CXinRkieekxZ/p/cXBznkfoPJAjacFoT
I loved the tale.
Something I find both fascinating and frustrating is that AI forecasters seem to have a blind spot around the possible implications of algorithmic advancement.
Currently, frontier AI requires big datacenters, and the likely next step involves next-gen frontier models helping with AI R&D. But what if that works? What if that automated R&D process finds such dramatic algorithmic improvements that anyone with those algorithms can train an AGI cheaply? That changes the strategic landscape! A rough sketch of the arithmetic is below.
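To make the intuition concrete, here is a minimal back-of-envelope sketch in Python. Both constants are assumptions chosen purely for illustration: the $100M baseline is a placeholder for a frontier training run today, and the ~9-month halving time loosely echoes published estimates of algorithmic progress, but neither is a claim about any specific model. The point is how quickly compounding efficiency gains erode a compute moat, even before any automated-R&D speedup.

```python
# Back-of-envelope sketch: how compounding algorithmic efficiency gains
# could shrink the compute cost of a frontier-scale training run.
# Both constants below are illustrative assumptions, not measured values.

BASELINE_COST_USD = 100e6    # assumed cost of a frontier training run today
HALVING_TIME_YEARS = 0.75    # assumed: required compute halves every ~9 months

def projected_cost(years_ahead: float) -> float:
    """Training cost after `years_ahead` of steady algorithmic progress."""
    halvings = years_ahead / HALVING_TIME_YEARS
    return BASELINE_COST_USD / (2 ** halvings)

for years in (0, 2, 5, 10):
    print(f"after {years:2d} years: ${projected_cost(years):,.0f}")
```

On these illustrative assumptions, the cost falls from $100M to roughly $1M within five years and to around $10k within ten. An automated R&D loop that accelerates algorithmic progress would compress that timeline further, which is exactly the scenario forecasters should be stress-testing.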