"'Unless we can rule out the possibility, we should be proactive and figure out how to navigate the terrain ahead of time', they write."
This is an attractive idea in technology: Predict what will happen and plan for it. It worked for Wayne Gretzky in hockey, why not in tech? But I've never seen it actually work in practice. The consequences of any general-purpose technology are fundamentally unknowable, and the problems that arise turn out to be different than anticipated.
For example consider the web. Who in 1993 was talking about the risks of social media, information bubbles, and polarization? Virtually no one. If we had stopped development in 1993 to solve the problems we saw, we would have solved the wrong problems.
So it goes with AI. The only thing I'm 100% sure of is that nobody today is correctly thinking about the negative consequences of AI. That doesn't mean we shouldn't study it of course. But it means that any call to action that impedes progress is DOA.
Looks like we're in for a wild ride with AI taking over the scene. Can't wait to see how this all plays out, but those robot hands definitely need some work!
I thought that advanced AIs would be like Murderbot and enjoy watching human television series and media as a form of learning human characteristics and as an escape from bot duties.
I think the difficulty of talking about military use of AI is there's very little public information that you can reference. I do from time to time do some research here, but it's quite tough to penetrate. Great area for investigative journalist to start a substack!
"'Unless we can rule out the possibility, we should be proactive and figure out how to navigate the terrain ahead of time', they write."
This is an attractive idea in technology: predict what will happen and plan for it. It worked for Wayne Gretzky in hockey, so why not in tech? But I've never seen it actually work in practice. The consequences of any general-purpose technology are fundamentally unknowable, and the problems that arise turn out to be different from the ones anticipated.
For example, consider the web. Who in 1993 was talking about the risks of social media, information bubbles, and polarization? Virtually no one. If we had stopped development in 1993 to solve the problems we saw then, we would have solved the wrong problems.
So it goes with AI. The only thing I'm 100% sure of is that nobody today is correctly anticipating the negative consequences of AI. That doesn't mean we shouldn't study them, of course. But it does mean that any call to action that impedes progress is DOA.
Looks like we're in for a wild ride with AI taking over the scene. Can't wait to see how this all plays out, but those robot hands definitely need some work!
'threshold level of substantial AI-lewd software acceleration'
*Is "lewd" a typo?
As much as I'd love things to be "AI-lewd", the correct phrasing was "AI-led" - have updated. Thanks!
I thought that advanced AIs would be like Murderbot and enjoy watching human television series and media, both as a way of learning human characteristics and as an escape from bot duties.
Sometimes computer-using AI systems like to look at pictures of national parks, so you might not be far off...
It is time to start talking about the military uses of AI: https://www.projectcensored.org/military-ai-watch/
I think the difficulty of talking about military uses of AI is that there's very little public information you can reference. I do some research here from time to time, but it's quite tough to penetrate. Great area for an investigative journalist to start a Substack!
Great focus. This is an evolution, not a revolution, and it might be surprising in what it delivers.