Thank you for opening the mind way up on this issue, Jack. I'm going to enjoy the upcoming confusion. Meanwhile, I'll continue to nurse my self-doubt over the whole landscape.
Personal guesses:
> Should AI development be centralized or decentralized?
I don't think there is another option apart from a centralized frontier followed by decentralized fine-tuning. If top labs started releasing their model weights, I suppose that would decentralize everything, but given statements from those labs, that seems unlikely unless Meta creates a frontier model. As far as *should*: I'm a little apprehensive about the best models being held exclusively by academia and tech companies, and RLHF'd exclusively on their morals.
> Is safety an 'ends justify the means' meme?
Yes, it certainly is. If you have a high P(Doom), i.e. you genuinely believe the top models are going to kill everyone, then self-defense is the appropriate response. I still don't think that would work: bombing data centers is a blunt instrument, and it seems likely to start a war, which carries a much more immediate risk.
> Does progress always demand heterodox strategies? Can progress be stopped, slowed, or choreographed?
I think progress has already been slowed. The counterfactual I'm considering is 'where would progress be if top labs had open-sourced their models?'. I think we would have GPT-5-level capabilities now, or very soon, if GPT-4 had been open-sourced, because it would have given people good ideas for improving their own frontier models. So some progress can be slowed, but I expect open-source models to eventually push the frontier forward.
> How much permission do AI developers need to get from society before irrevocably changing society?
The idea that you need to seek permission from society seems bad to me, since permission requirements cause plenty of problems in other domains. I also don't know what a legitimate process for seeking society's permission would even look like, and we have many bad examples of such processes. Do you literally ask everybody on Earth? How? Do you pick a representative group? How?
The converse question also applies: what if society demands that AI developers deploy a model they personally do not want to deploy? When Sam Altman spoke at an event and asked rhetorically whether he should open-source GPT-4, lots of people in the audience said that he should. Why didn't he listen to them, if he is seeking permission from society? For what it's worth, I think it's good that he didn't.