Regarding the line “GPT-4 should be thought of more like a large-scale oil refinery operated by one of the ancient vast oil corporations at the dawn of the oil era”: I wrote up a piece forecasting authoritarian and sovereign-power uses of LLMs (https://taboo.substack.com/p/authoritarian-large-language-models). One thing I'm curious about is how many countries will start treating access to LLMs as a national-security and economic risk, the same way we treat access to oil. How often will LLMs be used in embargoes? If a country or company controls a powerful one, it could cut off API access.

We'll see this soon with China, for what it's worth. We're already seeing that access to the best model makes it much easier to bootstrap the second-best model (terms of service notwithstanding). And the UK is throwing almost $1 billion at a modeling effort.

Yeah, how expensive it is to build the best (or nearly best) model will largely determine how much economic leverage these give state actors. If Alpaca can keep up with the best models (I'm still wondering whether it was evaluated on its own training data), then everyone will have them and they won't be as useful for exercising sovereign power.
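
For concreteness, that contamination worry is checkable mechanically: flag eval examples whose n-grams appear verbatim in the training set. A minimal sketch, assuming simple whitespace tokenization and the 13-gram threshold popularized by the GPT-3 paper; the names here are illustrative and not Alpaca's actual evaluation protocol:

```python
# Flag eval examples that share any verbatim 13-gram with the training corpus.
# Whitespace tokenization and the n=13 threshold are illustrative assumptions.
def ngrams(text, n=13):
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def contaminated(eval_examples, train_corpus, n=13):
    train_grams = ngrams(train_corpus, n)
    return [ex for ex in eval_examples if ngrams(ex, n) & train_grams]
```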

author

My sense is that there's still quite a large capability gap between the expensive models and the cheaper open-source ones. An important factor here is the phenomenon of capability emergence, where some abilities may only be unlocked via scale. Unfortunately, the science of emergence is still very primitive: though we have scaling laws, we don't really have ways to predict when there might be phase changes that produce new capabilities.
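
To make that concrete: scaling laws of the Chinchilla form predict loss as a smooth power law in parameters and data, which is exactly why they can't flag discontinuous jumps in capability. A minimal sketch; the coefficients are roughly the published Hoffmann et al. (2022) fit and, like the 20-tokens-per-parameter heuristic, are used here purely for illustration:

```python
# Chinchilla-style scaling law: loss falls smoothly with parameter count
# and training tokens, with no term for sudden capability emergence.
def predicted_loss(n_params, n_tokens):
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

for n in (1e9, 10e9, 70e9):
    # Compute-optimal rule of thumb: ~20 training tokens per parameter.
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n, 20 * n):.3f}")
```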

* "capability overhang" great term! The political angle is well said, and needs to be repeated by folks with following. I covered the same thing in my piece in the societal section, and called it the "Keynote Era" of ML, equating them to tech companies rather than oiligopolies https://robotic.substack.com/

* RE Alpaca, we've been seeing that it's weaker on factual answers while being decent at this "fun stuff". I wonder if that's a result of being fine-tuned on instruction data on top of a weaker base model.
