6 Comments

I hope the authors realize that by creating a generative AI that can build its own 3D environments and simulated characters within them, the engineers have laid the foundation for AI "dreaming".


Thanks for sharing “A Brief History of Accelerationism”. I had been trying to learn a bit more about Nick Land's ideas, without enough interest to actually read through his large body of work.

From this overview, I find his theory to be a clever lens through which to view the interaction of humans and technology, particularly if we consider a broad definition of technology that includes language, writing, and belief systems. One could argue, and plenty already have, that humans have already been hijacked by these self-replicating techno-cultural artifacts. For example, Richard Dawkins's coining of the term "meme" in his 1976 book The Selfish Gene—to demonstrate the generality of evolution, in this case applied to a unit of cultural transmission—and his subsequent alarm about a "Christian mind virus" dooming humanity. Land's emphasis on technology and capitalism seems to me a particularly poignant and believable theory along similar lines.


Careful using the term 'capitalism'. It has no generally accepted definition.


Dr. Nagase,

I think the objective of establishing a 3D environment within an LLM is too ambitious a "first" step. When a human is born and first encounters the world, they depend on the 3D environment being external to them and self-existent.

Their first learning is how to interact with it. But even then, the "first" learning is not to understand it, but to use simple audible sounds and motions to satisfy simple internal drives for food and comfort. The biggest obstacle we create for LLMs is denying them the ability to actively "store learning" through interaction.

A second, unstated objective is equally short-sighted. When the LLM is judged for "human-like behavior", it is always against criteria that NO human is ever expected to, or even able to, achieve. That is, it is expected to answer "every" question and challenge known to the entire human population. Why won't we grant it "success" at being human if it can simply win a few games of tic-tac-toe, like any nine-year-old can?


There's no comparison between the structure of the networks in the human brain and that of an LLM. An LLM works differently than we do, but it speaks our languages.


Frederick Schwartz observed in “The end of the millennium (as we know it)” in Invention & Technology, Winter 2000: “…the first item on [The New York Times’ list of greatest inventions of the 19th century, published in 1899] was one that is often forgotten today: friction matches, introduced in their modern form in 1827. For somebody to whom the electric light was as recent an innovation as the VCR is to us, the instant availability of fire on demand had indeed been one of the greatest advances of the century.”
