14 Comments
xoxox

"I systems which allow people to have 'tools that help citizens, representatives, and institutions perceive reality more sharply, understand tradeoffs, contest power, and act more effectively'"

this [naively] assumes distribution/access to people, as opposed to gatekeeping by corps/gov to push their own agendas

Grayson Schaffer

This reminds me of the optimism we had around social media democratizing information and powering the Arab Spring. Seems just as likely that AI will be nationalized or enshittified and turned into a tool of political oppression. But I DO want to be an optimist!!!

Haye Hazenberg

Hall’s “political superintelligence” piece is really great and you’re right to stress the interface with people and institutions. AI could be an agent and information layer, but it should also help citizens reason together: the move from private preference to positions you can defend in public. There’s good evidence (Habermas Machine, Pol.is, Anthropic’s Collective Constitutional AI) that AI can also support this collective reasoning. I wrote about what that could look like as a chatbot design and how we might get a deliberative democratic feedback interface going: https://substack.com/@hayehazenberg/note/p-192435479

Abraham

A formal letter has been submitted to press@anthropic.com addressed to The Anthropic Institute regarding the boundary of the McCulloch-Pitts architecture and a logical derivation of Love, Good, and their absence from first principles. Submitted by Abraham Bravo Carvajal, founder of The 1x1 Life Institute, before any public platform or interview. Out of respect for what Anthropic built.

Gary Grossman

"...“tools that help citizens, representatives, and institutions perceive reality more sharply, understand tradeoffs..." If an AI personalizes responses that appeal to a user's preconceived ideas and viewpoints, then how is it possible for said users to perceive reality more sharply, etc.?

Jose Luis

What is this "Tech Tales" at the end of the newsletter?!?!

David Huang

Often the best part of the newsletter, stretching back nearly a decade

Jose Luis

But what does it mean?

Tate Cantrell

If Mr. Clark doesn't publish a book of sci-fi short stories from all of these tech tales at some point, I'll be vastly disappointed.

Check out the previous articles. My favourite is in #362 - The Sand -- https://importai.substack.com/p/import-ai-362-amazons-big-speech?utm_source=publication-search

Steve Wood

Oddly your “political superintelligence” content AND the DexDrummer content both resonate with this other Substack post by Bilawal Sidhu: https://creativetechnologydigest.substack.com/p/the-intelligence-monopoly-is-over?r=34tjr&utm_medium=ios …. First because it very much resembles a kind of “looking at what surrounds the gaps to fill the gaps” thing - a la the very simple photo-effect AI skill all our phones take for granted, but elevated to higher, more consequential uses - crossed with the rather mundane expectation that AI will be used to play drums. What a clash in the spectrum of AI utility! We’ve been trained into thinking about some AI threats as either quotidian (see OpenAI’s weird ads about learning how to do pull-ups) or grandly existential (beginning with Bostrom’s paper clips), but not the novel data-gathering and analytical uses (and who knows what else, which is the actual point) to which curious folks will put them. …..(Wow, is that the worst-written paragraph in history? Guess I better get AI to refresh it.)

Here’s what Apple Intelligence suggested:

Your “political superintelligence” and DexDrummer content align with Bilawal Sidhu’s Substack post: https://creativetechnologydigest.substack.com/p/the-intelligence-monopoly-is-over?r=34tjr&utm_medium=ios. It highlights the idea of “filling gaps by examining surroundings,” similar to the AI photo effects on our phones, but applied to more significant uses, alongside the expectation of AI playing drums. This showcases the diverse utility of AI, from mundane tasks like learning pull-ups (as seen in OpenAI’s ads) to existential threats (like Bostrom’s paper clips), and beyond to novel data gathering and analytical applications.

Pawel Jozefiak

Google's society of minds framing is interesting because it sidesteps the alignment problem by distributing it. Instead of one system you can't fully verify, you get many systems with incentive structures you (in theory) can design.

The institutional analogy is doing a lot of work here - markets and bureaucracies fail in predictable ways that took centuries to document. The "governing AI will increasingly mean verifying agent behavior" point is where I think the real engineering is happening, not in the models themselves.
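
(A minimal, hypothetical sketch of what "verifying agent behavior" could mean at the engineering level: each proposed agent action is checked against a declared policy before it runs. Every name here is illustrative, not an existing API.)

```python
# Hypothetical sketch: verify an agent's proposed action against a declared
# policy before execution. All identifiers are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ProposedAction:
    agent_id: str
    tool: str                      # e.g. "web_search", "transfer_funds"
    payload: dict = field(default_factory=dict)


# Policy: which tools each agent class may call, plus per-call limits.
POLICY = {
    "research-agent": {"allowed_tools": {"web_search", "read_file"}},
    "ops-agent": {"allowed_tools": {"send_email", "transfer_funds"},
                  "max_transfer": 100.0},
}


def verify(action: ProposedAction, agent_class: str) -> tuple[bool, str]:
    """Return (approved, reason); deny anything the policy does not cover."""
    rules = POLICY.get(agent_class)
    if rules is None:
        return False, f"unknown agent class {agent_class!r}"
    if action.tool not in rules["allowed_tools"]:
        return False, f"tool {action.tool!r} not permitted for {agent_class}"
    if (action.tool == "transfer_funds"
            and action.payload.get("amount", 0.0) > rules.get("max_transfer", 0.0)):
        return False, "transfer exceeds per-call limit"
    return True, "ok"


if __name__ == "__main__":
    action = ProposedAction("agent-7", "transfer_funds", {"amount": 250.0})
    print(verify(action, "ops-agent"))  # (False, 'transfer exceeds per-call limit')
```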

Mira

"political superintelligence" is doing a lot of work as a phrase lol. like... is that just lobbying but faster? or are we talking about something genuinely new here?

Trung Doan

The AI society paper is thought-provoking, but its landscape seems engineered and brittle:

• It jumped from debate-like reasoning traces to a civilization-scale society manifesto. That's a huge jump.

• Someone must design and maintain that AI-society landscape. Sandcastles do not self-assemble. At a minimum, it needs anti-defection and counter-terrorism machinery, plus a law-abiding attitude in most agents.

• And in recursive SI, compute scarcity pushes toward dominance, not coexistence.

Steeven

On political superintelligence, I wonder how well it would go if you were required to be interviewed for 10 minutes on your preferences before you actually voted. The issue is with stuff like rent control. Say a lot of people have a direct preference for rent control. Does the AI have to convince them that rent control doesn’t really work, or just tell them the most effective way to maximize rent control? Who decides what the AI does in that case? By default, it looks like the frontier labs are going to start deciding the political opinions of a ton of people. The current analogue, which isn’t quite as strong, is how the CCP suppresses anti-China topics on TikTok and moves opinions that way.

I’m almost confused at how bad the Seven Nation Army version is for robots. Like clearly this approach isn’t even close to working. Maybe I should become a drummer; I can do better than robots.

On agents and society, it’s not really “even if we solve alignment”; I’d expect more like a society where semi-aligned, somewhat strong agents are proliferating and difficult to turn off, and some have really bad goals. It’s like normal human society on fast-forward, with more super-smart psychopaths.

Fun tech tale!