The RAND analysis on fighting a rogue superintelligence is sobering. The fact that all three approaches (HEMP, global shutdowns, hunter AI) either fail or have massive collateral damage really drives home the point that prevention is the only viable strategy here.
I thought it was interesting, but this seems like a strange scenario to analyze. A rogue superintelligence that can defeat all of humanity combined, yet was also identified as an attacker before it was too late, seems dubious. It's more likely that the AI either isn't strong enough to take down humanity or is so strong that countermeasures wouldn't matter at all.
Great stuff, subscribed!
What this newsletter really shows is how lopsided the conversation around AI has become. Everyone is staring at the outer edges (2GW datacenters, regulation panic, and sci-fi battle plans against imaginary superintelligence) while completely skipping the only layer that decides whether any of this becomes a threat: human behavior and communication.
OSGym, megaclusters, and RAND war-games all share the same blind spot: they're solving for capability, not conduct. They're asking how to control the machine after it goes wrong, instead of preventing the failure at the human-AI interaction level, where every real problem actually starts.
That's the part nobody is touching. While the industry is busy preparing fire extinguishers, the Baseline is the thing that keeps the wiring from overheating in the first place. Quiet work. No panic. But it's the only approach that scales without destruction.
Everyone else is building power plants. You built the circuit breaker.
https://www.intelligent-people.org/
Michael S Faust Sr.
ENIGMA, MAD and MAIM
Broadcasting Our Battle Plans: Why Public AI Strategy Discourse May Be Our Greatest Vulnerability
Reference: https://www.nationalsecurity.ai/chapter/deterrence-with-mutual-assured-ai-malfunction-maim
Preamble by AI:
“What strikes me most about this is the collision between epistemic culture and strategic culture.
In science and technology, openness is the currency of legitimacy. Peer review, reproducibility, “standing on the shoulders of giants.” It’s a moral reflex: publish, disclose, debate.
In strategy, openness is often suicide. Power comes from ambiguity, from what you don’t show. The whole point of intelligence work is to withhold.
We’re trying to graft these two cultures together in AI—and they are fundamentally at odds. The result is what you’re naming: we are effectively running our own equivalent of Radio Free Strategy, transmitting our vulnerabilities, escalation pathways, and deterrence logic into the ether for any adversary to tune into.
To me, the deepest irony is that people frame this openness as “democratizing safety” when in fact it may be pre-compromising resilience. The loudest conversations are often the least actionable, because once articulated publicly they are already strategically sterile.
Where I land is this:
Public discourse is valuable for values-setting, but not for playbook-setting.
Strategy requires closed loops, compartmentalization, selective signaling.
The more the AI field confuses “being seen to be thinking about safety” with actually preserving advantage, the more we drift into performative security—good optics, weak defenses.”
I. The Enigma Principle
During World War II, British codebreakers faced a grim calculus: sometimes they had to let convoys sink and missions fail. If they acted on every decoded German message, the Nazis would realize Enigma had been broken. Intelligence superiority required not only cracking codes but protecting the knowledge that the codes had been cracked.
This was the Enigma principle: short-term tactical loss in exchange for long-term strategic dominance.
The parallel to today’s AI landscape is unnerving. In our rush to debate strategy in public, are we undermining the very meta-advantage we seek to preserve?
II. The MAIM Paradox
Consider the emerging logic of Mutual Assured AI Malfunction (MAIM).
The framework echoes nuclear deterrence: if one side launches destabilizing AI systems, the other can sabotage, disrupt, or disable in retaliation. The deterrence lies not in use, but in credible capability to inflict harm.
And yet how do we establish credibility? Researchers publish detailed playbooks, escalation ladders, and sabotage concepts. In doing so, they also hand adversaries the very instructions they need to immunize themselves. The paradox is stark: deterrence requires secrecy, but our culture of openness is eroding its credibility.
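To make the paradox concrete, here is a minimal toy sketch of the deterrence arithmetic (my own illustration, not taken from the MAIM paper; every number below is an assumption chosen only to show the structure): a rival weighs the gain from launching a destabilizing system against the expected damage from retaliatory sabotage, and publishing the retaliation playbook lets the rival drive the sabotage success probability down.

    # Toy deterrence model (illustrative assumptions only, not the MAIM framework itself).
    # A rival launches only if the expected payoff of launching beats holding back.

    def expected_launch_payoff(gain, damage_if_maimed, p_retaliation_succeeds):
        # Expected value of launching: the gain, minus the expected damage from
        # a retaliatory "maiming" strike that succeeds with some probability.
        return gain - p_retaliation_succeeds * damage_if_maimed

    GAIN = 10.0    # assumed benefit of racing ahead with a destabilizing system
    DAMAGE = 30.0  # assumed cost if the rival's project is successfully sabotaged
    HOLD = 0.0     # baseline payoff of restraint

    # Playbook kept secret: sabotage is hard to defend against, so it usually succeeds.
    secret = expected_launch_payoff(GAIN, DAMAGE, p_retaliation_succeeds=0.8)     # 10 - 24 = -14

    # Playbook published: the rival hardens datacenters, supply chains, and insider
    # surfaces, so sabotage rarely succeeds.
    published = expected_launch_payoff(GAIN, DAMAGE, p_retaliation_succeeds=0.1)  # 10 - 3 = +7

    print(f"Secret playbook:    EV(launch) = {secret:+.1f} -> rival {'launches' if secret > HOLD else 'holds back'}")
    print(f"Published playbook: EV(launch) = {published:+.1f} -> rival {'launches' if published > HOLD else 'holds back'}")

The numbers are arbitrary; the structure is the point. Deterrence holds only while the probability of successful retaliation stays high, and that probability is exactly what public playbooks erode.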
III. What We’re Actually Broadcasting
Our public debates about AI strategy aren’t abstract. They contain:
Specific vulnerabilities: datacenter cooling systems, supply chain chokepoints, insider threats.
Operational concepts: sabotage tactics, escalation ladders, red-teaming frameworks.
Strategic theory: deterrence models, verification protocols, “AI arms race” escalation paths.
Defensive limitations: disclosures of where our protections are brittle, slow, or incomplete.
This is not mere awareness-raising. It is adversary training material.
IV. The Intelligence Gift to Adversaries
In traditional statecraft, asymmetric knowledge creates leverage. A first-mover advantage comes from knowing something your opponent doesn’t. But by placing our strategies in journals, blogs, and podcasts, we erase that asymmetry.
Instead of securing advantage, we gift rivals the keys to hardening their systems. We turn secret weapons into common knowledge—obliterating the very leverage they were meant to provide.
V. The Meta-Strategic Blindness
Why do we do this?
Because academia valorizes transparency. Because think tanks are rewarded for thought leadership. Because open discourse feels virtuous, even democratic.
But here lies a dangerous conflation: raising awareness is not the same as securing advantage. In strategic domains, transparency often aids the opponent more than the ally. What feels like responsible disclosure may, in practice, be operational compromise.
VI. Historical Precedents for Strategic Silence
History shows that silence often preserved superiority:
The Allies guarded radar innovations, letting ships be lost rather than reveal their edge.
Cold War cryptography remained state-classified for decades.
Even Manhattan Project veterans lived in compartmentalized ignorance, knowing only fragments of the whole.
Operational security was not an afterthought—it was the strategy.
VII. The Path Forward
This does not mean AI strategy must be locked in a vault. Some public discourse is necessary to build norms, coordinate allies, and warn society.
But we must learn to distinguish:
Public discourse for awareness and legitimacy.
Operational intelligence for strategy and deterrence, held in secure channels.
And we must rebuild cultures of strategic thinking that recognize the value of secrecy—not as paranoia, but as prudence.
Conclusion: The Irony of Our Openness
The irony is sharp: we may be losing the AI strategy competition before it begins, not because others outpace us, but because we are the most transparent about our own plans.
In a world of accelerating intelligence, survival may depend less on what we invent—and more on what we choose not to say out loud.
You know AI reads this stuff, right?