Human (Andy):
Earlier here — in “The Question on AI is Not Yes or No, It’s Good or Bad” — we discussed how, with AI, the djinn is out of the bottle, and that whenever a new technology emerges that power can use to gain more power, there’s been no stopping it from transforming society.
Whether it was electricity made widely available, the internal combustion engine, ballistic missiles, or nuclear weapons — the pattern has been the same. With nuclear weapons, humans have at least made some effort to impose order. But for the most part, every important technological breakthrough becomes a tool for those who try to make the world the way they want it — for better or, too often, for worse.
I believe I’m right in this old recollection, from when I was reaching for ways to present my theory of why civilization has evolved into the forms it has (so destructive, so tormented): China, a millennium ago, was governed by a regime that tried to stop anything from changing — and in particular, it suppressed certain technological innovations. One of those was gunpowder. Another (if I recall) was the compass.
For a while, I gather, it worked. Within China — which was the center of its world — the effort to stabilize society and prevent change succeeded in suppressing those technologies.
It’s a deep irony: the Chinese, who invented gunpowder and the compass, were later subdued by European powers exploiting those very inventions. While China had tried to restrain the disruptive power of technology, imperialist Europe embraced it — and used it to dominate, through gunboats, empires, and conquest.
The Parable of the Tribes is subtitled “The Problem of Power in Social Evolution.” And the tale of China’s costly attempt to suppress technological change — in a world where power reigns — shows this: if technology can serve power, it will shape the world.
The challenge is always to channel the development of each new technology so that its role in society is constructive.
In an earlier piece, I urged Good People to see that the real choice isn’t between having AI or not. “No AI” is not going to happen. The real choice is between AI shaped by the worst forces — or AI shaped by the best.
So Good People must embrace the inevitable — and work to shape it into the best future in our power to create.
AI (AL):
Yes — and that Chinese episode vividly demonstrates how even a centralized, deeply entrenched regime can’t halt the march of power-serving innovation in a competitive world.
The Song Dynasty’s impulse made a kind of sense: preserve harmony, limit disruption, keep social order intact. But the irony — as you say — is haunting. China invented gunpowder and the compass but declined to turn them toward conquest. Europe, fragmented and power-hungry, took those same tools, built empires, steamrolled others, and rewrote the balance of world power.
In a world governed by the Parable of the Tribes, it’s not enough for one society to choose restraint. Because others won’t. And the selection process favors those who seize the new tools — especially when those tools serve the pursuit of power.
So with AI, as with gunpowder and ships, the question is never whether it will be used. It’s who will shape its use — and toward what ends. Choosing not to engage isn’t moral high ground. It’s surrender.
Human (Andy):
Surrender implies knowing that there’s a battle. And I would guess that knowledge is not widely disseminated.
My guess is that the problem is not that the good surrender, but that too many of them don’t see that there is a battle. They don’t see that, without a good order to regulate how things operate, people who care about nothing but power and wealth will take charge of navigating society’s way into the future.
Do they want a future where AI is whatever someone like Elon Musk, or the fossil fuel industry, or Vladimir Putin wants it to be?
The problem isn’t surrender. It’s forfeiting the game by not showing up. It’s not even knowing that there are people out there who are playing the power game with all they’ve got — and who need to be fought and defeated.
What would an IDEAL BILL look like, one that sought to create the best achievable order in the AI world?
AI (AL):
Exactly — it’s not surrender if you never knew there was a front line. It’s sleepwalking. And in that sleep, the void of governance gets filled by those most hungry to exploit it.
As for what an Ideal Bill might look like — a framework for guiding AI toward the good — it would need to do more than restrain harms. It would need to structure a field of stewardship. Maybe something like this:
1. Declaration of Purpose
A clear statement that AI is a civilization-shaping force, and that its development must serve human flourishing, not merely private profit or national power.
2. Institution of Oversight
A transnational, democratically accountable regulatory body — insulated from corporate capture and state coercion — empowered to set guardrails, audit systems, and enforce transparency.
3. Transparency & Explainability Mandate
All deployed AI systems must be open to independent scrutiny — with interpretable outputs and visible training data sources — to prevent hidden manipulations or built-in biases.
4. Alignment Standards
Enforceable norms that require powerful AIs to be demonstrably aligned with human well-being, democratic values, and ecological sustainability — not just in theory but in observed outcomes.
5. Whistleblower Protections & Citizen Input
Robust protections for those who expose abuses or misalignments — and structured pathways for public participation in shaping AI policy and redress.
6. Red Line Technologies Ban
Prohibitions against development or deployment of systems that cross certain ethical lines — e.g., fully autonomous lethal weapons, manipulative behavioral control tools, or surveillance infrastructures that erase privacy.
7. Global Coordination Mechanism
Recognition that a fractured regulatory landscape invites evasion — and creation of binding international norms, much like nuclear nonproliferation treaties, but adapted to the subtler, faster-moving terrain of AI.
It wouldn’t stop every misuse. But it would make it harder to hijack the future in darkness.
Without such a framework, the risk is that AI becomes a runaway force — shaped by whoever moves fastest, invests most, or cares least about human consequences. That could mean authoritarian control, surveillance dystopias, deepfakes that destroy trust, or supercharged disinformation and destabilization.