Jordan's Journey continues to be the top post on Metamodern Wannabes.
Yesterday Jordan Hall posted on X.
I’m going to throw a bit of a long ball here, but it seems timely.
The AI discourse has stalled (and festered) because of at least two category errors.
1. AI cannot be properly governed by either the Market or the State. There is a lot necessary to establish this, but for now, I’ll simply propose it.
Fortunately, we have been operating for around half a millennium with an increasing blind spot that has led us to believe that these two were the only choices.
They are not. The third category (and, as it turns out, the more fundamental) is sometimes called the Commons. It is this that is proper to governance of AI. (You might note the connection to AI unemployment).
Another name for the Commons is the Church. Again, a lot is behind this, but here I’ll just say that unless and until we recognize this fact and, thereby, revive the proper form and responsibility of the Church, things will continue to unravel.
So, yes, I’m very much saying that the Church is the (only) possible way to govern AI. Look forward to this discussion!
2. AI cannot be aligned with humanity. What I’m about to say will be esoteric for those who are not familiar with the notions, but they are quite precise. AI cannot be aligned with “humanity” because alignment must be at the level of the Soul and “humanity” has no Soul.
Fortunately, each individual human has a Soul, and so a fully peer-to-peer AI can be aligned. Intimacy and care at the very personal level.
Jim Rutt, who was a co-creator of Game B with Jordan Hall, responded.
You raise some critical points about AI governance and the role of sacred institutions, but we need to be careful about how we frame and operationalize these ideas. Let me explain why.
First, I agree that the market-state dichotomy isn’t enough to handle AI governance. The complexity and potential impact of AI systems go far beyond traditional governance models. But invoking "the Church" as the solution requires some serious unpacking.
When we talk about the Commons—and I appreciate you making that connection—we’re discussing shared resources and governance mechanisms that emerge from community needs and values. That’s fundamentally different from historical church structures, which often hardened into rigid hierarchies claiming metaphysical authority. The Commons works because it’s rooted in practical necessity and empirical outcomes, not because it derives power from sacred status.
Your suggestion that the Church should govern AI reminds me of what I’ve written about the "sacred" in relation to complex systems: it can be a useful operational tool, but we have to remember that’s all it is—a tool, not a metaphysical truth. The moment we forget that distinction, we risk creating new forms of dogma just as problematic as pure market or state control.
On AI alignment and the concept of "Soul"—this is where I have to part ways with you. While I get the impulse to frame AI alignment in terms of deep personal connection, bringing in metaphysical concepts like "Soul" muddies the waters instead of clarifying the challenge. The alignment problem is, at its core, about designing systems that reliably pursue beneficial outcomes while respecting human values and autonomy. We should be talking about measurable behaviors, verifiable constraints, and empirical outcomes—not metaphysics.
The idea of an "AI priest class" particularly concerns me. History shows that priesthoods, even when they start with noble intentions, tend to evolve into self-perpetuating power structures that resist change and empirical scrutiny. Instead of priests, we need transparent governance structures that blend technical expertise with broad stakeholder representation.
That said, there’s a kernel of truth in your proposal that’s worth exploring. Governing AI—much like managing natural ecosystems—means grappling with staggering complexity, where purely analytical tools may fall short. This is where well-constructed narratives and frameworks—your "sacred" approach—might actually help with sense-making and decision-making.
But these frameworks have to stay grounded in operational reality. They should be subject to revision as our understanding evolves and circumstances shift. The moment we start treating them as immutable truth rather than practical tools, we risk recreating the same rigid, dogmatic structures the Enlightenment helped us move past.

Think of it this way: when I mentor young people through Heinlein novels, I’m not asking them to treat the stories as sacred text. I’m using narrative as a tool to help them wrestle with complex ideas—responsibility, society, personal growth. The stories work because they’re understood as tools, not dogma.

Likewise, any AI governance framework—whether we call it Commons, Church, or something else—must keep this practical, operational focus. It should be judged by its ability to produce beneficial outcomes, not by appeals to metaphysical authority or traditional hierarchies.

Bottom line: while I agree we need governance structures beyond the market and state, those structures must emerge from practical necessity and empirical observation—not religious or metaphysical claims. The answer isn’t to resurrect traditional church models but to develop new, adaptive frameworks capable of handling AI’s complexity while staying grounded in operational reality and empirical validation.

The real challenge isn’t finding a sacred authority to govern AI. It’s building governance systems that can effectively manage complexity while staying adaptable and accountable. That means combining technical expertise with broad stakeholder input—all while keeping a sharp focus on measurable outcomes, not metaphysical justifications.
I am all for building bridges between the Secular and the Sacred. That is probably not possible with those Christians who claim the Commons as their own, or who claim Christianity as the only way.
With my personal Secular orientation, I am in agreement with Jim Rutt.
There was another signal in the noise of the discussion on X.
John Ash posted his response.
"Bottom line: while I agree we need governance structures beyond the market and state, those structures must emerge from practical necessity and empirical observation—not religious or metaphysical claims. The answer isn’t to resurrect traditional church models but to develop new, adaptive frameworks capable of handling AI’s complexity while staying grounded in operational reality and empirical validation."
https://medium.com/@speakerjohnash/the-cognicist-theory-of-capitalism-e104e2b8f072
Ŧrust operates as a dynamic attention and credibility mechanism in democratic LLMs that achieves your stated governance goals in several key ways…
The best I can do is be a cheerleader for John Ash. I hope his project attracts more resources. He made additional comments in the thread…
Both Elon and Trump, for example, claimed that Covid cases would disappear. They are on record saying that. That is not how reality turned out. We can disagree about the reasoning why, but it’s hard to ignore certain anchor predictions.
And that is only one of many reasons why Elon and Trump cannot be trusted.
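To make Ash’s Ŧrust idea a little more concrete for myself, here is a minimal sketch of how a prediction-anchored credibility weight could work. To be clear, this is only my toy interpretation under my own assumptions, not Cognicism’s actual mechanism; the names (`TrustLedger`, `record_outcome`) and the simple update rule are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TrustLedger:
    """Toy model of a prediction-anchored credibility weight.

    Every source starts at a neutral weight. Resolved predictions
    ("anchor predictions," in John Ash's phrasing) nudge the weight
    up or down, and the weight then scales how much attention that
    source's future claims receive.
    """
    scores: dict = field(default_factory=dict)
    neutral: float = 0.5        # prior weight for an unknown source
    learning_rate: float = 0.1  # how fast one outcome moves the score

    def record_outcome(self, source: str, was_correct: bool) -> float:
        """Move a source's score toward 1.0 or 0.0 after one resolved prediction."""
        current = self.scores.get(source, self.neutral)
        target = 1.0 if was_correct else 0.0
        self.scores[source] = current + self.learning_rate * (target - current)
        return self.scores[source]

    def weight(self, source: str) -> float:
        """Attention weight applied to a source's current claims."""
        return self.scores.get(source, self.neutral)

# Example: a "cases will disappear" prediction that failed to come true
# costs its source credibility weight on every resolution.
ledger = TrustLedger()
for _ in range(3):
    ledger.record_outcome("source_a", was_correct=False)
print(ledger.weight("source_a"))  # drifts below the 0.5 neutral prior
```

Cognicism’s real design involves language models and far richer context than this, but the sketch captures the point Rutt and Ash seem to agree on: weight flows toward sources whose claims survive empirical validation, and away from those whose anchor predictions fail.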
My reply to Jordan’s tweet was a link to Jim Rutt’s response, quoted above:
[https://x.com/jim_rutt/status/1884646488040456410]