
Automationism: Collective Ownership of AI Production

  • Writer: landonrshumway
  • Dec 24, 2025
  • 10 min read


Part 2: Decentralization


A large, upside-down pyramid hovers above a plaza at sunset. The city skyline is visible in soft pastel tones.


We don't need money to collaborate.


Imagine you wake up tomorrow and the government of your country no longer exists. What happens to the money in your bank account? It's worthless. There is nothing backing it now. Nothing ever really was.


And yet... society depends on this illusion of currency to function. We all have to believe that our units of currency mean something in order to play this curious game. Why? Because I don't trust you, and if I don't trust you, I will not help you meet your needs on the mere hope that you will reciprocate and help me meet mine. We must make an arrangement that gives me confidence my needs will be met if I help you meet yours. This is the crux of economies at scale: people must believe they are getting something by collaborating, else why bother?


Notice that I said will not help you instead of cannot. There are no physical laws preventing us from collaborating without formal arrangements or money. Families and friends commonly meet each other's needs, not wondering if their own needs will be met. We call this type of trust love, and it is arguably more fulfilling to acquire than money.


But love simply does not scale. It is intrinsically linked to relationships between individuals, such relationships take time to mature, and we are biologically limited in how many of them we can maintain. This is why the ideals envisioned by the counterculture movement of the 20th century were destined to fail from the beginning. Money scales, love doesn't.


At least not yet. Perhaps someday, with the aid of technology.


In my previous post, we introduced the concept of automationism, a theoretical economic model that places the ownership of AI-driven production directly into the hands of communities, organized into self-sustaining units known as cells. This second post explores how such cells can coordinate at scale within networks.


But before we design a new system, we must first understand the current rules of this game we call society. Once we understand those rules, we can begin to create better ones: keeping what works in our current state of affairs, discarding what does not, so that our descendants can go on to improve them further with time.


So then, if we want to figure out how to scale trust without money, let us consider...


The Inevitability of Hierarchy


Why do humans organize into hierarchies?


The question seems almost naive until you really sit with it. From the earliest tribal chieftains to modern corporate executives, from ancient priesthoods to contemporary governments, humans consistently arrange themselves into pyramidal structures of command. Is this merely cultural habit, or something deeper—something biological?


Consider the practical limitations of human cognition. Anthropologist Robin Dunbar famously proposed that humans can maintain stable social relationships with approximately 150 individuals, a number he linked to the size of our neocortex. Beyond this threshold, we simply cannot track the complex web of reciprocity, obligation, and trust that direct relationships require.


Whether or not that number is accurate, the concept highlights a fundamental problem: human societies vastly exceed 150 people, yet we still need to coordinate. Hierarchy emerges as our species' solution to this cognitive limitation. We cannot trust everyone, so we trust someone, who trusts someone else, who trusts someone else, until a chain of trust extends across millions, across billions.


So then, if hierarchy is such an effective solution to human cognitive limitations, why challenge it?


Because hierarchy concentrates power, and concentrated power corrupts. Across civilizations and centuries, we observe the same pattern: those at the top accumulate resources far beyond their contribution, while those at the bottom receive the minimum incentive necessary to maintain their cooperation.


Consider the modern corporation. The ratio of CEO compensation to median worker pay in the United States has grown from approximately 20:1 in the 1960s to over 300:1 today. Has the cognitive labor of leadership become fifteen times more valuable in sixty years? Or have those at the apex simply leveraged their position to capture more of the pool?


Hierarchy also creates inefficiency through latency and information loss. Every layer of orchestration is a filter through which information must pass—both upward and downward. At each layer, information is compressed, interpreted, and potentially distorted. By the time ground-level reality reaches the apex, it may bear little resemblance to the original signal. And by the time direction reaches those at the bottom, it may be divorced from the conditions it was meant to address.
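
To see how quickly this compounds, consider a toy calculation (the numbers are illustrative, not measurements): if each layer of orchestration preserves, say, 90% of the signal passing through it, fidelity decays exponentially with depth.

```python
# A toy illustration (not a model of any real organization): if each
# layer of a hierarchy preserves only a fixed fraction of the signal
# passing through it, end-to-end fidelity decays exponentially.

def end_to_end_fidelity(per_layer_fidelity: float, layers: int) -> float:
    """Fraction of the original signal surviving after `layers` hops."""
    return per_layer_fidelity ** layers

for layers in (1, 3, 6, 10):
    f = end_to_end_fidelity(0.9, layers)
    print(f"{layers:2d} layers -> {f:.0%} of the signal survives")
```

Six layers, hardly unusual in a large corporation, and nearly half the original signal is gone.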


Yet for all its flaws, we have found no alternative that scales. This is poised to change, however, with the arrival of AI.


But we're getting ahead of ourselves. Before we discuss how we might leverage AI to transcend hierarchy, we must first dissect its anatomy.


The Anatomy of Command


Every hierarchy, regardless of its cultural trappings, consists of three fundamental components:


Actuators: Those who execute. In traditional societies, these are workers, laborers, soldiers—the hands and feet of the social body. Actuators implement instructions. They transform directives into reality through physical or cognitive labor. The farmer who plants the seed, the factory worker who assembles the product, the engineer who writes the code—these are the actuators of society.


Orchestrators: Those who direct. Managers, executives, chieftains, priests—the minds that coordinate actuators toward collective ends. Orchestrators exist in layers, each one directing the layer below while receiving direction from above. A curious thing about orchestrators: they are also actuators, for the work of coordination is itself a form of labor. Yet the higher one rises in a hierarchy, the less one acts and the more one orchestrates. At the apex, the orchestrator's only product is direction itself. And orchestrators are powerful not because they themselves are powerful, but because the actuators beneath them execute their commands. Without actuators, an orchestrator's commands cannot be realized, which is why orchestrators must incentivize actuators.


Incentive: The substrate of motivation. The reason actuators continue to actuate, the fuel that keeps the hierarchical engine running. Incentive need not be money. Throughout history, hierarchies have sustained themselves through many forms of incentive: food and shelter, certainly, but also prestige, honor, spiritual reward, the promise of paradise, the threat of punishment, or simply the sense of belonging to something greater than oneself. What all these have in common is that they give actuators a reason to cooperate with orchestrators. Remove the incentive, and the hierarchy collapses. Actuators stop actuating. The social body dies. The game comes to an end, until another begins.


Diagram showing a hierarchy: Orchestrators at top, leading to Orchestrators/Actuators, then Actuators. Includes icons and directional arrows.

This, then, is the machinery of human society: orchestrators provide direction, actuators execute direction, and incentive flows as compensation for this execution. Money is merely the most efficient form of incentive we have thus far invented—liquid, divisible, universally accepted. It is not the only possible form, and in the future, it may not be needed at all.
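
A deliberately toy sketch may help fix the three components in mind (all names here are hypothetical): direction flows downward, execution flows upward, and incentive is the fuel that keeps the loop running.

```python
# Hypothetical toy model of the anatomy of command: direction flows
# down, labor flows up, and incentive keeps the actuators actuating.
# Set the incentive to zero and the hierarchy collapses.

def run_hierarchy(directive: str, actuators: list[str],
                  incentive: float) -> list[str]:
    if incentive <= 0:
        return []  # actuators stop actuating; the social body dies
    return [f"{worker} executes: {directive}" for worker in actuators]

print(run_hierarchy("assemble the product", ["worker_1", "worker_2"], 10.0))
print(run_hierarchy("assemble the product", ["worker_1", "worker_2"], 0.0))
```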


The Automationist Reconstruction


What if actuators required no incentive?


This question sounds absurd applied to human beings—and it is. Humans must be motivated to execute direction. But artificial intelligence is not human. An AI system requires no salary, seeks no prestige, fears no punishment, and hopes for no heaven. It executes instructions because execution is what it does. Its "incentive" is electricity.


Here we glimpse the revolutionary potential of intelligent automation: for the first time in history, we can decouple execution from human motivation. Under the automationist model, the concept of hierarchy reconstructs itself around this new reality. It is the members of a cell, all of them, who orchestrate direction. There is no apex, no concentration of orchestrative power. Humans become egalitarian orchestrators in a direct democracy. Strategic decisions—what to produce, how to invest, whom to partner with—are made democratically by the community.


As humans govern, intelligent automation takes on the role of actuator. AI systems manage production, logistics, quality control, customer service—the full spectrum of execution that once required human labor. They do not require incentive to cooperate. They do not strategically withdraw effort to negotiate better compensation. They simply execute the direction provided by the collective orchestrator.


This arrangement dissolves the traditional incentive problem. There is no longer a pool of motivation flowing from orchestrators to actuators, which the orchestrators might capture for themselves. Instead, the output of the actuators—the goods and services produced by intelligent automation—flows directly to the members of the cell as shared prosperity. Every cell member is simultaneously an orchestrator (participating in democratic direction) and a beneficiary (receiving the fruits of automated labor).
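
A minimal sketch, under drastic simplifying assumptions, of how these roles fit together in a cell (every name and number below is invented for illustration): members vote on direction, automated actuators execute the winning directive without compensation, and the output is divided equally among the members.

```python
from collections import Counter

# Hypothetical sketch of an automationist cell: every member is an
# orchestrator (they vote on direction) and a beneficiary (they share
# the output). The actuators are automated and require no incentive.

class Cell:
    def __init__(self, members: list[str]):
        self.members = members

    def orchestrate(self, votes: dict[str, str]) -> str:
        """Direct democracy: the directive with the most votes wins."""
        return Counter(votes.values()).most_common(1)[0][0]

    def actuate(self, directive: str) -> float:
        """Automated execution: no salary, prestige, or punishment
        involved (stubbed here as a fixed yield)."""
        return 100.0  # units of goods and services, for illustration

    def distribute(self, output: float) -> dict[str, float]:
        """Output flows directly to members as shared prosperity."""
        share = output / len(self.members)
        return {member: share for member in self.members}

cell = Cell(["ana", "bo", "chen", "didi"])
directive = cell.orchestrate({"ana": "grow food", "bo": "grow food",
                              "chen": "make tools", "didi": "grow food"})
print(cell.distribute(cell.actuate(directive)))
# {'ana': 25.0, 'bo': 25.0, 'chen': 25.0, 'didi': 25.0}
```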


Diagram titled "Automationist Cell" showing two sections: "Actuators" with robots, "Orchestrators" with people, connected by an arrow.

But Does This Scale?


Here the skeptic raises a valid objection. Perhaps a single cell can coordinate without hierarchy—one hundred, even one thousand people engaged in direct democracy, directing their automated production systems. But a nation? A planet? Billions of human beings?


We return to the problem of trust. A cell member might trust their fellow members, but what of the cell on the other side of the continent? What of the cell in another nation, speaking another language, holding different values? How do cells coordinate without hierarchy emerging to manage their relationships?


Our bodies offer an intriguing solution to scaled coordination.


Consider this: you contain trillions of cells, organized into systems of magnificent complexity, yet there is no "boss cell" directing the operation. No single neuron holds executive authority over the liver. No council of cells convenes to allocate resources. Instead, your cells coordinate through chemical signals—hormones, neurotransmitters, cytokines—that communicate needs and resources throughout the organism in real time.


When your muscles need energy, they release signals indicating glucose depletion. Your liver receives these signals and releases stored glycogen. Your pancreas detects rising blood sugar and releases insulin. This is not hierarchy; it is a network of mutual response. Each organ attends to its function while remaining responsive to the needs of the whole. No conscious direction is required. The coordination emerges from the system's architecture.
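
We can capture the pattern in a toy feedback loop (the numbers are arbitrary and the physiology is grossly simplified): each organ reads a shared signal and responds locally, and the system settles near its set point without any controller issuing commands.

```python
# Toy homeostasis loop (arbitrary numbers, not real physiology): each
# "organ" responds only to the shared signal it can sense. No cell
# commands another, yet the system converges on its set point.

SET_POINT = 100.0
blood_sugar = 60.0  # depleted, as after muscle exertion

for step in range(8):
    if blood_sugar < SET_POINT:    # liver senses the shortfall
        blood_sugar += 0.5 * (SET_POINT - blood_sugar)  # releases glycogen
    elif blood_sugar > SET_POINT:  # pancreas senses the excess
        blood_sugar -= 0.5 * (blood_sugar - SET_POINT)  # releases insulin
    print(f"step {step}: blood sugar = {blood_sugar:.1f}")
```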


Can we replicate this pattern at the societal level? How can automationist cells coordinate with one another in a similar manner?


A Protocol for Trust


The biological metaphor points us in the right direction, but cells in our bodies don't cheat. They don't promise to deliver resources and then disappear. Human organizations have no such automatic honesty—so how do we set up an environment where automationist cells honor their agreements without resorting to courts, contracts, and the coercive power of centralized authority?


Over the past fifteen years, blockchain technology has demonstrated something remarkable: we can establish trust through cryptography rather than institutions. Bitcoin proved that strangers across the globe can agree on who owns what without a bank mediating between them. Ethereum proved that agreements can execute themselves automatically when conditions are met—no lawyers required. These systems aren't perfect, but they've shown that decentralized trust is possible at scale.


We can build on this foundation.


Imagine a trading protocol designed specifically for automationist cells. When two cells agree to exchange goods or services, both put something at stake—locked into the agreement by cryptography, not controlled by either party. If both confirm successful completion, the trade executes and both are rewarded. If one party cheats or disappears, their stake is reduced. The network remembers every transaction, building a public history that lets cells evaluate potential partners based on their past behaviors.


The details matter, and they're complex. But the core principle is simple: structure the incentives so that honest trade is the most profitable strategy, and dishonest behavior guarantees loss. No kings required. No courts. Just mathematics and mutual benefit.
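
To make the principle concrete, here is a minimal sketch of the staking logic, assuming a simple two-party escrow. A real protocol would need dispute resolution, identity, and reputation tracking, and might slash only part of a stake; this toy version shows only how forfeiture makes walking away unprofitable.

```python
# Minimal, hypothetical sketch of a staked two-party trade: both cells
# lock a stake, the trade settles cleanly on mutual confirmation, and
# a party that fails to confirm forfeits its stake to the other.

class StakedTrade:
    def __init__(self, party_a: str, party_b: str, stake: float):
        self.stakes = {party_a: stake, party_b: stake}
        self.confirmed: set[str] = set()

    def confirm(self, party: str) -> None:
        if party in self.stakes:
            self.confirmed.add(party)

    def settle(self) -> dict[str, float]:
        """Return each party's payout once the trade window closes."""
        if len(self.confirmed) == len(self.stakes):
            return dict(self.stakes)  # honest trade: stakes returned
        payouts = {p: (s if p in self.confirmed else 0.0)
                   for p, s in self.stakes.items()}
        # stakes forfeited by non-confirming parties compensate the rest
        forfeited = sum(s for p, s in self.stakes.items()
                        if p not in self.confirmed)
        for party in self.confirmed:
            payouts[party] += forfeited / len(self.confirmed)
        return payouts

trade = StakedTrade("cell_a", "cell_b", stake=10.0)
trade.confirm("cell_a")   # cell_b never confirms
print(trade.settle())     # {'cell_a': 20.0, 'cell_b': 0.0}
```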


With such a protocol handling the mechanics of trust, we can turn our attention to a more interesting question: how do cells find the right trading partners in the first place?


The Resource Intelligence Nexus


In my novel We Can Be Perfect: The Paradox of Progress, I wanted to explore an idea that I've mulled over for years: could AI be used to distribute resources to everyone as they needed them, when they needed them? How would such a technology coordinate on such a massive scale? Through this exploration, a concept emerged which I referred to as the Resource Intelligence Nexus—or RIN.


Imagine a distributed intelligence system that perceives the needs of an entire network of cells in real time. It has the ability to track all the resources available across all cells in the network. It knows that Cell A has surplus energy production while Cell B faces shortage. It knows that Cell C produces agricultural goods that Cell D requires. It knows the transport capacity between them, the current demand, the projected needs.


RIN does not command. It coordinates. It presents options to cells, suggests efficient distributions, identifies mutual benefit. Each cell retains full autonomy over its internal affairs—its democratic governance, its strategic direction. But when cells need to cooperate, RIN provides the informational substrate that makes cooperation possible without hierarchy.
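
What might such coordination look like? As a rough sketch, under strong simplifying assumptions (no transport costs, no forecasting, no trust weighting), consider a greedy matcher that pairs reported surpluses with reported shortages and merely proposes the result; every function and cell name here is invented for illustration.

```python
# Hypothetical sketch of RIN's matching step: given reported surpluses
# and shortages, propose (never impose) pairings. A real system would
# also weigh transport capacity, projected demand, and trade history.

def propose_trades(surpluses: dict[str, dict[str, float]],
                   shortages: dict[str, dict[str, float]]):
    proposals = []
    for needy, needs in shortages.items():
        for resource, amount in needs.items():
            for giver, stock in surpluses.items():
                available = stock.get(resource, 0.0)
                if giver == needy or available <= 0:
                    continue
                qty = min(amount, available)
                proposals.append((giver, needy, resource, qty))
                stock[resource] = available - qty
                amount -= qty
                if amount <= 0:
                    break
    return proposals  # each cell remains free to accept or decline

surpluses = {"cell_a": {"energy": 50.0}, "cell_c": {"grain": 30.0}}
shortages = {"cell_b": {"energy": 20.0}, "cell_d": {"grain": 40.0}}
for giver, taker, resource, qty in propose_trades(surpluses, shortages):
    print(f"propose: {giver} -> {taker}: {qty} {resource}")
# propose: cell_a -> cell_b: 20.0 energy
# propose: cell_c -> cell_d: 30.0 grain
```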


Think of it as an artificial nervous system for the social body. Just as your biological nervous system coordinates organs without conscious intervention, RIN coordinates cells without executive authority. It is the chemical signal made digital, scaled to the scope of civilization.


Critically, in the architecture explored in the novel, there is no single RIN—no central server that might be captured or corrupted, bringing down the whole network with it. Each cell maintains its own RIN system, and these systems communicate with one another in a decentralized mesh. The knowledge is distributed. The coordination is emergent. No single point of failure exists, and no single point of control.
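
A toy gossip round illustrates the idea (the structure is invented for illustration, not drawn from the novel's architecture): each cell keeps its own copy of the network's resource picture and periodically merges views with a few random peers, so no node is indispensable.

```python
import random

# Toy gossip protocol: every cell holds its own view of the network
# and merges state with a few random peers each round. There is no
# coordinator, and removing any single node leaves the rest intact.

def gossip_round(views: dict[str, dict[str, float]], fanout: int = 2) -> None:
    cells = list(views)
    for cell in cells:
        peers = random.sample([c for c in cells if c != cell],
                              k=min(fanout, len(cells) - 1))
        for peer in peers:
            merged = {**views[cell], **views[peer]}
            views[cell] = merged
            views[peer] = dict(merged)

views = {
    "cell_a": {"cell_a.energy": 50.0},
    "cell_b": {"cell_b.grain": 30.0},
    "cell_c": {"cell_c.water": 80.0},
}
for _ in range(3):
    gossip_round(views)
print(all(len(view) == 3 for view in views.values()))  # True
```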


Diagram showing five interconnected "Automationist Cells," each with "Orchestrators", "Actuators", and a resource intelligence nexus. The cells are interlinked in a decentralized cell network.

With AI managing communication between cells, could such a system eliminate hierarchy entirely? Could it enable human civilization to function as our bodies do—a vast coordination of specialized parts, each serving the whole, none commanding it?



Questions Without Answers


I will not pretend to have answered these questions. The purpose of these articles is not to present automationism as a perfected blueprint, but to open a space for imagination. For too long, we have assumed that the economic systems we inherited are the only possible systems. We have mistaken historical contingency for natural law.


Artificial intelligence is about to transform the fundamental relationship between human labor and economic value. Whether we use this transformation to reinforce existing hierarchies or to transcend them is a choice—a choice we are making right now, in the policies we enact, the technologies we develop, the conversations we have.


If we had the capacity to coordinate without hierarchy, would we choose it? If actuators required no incentive, how would we restructure our social architecture? If trust could scale through technology rather than money, what new forms of human organization would become possible?


These are not rhetorical questions. They are design challenges.


In Part 3, we will explore how automationist cells might actually govern themselves—the mechanics of collective decision-making, the role of AI and advanced civic media in facilitating democratic discourse, and the safeguards against new forms of power concentration. We will examine whether direct democracy can truly function at scale, or whether some hierarchy is inevitable even in an automated world.


Until then, I leave you with this: every social system was once an invention. Monarchy, democracy, capitalism, socialism—all were imagined before they were implemented. They emerged from conversations like this one, from people willing to question what seemed inevitable.


The future is not predetermined. It is built.


What will we choose to build?


---


This post is part of a series exploring automationism, an economic framework from the novel We Can Be Perfect: The Paradox of Progress. Join the conversation and share your thoughts on how artificial intelligence might reshape human economic organization at r/automationism.



E Pluribus Unum — Out of many, one







 
 