Automationism: Collective Ownership of AI Production
- landonrshumway

- Feb 10
Part 3: Civitas

---
What if scrolling through your feed made society better?
Unfortunately, we have grown accustomed to the opposite. Social media, as it exists today, is an extraction engine. It mines the ore of human attention and refines it into advertising revenue. Whatever connection or community emerges is incidental, a byproduct of the true purpose: engagement metrics that can be sold.
But the underlying technology is not inherently extractive. The feedback loops, the network effects, the ability to coordinate millions of people in real time—these are tools. They could be wielded differently. What if we designed civic systems with the same sophistication we currently reserve for selling products? What if the algorithms that now predict what you'll click were instead trained to predict what you'd support, what you'd contribute to, what governance decisions actually need your input?
This is the vision of civic media: technology that channels the connective power of digital networks toward coordination rather than extraction, toward collective decision-making rather than collective manipulation.
In Part 2, we explored how automationist cells might dissolve traditional hierarchies, with AI serving as impartial advisor to democratic collectives. But we left a crucial question unanswered: what does this democracy actually look like? How do everyday citizens—busy with lives, families, interests beyond governance—participate meaningfully in directing their communities?
This concept of civic media is explored in We Can Be Perfect: The Paradox of Progress, a novel co-authored with AI, where automationist societies develop a platform called Civitas — a system that allows members of automationist cells to shape the course of their communities through digital collaboration.
But before we imagine what's possible, we must first understand what exists... and why it's failing.
The Exhaustion of Analog Democracy
Consider the machinery of contemporary democracy. It is, at its core, an 18th-century technology struggling to govern a 21st-century world.
Citizens vote for representatives. Those representatives gather in chambers designed for hundreds of people to speak sequentially. They draft legislation through labyrinthine committee processes. Bills wind through readings, amendments, procedural votes. Months pass. Years. The gap between a problem emerging and a democratic response landing can span decades—if a response comes at all.
This sluggishness was not a bug; it was a feature. The architects of modern democracies deliberately built friction into the system, fearing the "mob rule" of direct popular governance. In an era of slow communication, limited literacy, and genuine logistical barriers to gathering public input, representative democracy was a reasonable compromise between autocracy and chaos.
But the constraints that justified this compromise have evaporated. We carry supercomputers in our pockets. Information traverses the globe in milliseconds. More people have access to education and information now than ever before. Every citizen could, in principle, participate in every decision. The infrastructure for direct democracy exists.
So why don't we pivot? Because our current systems suffer from structural pathologies that no amount of good-faith participation can overcome:
Attention scarcity. Citizens have finite time and cognitive resources. They cannot research every issue, evaluate every candidate, track every bill. So they rely on shortcuts—party affiliation, media narratives, tribal loyalty—that substitute for genuine deliberation. Representatives know this, and craft messages for the shortcuts rather than for understanding.
Information asymmetry. Those seeking power control the framing of choices. They decide which options appear on ballots, which issues dominate campaigns, which facts reach voters. The electorate chooses, but from a menu designed by those who benefit from limited options.
Temporal disconnection. Elections happen on fixed schedules, regardless of when decisions are actually needed. A crisis emerges in March; the next election is in November. By then, the moment for input has passed. The people's will arrives too late to matter.
Binary compression. Complex, multidimensional issues get compressed into yes/no votes, for/against candidates. The rich texture of public opinion—the conditional support, the partial agreements, the creative alternatives—gets flattened into crude binaries that satisfy no one fully.
These pathologies are not failures of democracy's ideals. They are failures of democracy's implementation—artifacts of analog constraints that no longer apply. Civic media offers a different implementation.
The Recursive Democracy
Imagine a different relationship between citizen and governance.
You wake up. Your civic interface—perhaps an app, perhaps a conversation with your personal AI assistant—presents you with a gentle notification. A proposal has emerged within your cell that aligns with topics you've indicated interest in: local food systems, sustainable agriculture, community resilience. The proposal suggests partnering with a neighboring cell to establish a shared vertical farming facility.
You have options. You can engage deeply—reading the full proposal, examining the AI-generated impact analysis, exploring the projected costs and benefits, reviewing arguments from community members on various sides. Or you can engage lightly—indicating general support or opposition, trusting that others with more time and expertise will refine the details. Or you can delegate—assigning your voice on agricultural matters to a community member whose judgment you trust, while retaining full participation rights on issues you care more about.
The proposal doesn't go to a vote immediately. First, it enters a period of recursive refinement. Community members suggest modifications. The AI advisor models how each modification would alter outcomes. Alternative proposals emerge, perhaps from those who share the goal but prefer different means. These alternatives don't compete in winner-take-all fashion; the system looks for synthesis, for hybrid approaches that incorporate the best elements of multiple visions.
Throughout this process, you receive updates calibrated to your indicated level of engagement. The algorithm doesn't optimize for your attention—it optimizes for your informed participation. It learns that you check in briefly in the mornings, prefer concise summaries, want detailed analysis only when your values seem directly at stake. It respects your time because its purpose is governance, not engagement.
When the proposal reaches sufficient refinement—when the recursive process has incorporated objections, addressed concerns, and achieved something like community resonance—then it moves to ratification. Not a bare majority, but a threshold that indicates genuine collective will. The specifics of that threshold can themselves be democratically determined, refined over time as the community learns what works.
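As a toy sketch of what a democratically tunable ratification rule might look like, consider the following Python function. The quorum and supermajority values are illustrative placeholders, not prescriptions; a real community would set and revise them through the same recursive process the proposal itself went through.

```python
def ratify(support: int, oppose: int, eligible: int,
           quorum: float = 0.5, supermajority: float = 0.66) -> bool:
    """A proposal passes only if enough of the community weighed in
    (turnout meets the quorum) AND a supermajority of those who took
    a side support it. Both thresholds are community-set parameters."""
    participants = support + oppose
    if eligible == 0 or participants == 0:
        return False
    turnout = participants / eligible
    approval = support / participants
    return turnout >= quorum and approval >= supermajority
```

A bare majority with low turnout fails both in spirit and in this rule: `ratify(70, 20, 200)` is rejected because only 45% of eligible members participated, even though approval among participants is high.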
This is recursive democracy: a continuous, living process of proposal, refinement, and ratification that treats citizens not as occasional voters but as ongoing participants in an evolving collective intelligence.
The Architecture of Civic Media
How does this actually work? The mechanics matter. Let's dive into some of the core features and processes of such a system:
Proposal Emergence. Any citizen can propose anything—a new initiative, a modification to existing arrangements, a question that needs collective resolution. But raw proposals don't immediately flood everyone's attention. AI systems perform initial analysis: Is this proposal coherent? Does it overlap with existing initiatives? What domain does it touch? Who in the community has relevant expertise or stated interest? The proposal is then routed to those most likely to engage meaningfully, expanding outward only as it gains traction.
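The routing step described above could start as something very simple. Here is a minimal sketch, assuming each proposal carries topic tags and each member has declared interests (the tag vocabulary and overlap threshold are hypothetical; a production system would use richer matching than set intersection):

```python
def route_proposal(proposal_tags, member_interests, min_overlap=1):
    """Return members whose declared interests overlap the proposal's
    topic tags, strongest match first. Members with no overlap are not
    notified; the proposal expands outward only as it gains traction."""
    tags = set(proposal_tags)
    scored = []
    for member, interests in member_interests.items():
        overlap = len(tags & set(interests))
        if overlap >= min_overlap:
            scored.append((member, overlap))
    # Sort by overlap descending, then name for deterministic ordering
    scored.sort(key=lambda pair: (-pair[1], pair[0]))
    return [member for member, _ in scored]
```

The design choice worth noting: routing narrows *initial* attention, not eligibility. Anyone can still find and engage with any proposal; the filter only decides who is proactively notified first.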
Impact Modeling. For each proposal, AI assistants generate transparent impact analyses. Not predictions—the future is not predictable—but models. "If we assume X, then Y becomes more likely. If we assume Z, then W." The assumptions are explicit, challengeable, themselves subject to community input. Citizens can ask: what if we modified this parameter? What if we combined this with that other proposal? The AI responds in real time, making policy experimentation cognitive rather than costly. AI serves as an advisor, not an oracle. Its recommendations are shaped by the assumptions built into it, which are themselves subject to democratic scrutiny.
Argument Mapping. As community members weigh in, the system doesn't just collect opinions—it maps the structure of disagreement. What are the core values in tension? What factual claims underlie each position? Where do people agree on ends but disagree on means? This mapping makes productive discourse possible. Instead of repetitive arguments that talk past each other, the system identifies cruxes—the specific points where resolution would cascade into broader agreement.
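A crude first approximation of crux-finding, assuming positions have been annotated with the factual claims they rely on (the annotation step, done by humans or AI, is the hard part and is elided here): a claim that both sides lean on is a candidate crux, because resolving it would move the most people at once.

```python
from collections import defaultdict

def find_cruxes(positions):
    """positions: iterable of (stance, claims) pairs, where stance is
    'for' or 'against' and claims is a set of underlying factual claims.
    Returns claims relied on by more than one stance -- the points where
    resolution would cascade into broader agreement."""
    stances_by_claim = defaultdict(set)
    for stance, claims in positions:
        for claim in claims:
            stances_by_claim[claim].add(stance)
    return sorted(c for c, s in stances_by_claim.items() if len(s) > 1)
```

This is a heuristic, not a theory of disagreement; its value is that it surfaces shared load-bearing claims instead of letting the loudest repeated arguments dominate.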
Delegation Networks. No citizen can engage deeply with every issue. Civic media accommodates this through fluid delegation. You might delegate agricultural decisions to a neighbor who farms, technical infrastructure decisions to an engineer you trust, cultural programming to an artist whose taste you share. These delegations are transparent, revocable at any moment, and themselves subject to community norms. They are not representatives in the traditional sense—they cannot bind you without your consent, and you can override them whenever you choose to engage directly.
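This is essentially liquid democracy, and its core mechanic fits in a few lines. A sketch, assuming per-topic delegation: a voter's voice follows the delegation chain until someone has voted directly, with a guard against cycles (two people delegating to each other produces no vote rather than an infinite loop):

```python
def resolve_vote(voter, topic, direct_votes, delegations):
    """Follow a voter's delegation chain for a given topic until a
    direct vote is found. direct_votes maps (voter, topic) -> vote;
    delegations maps (voter, topic) -> delegate. A direct vote always
    takes precedence over delegation, so engaging personally overrides
    the delegate. Returns the resolved vote, or None on a dead end or
    a delegation cycle."""
    seen = set()
    current = voter
    while current not in seen:
        seen.add(current)
        if (current, topic) in direct_votes:
            return direct_votes[(current, topic)]
        current = delegations.get((current, topic))
        if current is None:
            return None
    return None  # cycle with no direct vote anywhere in it
```

Because resolution happens at read time, revocation is trivial: deleting the delegation entry, or casting a direct vote, immediately changes how the voter's voice resolves, which matches the "revocable at any moment" requirement above.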
Synthesis Engines. Perhaps most critically, the AI doesn't just analyze proposals—it helps generate alternatives. When the community seems stuck, when two factions appear irreconcilable, the synthesis engine asks: is there a third option that honors the core concerns of both? It draws on the full corpus of human governance experiments, the wisdom embedded in countless historical attempts to solve similar problems. It offers these not as commands but as possibilities—seeds for the community's own creativity.
Transparency Layers. Every element of this system is auditable. The algorithms that route proposals, the models that project impacts, the mappings that structure arguments—all are open to inspection. Any citizen can ask: why did I see this proposal? Why was this impact weighted this way? The system's workings are not occult mysteries guarded by a technical priesthood. They are public infrastructure, maintained by the community, subject to the same democratic refinement as any other policy.
The Safeguards
Power wants to concentrate. This is something like a law of social physics. Any system of governance, however egalitarian its intentions, faces constant pressure from those who would capture it for private benefit. Civic media is not immune.
What safeguards could prevent recursive democracy from collapsing into new forms of domination?
Distributed Infrastructure. The civic media platform cannot be owned or controlled by any single entity. Like the internet's original design, it must be a protocol rather than a product—a shared grammar that any community can implement, no single party can monopolize. The moment civic media becomes proprietary, it becomes capturable.
Algorithmic Transparency. The AI systems that facilitate democracy must themselves be democratically governed. Their training objectives, their optimization targets, their operational parameters—all must be visible, challengeable, modifiable by the community they serve. An opaque algorithm is an unelected ruler. Civic media demands algorithmic accountability.
Attention Protection. Unlike commercial platforms that profit from maximizing engagement, civic media must minimize unnecessary cognitive burden. The system should actively protect citizens from decision fatigue, information overload, and manufactured urgency. If an issue doesn't require your input, you shouldn't see it. If a controversy is artificial — generated to capture attention rather than resolve genuine disagreement — the system should dampen rather than amplify it.
Exit Rights. No cell can compel membership. The right to leave—to join another cell, to form a new one, to exist outside the network entirely—must remain inviolable. This exit right provides the ultimate check on cellular governance. If a cell's democracy becomes dysfunctional, members can vote with their feet. Competition between cells for members creates evolutionary pressure toward good governance.
Federated Authority. Some decisions affect only a single cell; others ripple across the network. Civic media must maintain clear boundaries between local and network-wide governance, ensuring that higher-level coordination cannot override cellular autonomy except where genuinely necessary for collective action. The principle of subsidiarity—deciding issues at the lowest level capable of addressing them—must be structurally encoded.
Human Override. Finally, and perhaps most importantly: humans remain sovereign over AI. The advisory systems can model, project, synthesize, and suggest. They cannot decide. The locus of authority remains irreducibly human. This is not merely a philosophical commitment but a practical safeguard. AI systems, however sophisticated, can err in ways that other AI systems might not detect but human judgment can. The human veto provides a circuit breaker against cascading machine failure.
A Vision Forward
We stand at a peculiar moment in history. The technologies that could enable unprecedented human coordination are instead being used to fragment us. This is not inevitable. It is a choice—or rather, a series of choices made by those who currently control these technologies, choices shaped by an economic system that values extraction over coordination.
Automationism offers an alternative. Civitas—the Latin root of "civilization," meaning simply the body of citizens—becomes something we actively construct together, moment by moment, choice by choice.
This vision raises as many questions as it answers. How do we transition from current systems to this architecture? What happens to existing power structures as this transition unfolds? How do cells interact with legacy institutions—nations, corporations, international bodies—that will not simply dissolve because we have imagined something better?
These are questions for Part 4, where we will explore the roadmap from here to there: the practical steps, the pilot experiments, the coalitions that might carry automationism from theoretical framework to lived reality.
For now, let's sit with this possibility: a world where you speak, and your voice brings change, where technology serves deliberation rather than distraction, where the collective intelligence of humanity is finally given an architecture worthy of its potential.
We have built systems that bring out our worst: our tribalism, our short-sightedness, our susceptibility to manipulation. There is no law of nature that says we must continue. We can just as deliberately build systems that bring out our best.
The choice is ours. The tools are ready.
What are we waiting for?
---
This post is part of a series exploring automationism, an economic framework first explored in the novel 'We Can Be Perfect: The Paradox of Progress.' Join the conversation and share your thoughts on how civic technology might reshape democratic participation.
E Pluribus Unum — Out of many, one

