Anamnesis

The Hacker's Ethic

Anamnesis (ἀνάμνησις): the recovery of knowledge already possessed but forgotten. Not acquisition but recollection — the act of remembering what was known before it was set aside.

The Linux Foundation pays its executive team nearly as much as it allocates to support the Linux kernel. The Foundation's CAMARA Project is building age-verification APIs that integrate the software stack with state-verified identity records: the very surveillance infrastructure the cypherpunks of the 1990s risked prosecution to prevent. The ecosystem that once defeated the NSA's Clipper Chip can't even muster a public statement against legislation that would mandate identity checks for every user of its software. This piece traces how each of those failures was produced, and what the movement forgot that made them inevitable.

I. The Bill Comes Due

The previous piece in this series gave you the freedoms. Nine structural conditions: the original four protecting the artifact, plus five protecting the commons (sustenance, immunity from capture, dissent, substrate independence, transparency).

Freedoms are necessary conditions. They are not sufficient ones. A freedom names what must be structurally possible without specifying who bears the duty of protection, or what that duty demands.

This piece supplies the obligations. Not as aspiration. As diagnosis.

The preceding series built and stress-tested the diagnostic instruments for this analysis: grounding ethics in thermodynamic constraint, measuring the gap between institutional rhetoric and structural output. The first part of this series applied them to the open-source commons.

What follows is specific, documented, and damning. The open-source commons has been captured from two directions simultaneously, and neither capture was accidental, even if neither was conspired. The first vector is corporate capital: cloud providers, AI companies, VC extraction cycles that treat the commons as a free input to proprietary value chains. The second vector is behavioral governance: codes of conduct, contributor covenants, enforcement structures that displace technical judgment with compliance hierarchies. These two forces appear independent. They are structurally symbiotic, and the evidence will show you exactly how.

I am not interested in conspiracy theories. I don't need them. A river doesn't conspire to erode its banks. It flows downhill. The structural conditions of the open-source commons, as currently constituted, flow toward capture with the same inevitability.

The Pettit Test

The instrument that governs this entire analysis is a single test, borrowed from Philip Pettit's republican theory of freedom. Pettit distinguishes between two conceptions of liberty: freedom from interference and freedom from domination. The distinction is the most underappreciated insight in contemporary political philosophy, and it cuts through every governance dispute in open source with surgical precision.

Freedom from interference is the liberal standard: nobody is actively stopping you. You can fork the code. You can submit a patch. Nobody has physically prevented you from participating. By this standard, open source is the freest development model in human history.

Freedom from domination is the republican standard, and it asks a harder question: does any entity possess the structural capacity to interfere with your choices arbitrarily, on their own terms, without accountability to your interests? If yes, you are dominated regardless of whether that capacity is currently exercised. A slave with a kind master experiences no interference; the master could interfere at any time, for any reason, and the slave's latitude of action depends entirely on the master's continuing goodwill. The slave is unfree not because of what the master does but because of what the master could do.

Pettit operationalizes this as the eyeball test: a person enjoys genuine freedom only when they can look any other person in the eye without reason for fear or deference. Not deference freely chosen; not respect earned. Structural fear. The kind that shapes your behavior before you consciously decide to comply.

Three Tests, Three Failures

Apply the eyeball test to open source and the results are devastating.

Can a solo maintainer of critical infrastructure look a Fortune 500 company that depends on their work in the eye and demand proportional support? They cannot. The maintainer is structurally dominated, dependent on goodwill, bereft of institutional protection, and one burnout episode from becoming the next xz-utils.

Can an independent contributor contest a governance decision in a project where a single corporation employs 80% of the active developers? They cannot. The contributor's continued participation depends on not antagonizing the dominant firm. They pass the interference test (nobody has banned them) and fail the domination test (they self-censor because the structural capacity to exclude them rests with an entity that owes them no accountability).

Can a technically brilliant, disagreeable engineer look a code-of-conduct enforcement committee in the eye and say "your interpretation of 'unwelcoming behavior' is being used to silence legitimate technical dissent" without reason for fear? They cannot. The enforcement body possesses discretionary authority to determine what constitutes a violation, to adjudicate its own complaints, and to impose sanctions up to permanent exclusion, with no adversarial process, no independent appeal, and no structural accountability to the accused.

Every one of these failures is a failure of non-domination, or in the operational vocabulary of this series, capture. Every one of them is documented. And every one of them points to a specific structural obligation that, had it existed, would have made the failure impossible.

That is the architecture of this piece. The evidence isn't catalogued for its own sake. Each case is a wound. The corresponding obligation is the structural remedy. By the end, you will be able to draw a direct line from "this happened" to "this tenet would have made it structurally impossible."

The bill has come due. The commons is paying it in burned-out maintainers, captured governance, and institutional silence. A commons without an ethic is a commons without an immune system, and a system without an immune system doesn't die all at once. It dies one compromised node at a time, each failure appearing isolated, each post-mortem discovering the same root cause, until the rot is irreversible.

The hacker's ethic is not a new idea. It is an old one that the movement chose to set aside. The preceding piece treated the 1998 decision to strip the ethical dimension with restraint, and rightly so. This piece advances from diagnosis to prescription, and the prescription requires naming what was lost. The title is the method. Anamnesis: not learning something new, but recovering what was already known and deliberately forgotten.

II. The First Inversion: How Open Source Lost Its Ethics

The Amputation

The preceding piece documented the structural inversion at the movement's origin: the 1998 decision to replace moral obligation with pragmatic methodology. The inversion follows the pattern this series has mapped across domains. Hollow the ethical substance — the claim that software freedom is an obligation, not a feature. Substitute the Open Source Definition, which performs the function of principled governance while carrying no moral weight. Preserve the rhetorical surface ("freedom," "community," "sharing") so that everyone can still point to the words and feel virtuous. That was the first structural inversion the open-source movement performed on itself, and this piece will document three more. The pattern repeats because the conditions that produced it were never addressed; they were institutionalized.

The word for what happened is precise: de-moralized. Not demoralized (the movement has never lacked enthusiasm, funding, or participation). De-moralized: the moral dimension was excised as a deliberate strategic operation. Eric Raymond was explicit about this. His project was to reframe the movement as a "programmer-centric software development model" rather than a social cause, deploying what he called "rational, technical, utility-maximization arguments." He disclaimed any "normative or moralizing agenda." The word "disclaimed" does the structural work here. A disclaimer is a legal instrument: it severs obligation. Raymond wasn't describing a preference. He was performing an amputation. The body of shared practice would survive. The nervous system that detected threats to its integrity, the ethical claim that this practice serves a moral purpose and imposes moral duties, wouldn't.

The calculation was neither stupid nor dishonest. What is in dispute isn't whether it worked (it did, spectacularly) but what the amputation cost, and whether the cost was visible at the time.

It wasn't. What was visible was adoption. What was invisible was the structural consequence: once the ethical obligation was removed, the only remaining defense was the license. And the license had a gap that was invisible in 1986 and catastrophic by 2006: the SaaS loophole, through which every cloud provider, every AI company, and every extraction cycle drove at speed. The license was a wall with a door in it, and the ethical obligation was the lock. Raymond threw away the lock and celebrated the wall.

But the deeper consequence wasn't the SaaS loophole alone. It was what happened to the movement's institutional capacity to respond to the loophole once it became visible. The de-moralization didn't merely remove a defense. It removed the standing to rebuild one.

The SSPL and the Neutrality Trap

Consider what happened when MongoDB tried.

In 2018, MongoDB introduced the Server Side Public License, designed to close the SaaS loophole by requiring that anyone offering the software as a service must also release the source code of the entire service stack. The SSPL was a direct attempt to prevent the extraction pattern: AWS takes MongoDB, wraps it in a managed service, captures the customers, and contributes nothing back to the project that produced the software they resell.

The Open Source Initiative rejected the SSPL. The stated reason was that it violated Clause 6 of the Open Source Definition: "No Discrimination Against Fields of Endeavor." The SSPL discriminated against a field of endeavor, specifically, the field of running someone else's software as a service without reciprocating. The OSD's neutrality clause mandated that the community remain structurally vulnerable to anyone who wished to use the code, including entities whose business models consisted entirely of extracting value from the community's labor.

Read that again. The definition that governed the movement, the instrument Raymond and Perens created to replace Stallman's ethical framework, actively prevented the community from defending itself against extraction. The OSD didn't merely fail to prevent corporate capture. It prohibited the tools that would have prevented it. A methodology has no interests. It can't distinguish between a contributor and a parasite. An ethic can. That is what ethics are for. Strip the morality, and the procedure will be captured by whatever entity best understands how to game procedure. That entity, reliably, is the one with lawyers and lobbyists.

Elastic followed the same path. Then Redis. Then HashiCorp. Each case followed the same structural sequence: build adoption on open-source community contributions, capture market position through the network effects that adoption creates, then relicense to exclude the cloud providers who were commoditizing the product. The community's labor was instrumental to the adoption that made the company valuable. The community had no structural claim on the outcome. In every case, the defenders of the OSD were technically correct: the relicensing violated the Open Source Definition. And in every case, the structural observation was the same: the OSD's definition of "open source" had become a weapon wielded against the people who wrote the code, in defense of the corporations that sold it.

The Architects Expelled

Perens saw this. In early 2020, the co-founder of the Open Source Initiative, the man who had drafted the original Open Source Definition by adapting the Debian Free Software Guidelines, resigned from the organization he had built. His stated reason: the OSI was "enthusiastically headed toward accepting a license that isn't freedom respecting." He proposed an alternative he called "Coherent Open Source," built on the AGPLv3, which explicitly addressed the SaaS loophole. The proposal went nowhere. Perens had designed the lock, helped Raymond throw it away, and spent two decades watching what walked through the open door. By the time he tried to reinstall it, the door had been bricked open from the other side.

Raymond's trajectory was darker. In March 2020, the Open Source Initiative removed its co-founder from its mailing lists. The stated reason was that his posts had violated the OSI's Code of Conduct; they were characterized as "deliberately divisive." Raymond had returned to the lists after a twenty-year absence to oppose what he perceived as the subversion of OSD Clauses 5 and 6 by proponents of "ethical" licensing. The institutional response was to deploy the behavioral governance machinery (the very apparatus this piece examines in §§IV–V) against the man who had built the institution it governed.

The irony is structurally perfect and requires no editorial emphasis. Raymond's 1998 project was to strip the movement of its moral dimension so that it could achieve mainstream adoption. Twenty-two years later, the institution he created to steward that de-moralized movement used a moral governance instrument, a Code of Conduct, to expel him. Raymond had replaced ethics with procedure. Procedure consumed him.

His blog post in response, "The Right to be Rude," argued that the community was being "social-hacked" from a culture of meritocracy to one governed by tone-policing. He identified a shift in which the "show me the code" ethos was being replaced by a model where the manner of an argument was prioritized over its substance. Whether you agree with Raymond's politics is irrelevant to the structural observation: the co-founder of the OSI was expelled from the OSI using governance instruments that didn't exist when he founded it, for opposing changes to the definitional framework he created. The institution had structurally outgrown its creator. The tools designed to make the movement palatable to corporations had made it governable by corporations.

The FSF Mirror

Meanwhile, the Free Software Foundation was undergoing its own governance crisis that illuminated the same structural problem from the opposite direction. In 2021, the FSF board reinstated Richard Stallman (who had resigned in 2019 over characteristically tone-deaf public statements), triggering a rebellion from corporate sponsors. Red Hat suspended funding. An open letter demanding Stallman's removal gathered thousands of signatures. A counter-letter defending his reinstatement gathered thousands more.

The FSF crisis reveals the structural complement to the OSI's: the OSI expelled its founder for being insufficiently compliant with behavioral governance norms; the FSF nearly destroyed itself for reinstating its founder despite his violation of those same norms. In both cases, the governance instruments were behavioral, not technical. In both cases, the structural question (does this person's technical contribution and institutional knowledge outweigh the social cost of their communication style?) wasn't asked, because the governance framework had no mechanism for asking it. Behavioral governance produces behavioral outcomes. It can't produce structural ones.

Stallman's substantive warnings, as distinct from his personal conduct, have aged with the precision of a structural engineer's load calculation. He predicted that "open source" would hide the issue of freedom. It did. He warned that Service as a Software Substitute would render the GPL irrelevant by ensuring users never possessed the code running their computations. It has. He argued that the movement's "neutrality" created a vulnerability that would be exploited. It was. Stallman was wrong about many things, including the sufficiency of his own ethical framework. But on the structural question, that de-moralizing the movement would leave it defenseless, he was precisely, demonstrably, infuriatingly correct.

The cost of the 1998 de-moralization is now fully legible. The movement gained universal adoption and lost the capacity to defend what it built. The OSD became a cage whose bars were labeled "neutrality." The institutions created to steward the commons became governance structures that expelled their own architects while the corporations they were supposed to constrain captured the outputs of the commons at industrial scale. And the ethical framework that might have prevented all of it, the standing to say "this use of our work violates our obligations to each other," had been surgically removed a quarter century earlier, by people who believed they were helping.

The structural lesson: a commons without an ethic will be captured by whatever force best understands how to exploit procedure. The open-source movement proved this at civilizational scale.

III. Vector 1: Corporate Capital as Governance

The preceding piece documented the economic extraction cycle: the broken feedback loop, the SaaS loophole, the Cantillon Effect applied to open-source labor. That analysis traced the flow of value, how the commons produces software that corporations monetize without proportional reciprocation. This section traces the flow of power. The question is no longer who profits from the commons. The question is who governs it, and by what structural right.

The answer, documented in foundation bylaws, steering committee rosters, and board composition data, is unambiguous: the entities that govern the open-source commons are, overwhelmingly, the same entities that extract value from it. The fox doesn't merely raid the henhouse; it sits on the henhouse board of directors, votes on henhouse policy, and files the henhouse's tax returns.

The Linux Foundation: Anatomy of an Open-Source Corporation

The Linux Foundation is the most consequential institutional actor in open source. It hosts over one thousand active projects, including the Linux kernel, Kubernetes, PyTorch, and the Cloud Native Computing Foundation. In 2024, it reported total revenue exceeding $220 million. Its 2025 forecast projects $311 million. The Apache Software Foundation, which hosts projects including Apache HTTP Server, Kafka, and Spark, operates on a budget roughly two orders of magnitude smaller.

The Foundation's revenue derives from three streams: membership dues and donations (roughly 43%), project services (27%), and training and events (28%). The membership model is tiered. Platinum members pay $500,000 annually and receive a permanent, non-elected seat on the Board of Directors. Gold members pay $100,000 and share three elected seats among themselves. Silver members pay $5,000–$20,000 and share one seat. Associate members (non-profits, governments, individuals) pay nothing and receive no vote.
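
The dues-to-governance mapping just described can be stated as a simple lookup table. A minimal sketch, using only the tier figures reported in this piece (the structure and names are illustrative, not drawn from the Foundation's bylaws):

```python
# Linux Foundation membership tiers as described above: annual dues
# and the board representation each tier purchases. Associate members
# pay nothing and receive no vote.
TIERS = {
    "Platinum":  {"dues": 500_000, "board_seats": "one permanent seat per member"},
    "Gold":      {"dues": 100_000, "board_seats": "three elected seats shared by tier"},
    "Silver":    {"dues": (5_000, 20_000), "board_seats": "one elected seat shared by tier"},
    "Associate": {"dues": 0, "board_seats": "none (no vote)"},
}

def representation(tier):
    """Look up what board representation a membership tier buys."""
    return TIERS[tier]["board_seats"]

print(representation("Associate"))  # → none (no vote)
```

The table makes the structural point legible at a glance: representation is a function of dues, and the zero-dues tier maps to zero representation.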

As of late 2025, the Platinum tier includes Microsoft, Google, Intel, Meta, Huawei, and Toyota. Each holds a permanent board seat. The Board Chair is Nithya Ruff of Amazon. There are no community-elected seats on the Linux Foundation Board of Directors. There were, once. In January 2016, the Foundation amended its bylaws to eliminate the provision for "Community Directors," board members elected by individual affiliates who paid a $99 annual fee. The amendment was enacted quietly. Developer Matthew Garrett speculated publicly that the timing was designed to prevent the election of Karen Sandler, executive director of the Software Freedom Conservancy, who might have pushed the board toward more aggressive GPL enforcement against Foundation member companies.

Whether Garrett's speculation was correct is structurally irrelevant. What matters is the outcome: a board that was partially accountable to individual developers became entirely accountable to corporate payers. Apply the eyeball test. Can an independent contributor to a Linux Foundation project look the Foundation's board in the eye without reason for fear or deference? The board is composed exclusively of representatives of the corporations that employ most kernel developers, fund most Foundation initiatives, and control most of the infrastructure on which open-source software runs. The answer is no. The contributor is structurally dominated, not because the board is hostile, but because the board possesses the uncontrolled capacity to redirect resources, redefine priorities, and restructure governance at will, and the contributor has no institutional mechanism to contest those decisions.

The Foundation's spending patterns confirm the structural diagnosis. Of a $300 million-plus budget, approximately $6.7 million is earmarked for the Linux kernel itself. Conferences and meetings consume $27 million. Marketing and advertising: $5 million. Executive compensation reached $6 million in 2024, nearly matching the kernel allocation, with Executive Director James Zemlin earning $1.26 million in total compensation. The Foundation does not primarily produce software. It produces ecosystem: the conferences, the branding, the certification programs, the "neutral ground" on which competitors collaborate. That ecosystem serves the corporations who fund it. The code is written, overwhelmingly, by employees of those same corporations. The Foundation is the administrative layer between corporate capital and community labor.

None of this is hidden. The Foundation is a 501(c)(6) trade association, not a 501(c)(3) public charity. Its legal obligation is to advance the common business interests of its members, not the public good. The structural question isn't whether the Foundation is corrupt (it is not). The structural question is whether a trade association can serve as a commons steward. A steward's obligation runs to the commons. A trade association's obligation runs to its dues-paying members. When those interests diverge, and they do, on questions of GPL enforcement, maintainer compensation, and desktop Linux support, the trade association serves its members. That is what trade associations do.

Steering Committee Colonization

The Linux Foundation is the most visible case, but the pattern extends across the ecosystem's technical governance structures.

The Kubernetes Steering Committee, the highest governance body for the most widely deployed container orchestration system on Earth, is elected by community contributors. The process is democratic. The outcomes aren't. As of 2026, all seven seats are held by employees of technology corporations. Google holds two. Independent contributors hold zero. The committee was designed to be elected by the community. The community elected the corporations' employees. The result is structurally identical to appointment.

The Linux kernel's Technical Advisory Board tells the same story. The 2025 TAB includes Greg Kroah-Hartman (Linux Foundation Fellow), Ted Ts'o (Google), Steven Rostedt (Google), David Hildenbrand (Red Hat), and Kees Cook (Google). Roughly 80–90% of TAB members are employed by companies that are direct stakeholders in the Linux ecosystem. The TAB has no formal authority over the codebase (Linus Torvalds retains that), but its influence on community standards and technical direction is substantial, and that influence runs through corporate employment.

The Node.js Technical Steering Committee operates under a consensus-seeking model within the OpenJS Foundation. Its most active voting members are employees of a small group of enterprise firms. The 2026 agenda is dominated by corporate-led initiatives (the "Native-First" revolution, permission model hardening, NPM dependency reduction) that serve the interests of cloud providers optimizing for serverless environments.

PyTorch's Governing Board dispenses with the pretense of community election entirely. Its composition in 2026: Arm (board chair), NVIDIA, AWS, Meta, Google, Intel, AMD, IBM, Microsoft. One hundred percent corporate. The Technical Advisory Council includes representatives from the same firms. PyTorch is, structurally, a shared R&D arm for the semiconductor and cloud industries, operating under the Linux Foundation's 501(c)(6) umbrella with the rhetorical surface of an open-source community project.

The pattern is uniform. Technical governance bodies that are formally open to community participation are substantively controlled by corporate employees. The mechanism isn't bribery or coercion. It is structural selection: only people who are paid to contribute full-time can accumulate the sustained engagement required to win elections or achieve maintainer status. Independent contributors, who donate their labor, can't compete with developers whose employer pays them to participate. The democratic process is real. The structural outcome is oligarchic. This is Pettit's central insight applied to governance: the absence of interference does not imply the absence of domination.
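The selection dynamic described above can be made concrete with a toy model. This is an illustrative sketch, not drawn from any real election data: every number below is an assumption. Contributors accumulate standing (commit history, review presence, visibility) roughly in proportion to hours contributed, voters elect the most visible candidates, and paid contributors simply have more hours:

```python
import random

def simulate_election(n_paid=20, n_volunteer=200, seats=7, seed=0):
    """Toy model of structural selection in contributor elections.

    Assumptions (illustrative only): paid contributors put in ~40
    hrs/week on the project, volunteers ~4; standing accrues in
    proportion to hours; the `seats` most visible candidates win.
    """
    rng = random.Random(seed)
    candidates = (
        [("paid", rng.gauss(40, 5)) for _ in range(n_paid)]
        + [("volunteer", rng.gauss(4, 2)) for _ in range(n_volunteer)]
    )
    # The election is perfectly fair: the highest-standing candidates win.
    winners = sorted(candidates, key=lambda c: c[1], reverse=True)[:seats]
    return [kind for kind, _ in winners]

print(simulate_election())  # → ['paid', 'paid', 'paid', 'paid', 'paid', 'paid', 'paid']
```

Despite a ten-to-one volunteer majority in the electorate, every seat goes to a paid contributor, and no individual acted in bad faith anywhere in the model. That is the point: the oligarchic outcome is produced by the hours gradient, not by the voting rule.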

The Hire-and-Redirect Pattern

The subtlest mechanism of governance capture doesn't involve board seats or steering committees. It involves the employment contract.

When a corporation hires the lead maintainer of an independent open-source project, it acquires something that no foundation membership can provide: hierarchical authority over the individual who controls the codebase. The maintainer, now an employee, is bound by an employment contract that provides the firm with unilateral coordination over their labor. The open-source license grants the community rights to the code; the employment contract grants the employer rights to the person who writes the code.

A maintainer hired by a cloud provider shifts focus toward features that optimize the software for that provider's infrastructure: AWS Lambda, Google Cloud Functions, Azure serverless. Features that benefit the broader community but don't align with the employer's quarterly objectives are deprioritized. The "zombie component" problem identified in the 2026 OSSRA report (93% of audited codebases containing components with no development activity in the last two years) is partly a byproduct of this pattern. Maintainers, once hired, are reassigned to high-value new features. The maintenance of core legacy components falls to a diminishing pool of unpaid volunteers.

The structural consequence is a transformation of the project's social architecture. Research on the commercialization of open-source projects documents a consistent pattern: as maintainers are absorbed into corporate employment, projects lose their capacity for autonomous direction and become complements to existing corporate production models. The hub-and-spoke alliance network replaces the mesh. The corporate employer acts as the hub. The project's most critical developers become spokes, their labor directed by hierarchical authority rather than community consensus.

The Relicensing Rug-Pull

The ultimate expression of corporate governance capture is the relicensing event: the moment when the entity that controls a project's trademarks and copyrights unilaterally changes the license, converting community-built software into a proprietary asset.

The preceding piece documented the MongoDB/SSPL case. The pattern has since replicated with mechanical regularity. Elastic moved Elasticsearch from Apache 2.0 to the Elastic License and SSPL in January 2021, then to AGPL in August 2024. Redis moved from BSD to a dual-license model including the Redis Source Available License in March 2024, triggering the Valkey fork, funded by Amazon, Google, Oracle, and Ericsson, and hosted under the Linux Foundation. HashiCorp moved Terraform from MPL 2.0 to BSL 1.1 in August 2023, triggering the OpenTofu fork. Akka moved to BSL 1.1 in 2022, triggering Apache Pekko.

In every case, the decision was made by a corporate board without meaningful community input. In every case, the community's recourse was the fork, the nuclear option of open-source governance, which preserves the code but fragments the community, duplicates maintenance labor, and confuses the user base. And in every case, the counter-fork was funded not by independent developers but by a coalition of other corporations whose business models were threatened by the relicensing. Valkey is not a community victory. It is an inter-corporate territorial dispute conducted over the community's labor.

The 2026 State of Open Source Report quantifies the downstream effect: 55% of organizations now cite "avoiding vendor lock-in" as a primary reason for choosing open source, a 68% year-over-year increase. The users have noticed. Corporate-controlled open source is increasingly perceived as a tactical risk rather than a strategic asset. The commons isn't dying from disuse. It is dying from distrust.

Ostrom's Diagnostic: Which Principles Are Violated?

Elinor Ostrom's eight design principles for governing common-pool resources provide the most rigorous diagnostic framework available for evaluating whether a governance structure can sustain a shared resource. Applied to the documented cases of corporate governance capture, the diagnosis is systematic.

Principle 2 — Proportional Equivalence demands that those who benefit from the commons contribute proportionally to its maintenance. This principle is violated at civilizational scale. Over 75% of global codebases depend on open-source components. The maintenance labor for those components is provided by a handful of volunteers while trillion-dollar corporations extract the value. The OpenSSL vulnerability (Heartbleed) lay undetected for 27 months in a project that secured most of the world's web traffic, maintained by a few unpaid contributors. Log4Shell affected 75% of the world's code and was maintained by an overwhelmed community.

Principle 3 — Collective-Choice Arrangements requires that those affected by governance rules participate in modifying them. In every relicensing case (Redis, MongoDB, Terraform, Elastic) the decision was made by a corporate board and announced to the community after the fact. Contributors who had invested years of labor in the codebase had no vote, no veto, and no formal mechanism to contest the decision.

Principle 1 — Clearly Defined Boundaries requires that the community of benefit and the rules of access be clearly specified. In firm-controlled projects like Chromium and TensorFlow, the community of benefit is marketed as "global," but the rights to appropriate the resource, the roadmap, the architecture, the release schedule, are bounded to Google employees. The boundary is clear to insiders and invisible to the public.

Principle 4 — Monitoring requires monitors who are accountable to the resource users. In foundation-governed projects, the monitors (board members) are accountable to corporate members who pay their membership fees, not to the global community of users and contributors.

Projects That Resisted: Structural Features of Immunity

Not every project has been captured. The exceptions are instructive because they share specific structural features that the captured projects lack.

Debian operates under a formal constitution that grants voting rights to individual developers, not corporate entities. There are no tiered membership fees. The Debian Project Leader is elected annually and can be overridden by a General Resolution from the developer body using Condorcet voting. A project's identity (its name, its trademarks, its domain) can't be unilaterally seized, because those assets are held by a trusted organization (Software in the Public Interest) that is structurally accountable to the developer community, not to a corporate board.
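
Debian's actual resolution procedure (Appendix A of its constitution) is a Condorcet-family method with defeat dropping and supermajority requirements; the sketch below handles only the common case, finding a simple Condorcet winner, an option that beats every rival head-to-head. Ballot contents are hypothetical:

```python
def condorcet_winner(ballots):
    """Return the option that wins every pairwise contest, or None.

    Each ballot is a list ranking all options, most preferred first.
    (Assumes every ballot ranks every option; Debian's real procedure
    also handles partial rankings, defeat cycles, and supermajorities.)
    """
    options = set(ballots[0])

    def beats(a, b):
        # Count ballots preferring a over b; a beats b on a strict majority.
        wins = sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))
        return wins > len(ballots) - wins

    for cand in options:
        if all(beats(cand, rival) for rival in options - {cand}):
            return cand
    return None  # a defeat cycle or tie: no Condorcet winner exists

ballots = [
    ["keep", "amend", "further-discussion"],
    ["amend", "keep", "further-discussion"],
    ["keep", "further-discussion", "amend"],
]
print(condorcet_winner(ballots))  # → keep
```

The structural property worth noticing is that the winner must beat every alternative pairwise, which makes the outcome hard to manipulate by splitting the opposition, one reason constitution-governed projects favor it.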

The Apache Software Foundation is organized as a 501(c)(3) public charity, not a trade association, and membership is granted to individuals based on sustained contribution, not to corporations based on payment. No company can purchase a board seat. The ASF board is elected by individual members. The organizational philosophy is "Community over Code," which means that the health of the governance structure is weighted above the technical output it produces.

Python moved to a Steering Committee model after Guido van Rossum's resignation with a critical structural safeguard: a majority-limiting rule that prevents any single corporation from holding a majority of committee seats. Even if Google and Microsoft employ most of Python's core developers, they can't structurally control the language's evolution through employment alone.
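
The majority-limiting rule is mechanically trivial, which is part of its strength: it is easy to audit. A hypothetical eligibility check (the employer names and five-seat roster below are illustrative, not the actual committee; the strict-majority threshold follows the rule as described above):

```python
from collections import Counter

def violates_majority_limit(seat_employers):
    """True if any single employer holds a strict majority of seats.

    seat_employers: one employer name per committee seat.
    """
    total = len(seat_employers)
    _, top_count = Counter(seat_employers).most_common(1)[0]
    return top_count > total // 2  # strict majority check

# Five seats: two from one employer passes the constraint...
print(violates_majority_limit(
    ["Google", "Google", "Microsoft", "Independent", "Meta"]))    # False
# ...three from one employer trips it.
print(violates_majority_limit(
    ["Google", "Google", "Google", "Microsoft", "Independent"]))  # True
```

The design choice worth noting: the constraint binds the seat allocation, not the contributor base, which is exactly why employment concentration alone can't convert into control.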

The common features are clear: individual-based membership, constitutional constraints on concentrated power, and structural separation between funding and governance authority. These aren't ideological preferences. They are load-bearing architectural decisions. The projects that survive corporate capture are the projects that were designed to resist it, not through goodwill or cultural norms but through governance structures that make capture structurally expensive.

The projects that were captured are the projects that trusted procedure to substitute for principle. They learned, at the cost of their communities, that it cannot.

IV. Vector 2: Behavioral Governance as Structural Capture

The first vector operated through institutions: bylaws, budgets, board seats. This one operates through behavior.

The structural argument is precise. The problem isn't that behavioral norms exist. Communities require norms. The problem is that the dominant behavioral governance framework, the Contributor Covenant, creates an enforcement structure that fails the Pettit test. It enables arbitrary interference without accountability. This is true regardless of the political orientation of the enforcers, and it remains true even when the enforcement body acts with genuine good faith.

The Contributor Covenant: Origin and Structural Mechanism

The Contributor Covenant was created in 2014 by Coraline Ada Ehmke to address a real structural vacuum: the absence of standardized mechanisms for protecting underrepresented contributors from harassment in open-source communities. The document was adopted rapidly (15,000 projects by 2015, over 100,000 by 2021), propelled by grassroots demand, GitHub's integration of the document as a default template, and institutional mandates from corporate sponsors requiring formal conduct policies as a prerequisite for participation.

The adoption trajectory is not contested. Neither is the problem it sought to solve. The structural question is what the document creates, not what it intends.

The Contributor Covenant's primary structural innovation is the mandate for a designated enforcement body granted unreviewable authority to interpret the code and impose sanctions, from private warnings to permanent bans. Across its versions, the document has evolved from a simple pledge (v1.0) to a complex governance instrument (v3.0) that specifies "Community Moderators" with investigative powers: the authority to "review messages, logs, and recordings" and "interview witnesses," mirroring the procedural apparatus of a professional HR department. The evolution from "Project Maintainers" (v1.x) to "Community Leaders" (v2.0) to "Community Moderators" (v3.0) is a deliberate decoupling of social authority from technical authority. The enforcement body operates alongside, and in practice above, the technical governance structure.

This is the parallel power structure. Technical governance controls the code. Behavioral governance controls who is permitted to contribute it. When these two structures conflict, and the documented cases demonstrate that they do, the behavioral body holds the superior position, because exclusion is a more severe sanction than any technical disagreement.

The Pettit Test Applied

The non-domination criterion asks a single structural question: does the enforcement body possess the capacity to interfere with a contributor's participation on an arbitrary basis?

Three structural features of the standard Contributor Covenant deployment answer this question affirmatively.

No popular control. In the overwhelming majority of projects, the enforcement body isn't elected by the contributor base. It is appointed by the project's core team, often the same individuals who founded the project or who hold the most institutional power. This creates a closed governance loop in which technical leaders appoint social leaders who are then tasked with governing the body that appointed them. The Contributor Covenant text provides no standard mechanism for removing or recalling moderators. The contributors have no institutional voice in the composition or mandate of the body that can exclude them.

Subjective standards. Non-domination requires that interferences be based on clear, prospective rules. The Contributor Covenant prohibits "trolling," "insulting or derogatory comments," "public or private harassment," and conduct "reasonably considered inappropriate in a professional setting." Each of these terms admits a range of interpretation broad enough to encompass virtually any disagreement sufficiently heated to attract moderator attention. The word "deem" in the enforcement clause (maintainers may act against behavior they "deem inappropriate") codifies subjective discretion as the enforcement standard. Selective enforcement under subjective standards is, in Pettit's framework, a structural hallmark of domination.

No adversarial process. The Contributor Covenant provides no right to a formal defense, no right to examine the evidence, and no independent appeal mechanism. Version 3.0 emphasizes that investigations should be "as transparent as possible" while "prioritizing safety and confidentiality," a formulation that grants the enforcement body unilateral discretion over what the accused is permitted to know about the charges against them. In the typical deployment, the moderator's judgment is final.

Apply the eyeball test. Can a contributor to a project governed by the Contributor Covenant look the enforcement body in the eye without reason for fear or deference? If the contributor's continued participation depends on remaining in the good graces of a body that can exclude them based on its own subjective judgment, without election, without appeal, and without a clear standard, the answer is no. The contributor is structurally dominated. Not necessarily interfered with. But dominated, in Pettit's precise sense: they exist at the mercy of the enforcer's continuing goodwill.

And the primary mode of this domination isn't punishment but self-censorship. The enforcement body need not ban anyone. The structural capacity to ban is sufficient. Contributors who recognize the asymmetry preemptively adjust, softening technical objections, avoiding contentious topics, calibrating tone to the moderator's expectations rather than to the engineering problem at hand. This is Pettit's "servitude" permutation: no interference has occurred, but domination is structurally present, and the contributor's behavior has already been shaped by it.

The Documented Cases

The structural analysis isn't hypothetical. The documented controversies confirm the pattern.

The Rust moderation team resignation (2021). The entire Rust moderation team resigned collectively, citing "structural unaccountability" of the Rust Core Team. The moderators reported that the Core Team had placed itself in a position answerable only to itself, exempt from the oversight governing every other team. The moderators, the behavioral enforcement body, found themselves structurally unable to enforce the Code of Conduct against the project's most powerful technical actors. The resignation exposed a fundamental circularity: the enforcement body was appointed by the entity it was supposed to oversee. This is the closed governance loop in its purest expression. RFC 3392, which dissolved the Core Team in favor of the Leadership Council, was a direct structural response, an acknowledgment that the behavioral governance instrument was hollow without an independent enforcement capacity.

The RustConf keynote incident (2023). JeanHeyd Meneide was invited as a keynote speaker by a 5-0 vote of the interim leadership group, then unilaterally downgraded to a regular talk by two individuals who contacted the conference organizers directly, bypassing the group that had approved the invitation. Meneide wasn't informed until the decision was final. The incident precipitated the resignation of Sophia Turner, who described the system as "a cruel, heartless entity" that treated experts as disposable. The structural failure was not the outcome (reasonable people can disagree about conference programming) but the mechanism: two individuals exercised governance authority without mandate, without process, and without accountability. The behavioral governance structure provided no recourse.

The Node.js Rod Vagg crisis (2017). A Technical Steering Committee member was targeted for removal after sharing a link on Twitter to an article criticizing speech codes on university campuses. The subsequent process lacked adversarial safeguards: Vagg reported being excluded from the investigation and denied the opportunity to respond to specific complaints before a vote was held. The TSC voted to retain him, triggering mass resignations from members who felt the Code of Conduct was being selectively applied. The project forked (Ayo.js). The behavioral governance instrument became the vector for a political contest that fragmented the community's technical capacity.

The Linux kernel Code of Conduct adoption (2018). Linus Torvalds issued a public apology for his communication style and took a temporary leave of absence. The kernel simultaneously replaced its "Code of Conflict" with the Contributor Covenant. The structural significance wasn't the apology (Torvalds's abrasiveness was a documented governance problem) but what was exchanged. The Code of Conflict was a procedural instrument that focused on technical disputes and provided the Technical Advisory Board as a mediation body. The Contributor Covenant introduced a parallel enforcement structure with subjective standards and no independent appeal. Critics within the kernel community argued that the transition was driven by the Linux Foundation rather than the developer community, and that it replaced a flawed but structurally limited instrument with a structurally unlimited one.

The NixOS governance crisis. I have personal standing in this case, documented across several previous pieces. I won't retell it here. The structural pattern is identical: a behavioral governance body, operating under subjective standards, exercised exclusionary authority without adversarial process. The details are on record for anyone who wants them.

The Structural Selection Problem

The individual cases are instructive. The systemic pattern is more consequential.

Behavioral governance instruments that fail the Pettit test don't merely risk occasional injustice. They produce a structural selection effect. The enforcement environment selects for contributors who navigate behavioral compliance effortlessly (professional communicators, corporate employees with HR training, individuals socialized into institutional norms) and selects against contributors who do not: the disagreeable, the neurodivergent, the technically brilliant loner who maintains critical infrastructure from a home office and communicates in the blunt idiom of engineering rather than the diplomatic register of corporate culture.

This is not speculation. The 2026 ICSE research on Code of Conduct impact documents a "CoC Disengagement Gap": projects with a CoC successfully attract a higher volume of new contributors over the long term, but experience a documented short-term increase in disengagement among veteran contributors. The new contributors who arrive are, on average, more comfortable with institutional norms. The veterans who depart take their institutional knowledge with them.

The structural consequence is a gradual transformation of the project's immune system. The contributors most likely to resist corporate capture, independent maintainers with deep technical expertise and no institutional loyalty, are precisely the contributors most likely to run afoul of behavioral governance instruments that penalize disagreeable communication styles. The contributors who remain are those who have internalized the norms of the institutional environment. They are, not coincidentally, the contributors most easily absorbed into corporate employment.

The two vectors of capture are not independent. They are symbiotic. §V will formalize this relationship.

Alternative Models That Pass the Pettit Test

The structural diagnosis does not imply that behavioral norms are unnecessary. It implies that the specific governance structure created by the Contributor Covenant fails the non-domination criterion. Alternative models exist that achieve the protective function without creating an unaccountable parallel authority.

The Linux kernel's Code of Conflict (2015–2018) was a procedural instrument focused on the technical review process. It mandated that technical criticisms target the code, not the person, and provided the Technical Advisory Board as a mediation body. It didn't create a parallel enforcement structure. It did not list protected classes. It did not extend its jurisdiction beyond project spaces. It was criticized, fairly, for being insufficient to prevent the hostile communication culture that characterized kernel development. But its structural property was significant: it was subordinate to the technical governance, not superior to it. It couldn't be used to exclude contributors for reasons unrelated to their technical work.

The SQLite Code of Ethics adopted the Rule of St. Benedict as a one-directional pledge by the developers to the community. It creates no enforcement body. It imposes no sanctions on contributors or users. It is a statement of the developers' behavioral commitments without establishing a governance structure that could be captured or weaponized. The structural property: it achieves the "professionalism" signal required by enterprise clients while maintaining a closed-contribution model that protects the project's technical sovereignty.

The LLVM transparency model uses a Code of Conduct but incorporates structural checks absent from the base Contributor Covenant. It publishes annual transparency reports detailing resolved incidents without identifying parties. It explicitly states that its goal is not punitive accountability but community safety. The enforcement committee has found maintainers and moderators in violation of the code, demonstrating functional bidirectional accountability. The structural property: public documentation of enforcement decisions creates an empirical record against which the enforcement body can be evaluated. This is not a formal adversarial process, but it is a structural check, a mechanism by which the governed can assess whether the governors are acting consistently and proportionally.

The common structural features of the models that pass the Pettit test: enforcement authority is either absent (SQLite), subordinate to technical governance (Code of Conflict), or subject to transparency mechanisms that enable community oversight (LLVM). None of them create a parallel power structure with superior authority, subjective standards, and no accountability.

The Structural Lesson

The Contributor Covenant addresses a real problem, the exclusion and harassment of underrepresented contributors, with a governance structure that introduces a new form of structural domination. This is not an argument against behavioral norms. It is a diagnostic observation that replacing horizontal domination (contributors harassing each other) with vertical domination (an unaccountable enforcement body with subjective authority) doesn't produce freedom. It produces a different species of unfreedom, one that is harder to detect because it wears the rhetorical surface of protection.

The structural question is always the same: can the governed look the governors in the eye without reason for fear or deference? If the answer is no, regardless of the governors' intentions, their political orientation, or their stated commitment to justice, the structure fails. The framework doesn't care about motives. It cares about architecture.

V. The Symbiosis: How the Two Vectors Reinforce Each Other

Documented separately, each vector presents a significant structural vulnerability. Documented together, they reveal something considerably more consequential: a reinforcement loop in which each vector amplifies the other's effectiveness, producing a governance environment that selects for corporate-compatible contributors while neutralizing the commons' structural immune system.

This is a structural inference, not a documented conspiracy. The Autonomic Machine does not require conscious coordination between these two vectors. It requires only that their incentive gradients point in the same direction, and they do, with a consistency that structural analysis can't ignore.

The Selection Mechanism

Consider the asymmetry. A corporation navigating a behavioral governance regime has resources that an independent contributor doesn't: an HR department trained in compliance language, a legal team versed in liability management, professional communications staff who can craft responses that satisfy any subjective standard of "appropriate" conduct. The corporate contributor operates within a behavioral framework by default, because the corporate environment has already selected for precisely the interpersonal behaviors that conduct codes enforce. The corporate contributor doesn't need to change their behavior. They arrived pre-adapted.

The independent contributor, the unpaid maintainer, the technically brilliant dissenter, the person who cares more about correct code than correct phrasing, occupies the opposite structural position. They possess no compliance infrastructure. Their communication norms are those of the engineering culture that built the commons: direct, technically precise, indifferent to the social packaging of the message. This is not a deficiency. It is the communicative signature of the meritocratic culture that produced the software these corporations now depend upon.

Behavioral governance codes, structurally, select against this communicative mode. The Contributor Covenant's prohibition of behavior deemed "inappropriate" (where "deemed" signals subjective discretion) operates as a filter that independent, direct, technically focused contributors are disproportionately likely to fail. Not because they are harassing anyone. Because their communicative norms, formed in a culture of adversarial technical review, don't conform to the professional-managerial register that behavioral governance codes implicitly enforce.

The result is a selection pressure operating at the population level; the claim is structural, not individual, because no single enforcement action "captures" a project. Over time, the contributor base of a project governed by subjective behavioral codes will shift: not through any single dramatic event, but through the accumulated attrition of contributors who decline to self-censor, who refuse to modulate technically precise criticism into therapeutically palatable language, or who are expelled for failing to satisfy a standard that was never designed to evaluate technical contribution. The population that remains is, by structural necessity, the population that navigates the behavioral regime successfully: corporate employees and individuals whose communicative norms align with institutional professionalism.

This is the commons' immune system being neutralized. The independent maintainer who says "this patch is wrong and here is why" is the structural equivalent of the peer dissenter in Milgram's experiment, the person whose refusal to comply collapsed obedience from 65% to 10%. Behavioral governance codes don't merely regulate conduct. They regulate the type of person who can participate, and the type they exclude is precisely the type whose structural function is to resist institutional capture: the disagreeable, technically competent, independent contributor who can't be bought, redirected, or silenced by organizational pressure.

The Reinforcement Loop

Once the selection mechanism is understood, the reinforcement between the two vectors becomes structurally visible.

Corporate capture (§III) operates through financial leverage: board seats purchased through membership tiers, maintainers hired and redirected, infrastructure dependency creating standby control. But corporate capture encounters resistance when the contributor base contains a critical mass of independent voices capable of contesting corporate decisions on technical grounds. The xz-utils backdoor was detected by a single independent engineer. The Terraform fork was catalyzed by contributors who refused to accept HashiCorp's unilateral relicensing. The Rust moderation team resigned because they had enough structural independence to do so. These are the immune responses. These are the peer dissenters, the ones whose refusal to comply collapses the autonomic machine's compliance rate from 65% to 10%.

Behavioral governance (§IV) degrades this immune response. It doesn't need to target corporate capture specifically. It merely needs to operate as designed, enforcing a subjective behavioral standard through an unaccountable body, and the structural consequence follows automatically: the contributor population shifts toward the compliant, the professionally socialized, the conflict-averse. The population that remains is the population least likely to contest a corporate board decision, least likely to notice a governance capture in progress, least likely to exercise the structural dissent that Milgram demonstrated is the autonomic machine's only reliable failure mode.

The corporation, meanwhile, benefits from the behavioral governance regime without needing to orchestrate it. The code of conduct handles the selection. The corporation merely inherits the result: a governance environment in which the only remaining voices are those compatible with corporate institutional norms. The path to capture is cleared not by conspiratorial intent but by structural selection. This is the autonomic machine operating across both vectors simultaneously; each vector producing outcomes that serve the other's function, without any individual actor in either vector needing to understand or intend the combined effect.

The economic dependency documented in the Pettit analysis completes the loop. The maintainer who is hired by a corporation is now doubly constrained: constrained by the employment relationship (which provides the corporation with hierarchical authority over their labor) and constrained by the behavioral governance regime (which provides the conduct body with discretionary authority over their participation). The independent maintainer who is not hired faces the opposite double bind: economically precarious (unable to sustain full-time maintenance without sponsorship) and behaviorally exposed (lacking the institutional buffer that corporate employment provides against conduct complaints). In both cases, the structural incentive points in the same direction: compliance. The sponsored maintainer complies because their livelihood depends on it. The independent maintainer complies because their participation depends on it. The person who refuses both forms of compliance, who maintains their technical independence and their communicative directness, is the person the combined system is structurally designed to expel.

The Structural Inversion

The combined mechanism satisfies the structural inversion formula with disquieting precision.

Hollow. The substance of community governance, technical self-determination by the people who write the code, is emptied. Decision-making authority migrates to corporate-dominated steering committees (§III). The capacity to contest those decisions is degraded by behavioral selection (§IV). What remains is procedure without substance: meetings are held, votes are taken, comments are solicited, but the population has been pre-filtered for compliance.

Substitute. A procedurally legitimate governance apparatus is installed in place of the substantive one. The conduct committee, the diversity statement, the community guidelines; real institutions performing real functions. They are also the mechanism by which the commons' immune system is replaced with a compliance apparatus. The substitution is invisible because the substitute performs a genuine function (addressing harassment) while simultaneously performing a structural function (filtering for corporate compatibility) that its operators need not recognize.

Preserve. The rhetorical surface of "community governance" is maintained at full intensity. The project's GitHub page still says "community-driven." The code of conduct still frames itself as protecting the vulnerable. And all of this is, at the rhetorical level, true. The community is involved; it is simply a different community than the one that built the software. The governance is open; it is simply open to a population structurally selected for compatibility with the institutions that fund the foundation.

This is the structural inversion of the commons. Not a hostile takeover. Not a conspiracy. A structural process operating through individually defensible decisions (each hire reasonable, each conduct enforcement justified, each board seat purchased through a legitimate membership tier) that produces, in aggregate, an outcome indistinguishable from coordinated capture. The commons' technical governance is hollowed. A behavioral compliance apparatus is substituted. The rhetorical surface of community ownership is preserved. The formula is one. The domains are many. And the result is a commons that looks, from the outside, exactly like a commons, and functions, from the inside, exactly like a corporation with unusually good public relations.

The freedom formalism derived in a preceding piece applies here with particular force.

Bedrock. The structural condition for the commons' persistence, the path from dependency to self-governance for its participants, is narrowing. Fewer independent contributors means reduced capacity for self-correction, reduced diversity of technical perspective, reduced structural resilience against capture.

Bridge. The institutional mechanisms that should translate that requirement into lived reality (governance bodies, review processes, conflict resolution) have been captured or filtered.

Surface. The vocabulary of freedom ("open source," "community-driven," "contributor-friendly") remains fully operational, deployed with increasing frequency as the substance it once denoted is progressively extracted.

The cascade runs in one direction: surface preservation accelerates bridge erosion, bridge erosion degrades the bedrock, and the commons eats its own immune system while proclaiming its own health.

The question isn't whether this is happening. The evidence in §III and §IV documents it across multiple projects, multiple foundations, and multiple governance crises. The question is whether the commons possesses the structural capacity to reverse it; whether there remain enough independent voices, enough peer dissenters, enough people willing to fail the behavioral filter and contest the institutional consensus, to collapse the compliance rate from 65% back to 10%. That question is not rhetorical. It is the structural precondition for everything that follows.

VI. The Silence: Legislative Encroachment and Open-Source Paralysis

The preceding sections documented the diseases: corporate capital as governance capture, behavioral codes as structural filtering, and their autonomic reinforcement. This section documents the symptom that proves the diagnosis: when an external threat arrives that a healthy commons would resist, the captured commons does nothing. The silence is the test. The commons is failing it.

The Legislative Enclosure

Two parallel legislative waves are converging on the architecture of free software. The first targets the identity of the user. The second targets the confidentiality of the communication. Together, they constitute the most comprehensive assault on digital freedom since the Crypto Wars of the 1990s. The difference is that this time, the ecosystem that defeated the last assault is building the compliance infrastructure for this one.

Age Verification: The Identity Mandate

The United Kingdom's Online Safety Act 2023 requires platforms to deploy "highly effective" age verification to prevent minors from accessing restricted content. Ofcom, the regulator, has specified compliant methods: facial age estimation, credit card verification, banking data access, or the upload of government-issued identity documents. Failure carries fines of up to £18 million or 10% of global annual turnover. The Act empowers Ofcom to block access to services that fail to meet their duties, a power that could extend to ISP-level blocks on any software that doesn't natively enforce identity checks.

The European Union's Digital Services Act follows the same trajectory through its risk-mitigation provisions, requiring platforms accessible to minors to implement "appropriate and proportionate measures" for safety. The European Commission's "mini wallet" blueprint, feature-ready as of April 2026, uses zero-knowledge proof cryptography to allow users to prove they are over a given age threshold without revealing their full identity. The privacy-preserving presentation is genuine at the protocol layer. The structural reality beneath it isn't: the digital certificate must be anchored to a trusted authority (a passport, a national eID, a banking record). The user proves their age without revealing their name; the state verifies that the user has a name. The anonymity is cosmetic. The identity layer is architectural.
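
The trust topology can be made concrete with a toy sketch. This is not real zero-knowledge cryptography and none of the names correspond to the EU wallet's actual protocol (a real deployment uses signatures and ZK proofs rather than a shared HMAC key); the sketch only models the architecture. The verifier learns nothing but the predicate, yet the credential can only be minted by an issuer who has already verified, and retains, the user's full identity:

```python
import hashlib
import hmac
import secrets

ISSUER_KEY = secrets.token_bytes(32)   # held by the state-trusted issuer
issuance_log = {}                      # issuer-side record: nonce -> identity

def issue_age_credential(national_id: str, is_over_18: bool) -> dict:
    """Issuance requires the full verified identity: this is the anchor.
    (Checking the passport/eID record against national_id is elided.)"""
    nonce = secrets.token_hex(16)
    claim = "over_18" if is_over_18 else "under_18"
    tag = hmac.new(ISSUER_KEY, (nonce + claim).encode(),
                   hashlib.sha256).hexdigest()
    issuance_log[nonce] = national_id  # the identity layer persists here
    return {"nonce": nonce, "claim": claim, "tag": tag}

def verify_presentation(cred: dict):
    """The relying party sees only the claim, never a name."""
    expect = hmac.new(ISSUER_KEY, (cred["nonce"] + cred["claim"]).encode(),
                      hashlib.sha256).hexdigest()
    return cred["claim"] if hmac.compare_digest(expect, cred["tag"]) else None

cred = issue_age_credential("EXAMPLE-PASSPORT-0000", is_over_18=True)
print(verify_presentation(cred))    # "over_18": the verifier learns no name
print(issuance_log[cred["nonce"]])  # but the issuer can still link the nonce
```

The point of the sketch is the last line: however privacy-preserving the presentation step becomes, the issuance step anchors every credential to a state-verified record, and that record outlives the transaction.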

Australia's Online Safety Amendment Act 2024 prohibits children under 16 from holding social media accounts entirely, with fines reaching A$49.5 million for systemic failures. In the United States, Colorado's SB25-201 mandates "commercially available certified technology" for age verification on platforms hosting sexual material, effective July 2026.

The pattern across jurisdictions is uniform: the construction of a persistent, high-assurance identity layer that transforms anonymous access into permissioned access. Every browser, every communication tool, every operating system designed to minimize user data collection is structurally incompatible with this regime. Compliance requires integration with third-party identity providers, turning the software into a checkpoint for state-approved access.

These mandates aren't arriving organically. In March 2026, an independent researcher on Reddit assembled a map of the funding behind the U.S. age verification wave by cross-referencing federal lobbying disclosures, state filings, contractor invoices, and nonprofit tax returns. The trail led to a single source: Meta. An estimated $2 billion in aggregate spend on product development, identity vendor acquisitions, and a sustained lobbying campaign had been routed through intermediaries: grants to nonprofits promoting "online child safety," payments to trade associations that convened policy briefings, and contracts with identity verification companies whose product roadmaps aligned precisely with the technologies being promoted in state legislatures. The funding chain wasn't disclosed voluntarily; it had to be excavated from public records by a private citizen with no institutional backing.
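
The method the researcher describes, cross-referencing disparate public filings, amounts to building a payments graph and searching it for money trails from the funder to the advocacy endpoint. A minimal sketch of that technique, with entirely invented records (every entity name and amount below is hypothetical, standing in for data that in the real case had to be excavated from lobbying disclosures, state filings, invoices, and Form 990s):

```python
from collections import defaultdict, deque

# Hypothetical records merged from disparate filings: (payer, payee, USD).
records = [
    ("MegaPlatform Inc", "Child Safety Alliance", 40_000_000),
    ("MegaPlatform Inc", "State Policy Forum", 15_000_000),
    ("Child Safety Alliance", "Verify Kids Coalition", 12_000_000),
    ("State Policy Forum", "Verify Kids Coalition", 5_000_000),
    ("Verify Kids Coalition", "AgeCheck Lobbying LLC", 9_000_000),
]

graph = defaultdict(list)
for payer, payee, usd in records:
    graph[payer].append((payee, usd))

def funding_paths(source, target):
    """Breadth-first enumeration of money trails from source to target.
    Assumes an acyclic graph; real filings need cycle and dedup handling."""
    paths, queue = [], deque([[(source, 0)]])
    while queue:
        path = queue.popleft()
        node = path[-1][0]
        if node == target:
            paths.append(path)
            continue
        for payee, usd in graph[node]:
            queue.append(path + [(payee, usd)])
    return paths

for path in funding_paths("MegaPlatform Inc", "AgeCheck Lobbying LLC"):
    print(" -> ".join(f"{n} (${usd:,})" if usd else n for n, usd in path))
```

The structural lesson of the technique: no single filing discloses the chain. Each record is individually legitimate; the pattern only becomes visible when the graph is assembled, which is why it took a private citizen doing the join by hand.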

The structural diagnosis from §V applies with surgical precision. The corporation that benefits from centralized identity infrastructure funds the advocacy for mandating centralized identity infrastructure, routes the funding through third parties to create the appearance of independent consensus, and wraps the entire operation in the rhetoric of child protection. The platform that created the environment in which children are harmed now funds the legislative response that consolidates its own dominance, because only platforms with the resources to build or acquire age verification infrastructure at scale can survive the compliance burden. Smaller competitors, decentralized alternatives, and privacy-preserving projects can't. Whether or not any individual at Meta sat down and said "let us destroy the open-source alternatives," the structural outcome is identical: the mandate eliminates the commons as a competitive threat, and the entity that benefits most from that elimination funded the apparatus that produced it. At some point, the distinction between structural coincidence and structural design collapses under its own weight. And the ecosystem says nothing.

Encryption Under Siege

The second wave targets confidentiality directly. The United States' EARN IT Act seeks to strip Section 230 immunity from platforms that fail to follow "best practices" for detecting child sexual abuse material. The ACLU, the EFF, and the Internet Society have all argued that these "best practices" will inevitably require client-side scanning or backdoor access to encrypted communications. By making end-to-end encryption a potential element of evidence for "recklessness," EARN IT forces developers of communication software to choose between providing security and facing ruinous civil litigation.

The European Commission's CSAM Regulation (commonly called "Chat Control") represents the most direct legislative attempt to mandate message scanning. The original proposal would have required providers to scan all private messages, including end-to-end encrypted communications, for illegal content. The European Parliament rejected the extension of voluntary scanning in March 2026, but the permanent regulation remains under negotiation, with the new draft requiring "risk mitigation measures" that critics describe as mandatory scanning by another name.

The UK's Investigatory Powers (Amendment) Act 2024 introduces a "notification regime" requiring telecommunications operators (a term defined broadly enough to include any internet-based communication service) to notify the Home Office before making changes to their systems that might affect "lawful access." This gives the government a structural veto over security improvements. If a developer intends to deploy a patch to a vulnerability that law enforcement is currently exploiting for surveillance, the government can order a delay in that deployment. The Act converts the software update cycle into a surveillance negotiation.

Australia's Telecommunications and Other Legislation Amendment Act (TOLA, 2018) completes the picture, empowering the government to issue "Technical Capability Notices" that compel service providers to build decryption capabilities into their systems.

The Contrast: When the Ecosystem Fought

The current legislative environment is not unprecedented. In the 1990s, the United States government attempted to impose precisely the same categories of control: mandatory key escrow (the Clipper Chip), export restrictions on encryption (ITAR classification of cryptography as munitions), and criminal prosecution of developers who published strong encryption tools. The ecosystem's response was immediate, adversarial, and effective.

In 1993, the Clinton Administration and the NSA introduced the Clipper Chip, a hardware encryption standard whose "key escrow" system split each device's decryption key between two government escrow agents. The resistance was both political and technical. The Electronic Frontier Foundation rebranded the proposal "key surrender." In 1994, computer scientist Matt Blaze discovered a flaw in the Law Enforcement Access Field that allowed users to bypass the escrow entirely, proving that government-mandated backdoors were not merely undesirable but technically unsound. The Clipper Chip was abandoned.
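Blaze's result is worth sketching, because it shows why mandated backdoors fail on their own terms: the LEAF's integrity checksum was only 16 bits wide, so a bogus LEAF that passes verification can be found by brute force. The sketch below uses a stand-in checksum function (not Skipjack's actual construction) purely to show the scale of the search.

```python
import hashlib
import itertools

# Illustrative sketch of the 1994 LEAF attack. The real LEAF carried a
# 16-bit checksum; a 16-bit check admits forgery in about 2^16 attempts.
# checksum16 here is a stand-in, not the actual EES construction.

def checksum16(leaf: bytes) -> int:
    """A 16-bit integrity check: only 65,536 possible values."""
    return int.from_bytes(hashlib.sha256(leaf).digest()[:2], "big")

target = checksum16(b"a genuine LEAF for some session key")

# Forge: search for any other LEAF that yields the same 16-bit checksum.
# The escrow agents would accept it, but it decrypts to nothing useful.
for n in itertools.count():
    bogus = b"bogus-" + str(n).encode()
    if checksum16(bogus) == target:
        break

print(f"forged after {n + 1} attempts")  # on the order of 2^16 = 65,536
```

A few tens of thousands of hash evaluations was trivial even on 1994 hardware, which is why the flaw was fatal: the mandated access mechanism could be satisfied without granting any access.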

In 1991, Phil Zimmermann released Pretty Good Privacy as a direct response to Senate Bill 266, which proposed backdoors for encryption software. Under ITAR, strong encryption was classified as a munition, making its distribution on the internet an illegal export. Zimmermann risked federal prosecution; U.S. Customs kept a criminal investigation open against him for three years. The resistance to PGP export restrictions involved guerrilla distribution tactics, with supporters reportedly driving to payphones with acoustic couplers to upload the source code to international BBS systems before any ban could take effect. This was an era of individual champions willing to risk personal liberty for the principle of software freedom.

The seminal legal victory was Bernstein v. U.S. Department of Justice. Daniel Bernstein, a Berkeley mathematics student, wished to publish his "Snuffle" encryption algorithm and its source code. Under ITAR, he was required to register as an arms dealer and obtain a license for each foreign reader. With the support of the EFF, Bernstein sued the government. In April 1996, Judge Marilyn Hall Patel ruled that computer source code is a form of scientific and political expression protected by the First Amendment. That ruling established the legal foundation for the right to publish cryptographic software and remains the primary shield against government attempts to prohibit the distribution of encryption tools.

Three structural conditions enabled this resistance. First, a morally committed community: the primary actors (Zimmermann, Bernstein, Blaze) were driven by ideological conviction, not professional obligation. They viewed encryption control as an attack on human liberty, not a compliance question. Second, lack of corporate capture: open-source development wasn't yet the multi-billion-dollar industry it has become, and developers were generally independent of the telecommunications and technology firms that now sponsor the foundations. Third, adversarial institutional backing: organizations like the EFF and ACLU were designed to challenge the state in court, not to facilitate "neutral hubs" for industry collaboration.

The Current Response: Institutional Silence

The modern ecosystem's response to the legislative enclosure is not resistance. It is accommodation.

The Linux Foundation, with projected 2025 revenues exceeding $311 million, is not opposing age verification mandates. It is building the infrastructure to implement them. The Foundation's CAMARA Project, an alliance with global telecommunications operators including Telefónica, is standardizing "Know Your Customer" and age verification APIs that allow digital platforms to verify whether a user meets an age threshold by querying mobile operator subscriber data in real time. The CAMARA Age Verification API is marketed as a "privacy-by-design" solution because it returns a binary yes/no result. It also structurally integrates the software stack with state-verified subscriber records, creating the very persistent identity layer the cypherpunks of the 1990s went to prison to prevent.
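The shape of such an operator-backed check is worth making explicit. The sketch below is hypothetical (the function, field names, and flow are illustrative, not quoted from the CAMARA specification); it exists only to show the structural point: the binary answer is computed against KYC subscriber records that the operator built from government-issued identity documents.

```python
# Hypothetical sketch of an operator-backed age check in the CAMARA
# style. Names and fields are illustrative, not the published API.

def operator_age_check(subscriber_db: dict, phone_number: str,
                       age_threshold: int) -> dict:
    """Stand-in for the mobile operator's side of the API: the query is
    resolved against state-verified (KYC) subscriber records."""
    record = subscriber_db[phone_number]  # the persistent identity layer
    return {"ageCheck": record["age"] >= age_threshold}  # binary result only

# The operator's KYC records, populated from government-issued ID at
# contract signing. This is what the "privacy-by-design" binary rests on.
subscribers = {"+34600000000": {"age": 34, "id_document": "..."}}

print(operator_age_check(subscribers, "+34600000000", 18))
# The platform sees only {'ageCheck': True}; the operator sees everything.
```

The privacy claim is true of the last line and false of the system: the yes/no result is minimal by construction, but it can only be computed because a persistent, state-verified identity record exists one hop upstream.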

The Apache Software Foundation, operating on a budget roughly two orders of magnitude smaller, has focused its 2025 policy priorities on "Public Policy & Security Standards" and "Supply Chain Security," primarily aimed at helping projects comply with the EU's Cyber Resilience Act. There is a documented absence of public opposition to bills like EARN IT or the Online Safety Act. The Foundation's "Tooling Initiative" is designed to "harden ubiquitous Apache projects" for regulatory expectations, prioritizing legal certainty for corporate users over the defense of user freedom.

The Free Software Foundation remains the most consistent critic. In April 2026, the FSF condemned Discord's new age identification policy, stating that "age verification policies are promoted as being necessary for protecting kids... but in reality these policies force users of all ages to interact with nonfree, invasive programs." The FSF also criticized the UK's children's wellbeing bill as a process "devoid of checks or accountability." But the FSF is increasingly marginal within the broader industry. Its revenue and developer activity have migrated toward the corporate-centric Linux Foundation, and its institutional capacity to mount the kind of litigation that decided Bernstein has diminished proportionally.

The Open Source Initiative acknowledged age verification in its 2026 FOSDEM programming, but has primarily focused on defining "Open Source AI," largely avoiding the adversarial legal challenges that characterized its early history under Raymond and Perens.

The contrast is structurally devastating. In the 1990s, a handful of individuals with ideological conviction and institutional support from the EFF defeated the NSA's encryption controls, established code as constitutionally protected speech, and set a precedent that held for three decades. In the 2020s, the most powerful institutions in open source, commanding hundreds of millions in annual revenue and representing millions of developers, are responding to comparable legislative threats by building the compliance tools. The silence isn't inaction. It is selective action: the ecosystem's collective energy has been redirected.

The AI-Hostility Paradox

The silence toward legislative encroachment is made more conspicuous by the ecosystem's simultaneous capacity for energetic, coordinated hostility toward a different target: artificial intelligence tooling.

The open-source community's posture toward AI-assisted development is, overwhelmingly, one of aesthetic contempt. Contributors who use AI tools for code generation, documentation, or review are routinely derided as producing "AI slop." Projects have adopted explicit policies banning AI-generated contributions. The hostility is visceral, public, and self-organizing in a way that the response to EARN IT or the Online Safety Act manifestly isn't.

This is not principled resistance. It is aesthetic snobbery substituting for structural engagement. The practical consequence is devastating: corporations (Google, Microsoft, Meta, Amazon) are building proprietary AI tooling on top of open-source foundations, training their models on open-source codebases, and deploying the results behind API paywalls, while the community that produced the training data sneers from the sidelines at anyone who uses the tools. The ecosystem that cannot muster a public statement against mandatory identity verification can muster coordinated contempt toward a contributor who used Copilot to draft a function. The priorities are structurally inverted.

The cost of this posture extends beyond hypocrisy. Open-source maintainers are the most resource-constrained population in software. The burnout crisis documented in Sovereign Source is not a morale problem; it is a labor economics problem. A single maintainer responsible for infrastructure depended on by millions of users can't scale through willpower. AI-assisted development represents the first credible mechanism for multiplying that maintainer's capacity: automated triage, documentation generation, test scaffolding, vulnerability detection.

The distinction between undisciplined "AI slop" and the elite, deliberate use of AI as a force multiplier is real and consequential. But the community has refused to draw it. Instead of articulating structural criteria for responsible AI usage (provenance, auditability, license compliance), the ecosystem defaults to a blanket prohibition that forfeits the most significant productivity leverage maintainers have ever been offered. The people who need the tools the most are forbidden from using them by a culture that cannot distinguish craftsmanship from freedom.

The economic dimension is more damning still. Corporations have trained proprietary models on billions of lines of open-source code, the SaaS loophole applied to machine learning. But this extraction also constitutes the largest monetization opportunity in open-source history. A legally rigorous mechanism requiring firms to compensate the commons for training on its code would create the first sustainable, non-philanthropic revenue stream for maintenance. The community that should be architecting this mechanism is instead debating whether to accept pull requests from people who used Copilot. The forthcoming piece in this series will address the legal instrument required to close this gap.

The AI-hostility posture is a symptom of the de-moralization documented in §II. A community with a binding ethic would evaluate AI tooling against its structural obligations: does the tool respect provenance? Does it preserve the user's sovereignty over their own code? Does it satisfy the nine freedoms? A de-moralized community, lacking these structural criteria, defaults to the only evaluative framework it has left: taste. The result is a community that polices craftsmanship because it has lost the vocabulary to police freedom. It can tell you whether your code was written by a human. It can't tell you whether your governance serves the commons.

Structural Diagnosis: Why the Silence?

The ecosystem's paralysis is not a lapse in judgment. It is the predictable output of four structural conditions.

De-moralization. The rebranding from "free software" to "open source" in 1998 (§II) removed the moral obligation to resist. Raymond's pragmatism emphasized that software should be "open" because it is more efficient, not because it is "free." This left the ecosystem without a shared ethical vocabulary to oppose mandates like age verification. If the code remains "open," the contemporary open-source professional is often indifferent to whether that code is used to enforce state-mandated identity checks. The moral nerve was severed; the reflex doesn't fire.

Institutional capture. The foundations that govern the ecosystem are funded by the corporations that are negotiating with governments over the terms of these laws (§III). Microsoft, Google, and Meta benefit from "regulatory certainty" and "standardization." For a global telecommunications company, the Linux Foundation's CAMARA API is a mechanism to monetize subscriber data for age verification while claiming the "transparency" of an open-source project. The foundations have become trade associations that provide a "neutral" space for companies to build compliance infrastructure, effectively shielding them from the cypherpunk criticism of the past.

Behavioral domestication. The governance culture of modern open source selects for individuals who are experts in "interoperability," "sustainability," and "community management" (§IV). The maintainers of major projects are often employees of the foundations or their corporate sponsors, with career paths tied to the success of corporate-led initiatives. This creates a culture where dissent is viewed as a risk to the project's funding or professional reputation. The individual champions, the Zimmermanns and Bernsteins, have been replaced by stakeholder representatives who prioritize consensus and compliance. The behavioral governance apparatus documented in §IV produces precisely the contributor population least likely to contest institutional acquiescence.

Atomization. There is a growing structural separation between the development of the software and the defense of the user's rights. While the EFF and ACLU continue to fight legal battles, they are no longer integrated into the core development teams of the Linux kernel or major browsers in the way they were during the 1990s. The people who write the code are focused on the "how." The people who fight for rights are focused on the "should." No institutional structure exists to coordinate the two. The resistance of the 1990s succeeded because the same community that wrote PGP also funded the EFF, testified before Congress, and published the source code as a political act. Today, the code and the politics have been disaggregated, and neither half possesses the structural capacity to resist alone.

The diagnosis is complete. The commons is silent because it has been structurally disarmed. De-moralized, captured, domesticated, and atomized: four conditions, each independently sufficient to degrade resistance, operating simultaneously. The Autonomic Machine doesn't require conspiracy. It requires only that each condition persist, and the architecture of compliance assembles itself.

VII. The Ethic Derived

The preceding six sections documented a disease. This section derives the remedy.

Not a manifesto. Not a wish list. A structural derivation, following the same conditional logic that produced the nine freedoms in the first piece of this series: if the commons is to persist, what behavioral norms are conditionally necessary for that persistence? The freedoms name what must be structurally possible. The ethic names what must be done, by whom, under what constraints, with what accountability.

The derivation chain is unchanged. Axiom 0: entropy increases in any system left to natural forces. Complex systems (commons included) are dissipative structures; they persist only by continuously importing energy and exporting disorder. The quality of that imported energy matters. A commons sustained by passive extraction degrades. A commons sustained by principled contribution persists. The behavioral norms that govern contribution are not preferences. They are the energy import mechanism. Destroy them and the system reaches equilibrium, which, for a dissipative structure, is another word for death.

The freedoms covered two of Elinor Ostrom's eight design principles for long-enduring commons institutions. Freedom 4 (Sustenance) satisfies Ostrom's Principle 2: proportional equivalence between benefits and costs. Freedom 8 (Transparency) satisfies Principle 4: monitors accountable to the community. Six principles remain unaddressed. Each corresponds to a procedural requirement that the freedoms alone cannot fulfill, because freedoms name structural conditions while procedures name the obligations that maintain them. What follows are those six obligations, derived from the remaining principles, grounded in the specific failures documented in §§II through VI, and stated with the precision their structural weight demands.

Tenet 1: Defined Boundaries

The commons must define who may participate, who may govern, and what constitutes the protected resource. Ambiguity in boundaries is not openness; it is a structural invitation to capture.

Ostrom's first design principle requires that the individuals with rights to withdraw from a commons, and the boundaries of the commons itself, be clearly defined. In physical commons (fisheries, forests, irrigation systems), boundary clarity is so obvious that its absence is rarely tolerated. In software, the myth of radical openness has made boundary definition seem antithetical to the culture. This is a structural error, and its consequences are documented.

The Linux Foundation's steering committee structure (§III) illustrates boundary failure at the governance level. When corporate membership tiers confer governance authority, the "boundary" of who governs becomes a function of who can pay. The community of contributors and the community of governors diverge. The result is precisely what Ostrom's principle predicts: those who bear the costs of maintenance (the contributors) have no structural voice in the decisions that affect them, while those who extract the value (the corporate members) govern without accountability to the commons they extract from.

The xz-utils backdoor (§IV) illustrates boundary failure at the trust level. A maintainer operating under a pseudonym, with no verifiable identity and no established trust chain, was granted commit access to a critical infrastructure library because the project lacked any formal boundary definition for trust. The social engineering attack succeeded not because the defenses were breached, but because no defenses existed. The boundary was undefined, so there was nothing to breach.

The tenet addresses both failures. Boundary definition doesn't mean gatekeeping; it means structural clarity. Who has commit authority, and how is it earned? Who holds governance power, and how is it accountable? What constitutes the resource itself, and what falls outside it? A commons that cannot answer these questions hasn't chosen openness. It has chosen vulnerability.

Tenet 2: Collective Governance

Those affected by the rules of the commons must participate in modifying them. Governance power follows demonstrated contribution, not financial investment.

Ostrom's third principle (collective-choice arrangements) requires that the individuals affected by operational rules can participate in modifying those rules. This ensures that governance evolves with the community rather than being imposed by external authority. In software, the principle is routinely violated in both directions: by corporate foundations that sell governance access and by behavioral governance regimes that impose rules without participatory consent.

The CNCF's platinum membership tier (§III) is the corporate violation. Governance seats are allocated by investment level, not by contribution. A company that has never submitted a patch to Kubernetes holds greater governance authority than the maintainer who has spent a decade on the project. The structural consequence is predictable: the rules governing the commons reflect the priorities of the investors, not the community.

The Contributor Covenant's adoption pattern (§IV) is the behavioral violation. Codes of conduct are typically adopted by project founders or small governance committees without community-wide deliberation or vote. Contributors "consent" to the CoC by participating, but this consent is structurally empty because the power dynamics are so asymmetrical that the individual has no meaningful alternative. The rules that govern their conduct were written and imposed without their input; the mechanism by which those rules are enforced is controlled by a body they didn't elect and can't recall. Pettit's non-domination criterion diagnoses this as domination regardless of the CoC's stated intentions: the enforcement body possesses arbitrary discretion over the contributor's standing in the project.

The tenet requires both participatory rule-making and contribution-weighted governance. Not plutocracy (governance by investment) and not ochlocracy (governance by volume). Governance authority accrues to those who bear the costs of maintaining the commons, because those are the people with the structural knowledge and the sustained commitment that Ostrom's principle demands.

Tenet 3: Proportional Response

Sanctions must be graduated, proportional to the offense, and structurally separated between behavioral and technical tracks. No single body may adjudicate both conduct and contribution.

Ostrom's fifth principle requires that sanctions be graduated: minor infractions receive minor responses, escalating only as the severity and frequency warrant. The purpose is to preserve community trust while maintaining accountability. A commons that punishes every infraction with expulsion will rapidly lose the contributors it most needs, the ones with enough independence to dissent.

What I will call the Silencing Playbook, the systematic pattern by which structural inversion is applied to dissent, is the precise negation of this principle, as documented in §IV. Technical dissent is reclassified as a behavioral infraction. The dissenter isn't warned, counseled, or temporarily suspended; they are subjected to a process that escalates from private admonishment to public condemnation to permanent exclusion, with no intermediate steps calibrated to the actual severity of the conduct. The sanctions aren't graduated; they are binary. The enforcement body controls both the behavioral and technical tracks, meaning that a contributor who contests a governance decision on technical grounds can be sanctioned through the behavioral track without the two ever being formally connected. The Rust moderation team resignation is the documented instance: technical governance disputes were funneled into behavioral enforcement channels, and the result was the departure of the people whose technical judgment the project most needed.

The weaponization runs in both directions. Behavioral accusations can serve as pretexts to revoke commit access or ban a contributor from code review, effectively converting a conduct allegation into a technical expulsion without ever adjudicating the technical question. The result is identical: the governance body controls the contributor's standing through whichever track is most convenient, and the separation between conduct and contribution exists only on paper.

The tenet imposes two structural requirements. First, sanctions must be graduated: warning, temporary suspension, restricted access, and permanent exclusion, in that order, with each step requiring documented justification and an accessible appeal mechanism. Second, the behavioral and technical tracks must be structurally separated. A body that adjudicates conduct (harassment, abuse, threats) must not simultaneously adjudicate technical direction (architecture, dependencies, release criteria). The separation is bidirectional: technical dissent may not trigger behavioral sanctions, and behavioral allegations may not serve as instruments to revoke technical standing. The structural firewall prevents weaponization in either direction.
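The two requirements are mechanical enough to state as code. The Python sketch below is illustrative (no project's actual governance tooling is being quoted): escalation moves one step at a time, every step requires a written justification, and the two tracks cannot touch each other's state.

```python
from enum import IntEnum

# Illustrative sketch of Tenet 3: graduated escalation plus a firewall
# between the conduct and technical tracks. All names are hypothetical.

class Sanction(IntEnum):
    NONE = 0
    WARNING = 1
    TEMPORARY_SUSPENSION = 2
    RESTRICTED_ACCESS = 3
    PERMANENT_EXCLUSION = 4

class Track:
    """One adjudication track. Escalation may move at most one step at a
    time, and every step requires a documented justification."""
    def __init__(self, name: str):
        self.name = name
        self.level = Sanction.NONE
        self.record: list[str] = []

    def escalate(self, justification: str) -> Sanction:
        if not justification:
            raise ValueError("every step requires documented justification")
        if self.level == Sanction.PERMANENT_EXCLUSION:
            raise ValueError("already at maximum sanction")
        self.level = Sanction(self.level + 1)  # graduated: one step only
        self.record.append(justification)
        return self.level

class ContributorStanding:
    """The firewall: a conduct finding cannot touch technical standing,
    and a technical dispute cannot trigger a conduct sanction."""
    def __init__(self):
        self.conduct = Track("conduct")
        self.technical = Track("technical")

    def sanction(self, track: str, justification: str) -> Sanction:
        return getattr(self, track).escalate(justification)

c = ContributorStanding()
c.sanction("conduct", "verified harassment report #12")
print(c.conduct.level.name)    # WARNING: the first step, not expulsion
print(c.technical.level.name)  # NONE: commit access is untouched
```

The point of the sketch is what it makes impossible: there is no code path from a first offense to `PERMANENT_EXCLUSION`, and no code path from one track's findings to the other track's level. Governance documents should be auditable against the same two invariants.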

Tenet 4: Accessible Resolution

The commons must provide low-cost, transparent, and accessible mechanisms for resolving disputes before they escalate to fork.

Ostrom's sixth principle requires rapid access to low-cost, local arenas for resolving conflicts. In physical commons, this takes the form of village councils, water boards, or fishery cooperatives where disputes are heard and resolved by community members. In software, the "ultimate" conflict-resolution mechanism is the fork: when governance becomes intolerable, the community splits and creates a competing version. But forking is the most expensive possible resolution. It fractures the contributor base, duplicates maintenance burdens, confuses downstream users, and often results in both projects being weaker than the original.

The TDF/Collabora membership purge (§III) documents what happens when no intermediate resolution mechanism exists. The conflict between corporate members and the foundation's mission escalated to the point of membership revocation without any transparent, accessible, low-cost mechanism for the affected parties to contest the decision before it became irreversible. The result was a governance crisis that damaged the project's credibility and consumed community attention that should have been directed at maintenance.

The symbiosis between corporate capture and behavioral governance (§V) is itself a conflict-resolution failure. The two vectors reinforce each other precisely because no institutional structure exists to adjudicate the tension between them. Corporate sponsors fund behavioral governance that silences dissent against corporate priorities. The community has no arena in which to contest this dynamic short of the nuclear option: forking the project and losing years of accumulated development.

The tenet requires that every commons establish transparent, documented, and accessible conflict-resolution mechanisms that operate below the threshold of fork, scaled proportionally to the project's size and complexity. A solo maintainer's project may need only a documented appeals process; an ecosystem-scale project requires standing arbitration bodies with defined jurisdiction. Technical advisory boards, community arbitration panels, elected ombudspersons: the specific form is a governance parameter. The existence of the mechanism, proportional to the scale it serves, is the obligation. A commons that offers its participants no path between silent compliance and catastrophic departure has failed Ostrom's most fundamental insight about institutional longevity.

Tenet 5: Right to Organize

Contributors retain the right to organize, coordinate, and collectively contest governance decisions without structural retaliation.

Ostrom's seventh principle (minimal recognition of rights to organize) requires that the right of commons participants to devise their own institutions isn't challenged by external authority. In the physical commons, this means that village water users can form associations, elect representatives, and negotiate with adjacent jurisdictions without the state dissolving their organizations. In software, the principle is violated by the atomization documented in §VI and by the behavioral governance structures that make collective contestation structurally hazardous.

The 1990s Crypto Wars succeeded because the ecosystem organized. The EFF, individual developers, academic cryptographers, and civil liberties advocates formed an effective coalition that combined legal challenges (Bernstein v. DOJ), technical resistance (PGP), and political advocacy (Congressional testimony). This coalition was possible because no governance structure existed to punish participants for coordinating against institutional authority. The cypherpunks organized. The Clipper Chip died.

The current ecosystem can't replicate this. The atomization of code development from rights advocacy (§VI) means that no organizational structure bridges the two. The behavioral governance apparatus (§IV) means that collective contestation within a project risks triggering the Silencing Playbook against each participant individually. The economic dependency documented across §III and §V means that maintainers who depend on corporate sponsorship cannot organize against their sponsors' priorities without risking their livelihood. The right to organize has been structurally nullified by the combination of atomization, behavioral governance, and economic capture operating simultaneously.

The tenet restores it. Contributors may form working groups, advocacy coalitions, and governance-contestation bodies within the commons structure. These organizations may not be sanctioned, defunded, or dissolved by the governance body whose decisions they contest. The right to organize is the precondition for the immune function documented in Milgram's Experiment 17: visible dissent that breaks the compliance cascade. Without it, the commons loses the structural capacity for self-correction, and the prediction of the Second Law becomes inevitable.

Tenet 6: Nested Sovereignty

Governance must be organized in nested, autonomous layers, each accountable to the commons it serves. No single organizational entity may govern the entire stack.

Ostrom's eighth principle requires that governance activities be organized in multiple layers of nested enterprises, each with appropriate autonomy and accountability. This "polycentric" governance is more resilient than monocentric alternatives because it distributes both authority and accountability across scales, allowing each layer to respond to local conditions without requiring system-wide coordination for every decision.

The Linux Foundation's umbrella structure (§III) is the violation. Over 800 projects nominally governed by a single organizational entity creates monocentric governance at the ecosystem level, even if individual projects retain nominal autonomy. When the umbrella organization's funding, infrastructure, and brand authority flow from corporate membership, the autonomy of nested projects is structurally contingent on the umbrella's priorities. A project that contests the foundation's direction risks losing infrastructure, branding, and the organizational legitimacy that downstream users rely on. This is Pettit's "standby control" at the ecosystem level: the Linux Foundation may never exercise its structural capacity to dictate, but the capacity exists, and its existence shapes behavior.

Debian, by contrast, satisfies the principle through its constitutional structure. Individual maintainers hold autonomy over their packages. Technical committees resolve cross-package disputes. The Debian Project Leader is elected and can be overridden by General Resolution. The Schwartz Set voting mechanism prevents minority capture. The Trusted Organization (SPI) holds assets but is accountable to the developer body. This is genuine nested sovereignty: each layer is autonomous within its domain and accountable to the layer above.

The tenet requires that commons governance be organized in nested layers proportional to the project's scale and complexity. At minimum, two: project-level governance (technical decisions, release criteria, maintainer trust) and commons-level governance (cross-project coordination, resource allocation, external advocacy). An ecosystem the size of the Linux kernel demands considerably finer granularity; a small library may need only the minimum. What matters is the structural principle: each layer must be elected by or accountable to its constituents, and no single corporate entity, no single foundation, and no single individual may hold uncontested authority across the full stack. The structural separation is the defense against the monocentric capture that turns foundations into trade associations.

The Refusals

The tenets address the commons as a structure. The refusals address the individual as an agent.

A system can decay in two structurally distinct ways. It can decay through neglect: no one maintains it, entropy accumulates, the structure erodes. Or it can decay through agency: someone recognizes the dysfunction and preserves it because the dysfunction serves their interests. The first is injustice. The second is corruption. The boundary between them is crossed the moment an actor sees the structural failure and decides to benefit from it rather than repair it. The refusals mark that boundary.

Refuse to serve as an instrument of capture. The Autonomic Machine operates through structural incentives, not conscious villainy. But the structural incentive doesn't absolve the individual. If you recognize that the system you serve produces the opposite of its stated purpose, and you continue to serve it because the position is comfortable, you have crossed from injustice to corruption.

Refuse to mistake compliance for competence. Obedience to process is not a substitute for sound judgment. The behavioral governance apparatus documented in §IV produces a contributor population optimized for compliance. This refusal reinstates the obligation to exercise independent judgment, even when the process says otherwise, especially when the process says otherwise.

Refuse to build what you wouldn't use. The Hippocratic firewall. If you would not submit to the surveillance system you are building, do not build it. If you would not accept the governance structure you are imposing, do not impose it. If you would not use the identity verification infrastructure you are deploying, do not deploy it. This is the structural reciprocity that connects the builder to the built, preventing the dissociation between authority and accountability that defines the Royal.

Refuse to exploit the letter against the spirit. No document is perfect. Discovering a procedural loophole in this ethic confers no license to abuse it. An institution becomes corrupt precisely when procedure is wielded against purpose, when the letter of the rule is used to produce outcomes the rule was designed to prevent. A loophole is a defect to be patched through Tenet 2's collective governance mechanism, not an opportunity to be exploited. Anyone who discovers such a defect bears the obligation to disclose it, not weaponize it.

The Completeness Clause

This ethic is the complete and sole governance document for any commons that adopts it. No supplementary code of conduct, contributor covenant, or behavioral governance document may override, supplement, or dilute its principles.

The structural inversion pattern documented across this series operates by supplementation: a document is introduced that appears to complement the existing governance but structurally overrides it. The Contributor Covenant doesn't replace a project's technical governance; it "supplements" it with a behavioral layer that, in practice, becomes the operative instrument of power. The supplement captures the governance by displacing the original document's authority while preserving its rhetorical surface: the same hollow-substitute-preserve pattern applied to the governance document itself. Ostrom's first principle (Defined Boundaries) diagnoses the structural consequence: multiple overlapping governance documents create jurisdictional ambiguity, and the document that controls is always the one that grants the most discretion to the enforcement body. Multiplicity isn't pluralism. It is the structural condition that allows capture to proceed through procedural complexity.

The completeness clause prevents this. If the ethic is complete, there is nothing to supplement. If there is nothing to supplement, the primary vector of rhetorical displacement is structurally foreclosed. This does not prevent the commons from evolving its governance; it requires that governance changes be made within the ethic (by amending its tenets through the collective governance mechanism of Tenet 2) rather than alongside it (by introducing an external document that operates in parallel and eventually displaces the original). The ethic evolves. It is not undermined.

The Full Architecture

The architecture is now complete. The Artifact Freedoms (F0–F3) protect the code. The Commons Freedoms (F4–F8) protect the ecosystem. The Tenets (T1–T6) maintain both through procedural obligation, each derived from Ostrom and grounded in documented failure. The Refusals prevent subversion from within. The Completeness Clause prevents nullification through supplementation.

This is not a code of conduct. A code of conduct polices behavior. This is an ethic: it derives obligations from structural necessity, grounds each obligation in documented failure, and provides the institutional architecture for its own enforcement and evolution. What this architecture produces in practice (sustained maintainers, sovereign governance, protected dissent, viable fork) is the subject of §IX. The legal instrument required to give it binding force is the subject of the forthcoming and final piece in this series.

First, hinder no thought.

VIII. Open Washing and the Definitional Crisis

The ethic is now stated. The freedoms, tenets, refusals, and completeness clause constitute the structural architecture. But an ethic expressed in language whose definitions have been captured is an ethic written on sand. Before the vision can be credible, one final diagnostic is necessary: the word "open" itself, the term on which this entire architecture depends, is under active assault. If "open source" can be redefined to accommodate proprietary control, then every obligation the ethic imposes can be satisfied in letter while being violated in substance. This section documents that assault.

This is not a new technique. The definitional capture of "distribution" to exclude the dominant mode of software delivery (cloud hosting) was documented in Sovereign Source, §IV. The SaaS loophole rewrote the boundary of an obligation while preserving the obligation's rhetorical surface. AI open washing is the same mechanism, applied at a larger scale, to a more consequential artifact, with more sophisticated rhetoric.

The Artifact Problem: What "Open" Actually Requires

Traditional open-source software is binary: either the source code is available under terms that satisfy the four freedoms, or it isn't. The "source code" is the preferred form for making modifications. This clarity is the definitional foundation on which the entire movement rests.

Artificial intelligence systems aren't binary in this sense. They are composite technologies. A trained model is the product of three interdependent components: the training data (the corpus that teaches the model its behavior), the training code (the pipeline that processes that corpus into numerical parameters), and the model weights (the final numerical parameters that define the model's outputs). Most contemporary "open" AI releases provide only the weights. This is analogous to releasing a compiled binary and calling it open source: you can run it, but you can't study, audit, reproduce, or meaningfully modify the process that produced it.

The distinction is structural, not semantic. Without the training data and full training pipeline, a researcher cannot audit the model for embedded biases, cannot verify its outputs through reproduction, and cannot modify its foundational behavior in any meaningful way. The community can build fine-tunes and adapters on top of the released weights, but these are modifications of the surface; the substrate remains a black box controlled by the releasing entity. The community becomes an unpaid quality-assurance and integration department for a platform it fundamentally cannot inspect.

The Freedom Test

Apply the freedoms to a typical "open weights" AI release. The failures are concentrated, and diagnostic:

Freedom 0 (Use): Nominally satisfied, but frequently constrained. Meta's Llama license imposes a 700-million monthly-active-user threshold above which a separate license is required: a provision surgically targeted at Meta's competitors while appearing "open" to everyone else. Stability AI's SD3 license is revocable at the company's discretion, at which point the licensee must destroy all derivative works. These aren't use freedoms. They are conditional permissions, retractable at the grantor's convenience.

Freedom 1 (Study): Structurally violated. You can't study how a model works in any meaningful sense without the training data and training code. The weights are the output of the process, not the process itself.

Freedom 4 (Sustenance): Structurally violated. The releasing corporation captures the value of the community's fine-tuning labor, adapter development, and integration work. No mechanism exists to return value to the community whose labor improves the platform. The extraction dynamic documented in §III is reproduced at a larger scale: the corporation provides the artifact, the community provides the labor, and the value flows in one direction.

Freedom 5 (Immunity from Capture): Structurally violated. The releasing corporation retains exclusive control over the training data, the training pipeline, and the foundational model architecture. The community's contributions (fine-tunes, adapters, integrations) are structurally dependent on the corporation's continued willingness to release updated weights. If the corporation changes its licensing terms, pivots its strategy, or simply ceases releases, the entire downstream ecosystem is stranded. This is substrate dependency masquerading as collaboration.

Freedom 7 (Substrate Independence): Structurally violated. Running a 70-billion parameter model requires multiple high-end GPU nodes (A100s or H100s), which are primarily available through the cloud platforms operated by the same corporations releasing the "open" models. The model is "open" to download and "closed" to run independently at scale. The "openness" functions as a lead-generation instrument for proprietary cloud services: Amazon Bedrock, Google Vertex AI, Azure OpenAI Service. The commons resource is structurally contingent on infrastructure controlled by entities capable of revoking access.
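The hardware claim behind Freedom 7 is checkable with back-of-envelope arithmetic. The sketch below estimates how many 80 GB cards are needed just to hold a model's weights in memory; the fp16 assumption and the 1.2× runtime overhead factor are illustrative defaults, not figures from this series.

```python
import math

def gpus_needed(params_billions: float, bytes_per_param: int = 2,
                gpu_mem_gb: int = 80, overhead: float = 1.2) -> int:
    """Estimate the GPUs required to hold a model's weights alone.

    Assumes fp16 weights (2 bytes/param), 80 GB cards (A100/H100 class),
    and a rough 1.2x allowance for activations and runtime buffers.
    Production serving needs far more; this is a floor, not a budget.
    """
    weights_gb = params_billions * bytes_per_param  # 1e9 params x N bytes ~ N GB
    return math.ceil(weights_gb * overhead / gpu_mem_gb)

# A 70B model does not fit on a single 80 GB card even before serving overhead.
print(gpus_needed(70))  # 70 * 2 * 1.2 / 80 = 2.1 -> 3 cards minimum
```

The point of the arithmetic is structural: "open to download" and "independently runnable" diverge as soon as the weight footprint exceeds commodity hardware.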

A typical "open weights" release fails five of the nine freedoms. It satisfies Freedom 2 (Share) and Freedom 3 (Modify) at the surface level, as you can redistribute the weights and distribute modified versions of fine-tunes. Freedom 6 (Dissent), Freedom 8 (Transparency), and the remaining commons freedoms are either inapplicable or structurally undermined by the corporate governance dynamics documented throughout this piece. This isn't partial openness. It is structural inversion: the rhetorical surface of openness preserved while the structural conditions of openness are systematically violated.

The OSAID Compromise

The Open Source Initiative recognized the definitional crisis and initiated a multi-year co-design process to establish a formal "Open Source AI Definition" (OSAID). The central question was precise: does "open source" in the AI context require the release of training data?

The purist position (advanced by Bruce Perens, the Free Software Foundation, and the Software Freedom Conservancy) held that without training data, true reproducibility and independent auditing are impossible. Data is the source of the model's behavior; withholding it is withholding the source. The pragmatic position (advanced by the OSI board, Mozilla, and corporate stakeholders including IBM and Google) held that requiring full data release is legally and ethically impossible for many datasets (medical data, GDPR-protected data, copyrighted corpora) and would render "Open Source AI" a null category.

OSAID v1.0, released in late 2024, chose the pragmatic path. It requires the release of training code and model weights, but substitutes "Data Information" for the data itself: a sufficiently detailed description of the training data's provenance, scope, selection criteria, and filtering methodology such that a "skilled person" could build a "substantially equivalent" system.

This is a structural compromise, and its consequences are predictable. "Substantially equivalent" isn't "reproducible." A description of data provenance is not the data. The compromise preserves the word "open source" for the AI domain at the cost of the structural condition (reproducibility) that gave the word its meaning. Perens, the author of the original Open Source Definition, stated publicly that the OSAID was "flawed" and that the OSI "hasn't done a great job."

The structural diagnosis is familiar. The OSAID is itself an instance of the definitional capture documented in Sovereign Source: the definition is revised to accommodate the dominant mode of production (corporate AI development) rather than to protect the structural conditions (reproducibility, auditability, modifiability) that the definition was designed to guarantee. The boundary of "open" is redrawn to include the artifacts that corporations are willing to release, rather than the artifacts that structural openness requires.

The Cloud Layer: Open Code, Closed Service

The AI open-washing crisis is superimposed on an older and equally structural problem: the gap between open code and closed service that the SaaS loophole created. Hyperscalers market their platforms as "open source" because the underlying technologies (Kubernetes, Linux, PostgreSQL) are genuinely open. But the management layers, proprietary APIs, networking stacks, and identity systems that make these services usable at scale are entirely closed. The open foundation is the bait; the proprietary operational layer is the lock-in.

The two dynamics compound. An "open" AI model, deployed on a "Kubernetes-based" cloud platform, creates the perception of an entirely open stack. In structural reality, the model's training data is proprietary, the model's training pipeline is proprietary, the cloud platform's operational layer is proprietary, and the compute hardware is scarce and controlled by the platform vendor. The only open components are the model weights (the compiled binary) and the orchestration layer (Kubernetes). The rhetorical surface is "open." The structural reality is a dependency chain controlled at every consequential layer by a small number of corporations.

The Bridge to Enforcement

The definitional crisis can't be resolved by definitions alone. The OSI's experience demonstrates the limitation: any standard-setting body subject to corporate influence will produce definitions that accommodate corporate production rather than structural openness. The OSAID doesn't solve open washing; it institutionalizes a threshold of openness that corporations can satisfy without structural cost.

What is required is not a better definition but a better instrument: a legal and structural mechanism that enforces the obligations the definition describes. If Freedom 4 (Sustenance) requires that value flow back to the commons, then the instrument must make extraction without reciprocity structurally impossible, not merely definitionally impermissible. If Freedom 1 (Study) requires access to the preferred form for modification, then the instrument must enforce the release of training data and pipeline, not merely recommend it through a "Data Information" compromise.

Obligations without enforcement are rhetorical ornaments. The forthcoming piece addresses the instrument required to close this gap: the binding contract, the commons value return, and the structural architecture that makes definitional capture legally contestable rather than merely culturally lamentable.

IX. The Vision and the Promise

What Is Not Being Proposed

The objection will arrive on schedule: this is anti-capitalist. This is regulation by another name. This is a commune dressed in a software license.

The objection reveals the poverty of the frame that produces it. The dichotomy between capitalism and communism, between unfettered extraction and total state command, is structurally false in exactly the way Axiosophy demonstrates that political dichotomies are false. Neither pole has an answer to the commons problem, and for the same reason: both are centralizing forces. Capitalism as practiced by the tech monopolies optimizes for unbounded private extraction. Communism optimizes for total state legibility. Both destroy the distributed, localized knowledge that makes complex systems adaptive. Both converge on surveillance, because maintaining an unnatural monopoly over information, whether the hub is a Board of Directors or a Politburo, requires total information extraction.

A functional commons is not the alternative to capitalism. It is the precondition for a sustainable capital ecosystem. The technological infrastructure that modern society attributes to market forces is, in structural reality, the product of commons-based production: the protocols, the operating systems, the security libraries, the package ecosystems. No market mechanism planned, funded, or understood these systems until they were mature enough to extract from. And if the commons dies, capital can't reconstitute it, because capital doesn't produce the kind of distributed, sustained, domain-specific knowledge from which these systems emerge.

The extractors therefore have a structural incentive to modify their behavior, whether or not they recognize it. The commons they depend on is a dissipative structure. It persists only by continuously importing energy (maintainer labor, governance attention, infrastructure investment) and exporting disorder (resolved bugs, patched vulnerabilities, stabilized APIs). Cut the energy import, and entropy accumulates. The system degrades. Not metaphorically. The xz-utils backdoor wasn't a metaphor. It was a burned-out maintainer exploited by a patient attacker because the commons had no structural mechanism to sustain the human being on whom the entire software supply chain depended.

This is the structural claim that separates the ethic from a policy preference. Ethics, in the management of the commons, is not a barrier to business. It is groundable all the way down to physics. The obligations derived in §VII are not moral sentiments. They are conditional necessities for persistence, derivable from the same entropic constraints that govern every dissipative structure in nature. Moral arguments can be dismissed as subjective. Thermodynamic constraints can't be dismissed at all.

From Copyright to Contract

Every existing legal instrument for commons protection, from the GPL to the AGPL to the failed SSPL, operates within a single legal domain: copyright law. Copyright governs the copying, distribution, and modification of a fixed expression. It was the right tool when the primary threat to the commons was proprietary code hoarding. It is the wrong tool now, because the threats documented in this piece operate at layers copyright can't reach.

Copyright cannot prevent governance capture (§III). Copyright cannot prevent behavioral instruments from filtering contributors (§IV). Copyright cannot enforce proportional economic returns from commercial deployment. Copyright cannot prevent definitional capture (§VIII). The SSPL failed structurally for this reason: it attempted to enforce a commons obligation (reciprocity from cloud deployment) through a legal mechanism designed exclusively for the artifact (the fixed expression of code).

What is required is a shift from the limited jurisdiction of copyright law to the full weight of binding contract law. A license is a permission granted by a rightsholder. A contract is a mutual agreement between parties, enforceable under the general law of obligations. The distinction is structural: copyright constrains what you may do with a copy. Contract law constrains what you agree to do as a condition of participation. The obligations of the ethic (governance, sustenance, proportional sanction, substrate independence) cannot be expressed as restrictions on copying. They can be expressed as terms of a binding agreement between the commons and its participants.

The Commons Value Return

The central economic mechanism is the Commons Value Return (CVR), introduced in Sovereign Source and grounded in Henry George's insight: tax the positional rent, not the improvement. What you build on top of the commons is yours. The positional value the commons created for you isn't.

The CVR is scaled by Metcalfe's Law. The value a deployer extracts isn't linear; it scales with the square of the network it serves. A developer running the software on her laptop triggers no obligation; the CVR rounds to zero. A hyperscaler deploying to millions, extracting value that scales quadratically, bears a proportional obligation that reflects the scale of that extraction. The open-source status quo is preserved for individuals. Proportional reciprocity is required of extractors.

This is not a tax on success. It is a structural recognition that the value being captured was produced by the commons, not by the deployer. The deployer who objects to returning a fraction of the commons-produced value to the commons that produced it is objecting to the existence of the input without which they would have nothing to deploy.
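The shape of the mechanism admits a toy sketch. Nothing below comes from the series itself: the rate constant and the exemption floor are placeholder assumptions, chosen only to show the two structural properties the text describes (zero obligation for individuals, quadratic obligation for network-scale deployers).

```python
def commons_value_return(users_served: int, rate: float = 1e-9,
                         exemption_floor: int = 10_000) -> float:
    """Toy CVR: obligation scales with the square of the network served.

    Below the floor, the obligation is zero: individual and small-scale
    use preserves the open-source status quo. Above it, the quadratic
    term tracks Metcalfe's Law, so doubling the network served
    quadruples the obligation, mirroring the value extracted.
    """
    if users_served < exemption_floor:
        return 0.0
    return rate * users_served ** 2

# A developer on her laptop owes nothing; a hyperscaler's obligation
# grows with the square of the user base it monetizes.
print(commons_value_return(1))           # 0.0
print(commons_value_return(50_000_000))  # quadratic at hyperscale
```

The design choice the sketch makes visible: the obligation is a function of deployment scale, not of revenue or intent, which is what makes it proportional rather than punitive.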

The Architectural Principles

The forthcoming piece in this series will specify the legal architecture in full. The principles that govern it are derivable from the structural analysis of this piece:

Binding agreement, not mere license. The legal relationship between the commons and its participants is a contract, not a copyright permission. This shifts enforcement from the narrow domain of copying to the full domain of obligations. A participant who violates the terms violates a binding agreement, not merely a license condition.

Per-commons customization. No single legal template will serve every project. A solo maintainer's library and an ecosystem-scale platform have different governance requirements, different scale thresholds, and different threat profiles. The legal entity that binds participant and commons is specific to each project or organization. The principles are universal. The parameters are local.

Intellectual property protection. Some projects will hold patents or trade secrets whose reimplementation by a well-resourced actor would circumvent the contractual obligations entirely. The legal architecture must provide mechanisms for explicit IP protection, preventing AI-assisted clean-room reimplementation under incompatible terms as a strategy to route around the commons' rights. The protection isn't monopolistic; it is defensive: the commons protects its own reproducibility from actors who would clone its substance while evading its obligations.

AI training compensation. Corporations that train proprietary AI models on commons-produced code extract value from the commons at a scale that dwarfs traditional SaaS deployment. The irony documented in §VI is precise: the community that produced the training data condemns the tools while ignoring the extraction. The CVR extends to AI training: firms that train on commons code bear a proportional obligation to the commons that produced the knowledge their models encode. This constitutes not merely a defensive measure but an unprecedented opportunity to substantively monetize open-source production, if the legal instrument is structurally sound enough to enforce it.

What This Looks Like

Strip the legal architecture to its functional outcome. What does a sovereign commons actually produce?

Maintainers sustained. Not by charity, not by the patronage of a benevolent corporate sponsor who could withdraw at any moment, but by a structural mechanism that scales with the value their work creates. The xz-utils scenario becomes structurally impossible, because the commons that depends on a single maintainer is a commons that has already funded that maintainer's work in proportion to the downstream value it generates.

Governance sovereign. Not captured by the largest funder, not filtered by behavioral instruments, not dependent on the goodwill of a foundation whose board composition mirrors its donor list. Governed by those who bear the costs of maintenance, accountable to the commons they serve, structured by the tenets derived in §VII.

Dissent protected. The contributor who identifies a structural problem, who contests a governance decision on technical grounds, who refuses to comply with a directive that violates the ethic, can't be silenced through behavioral channels, economically starved through sponsorship withdrawal, or structurally isolated through platform dependency. The bidirectional firewall of Tenet 3 operates at the legal level, not merely the cultural one.

Substrate independent. Fork isn't merely permitted. Fork is materially viable, because the commons' infrastructure, its governance, its economic mechanisms, and its legal identity aren't contingent on any single platform, hosting provider, or corporate sponsor.

This is not utopia. It is functional commons governance of the kind Ostrom documented persisting for centuries in Swiss alpine meadows, Spanish irrigation systems, and Japanese fishing villages. The principles are not novel. They are the oldest validated governance patterns in human institutional history, applied to the newest and most consequential commons humanity has ever produced.

The Promise

Sovereign Source closed with a warning: the enclosure is accelerating, the commons is structurally undefended, and the window for structural remedy is finite. Nothing in the intervening evidence has softened that assessment. If anything, the speed of definitional capture documented in §VIII and the legislative trajectory documented in §VI have compressed the timeline further.

But this piece has done what the first couldn't. It has named the vectors. It has documented the mechanisms. It has derived the ethic from the failures, and tested the ethic's vocabulary against the newest and most sophisticated form of capture. What was a structural warning is now a structural diagnosis with a prescriptive architecture attached.

One gap remains. The ethic states obligations. The definitions, as demonstrated, can't enforce them. What is needed is the instrument that closes the circuit: the binding contract, the commons value return, the legal architecture that makes the relationship between commons and capital structurally reciprocal rather than structurally extractive. That instrument is the subject of the forthcoming and final piece in this series.

The machinations documented across these nine sections share a single structural dependency: silence. Governance capture requires that contributors not scrutinize board composition. Behavioral filtering requires that dissenters not name the mechanism. Definitional capture requires that the community not apply its own freedoms to the captured term. Every vector collapses the moment enough people see the structure and refuse to comply. That hasn't changed. What has changed is that the structure is now visible, the obligations are now stated, and the instrument to enforce them is no longer theoretical.

Not free software. Not open source. Sovereign source.

Footnotes