Association and Plural Publics

By E. Glen Weyl, Audrey Tang and ⿻ Community

     As the cacophony of war reverberated through the narrow streets of the Middle Eastern city, a relentless barrage of gunfire and the ghostly dance of flames cast a shroud over the metropolis, a blackout plunging nearly half of it into fear-strewn darkness. The digital backbone of the city, once a beacon of interconnected brilliance, was in tatters: government databases lay in ruin and communications networks had been severed by skilled enemy hacks, leaving the city's guardians grappling for any remaining strategy to wrest back control from the chaos.

     Amidst the turmoil, the hope of an entire nation rested on a group of hackers, the last bastion of defense. This decentralized group convened a hackathon aptly named “Guard.” Faisal was one of them. In a world in chaos, the group’s unwavering determination and skill had made them a beacon of hope.

     Slipping on his headset, he activated his AI agent, establishing an untraceable IP. He turned on the privacy option of his digital wallet, showcasing his proof of citizenship and a myriad of credentials from hackathons gone by. Such precautions had become essential in these dark times.

     As Faisal entered the “Guard” interface, it seemed just like any other online chatroom. The anonymity was palpable, as no one spoke or even typed a message. All that could be seen were silent avatars representing participants. But the difference lay in its secure foundation: designated verifier proof (DVP) technology that promised censorship resistance, and a private blockchain that ensured no external influence could penetrate this last bastion of hope.

     The host, the only voice in the silent room, began, “Several introductions and ground rules are about to show on your screen. Each of you will be asked questions to confirm your presence.” A warning followed, highlighting the risk of expulsion for non-compliance or suspicion.

     Soon, a virtual Roman soldier appeared on screen, laying out the grand vision of “Guard”—to construct a decentralized defense system for the digital city. Faisal quickly went through the questions, and upon returning to the main room, found that only half of the initial participants remained. This filtering process seemed to break the ice, as the room came alive with chatter.

     Wasting no time, the guardians began their mission. Faisal, with his expertise, was naturally drawn to the power grid security group. But their conversation was interrupted by a sudden ring. Faisal picked up the phone, the voice from the other side rushed, “Have you gotten anything? We need the rest of the power grids down.”

     The urgency in the voice was unmistakable. Faisal replied, "I can't find a secretive way in. The security is covered by AI agents. I can formally request access, but everyone has to agree." He continued, "If even one participant objects, I might receive a copy, but I won't be able to discern if it's authentic."

     "Is there no way to duplicate the original code?" the voice asked, desperation evident.

     “I'll keep trying,” Faisal promised.


     In Democracy in America, his classic summary of his observations, French aristocrat and traveler Alexis de Tocqueville highlighted the centrality of the civic association to American self-government: "Nothing...is more deserving of our attention than the intellectual and moral associations of America." Furthermore, he believed that such associations were necessary for political action and social improvement because equality across individuals had rendered large-scale action by individuals alone impossible: "If men are to remain civilized...the art of associating together must grow and improve in the same ratio in which the equality of conditions is increased."

     No individual has ever, alone, made political, social or economic change. Collective effort, through political parties, civic associations, labor unions and businesses, is always necessary. For Plurality, these and other less formal social groupings are just as fundamental to the social fabric as individuals are. In this sense, associations are the yin to the yang of personhood among the most foundational rights and, for the same reason, are the scourge of tyrants. Again to quote de Tocqueville, "No defect of the human heart suits (despotism) better than egoism; a tyrant is relaxed enough to forgive his subjects for failing to love him, provided that they do not love one another." Only by facilitating and protecting the capacity to form novel associations with meaningful agency can we hope for freedom, self-government and diversity.

     The potential of computers and networking to facilitate such association was the core of Lick and Taylor's vision of "The Computer as Communication Device": "They will be communities not of common location, but of common interest." In fact, Merriam-Webster defined an association precisely this way: "an organization of persons sharing a common interest". Given their shared goals, beliefs, and inclinations, these communities would be able to achieve far more than pre-digital associations. The only challenge the authors foresaw was that of ensuring that "'to be on line'...be a right" rather than "a privilege". Much of this vision has, of course, proven incredibly prescient. Many of today's most prominent political movements and civic organizations formed or achieved their greatest success online.

     Yet, perhaps paradoxically, there is an important sense in which the rise of the internet has actually threatened some of the core features of free association. As Lick and Taylor emphasized, forming an association or community requires establishing a set of background shared beliefs, values and interests that form a context for the association and communication within it. Furthermore, as emphasized by Simmel and Nissenbaum, it also requires protecting this context from external surveillance: if individuals believe their communications to their association are being monitored by outsiders, they will often be unwilling to harness the context of shared community for fear their words will be misunderstood.

     The internet, while enabling a far broader range of potential associations, has made the establishment and protection of context more challenging. As information spreads further and faster, knowing who you are speaking to and what you share with them has become difficult. Furthermore, it has become easier than ever to surveil groups or for their members to inappropriately share information outside the intended context. Achieving Lick and Taylor's dream, and thus enabling the digital world to be one where plural associations thrive, therefore requires understanding informational context and building digital systems that support and protect it.

     Therefore, in this chapter we will outline a theory of the informational requirements for association. Then we will discuss existing technologies that have begun to aid, or could aid, in the establishment of context and in its protection. We will then highlight a vision of how to combine these technologies to achieve not privacy or publicity but rather "plural publics": the flourishing of many associations of common understanding protected from external surveillance. We will also explain why this is so critical to supporting the other digital rights.

Associations

     What structure is required for people to form "an organization of persons sharing a common interest"? Clearly a group of people who simply share an interest is insufficient. People can share an interest but have no awareness of each other, or might know each other and have no idea about their shared interest. As social scientists and game theorists have recently emphasized, the collective action implied by "organization" requires a stronger notion of what it is to have an "interest", "belief" or "goal" in common. In the technical terms of these fields, the required state is what they call (approximate) "common knowledge".

     Before describing what this means formally, it's useful to consider why simply sharing a belief is insufficient to allow effective common action. Consider a group of people who all happen to speak a common second language, but none are aware that the others do. Given they all speak different first languages, they won't initially be able to communicate. Just knowing the language will not do them much good. Instead, what they have to learn is that the others also know the language. That is, they must have not just basic knowledge but higher-order knowledge, knowledge that others know something.

     The importance of such higher-order knowledge for collective action is such a truism that it has made its way into folklore. In the classic Hans Christian Andersen tale "The Emperor's New Clothes", a swindler fools an emperor into believing he has spun him a valuable new outfit, when in fact he has stripped him bare. While his audience all see he is naked, all are afraid to remark on it until a child's laughter creates understanding not just that the emperor is naked, but that others appreciate this fact and thus each is safe acknowledging it. Similar effects are familiar from a range of social, economic and political settings:

  • Highly visible statements of reassurance are often necessary to stop bank runs, as if everyone thinks others will run, so will they.
  • Denunciations of "open secrets" of misdeeds (e.g. sexual misconduct) often lead to a flood of accusations, as accusers become aware that others "have their back" as in the "#MeToo" movement.
  • Public protests can bring down governments long opposed by the population, by creating common awareness of discontent that translates to political power.

     Formally, "common knowledge" is defined as a situation where a group of people know something, but also know that all of them know it, and know that all of them know that all of them know it, and so on ad infinitum. "Common belief" (often quantified by a degree of belief) is when a group believes that they all believe that they all believe that... A great deal of game-theoretic analysis has shown that such common belief is a crucial precondition of coordinated action in "risky collective action" situations like those above, where individuals can accomplish a common goal if enough coordinate, but will be harmed if they act without support from others.
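This recursive structure has a compact standard formulation in epistemic logic (a conventional notation from that literature, not specific to this book). Writing $K_i\,p$ for "agent $i$ knows $p$", "everyone knows" and "common knowledge" are:

```latex
% "Everyone knows p" is the conjunction of individual knowledge:
E\,p \;:=\; K_1\,p \,\wedge\, K_2\,p \,\wedge\, \cdots \,\wedge\, K_n\,p
% Common knowledge is "everyone knows" iterated without end:
C\,p \;:=\; E\,p \,\wedge\, E(E\,p) \,\wedge\, E(E(E\,p)) \,\wedge\, \cdots \;=\; \bigwedge_{m \geq 1} E^{m}\,p
```

Common belief replaces each $K_i$ with a (possibly probabilistic) belief operator, yielding the approximate notion that game theorists have shown suffices for risky coordination.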

     While the common beliefs of a group of people are obviously related to the actual shared beliefs of their average members, they are a distinct thing. We all know of examples where some view persisted as "conventional wisdom" even though almost everyone doubted it, or where a particular norm persisted even though, individually, almost everyone in the group disagreed with it. Furthermore, we can use this notion of community to refer not just to beliefs about facts, but also moral or intentional beliefs. We can think of a "common belief" (in the moral sense) of a community as being things that everyone believes everyone else holds as a moral principle and believes everyone else believes that everyone holds, etc. Similarly, a "common goal" can be something everyone believes others intend and believes everyone believes everyone intends, and so on. Such "common beliefs" and "common intentions" are important to what is often called "legitimacy", the commonly understood notion of what is appropriate.

     In game theory and other formal social science disciplines, it is common to model individuals as collections of intentions/preferences and beliefs. This notion of community gives a way to think about groups similarly to, and yet distinctly from, the individuals that make them up, given that common beliefs and intentions need not be the same as those of the individuals that are part of the group: group beliefs and goals are the common beliefs and goals of that group. In this sense, the freedom to create associations can be understood as the freedom to create common beliefs and goals. Yet creating associations is not enough. Just as we argued in the previous chapter that protecting secrets is critical to maintaining individual identity, so too associations must be able to protect themselves from surveillance, because if their common beliefs become simply the beliefs of everyone, they cease to be a separate association. As such, privacy from external surveillance or internal over-sharing is just as critical to the freedom of associations as is establishing them.

     It is little surprise, then, that many of the historical technologies and spaces that most come to mind when we think of the freedom of association are precisely geared to achieving common beliefs and to shielding those common beliefs from outsiders. Searching for images of "freedom of association" typically yields images of protests in public spaces, meetings in parks and squares, and group discussions in private clubs. As illustrated above, group meetings and statements made openly in front of group members are crucial to achieving common beliefs and understanding among that group. Private pamphlets may achieve individual persuasion, but given the lack of common observation, game theorists have argued that they struggle to create public beliefs in the same way a shared declaration, like the child's public laughter, can.

     But purely public spaces have important limitations: they do not allow groups to form their views and coordinate their actions outside the broader public eye. This may undermine their cohesion, their ability to present a united face externally and their ability to communicate effectively by harnessing an internal context. This is why associations so often have enclosed gathering places open only to members: to allow the secrecy that Simmel emphasized as critical to group efficacy and cohesion.[1] The crucial question we thus face is how systems of network communication can offer the brave new world of "communities of interest" these same or even more effective affordances to create protected common beliefs.

Establishing context

     If parks and squares are the site of protest and collective action, one thing we are looking for is a digital public square. Many digital systems have purported to serve this function. Sites on the original World Wide Web offered unprecedented opportunities for a range of people to make their messages available. But as Economics Nobel Laureate Herbert Simon famously observed, this deluge of information created a paucity of attention. Soon it became hard to know whether, whom and how one was reaching with a website, and proprietary search systems like Google and proprietary social networks like Facebook and Twitter became the platforms of choice for digital communication. The digital public square had become a private concession, with the CEOs of these companies proudly declaring themselves the public utility or public square of the digital age while surveilling and monetizing user interactions through targeted advertising.

     A number of recent efforts have begun to address this problem. The World Wide Web Consortium (W3C) has published Christine Lemmer-Webber's ActivityPub standard as a recommendation to enable an open protocol for social networking, which has empowered open systems like Mastodon to offer federated, decentralized services similar to Twitter to millions of people around the world. Twitter itself recognized the problem and in 2019 launched the Bluesky initiative with similar aims; while it has yet to grow to the size of Mastodon, it has grown rapidly and generated significant attention. Philanthropist Frank McCourt has invested heavily in Project Liberty and its Decentralized Social Networking Protocol as another, blockchain-based foundation for decentralized networking. While it is hard to predict exactly which of these will flourish, how they will consolidate and so forth, the struggles of Twitter (recently renamed X) combined with the diversity of vibrant activity in this space suggest the likelihood of cooperation and convergence on some open protocol for usable digital publication.

     Yet publicity is not the same as the creation of community and association. Posting online resembles the distribution of a pamphlet much more than the holding of a public protest. It is hard for those seeing a post to know who and how many others are consuming the same information, and harder still to gauge their views about it. The post may influence their beliefs, but it is hard for it to create common beliefs among an identifiable group of compatriots. Features that highlight the virality of and attention to posts may help somewhat, but still make tracking of the audience for a message far coarser than what is possible in physical public spaces.

     One of the most interesting potential solutions to this challenge in recent years has been distributed ledger technologies (DLTs), including blockchains. These technologies maintain a shared record of information and append something to this record only when there is "consensus" (sufficient shared acknowledgement of the item to be included) that it should be. This has led cryptographers and game theorists to conclude that DLTs hold special promise in creating common beliefs among the machines on which they are stored.[2] Arguably this is why such systems have supported coordination on new currencies and other social experiments.
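The append-on-consensus logic can be sketched in a few lines. The toy below is our own illustration, with nodes that trivially acknowledge any non-empty entry; it captures only the shape of the mechanism, not any real consensus protocol:

```python
# Toy "ledger": an entry is appended to every node's copy only once a
# quorum acknowledges it, so each node also knows the others recorded it.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    ledger: list = field(default_factory=list)

    def acknowledge(self, entry: str) -> bool:
        # A real validator would check signatures, proofs of stake/work, etc.
        return bool(entry)


def propose(entry: str, nodes: list, quorum: int) -> bool:
    """Append `entry` to all ledgers iff at least `quorum` nodes acknowledge it."""
    votes = sum(node.acknowledge(entry) for node in nodes)
    if votes >= quorum:
        for node in nodes:
            node.ledger.append(entry)
        return True
    return False


nodes = [Node(n) for n in ("a", "b", "c", "d")]
assert propose("genesis", nodes, quorum=3)
assert all(node.ledger == ["genesis"] for node in nodes)
```

The key point the sketch makes is that acceptance and replication happen together: once an entry is in, every node's copy agrees, which is what grounds the "common belief among machines" claim.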

     Yet even such community among machines does not directly imply it among the people operating those machines. This problem (from the perspective of creating community) is exacerbated by the financial incentives for maintaining blockchains, which lead most participants, motivated by financial gain, to run "validator" software rather than monitor activity directly. This also implies that those participating are likely to be whoever can profit, rather than those interested in common, non-commercial action. Nonetheless one can imagine, as we do below, DLTs being an important component of a future infrastructure of association.

Protecting context

     If establishing context is primarily about creating strongly social notions of publicity, protecting context is about strongly social notions of privacy. And, just as with technologies of publicity, those of privacy have primarily been developed in a more atomistic, monist direction than in ones that support plural sociality.

     The field of cryptography has long studied how to securely and privately transmit information. In the canonical "public key cryptography" scheme, individuals and organizations publicly post a key while privately holding its counterpart. This allows anyone to send them a message that can only be decrypted with the private key. It also allows the key controller to sign messages so that others can verify the message came from the signer. Such systems are the foundation of a wide range of security on the internet and throughout the digital world, protecting email from spying, enabling end-to-end encrypted messaging systems like Signal and securing digital commerce.
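Both capabilities (anyone encrypts, only the key holder decrypts; only the key holder signs, anyone verifies) can be illustrated with textbook RSA. The tiny primes below are purely pedagogical and wildly insecure; real systems use vetted libraries and keys thousands of bits long:

```python
# Textbook RSA with toy primes, for illustration only.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler totient (3120)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent via modular inverse (Python 3.8+)


def encrypt(m: int) -> int:        # anyone can encrypt with the public key (n, e)
    return pow(m, e, n)


def decrypt(c: int) -> int:        # only the private key holder can decrypt
    return pow(c, d, n)


def sign(m: int) -> int:           # only the private key holder can sign
    return pow(m, d, n)


def verify(m: int, s: int) -> bool:  # anyone can verify with (n, e)
    return pow(s, e, n) == m


msg = 42
assert decrypt(encrypt(msg)) == msg
assert verify(msg, sign(msg))
```

Encryption and signing are the same modular exponentiation with the roles of the key pair swapped, which is why one key pair supports both confidentiality and authentication.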

     Building on top of this foundation and branching out from it, a number of powerful privacy-enhancing technologies (PETs) have been increasingly developed in recent years. These include:

  • Zero-knowledge proofs (ZKPs): these allow someone with access to data to prove a fact based on that data to someone without such access, without leaking the full data. For example, one might prove that one is above a particular age without showing the full driver's license on which this claim is based.
  • Secure multi-party computation (SMPC) and homomorphic encryption: These allow a collection of individuals to perform a calculation involving data that each of them has parts of without revealing the parts to the others and allow the process to be verified both by themselves and others. For example, a secret ballot can be maintained while allowing secure verification of election results.
  • Unforgeable and undeniable signatures: These allow key controllers to sign statements in ways that cannot be forged without access to the key and/or cannot be denied except by claiming the key was compromised. For example, parties entering into a (smart) contract might insist on such digital signatures just as physical signatures that are hard to forge and hard to repudiate are important for analog contracts.
  • Confidential computing: This solution to similar problems as above is less dependent on cryptography and instead accomplishes similar goals with "air gapped" digital systems that have various physical impediments to leaking information.
  • Differential privacy: This measures the extent to which disclosures of the output of a computation might unintentionally leak sensitive information that entered the calculation. Technologists have developed techniques to guarantee such leaks will not occur, typically by adding noise to disclosures. For example, the US Census is legally required both to disclose summary statistics to guide public policy and keep source data confidential, aims that have recently been jointly satisfied using mechanisms that ensure differential privacy.
  • Federated learning: Less a fundamental privacy technique than a sophisticated application and combination of other techniques, federated learning is a method to train and evaluate large machine learning models on data physically located in dispersed places.

     It is important to recognize two fundamental limitations of the techniques that depend most on cryptography (especially the first three); namely, they rest on two critical assumptions. First, keys must remain in the possession of the intended person, a problem closely related to the identity and recovery questions we discussed in the previous chapter. Second, almost all cryptography in use today will break, and in many cases its guarantees will be undone, with the advent of quantum computers, though developing schemes robust against quantum computing is an active area of research.
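To make zero-knowledge proofs concrete, here is a minimal interactive Schnorr-style proof of knowledge of a discrete logarithm. This is our own toy sketch with tiny parameters, not any production ZKP system; real deployments use large groups and the Fiat-Shamir transform to make the proof non-interactive:

```python
import random

# Toy Schnorr identification: the prover convinces the verifier she knows
# x with y = g^x mod p, without revealing x.
p, q, g = 2039, 1019, 4      # q prime, q | p - 1, g has order q mod p

x = random.randrange(1, q)   # prover's secret
y = pow(g, x, p)             # public value

# Commit -> challenge -> response
r = random.randrange(1, q)
t = pow(g, r, p)             # prover's commitment
c = random.randrange(q)      # verifier's random challenge
s = (r + c * x) % q          # prover's response

# Verifier checks g^s == t * y^c (mod p) but learns nothing about x,
# because (t, c, s) could be simulated without knowing x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The check works because g^s = g^(r + cx) = g^r * (g^x)^c = t * y^c, yet the transcript alone reveals nothing about x.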

     Furthermore, these technical solutions increasingly intersect and integrate with a range of technical standards and public policies that support privacy, from cryptographic standards maintained by bodies such as NIST and the IETF, to European policy debates over end-to-end encryption, to data protection regulation such as the EU's General Data Protection Regulation (GDPR).

     Yet a basic limitation of almost all this work is that it focuses only on protecting communication from external surveillance rather than from internal over-sharing. While defending against external snooping is obviously the first line of defense, any fan of stories about military intelligence knows that internal moles and leaks are among the most important threats to information security. While military intelligence is the most dramatic example, the point stretches much further, especially in the internet age. As highlighted in works ranging from danah boyd's classic study It's Complicated to Dave Eggers's book and film The Circle, the ease of credibly sharing digital information has made the danger of over-sharing a constant threat to privacy.

     The basic problem is that while most cryptography and regulation treats privacy as about individuals, most of what we usually mean when we talk about privacy relates to groups. After all, there is almost no naturally occurring data that pertains to exactly a single individual. Let's revisit some of the examples of the social life of data from the previous chapter.

  • Genetic data: genes are, of course, significantly shared in a family, implying that the disclosure of one individual's genetic data reveals things about her family and, to a lesser extent, about anyone even distantly related to her. Related arguments apply to many medical data, such as those related to genetic conditions and transmissible diseases.
  • Communications and financial data: communications and transactions are by their nature multiparty and thus have multiple natural referents.
  • Location data: few people spend much of their time physically distant from at least some other person with whom they have common knowledge of their joint location at that moment.
  • Physical data: There are many data that are not personal to anyone (e.g. soil, environmental, geological). Among the only truly individualistic data are the bureaucratically assigned identifying numbers issued as part of identity schemes, and even these actually pertain not to the individual alone but to her relationship with the issuing bureaucracy.

     This implies that in almost every relevant case, unilateral disclosure of data by an individual threatens the legitimate privacy interests of other individuals. Protecting privacy therefore requires protecting against unilateral over-sharing. This has generally been thought essentially impossible to enforce externally: anyone who knows something can share that information with another. Strategies have thus primarily focused on norms against over-sharing, gossiping and the like; tools to aid individuals in remembering what they should not share; attempts to make it hard to secretly over-share; and policies to punish ex post those who do over-share. All of these are important strategies: literature, media and everyday experience are full of shaming for over-sharing and enforcement against leakers. Yet they fall far short of the guarantees enforced by cryptography, which does not merely condemn snoops but locks them out of systems.

     Is there any chance of doing something similar for over-sharing? One common approach is simply to avoid data persistence: SnapChat rose to prominence with disappearing messages, and many messaging protocols have since adopted similar approaches. Another, more ambitious cryptographic technique is "designated verifier proofs" (DVPs). This is a way of sending a message whose authenticity/veracity can only be verified by a particular key. Such an approach is only useful for information that cannot be independently verified: if someone chooses to over-share a community password, DVPs are not of much use, as the person with whom the password is shared can quickly check whether the password works.
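A rough intuition for designated verification can be given with a shared-key message authentication code. This is our own simplification of the idea, not an actual DVP construction: because the designated verifier could have produced the identical tag herself, the message convinces her but carries no weight for third parties.

```python
import hashlib
import hmac
import secrets

# Illustrative stand-in for a designated verifier proof: a tag computed
# with a key shared by sender and verifier. The verifier can check
# authenticity, but since she could forge the same tag, showing the
# message to anyone else proves nothing about who authored it.
shared_key = secrets.token_bytes(32)   # known only to sender and verifier


def tag(message: bytes) -> bytes:
    return hmac.new(shared_key, message, hashlib.sha256).digest()


def verify(message: bytes, t: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(tag(message), t)


msg = b"meet at the old gate at dawn"
t = tag(msg)
assert verify(msg, t)                  # the designated verifier is convinced
assert not verify(b"tampered", t)      # tampering is detected
# A third party shown (msg, t) learns nothing: either holder of the key
# could have produced t, so the message is deniable outside the pair.
```

Real DVP schemes achieve the same "undeniable to you, deniable to everyone else" property with public-key tools rather than a pre-shared secret, but the trust structure is the same.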

     Yet most types of information are harder to independently and immediately verify: even the location of buried treasure requires significant resources to pursue and dig up, otherwise the many adventure stories about it would not be nearly as interesting. As generative foundation models make persuasive deception ever cheaper, the importance of verification will grow. In such a world, the ability to target verification at an individual, and to rely on the untrustworthiness of over-shared information, may be increasingly powerful. As such, it may become increasingly possible to fully protect information from over-sharing, as well as from snooping.

Plural publics

     If properly combined in a new generation of networking standards, these tools could give us the capacity to move beyond the superficial traditional divide between "publicity" and "privacy" to empower true freedom of association online. While we usually think of publicity and privacy as a one-dimensional spectrum, it is easy to see that another dimension is equally important.

     Consider first information "hidden in plain sight", lost in a pile of irrelevant facts, available to all but reaching the awareness of no one. Contrast this with the secret of the existence of the Manhattan Project, which was shared among roughly 100,000 people but was sharply hidden from the rest of the world. Both are near the mid-point of the "privacy" v. "publicity" spectrum, as both are in important ways broadly shared and also obscure. But they sit at opposite ends of another spectrum: of concentrated common understanding v. diffuse availability.

     This example illustrates why "privacy" and "publicity" are far too simplistic concepts to describe the patterns of co-knowledge that underpin free association. While any simple descriptor will fall short of this richness, which we should continue to investigate, a more relevant model may be what we have elsewhere called "plural publics". Plural publics is the aspiration to create information standards that allow a diverse range of communities, with strong internal common beliefs shielded from the outside world, to coexist. Achieving this requires maintaining what Shrey Jain and Zoë Hitzig have called "contextual confidence", where participants in a system can easily establish and protect the context of their communications.

     Luckily, in recent years some of the leaders in open standards technologies of both privacy and publicity have turned their attention to this problem. Lemmer-Webber, of ActivityPub fame, has spent the last few years working on Spritely, a project to create self-governing and strongly connected private communities in the spirit of plural publics, allowing individual users to clearly discern, navigate and separate community contexts in open standards. A growing group of researchers in the Web3 and blockchain communities are working on combining these with privacy technologies, especially ZKPs.

     One of the most interesting possibilities opened by this research is achieving formal guarantees of combinations of common knowledge and impossibility of disclosure. One could, for example, create ledgers distributed among the members of a community using DVPs. This would create a record of information that is common knowledge within the community and ensure this information (and its status as common knowledge) could not be credibly shared outside it. Additionally, if the protocol's procedure for determining "consensus" relied on more sophisticated voting rules than at present, such as those we describe in our chapter on voting below, it might instantiate richer and more nuanced notions of common knowledge than present ledgers do.

     Furthermore, the whole space around these topics is suffused with work on standards: for cryptography, blockchains, open communications protocols like ActivityPub, etc. It therefore does not require a great stretch to imagine these standards converging on a dynamically evolving but widely accepted technical notion of an "association", and therefore broadly observed standards enabling associations online to form and preserve themselves. Such a future could enshrine a right to digital freedom of association.

Association, identity and commerce

     Digital freedom of association is tightly connected to the other freedoms we discuss in this part of the book. As we saw in the previous chapter, "privacy" is at the core of the integrity of identity systems; yet as we saw here, concerns usually labeled as such are more appropriately connected to the diversity of contexts an individual navigates than to privacy in an individualistic sense. Thus the right to freedom of association and the right to the integrity of personhood are inseparable: if it is our entanglement in a diversity of social groups that creates our separateness as persons, it is only by protecting the integrity of that diversity that separate personhood is possible. And, of course, because groups are made up of people, the opposite is true as well: without persons with well-articulated identities, there is no way to create groups defined by common knowledge among those persons.

     Furthermore, the right of free association is the foundation on which commerce and contracts are built. Transactions are among the simplest forms of association, and whether digital transaction systems can replicate the privacy often touted as a core benefit of cash depends intimately on who can view which transactions at what resolution. Contracts are more sophisticated forms of association, and corporations even more so. All rely heavily on information integrity and common understandings of obligations. In this sense, the freedom of association we outlined in this chapter, together with identity in the last, are the lynchpins for what follows in the rest of the book.


  1. Simmel "Sociology of Secrecy and Secret Societies" ↩︎

  2. Halpern and Pass "A knowledge-based analysis of the blockchain protocol" ↩︎