Living in a Plural World

By Audrey Tang, E. Glen Weyl and the Plurality Community

Until lately the best thing that I was able to think in favor of civilization…was that it made possible the artist, the poet, the philosopher, and the man of science. But I think that is not the greatest thing. Now I believe that the greatest thing is a matter that comes directly home to us all. When it is said that we are too much occupied with the means of living to live, I answer that the chief worth of civilization is just that it makes the means of living more complex; that it calls for great and combined intellectual efforts, instead of simple, uncoordinated ones, in order that the crowd may be fed and clothed and housed and moved from place to place. Because more complex and intense intellectual efforts mean a fuller and richer life. They mean more life. Life is an end in itself, and the only question as to whether it is worth living is whether you have enough of it.

Oliver Wendell Holmes, 1900[1]

(A)re…atoms independent elements of reality? No…as quantum theory shows: they are defined by their…interactions with the rest of the world…(Q)uantum physics may just be the realization that this ubiquitous relational structure of reality continues all the way down…Reality is not a collection of things, it’s a network of processes.

Carlo Rovelli, 2022[2]

     Technology follows science. Thus, if we are to offer a different vision of the future of technology from Abundance Technocracy (AT) and Entrepreneurial Sovereignty (ES), we need to understand what is at the root of their understanding of science, what this might miss, and how correcting this can open new horizons. To do so, we now explore the philosophy of science behind these approaches and examine how, in both the natural and social sciences, the advances of the last century arose from moving beyond the limits of these perspectives to a plural, networked, relational, multiscale understanding of the reality we live in.

Atoms and the universe

     The simplest and most naïve way to think about science is what might be called “objectivist”, “rationalist” or, as we will dub it, “monist atomism”[3]. The physical world has an objective state and obeys an ultimately quite simple set of laws, waiting to be discovered. These can be stated in mathematical terms and dictate the deterministic evolution of one state into another through the collision of atoms. Because these laws and the mathematical truths they obey are unitary and universal, everything that ever will happen can be predicted from the current state of the world. These laws are often expressed in “goal-seeking” or “teleological” terms: particles “take the path of least action”, chemical compounds “minimize free energy”, evolution “maximizes fitness”, economic agents “maximize utility”. Every phenomenon in the world, from human societies to the motion of the stars, can ultimately be reduced to these laws. All one needs to do — in this frame — is have sufficient computational power/intelligence, sufficiently precise observations, and the courage to strip away one’s superstitions/social constructs/biases, and one will be, essentially, a god: omniscient and possibly omnipotent.

     The pattern of such thinking runs through almost every scientific field at some point in its development. Euclidean geometry, which aspires to deduce nearly all mathematical facts from a small set of axioms and concepts, and Newtonian mechanics, which describes the relationship between the motion of an object and the forces acting on it, are perhaps the most famous examples. In biology, the simple version of Darwinism focuses on the survival of the fittest species, with individual animals (or in later versions “selfish genes”) constantly struggling against each other to survive[4]. In (primitive) neuroscience (especially phrenology), the atoms are regions of the brain, each undertaking an atomic function that together add up to thought. In psychology, behaviorism saw thought as reducible to stimulus and response. In economics, the atoms are the self-interested individuals (or sometimes firms) of economic theory, each seeking their own advantage in the market. In computer science, the Church-Turing Thesis sees all possible operations as reducible to a series of operations on an idealized computer called a “Turing Machine”.

     Whatever their limits, these approaches have achieved great successes that cannot be ignored. Newtonian mechanics explained a range of phenomena and helped inspire the technologies of the industrial revolution. Darwinism is the foundation of modern biology. Economics has been the most influential of the social sciences on public policy. And the Church-Turing vision of “general computation” helped inspire the idea of general-purpose computers that are so broadly used today.

     They are also the foundation of the Abundance Technocracy (AT) and Entrepreneurial Sovereignty (ES) worldviews we discussed in the last chapter, though each emphasizes a different aspect. AT focuses on the unity of reason and science inherent in monism and seeks to similarly rationalize social life, harnessing technology. ES focuses on the fragmentation intrinsic to atomism and seeks to model “natural laws” for the interaction of these atoms (like natural selection and market processes). In this sense, while ES and AT seem opposite, they are opposites within an aligned scientific worldview.

     For all that this shared worldview has inspired, the science of the 20th century showed its limitations. Relativity and, even more, quantum mechanics upended the Newtonian universe. Gödel’s Theorem and a series of subsequent works undermined the unity and completeness of mathematics, and a range of non-Euclidean geometries are now critical to science. Symbiosis, ecology, and the extended evolutionary synthesis undermined “survival of the fittest” as the central biological paradigm. Neuroscience has been reimagined around networks and emergent capabilities, which in turn have become conceptually central to modern computation. Critical to all these developments are ideas such as “complexity”, “emergence”, “networks”, and “collective intelligence” that challenge the elegance of monist atomism.

Complexity and emergence

     The central idea of complexity science is that reduction of many natural phenomena to their atomic components (what we can call “reductionism”), even when conceptually possible, is often counterproductive. At the same time, studying complex systems as a single unit is often uninformative or impossible. Instead, structures (e.g. molecules, organisms, ecosystems, weather systems, societies) emerge from “atoms” at a range of (intersecting) scales that can be understood most usefully, at least in part, according to their own principles and laws rather than those governing their underlying components. Some of the common core arguments for “complexity”, or what we will call “pluralism”, in all the domains where it is applied, include:

  • Computational complexity: Even when reductionism is feasible in principle/theory, the computation required to predict higher-level phenomena based on their components is so large that performing it is unlikely to be practically relevant. In fact, in some cases, it can be proven that the required computation would consume far more resources than could possibly be recovered through the understanding gained by such a reduction. This often makes the theoretical possibility of such reduction irrelevant and creates a strong practical barrier to reduction.
  • Sensitivity, chaos, and irreducible uncertainty: To make matters worse, even many relatively simple systems have been shown to exhibit “chaotic” behavior. A system is chaotic if a tiny change in the initial conditions translates into radical shifts in its eventual behavior after an extended time has elapsed. The most famous example is weather systems, where it is often said that a butterfly flapping its wings can make the difference in causing a typhoon half-way across the world weeks later. In the presence of such chaotic effects, attempts at prediction via reduction require extreme and thus unrealistic degrees of precision. Worse still, there are often hard limits to how much precision is feasible, as precise instruments often interfere with the systems they measure in ways that lead to important changes due to the sensitivity just mentioned. The most absolute version of this is the Heisenberg Uncertainty Principle, which puts physical upper limits on measurement precision based on this logic.
  • Multiscale organization: While some might take the above observations as a counsel of scientific despair, an alternative is to view them as a reason to expect a diversity of analytic/scientific approaches to be fruitful under different conditions, at different scales of analysis, and in ways that will intersect with each other. In this view, it is natural to seek to characterize these different approaches and their “scope conditions” (viz. when they are likely to be most useful), to study how they can interact with each other, and to consider this sort of work a core part of the scientific endeavor.
  • Relationality: Multiscale organization implies many imperfectly commensurable ways of knowing. But if these could each be sliced into distinct scientific spheres, could monist atomism still prevail within each resulting field? A critical element of complexity is that phenomena at different scales often determine the interactions between, and even constitute the nature of, items at other scales. Units at smaller scales, for example, may have their identities and the rules they obey constituted by the larger units they in turn make up. While approximations ignoring these interactions may be useful for some phenomena, it is frequently important in other contexts to trace down these dependencies and ensure one accounts for them.
  • Embedded causality: As a result of the preceding points, causation can rarely be understood completely or exhaustively in a reductive manner, where the explanation of higher-level phenomena is reduced to simpler or more atomic components. Instead, while specific causal arrows may follow such a pattern, others in the same system will take an opposite form, where the behavior of “atoms” is explained by the way they are situated in larger systems. Causal analysis will thus have quasi-“circular” elements that form equilibria and independent causation will usually emerge from forces within these equilibria, rather than by predictable reduction to a constant set of atomic “unmoved movers”.
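The sensitivity and chaos described above can be made concrete in a few lines of code. The logistic map is a standard toy model of chaos; this sketch (illustrative only, not drawn from the text) shows two initial conditions differing by one part in a billion diverging to order-one differences within a few dozen iterations.

```python
# Chaos in miniature: the logistic map x -> r*x*(1-x) with r = 4 is a
# standard textbook example of sensitivity to initial conditions.
# (An illustrative sketch; the map and parameters are assumptions of
# this example, not taken from the text above.)

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting points that differ by one part in a billion...
a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)

# ...drift apart until the trajectories bear no resemblance: prediction
# by "reduction" would demand unattainable precision in the initial state.
print(max(abs(x - y) for x, y in zip(a, b)))
```

The gap between the two trajectories grows roughly exponentially with each iteration, which is exactly why long-range prediction of chaotic systems fails in practice even when the governing law is perfectly known.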

     Together these elements constitute a basic reimagining of the scientific project compared to monist atomism. In monist atomism, the search for scientific truth and explanation resembles something of a process of digging from different start points on a planet’s surface towards its core: people may start from many different points, but as they strip away falsehood, superstition, error, and misunderstanding, they will all find the same underlying core of truth, reducing everything they see to the same fundamental elements.

     In the plural view, almost the exact opposite metaphor applies: the scientific pursuit resembles the building of structures outward from the surface of a planet. While these structures might initially crowd and compete, if they grow far enough out, the space they have to fill expands into the infinite void beyond. As these structures branch out, they diversify and fragment, making the possibilities for them to interact and recombine ever richer and yet the potential of their converging to a single outcome ever more remote. Furthermore, each of these recombinations can, roughly as in sexual reproduction, form new structures that themselves extend further off on their own trajectories. Progress is complexity, diversification, and intersectional recombination.

     While this plural vision doesn’t offer the hope of final or absolute truth that monist atomism does, it offers something perhaps as hopeful: an infinite vista of potential progress, expanding rather than contracting as it moves on. As the scientific revolutions of the 20th century so dramatically illustrated, shifting to such a plural perspective spells not the end of scientific progress, but rather an explosion of its possibilities.

The plurality of scientific revolutions

     The twentieth century, and in particular the Golden Age highlighted in the previous chapter, was the most rapid period of scientific and technological advance in human history. These advances happened in a range of disparate fields, but one common thread runs through most: the transcendence of monist atomism and the embrace of the plural. We will illustrate this with examples from mathematics and physics to biology and neuroscience.


     Perhaps the most surprising reach of pluralism has been into the structure of truth and thought itself. The gauntlet for twentieth century mathematics was thrown down by David Hilbert, who saw a complete and unified mathematical structure within grasp around the same time that Lord Kelvin saw the closing of the frontier in physics. Yet while the century began with Bertrand Russell and Alfred North Whitehead’s famous attempt to place all of mathematics on the grounds of a single axiomatic system, developments from that starting point have been quite the opposite. Rather than reaching a single truth from which all else followed, mathematics shattered into a thousand luminous fragments.

     Geometry and topology, once the province of Euclidean certainties, turned out to admit endless variations, just as the certainties of a flat earth vanished with circumnavigation. Axiomatic systems went from the hoped-for foundation of complete mathematics to being proven, by Kurt Gödel, Paul Cohen, and others, to be inherently unable to resolve some mathematical problems and necessarily incomplete. Alonzo Church showed that other mathematical questions were undecidable by any computational process. Even the pure operations of logic and mathematics, it thus turned out, were nearly as plural as the fields of science we discussed above. To illustrate:


Figure 1: The Mandelbrot Set (characterizing the chaotic behavior of simple quadratic functions depending on parameter values in the function) shown at two scales. Source: Wikipedia (left) and Stack Overflow (right).

  • Church proved that some mathematical problems were “undecidable” by computational processes and subsequent work in complexity theory has shown that even when mathematical problems might be in principle decidable, the computational complexity of arriving at such an answer is often immense. This dashed the dream of reducing all of mathematics to computations on basic axioms.
  • Chaos has proven inherent even to many very simple mathematical problems. Perhaps the most famous example involves the behavior of complex numbers under iterated application of quadratic polynomials. The behavior of such iterations turns out to form such intricate and rich patterns that characterizing them has become the source of “fractal art”, as shown in Figure 1. These structures illustrate that even solutions to apparently “obvious” mathematical questions may depend on infinitely intricate details that dazzle our senses with their richness.
  • While mathematics is not primarily concerned with phenomena well described by scales, the above phenomena have implied that rather than collapsing into a single field, twentieth century mathematics blossomed into an incredible diversity of subfields and sub-subfields, covering a range of phenomena. Geometry alone has a dozen major subfields from topology to projective geometry, studying radically different and only loosely intersecting elements of what was once a single, highly axiomatic, and largely closed set of phenomena.
  • Relationality is a fundamental aspect of mathematics, as it concerns the study of the relationships between objects and the structures that emerge from those relationships. Different branches of mathematics are often interconnected, and insights from one area can be applied to another. For instance, algebraic structures are ubiquitous across many branches of mathematics and provide a language for expressing and exploring relationships between mathematical objects. Moreover, the study of topology is based on understanding the relationships between shapes and their properties. This mix of diversity and interconnectedness is perhaps the defining feature of modern mathematics.
  • Again, while “causation” is not quite the right way to understand pure mathematics, one of the most remarkable features of the modern field is its opposition to the reductionist approach, where seemingly simple questions are reduced to axioms and everything filters down through these. Perhaps the most famous example is Fermat’s Last Theorem, the claim by the 17th century mathematician Pierre de Fermat to have proven that a simple equation admits no whole number solutions. The eventual proof in the 1990s by Andrew Wiles, building off centuries of intervening mathematics, involved a range of techniques (especially those related to so-called “elliptic curves”) developed for other purposes and apparently far more advanced than the statement itself. The same is believed to be true of many other unsolved mathematical problems, such as the Riemann Hypothesis.
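The fractal structures shown in Figure 1 come from exactly the iteration described in the second bullet above. A minimal sketch of the standard “escape-time” test behind such images follows; the specific parameter values are assumptions chosen for this example.

```python
# The iteration behind Figure 1: for each complex parameter c, repeatedly
# apply z -> z**2 + c and ask whether the orbit stays bounded. The set of
# c values whose orbits never escape is the Mandelbrot set. (A sketch of
# the standard escape-time test; the sample points are chosen for
# illustration, not taken from the text.)

def escape_time(c, max_iter=100):
    """Return the step at which |z| exceeds 2, or max_iter if bounded."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:  # once |z| > 2, the orbit provably diverges
            return n
    return max_iter

# A small change in c flips the qualitative behavior entirely:
print(escape_time(-0.5 + 0.5j))   # orbit stays bounded: inside the set
print(escape_time(-0.5 + 0.9j))   # orbit escapes quickly: outside the set
```

Scanning a grid of `c` values and coloring each by its escape time produces images like Figure 1; zooming in reveals ever more detail, mirroring the “infinitely intricate details” the bullet describes.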

     Many of these advances in pure mathematics have remained puzzles of curiosity and toys of the mind. Yet many of these apparently abstruse ideas have helped transform modern technology. The same elliptic curves that were central to Wiles’s proof are the foundation of one of the leading approaches to public key cryptography, given the intractability of certain problems involving them. Other advanced mathematics has proven core to the design of computer circuitry, medical image analysis, civil and aeronautical engineering, and more. Each of these applications depends on wildly different and only occasionally tangential areas of mathematics, rather than on the monolithic and integrated theory that Hilbert, Russell, and Whitehead once dreamed of.

     In short, in sharp contrast to the monist atomist vision, the world-defining science of the twentieth century, and the technology built on it, arose from diversity: fields of knowledge proliferated and speciated, and each field internally, like a fractal, mirrored the same richness. The closer we looked into each area, the greater the intricacy that revealed itself. Surprising connections and relationships have emerged, but they have only added to the complexity, rather than implying “unity”.

     Structures at every level of intersecting scale and described from the perspective of every way of knowing have proven important to progress: nuclear bombs reshape human societies, setting off environmental changes that reshape weather, twisting human psychology and feeding into the designs of computational systems that help cure disease and so on.


Pluralism is perhaps least surprising in biological systems; we can see the complexity of these all around us in everyday life. More surprising, perhaps, is the way in which 20th century physics revealed that these principles go “all the way down”, to the heart of the physical sciences that Newtonian monist atomism pioneered.

At the end of the 19th century, Lord Kelvin infamously proclaimed that “There is nothing new to discover in physics now.” The next century proved, on the contrary, to be the most fertile and revolutionary in the history of the field. Relativity (special and especially general), quantum mechanics, and to a lesser extent thermodynamics/information theory and string theory upended the Newtonian universe, showing that the simple linear-time, Euclidean-space objective reality of colliding billiard balls was at best an approximation valid in familiar conditions. The (post-)modern physics that emerged from these revolutions beautifully exemplifies pluralism in science, showing how pluralism is, as suggested by the epigraph from prominent physicist Carlo Rovelli, baked into the very fabric of reality.

  • Computational complexity is the core reason for the field of thermodynamics and its many offshoots. In fact, the field of information theory so core to computer science is built almost entirely on top of concepts derived from thermodynamics. The impossibility of simulating the action of billions of sub-units (e.g., molecules in a gas or compound, electrons in a wire, etc.) implies the need for thermodynamic techniques describing the average behavior of these sub-units.
  • The ideas of sensitivity, chaos, and irreducible uncertainty originate or at least achieved their first intellectual prominence in physics. The simplest example of a chaotic system is three comparably sized bodies acting under gravitational forces. The behavior of smoke, of ocean currents, of weather, and many more all exhibit chaos and sensitivity. And, as noted above, the most canonical and best-established example of irreducible uncertainty is “Heisenberg’s Uncertainty Principle”, under which the quantum nature of reality puts a firm upper limit on the precision with which the velocity and position of a particle can be measured.
  • For both these reasons, modern physics is organized around the study of a wide range of different scales, illustrated by the famous “scales of the universe” walk at New York’s Hayden Planetarium, which takes visitors from quarks through atoms, molecules, chemical compounds, objects, planets, stars, star systems, galaxies, and beyond. While all systems in theory obey the same set of underlying physical laws, the physics at each scale is radically different, as different forces and phenomena are dominant; in fact, physics at the smallest scales (quantum mechanics) has yet to be reconciled with physics at the largest (general relativity).
  • Perhaps the most striking and consistent feature of the revolutions in twentieth century physics was the way they upset assumptions about a fixed and objective external world. Relativity showed how time, space, acceleration, and even gravity were functions of the relationship among objects, rather than absolute features of an underlying reality. Quantum physics went even further, showing that even these relative relationships are not fixed until observed and thus are fundamentally interactions rather than objects, as highlighted by Rovelli above. His interpretations of more recent developments pull ideas of time and space further apart.
  • Given the diversity of levels of reality, causation in physics is profoundly embedded, shifting and cycling across scales at dizzying speeds. Atomic interactions, carefully constructed by sentient beings harnessing nano-scale computing, can trigger explosions that destabilize a planet. Collisions between stars can lead to the collapse of a microscopic black hole that becomes the center of a galaxy.
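The thermodynamic move described in the first bullet can be sketched concretely: tracking each “molecule” is hopeless, yet averages over many of them obey simple, stable laws. In this toy sketch, random walkers stand in for molecules; the walkers, step counts, and seed are all invented for illustration, not a physical simulation.

```python
import random

# Toy illustration of the thermodynamic idea above: each individual
# "molecule" (here, a one-dimensional random walker) is unpredictable,
# but macroscopic averages over many of them are lawful and stable.
# (All parameters here are assumptions of this example.)

random.seed(0)  # fixed seed so the sketch is reproducible

def walk(steps):
    """One particle's position after `steps` unit steps left or right."""
    return sum(random.choice((-1, 1)) for _ in range(steps))

positions = [walk(500) for _ in range(2000)]

# Any single particle may wander far from the origin, yet the ensemble
# average stays near zero and the mean squared displacement stays close
# to the step count: the "law" lives at the macro scale, not the micro.
mean = sum(positions) / len(positions)
msd = sum(p * p for p in positions) / len(positions)
print(mean, msd)
```

This is the trade the bullet describes: rather than simulating every sub-unit, thermodynamic reasoning predicts the reliable behavior of aggregates directly.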

     The applications of this rich and plural understanding of physical reality are at the very core of the tragedies of the twentieth century. Great powers harnessed the power of the atom to shape world affairs. Global corporations powered unprecedented communications and intelligence by harnessing their understanding of quantum physics to pack ever-tinier electronics into the palms of their customers’ hands. The burning of wood and coal by millions of families has become the cause of ecological devastation, political conflict, and world-spanning social movements based on information derived from microscopic sensors scattered around the world.


If the defining idea of 19th century macrobiology (concerning advanced organisms and their interactions) was “natural selection”, the defining idea of the 20th century analog was “ecosystems”. Where natural selection emphasized the “Darwinian” competition for survival in the face of scarce resources, the ecosystem view (closely related to the idea of “extended evolutionary synthesis”) emphasizes:

  • The persistent inability to base effective models of animal behavior on reductive concepts, such as behaviorism, neuroscience, and so forth, illustrating computational complexity.
  • The ways in which systems of many diverse organisms (“ecosystems”) can exhibit features similar to multicellular life (homeostasis, fragility to destruction or over-propagation of internal components, etc.), illustrating sensitivity and chaos.
  • The emergence of higher-level organisms through the cooperation of simpler ones (e.g., multicellular life as cooperation among single-celled organisms or “eusocial” organisms like ants from individual insects) and the potential for mutation and selection to occur at all these levels, illustrating multi-scale organization.
  • The diversity of interactions between different species, including traditional competition or predator and prey relationships, but also a range of “mutualism”, where organisms depend on services provided by other organisms and help sustain them in turn, exemplifying entanglement, and relationality.
  • The recognition that genetics codes for only a portion of these behaviors and that “epigenetics” and other environmental features play important roles in evolution and adaptation, illustrating embedded causality.

    This shift wasn’t simply a matter of scientific theory. It led to some of the most important shifts in human behavior and interaction with nature of the 20th century. In particular, the environmental movement and the efforts it created to protect ecosystems, biodiversity, the ozone layer, and the climate all emerged from and have relied heavily on this science of “ecology”, to the point where this movement is often given that label.

     While this point is easiest to illustrate with macrobiology, as it is more familiar to the public, the same lesson applies perhaps even more dramatically to microbiology (the study of the inner workings of life in complex organisms). That field has moved from a focus on individual organs and the mechanical study of genetic expression to a “systems” approach, integrating action on a range of scales and according to many different systems of natural laws. This may be best illustrated by focusing on perhaps the most complex and mysterious biological system of all, the human brain.


     Modern neuroscience emerged from two critical discoveries about the functioning of brains. First, in the late 19th century, Camillo Golgi, Santiago Ramón y Cajal, and collaborators isolated neurons and their electrical activations as the fundamental functional unit of the brain. This analysis was refined into clear physical models by the work of Hodgkin and Huxley, who developed electrical theories of nervous communication and tested them on animals. Second, and more diffusely, a rich and nuanced picture emerged over the course of the twentieth century complicating the traditional view, often derided as “phrenology”, that each brain function is physically localized to one region of the brain. Instead, while researchers like Paul Broca found important evidence of physical localization of some functions by studying brain lesion patients, a variety of other evidence, including mathematical modeling, brain imaging, and single-neuron activation experiments, suggested that many if not most brain functions are distributed across regions of the brain, emerging from patterns of interaction rather than primarily from physical localization.

     The understanding that emerged from these findings was that of a “network” of “neurons”, each obeying relatively simple rules for activation based on inputs and updating its underlying connections based on co-occurrence. Again, the themes of pluralism emerge elegantly:

  • Of all fields, neuroscience showed most sharply the bounds imposed by computational complexity. As early as the late 1950s, researchers beginning with Frank Rosenblatt built the first “artificial neural network” models of the brain and hoped to simulate a full human brain within a few years, only to discover that the task was computationally many decades off, if ever attainable, forcing a great diversification of ways (both model-based and experiment-based) of studying the brain.
  • The wide-ranging investigation of different forms of partial physical localization and interaction centers around multiscale organization, where some phenomena are localized to very small structures (a few physically proximate neurons), while others are distributed over large brain regions, but not the entirety of the brain and others still are physically distributed but appear to be localized, at different scales, to various consistent networks of brain activity.
  • The Hebbian model of connections, where they are strengthened by repeated co-firing, is perhaps one of the most elegant illustrations of the idea of “relationality” in science, closely paralleling the way we typically imagine human relationships developing.
  • Neuroscience also elegantly illustrates embedded causality. Brain structure is famously plastic to learning, and what is learned depends heavily on the social contexts that humans inhabit and construct, as well as on the nutrients human economic and social systems provide to brains. Thus, the higher-level phenomena (societies, relationships, economies, educational systems), which one might hope to help explain with features of human neuropsychology, are among the central factors that shape the nature and function of those brains. Causation thus traces a classic circular pattern across levels.
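The Hebbian rule mentioned in the third bullet ("fire together, wire together") is simple enough to sketch directly. In this toy example, the learning rate and firing patterns are invented for illustration; real synaptic plasticity is far richer.

```python
# A minimal sketch of the Hebbian rule described above: a connection's
# weight grows when the two neurons it links are repeatedly active at
# the same time. (The rate and the toy firing patterns below are
# assumptions of this example, not from the text.)

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen the connection only when both neurons co-fire."""
    if pre_active and post_active:
        weight += rate
    return weight

# Two input neurons connect to one output neuron. Input A co-fires with
# the output on most trials; input B fires out of sync with it.
trials = [  # (A fires, B fires, output fires)
    (1, 0, 1), (1, 1, 1), (1, 0, 1), (0, 1, 0), (1, 0, 1), (0, 1, 0),
]
w_a = w_b = 0.0
for a, b, out in trials:
    w_a = hebbian_update(w_a, a, out)
    w_b = hebbian_update(w_b, b, out)

print(w_a, w_b)  # the consistently co-firing connection ends up stronger
```

The parallel to human relationships noted in the bullet is direct: the bond strengthens through repeated shared activity, not by decree of any central controller.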

     Modern neuroscience has transformed this understanding into a range of applications: treatments of patients with damaged brains, development of psychiatric medicine, some treatments and interventions based on transcranial stimulation and other brain activation approaches, and more. Yet the most transformative technologies inspired by neuroscience have been at least partly digital, rather than purely biomedical. Neuroscience is increasingly central to two of the more exotic and exciting areas of digital technology development: brain-computer interfaces and the use of brain organoids as a substrate for computation.

     Most pervasively, the “neural network” architecture inspired by early mathematical models of the brain has become the foundation of the recent advances in “artificial intelligence”. Networks of billions of nodes, joined by trillions of weighted connections, each node operating on a fairly simple principle inspired by neurons (activation is triggered when a linear combination of inputs crosses a threshold), are the backbone of “foundation models” such as BERT and the GPT models. These have taken the world by storm in the past half-decade and increasingly dominated headlines in the last two years. All the critical features of neuroscience discussed above, and of pluralism more broadly (e.g., multiscale organization, relationality, embedded causation), manifest in the operation of these systems.
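The node just described, activation when a weighted combination of inputs crosses a threshold, can be written in a few lines. The weights and threshold below are hand-picked for illustration; in real foundation models they are learned, and the units number in the billions.

```python
# The building block of "neural network" architectures as described
# above: a unit fires when a weighted (linear) combination of its inputs
# crosses a threshold. (Weights and threshold here are hand-chosen
# assumptions for illustration; real models learn them from data.)

def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With these hand-picked weights, the unit computes logical AND of its
# two inputs; layering many such units yields far richer behavior.
for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, neuron(pair, weights=(0.6, 0.6), threshold=1.0))
```

No single unit “contains” the network’s capabilities; as with the brain, they emerge from patterns of interaction across many such simple elements.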

From science to society

     Plurality is, scientifically, the application of an analogous perspective to the understanding of human societies and, technologically, the attempt to build formal information and governance systems that account for and resemble these structures as physical technologies built on plural science do. Perhaps the crispest articulation of this vision appears in the work of the leading figure of network sociology, Mark Granovetter. There is no basic individual atom; personal identity fundamentally arises from social relationships and connections. Nor is there any fixed collective or even set of collectives: social groups do and must constantly shift and reconfigure. This bidirectional equilibrium between the diversity of people and the social groups they create is the essence of pluralist social science.

     Moreover, these social groups exist at a variety of intersecting and non-hierarchical scales. Families, clubs, towns, provinces, religious groups of all sizes, businesses at every scale, demographic identities (gender, sexual identity, race, ethnicity, etc.), education and academic training, and many more co-exist and intersect. For example, from the perspective of global Catholicism, the US is an important but “minority” country, with only about 6% of all Catholics living in the US; but the same could be said about Catholicism from the perspective of the US, with about 23% of Americans being Catholic.

     While we have emphasized the positive vision of pluralistic social science (a “network society”), it is important to note that beyond its inherent plausibility, a key reason for adopting such a perspective is the impossibility of explaining most social problems using monistic atomism, given both complexity and chaos. Even in economics, the social science field that most consistently aims for “methodological individualism”, it is universally accepted that trying to model complex organizations exclusively as the outgrowth of individual behavior is unpromising.

     The field of Industrial Organization, for example, treats firms rather than individuals as the central actors, while most macroeconomic models assume sufficient homogeneity to allow the construction of a “representative agent”, rather than reducing behavior to actual diverse individual choices. In fact, one fascinating feature of economic models is that they tend to feature a range of different forms of organization as either the “central planner” (e.g., a technology platform operator or provincial government) or the “individual actors” (e.g., a municipality or a manufacturer). This is hardly surprising given that a leading result in game theory (the most canonical approach to economic “reduction” of a group to individual behavior) is the “folk theorem”, a variant on chaos and irreducible uncertainty which states that when interactions are repeated, a very wide range of outcomes can be an equilibrium.
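The folk theorem's multiplicity of equilibria can be made concrete with a toy repeated prisoner's dilemma. The payoff numbers below are the standard textbook illustration, not drawn from any particular source cited here: mutual cooperation pays 3 per period, mutual defection 1, and a unilateral defector earns 5 against a cooperator. Under the "grim trigger" strategy (cooperate until the partner defects, then defect forever), cooperation is sustainable whenever players are patient enough, while mutual defection is always an equilibrium, so both outcomes coexist.

```python
# Toy folk-theorem illustration: in a repeated prisoner's dilemma with
# discount factor d, grim trigger sustains cooperation whenever the
# one-shot gain from deviating is outweighed by the lost future surplus:
#   coop/(1-d) >= temptation + d*defect/(1-d)
# With the payoffs below (3, 1, 5), this reduces to d >= 1/2.

def grim_trigger_sustains_cooperation(d, coop=3.0, defect=1.0, temptation=5.0):
    """Check whether cooperation is an equilibrium under grim trigger."""
    value_cooperate = coop / (1 - d)                    # cooperate forever
    value_deviate = temptation + d * defect / (1 - d)   # defect once, punished after
    return value_cooperate >= value_deviate

# Mutual defection is always an equilibrium; cooperation joins it once
# players are patient enough -- many outcomes, as the folk theorem states.
for d in (0.2, 0.9):
    print(d, grim_trigger_sustains_cooperation(d))  # prints "0.2 False", "0.9 True"
```

The point of the exercise is exactly the one in the text: nothing in the individuals' incentives pins down which equilibrium a society lands in, so group-level history and context are irreducible parts of the explanation.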

     Yet, whatever level of explanation is chosen, actors are almost always modeled as atomistically self-interested and planners as coherent, objective maximizers, rather than as socially embedded intersections of group affiliations. The essence of understanding social phenomena as arising from a “network society” is to embrace this richness and build social systems, technologies, and policies that harness it, rather than viewing it as a distracting complication. Such systems need, among other things, to explicitly account for the social nature of motivations, to empower a diversity of social groups, to anticipate and support social dynamism and evolution, to ground personal identity in social affiliations and group choices in collective, democratic participation, and to facilitate the establishment and maintenance of the social contexts that sustain community.

     While we do not have the space to review it in detail, a rich literature provides quantitative and social scientific evidence for the explanatory power of the pluralist perspective. Studies of industrial dynamics, of social and behavioral psychology, of economic development, of organizational cohesion, and much else have shown the central role of social relationships that create and harness diversity[^6]. Here, we will pull out just one example that is perhaps both the most surprising and the most related to the scientific themes above: the evolution of scientific knowledge itself.

     A growing interdisciplinary academic field of “Science of Science” (SciSci) studies the emergence of scientific knowledge as a complex system[6]. It charts the emergence and proliferation of scientific fields, the sources of scientific novelty and progress, the strategies of exploration scientists choose, and the impact of social structure on intellectual advance. Among other things, researchers find that, relative to the most efficient ways of discovering existing knowledge (in chemistry, as an example), scientific exploration is biased towards topics and connections related to social connections and previous publications within a field[7]. The field finds strong connections between research team size and diversity and the types of findings developed (risky and revolutionary v. normal science), and documents the increasingly dominant role of teams (as opposed to individual research) in modern science[8]. The largest innovations tend to arise from a strong grounding in existing disciplines deployed in unusual and surprising combinations[9]. It illustrates that most incentive structures used in science (based, e.g., on publication quality and citation count) create perverse incentives that limit scientific creativity, and it has helped produce new metrics that can complement and offset these biases, creating a more pluralistic incentive set[10].

     Thus, even in understanding the very practice of science, a pluralist perspective, grounded in many intersecting levels of social organization, is critical. To advance science and technology of any flavor, therefore, a pluralist outlook is essential.

A future plural?

     Yet the assumptions on which the AT and ES visions of the future discussed above rest diverge sharply from such pluralist foundations.

     In the AT vision we discussed in the previous chapter, the “messiness” of existing administrative systems is to be replaced by a massive-scale, unified, rational, scientific, artificially intelligent planning system. Transcending locality and social diversity, this unified agent is imagined to give “unbiased” answers to any economic and social problem, transcending social cleavages and differences. As such, it seeks at best to paper over and at worst to erase, rather than to foster and harness, the social diversity and heterogeneity that pluralist social science sees as defining the very objects of interest and value.

     In the ES vision, the sovereignty of the atomistic individual (or in some versions, a homogeneous and tightly aligned group of individuals) is the central aspiration. Social relations are best understood in terms of “customers”, “exit” and other capitalist dynamics. Democracy and other means of coping with diversity are viewed as failure modes for systems that do not achieve sufficient alignment and freedom.

     But these cannot be the only paths forward. Pluralist science has shown us the power of harnessing a plural understanding of the world to build physical technology. We have to ask what a society and information technology built on an analogous understanding of human societies would look like. Luckily, the twentieth century saw the systematic development of such a vision, from philosophical and social scientific foundations to the beginnings of technological expression. While that path (dao) of development is today somewhat forgotten, we will rediscover it in the next chapter.

  1. “Life as Joy, Duty, End” ↩︎

  2. ↩︎

  3. “Objectivist” here is not meant only in the narrow sense of the philosophy of Ayn Rand, though perhaps she expresses this view most consistently, but rather in the broader sense of a common-sense, simplistic version of the philosophy of the Enlightenment. ↩︎

  4. Dawkins, The Selfish Gene; Darwin, The Descent of Man. ↩︎

  5. Here are some examples of these properties in neuroscience. Sensitivity: in neuroscience, sensitivity refers to the ability of the brain to detect and respond to small changes in its environment. One example of sensitivity in the brain is the phenomenon of synaptic plasticity, which is the ability of synapses (connections between neurons) to change in strength in response to activity. This sensitivity allows the brain to adapt and learn from experience. Chaos: chaos is a property of complex systems that exhibit unpredictable behavior even though they are deterministic. In neuroscience, chaos has been observed in the activity of neurons in the brain. For example, studies have shown that the firing patterns of individual neurons can be highly irregular and chaotic, with no discernible pattern or rhythm. This chaotic activity may play a role in information processing and communication within the brain. Sensitivity and chaos together: sensitivity and chaos can also interact in the brain to produce complex and adaptive behavior. For example, studies have shown that the brain can exhibit sensitivity to small changes in sensory input, but this sensitivity can also lead to chaotic activity in neural networks. However, this chaotic activity can be controlled and harnessed to produce adaptive behavior, such as in the case of motor control and coordination. The brain's ability to integrate sensitivity and chaos in this way is a hallmark of its remarkable complexity and adaptability. ↩︎

[^6]: Page, S. E. (2007). The difference: How the power of diversity creates better groups, firms, schools, and societies. Princeton University Press; Hidalgo, C. A. (2015). Why information grows: The evolution of order, from atoms to economies. Basic Books; Acemoglu, D., & Linn, J. (2004). Market size in innovation: Theory and evidence from the pharmaceutical industry. The Quarterly Journal of Economics, 119(3), 1049-1090; Mercier, H., & Sperber, D. (2017). The enigma of reason. Harvard University Press; Pentland, A. (2014). Social physics: How good ideas spread—the lessons from a new science. Penguin; Putnam, R. D. (2000). Bowling alone: The collapse and revival of American community. Simon and Schuster; Granovetter, M. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360-1380; Uzzi, B. (1997). Social structure and competition in interfirm networks: The paradox of embeddedness. Administrative Science Quarterly, 42(1), 35-67; Burt, R. S. (1992). Structural holes: The social structure of competition. Harvard University Press; McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual Review of Sociology, 27(1), 415-444. ↩︎

  6. See a summary in Fortunato et al. (2018) ↩︎

  7. Rzhetsky et al. 2015 ↩︎

  8. Wu et al. 2019 ↩︎

  9. Foster et al. 2015 ↩︎

  10. Clauset et al. 2017 ↩︎