
Emergence as Foundation for Nanoservice Architecture

· Jan van der Jagt

Theoretical grounding in support of “Towards an Autonomous Agent Architecture: Nanoservices”

Abstract

This document provides a theoretical grounding for why emergence theory, as formulated by P.W. Anderson (1972) and Ilya Prigogine (1977/1984), constitutes a legitimate and robust foundation for the argument that nanoservice architecture is not merely a passing trend, but a predictable, scientifically explainable development in the evolution of distributed software systems.

The document is structured as a self-contained text: no prior knowledge of emergence theory is assumed. All concepts are built from scratch, from the original sources, with explicit translation steps into the IT domain.

1. Introduction: why theoretical grounding is necessary

Architecture patterns come and go. SOA, microservices, serverless, event-driven architecture: each of these paradigms was defended at its introduction with pragmatic arguments such as better scalability, faster deployments, and cleaner separation of responsibilities. Rarely was the question asked: why does this actually work? What underlying principle explains that this is the right direction?

Towards an Autonomous Agent Architecture: Nanoservices posits something fundamental: the transition toward ever smaller service units is not an incidental trend, but a structural development. To defend that argument against criticism—from architects who consider granularity an anti-pattern, from engineers who know the operational overhead of nanoservices, from academics who recognize historical cycles—a solid theoretical foundation is indispensable.

This document provides that foundation. It departs from two of the most cited scientific contributions to the theory of complex systems: P.W. Anderson’s “More Is Different” (Science, 1972) and Ilya Prigogine’s work on dissipative structures and self-organization (Nobel Prize 1977; Order Out of Chaos, 1984). Neither work was written specifically about software, and precisely this makes them more powerful as a foundation: the principles they describe are domain-independent and empirically grounded.

Reading guide

Section 2 introduces emergence as a concept for the uninitiated reader. Section 3 covers Anderson’s core argument and its direct relevance to IT. Section 4 adds Prigogine’s dynamic theory. Section 5 explicitly links both theories to nanoservices.

2. What is emergence? An accessible introduction

Those encountering the word “emergence” for the first time are sometimes deterred by the philosophical connotations. In reality, the concept describes something everyday and recognizable: the phenomenon that systems exhibit properties which, no matter how well you know the individual components, you could not have predicted by studying those components alone.

2.1 An everyday example

Take water. Hydrogen is a flammable gas. Oxygen is a gas that supports combustion. Combine two hydrogen atoms with one oxygen atom, and the result, H₂O, is a liquid that extinguishes fire. No property of hydrogen or oxygen individually predicts liquidity, or the specific heat capacity of water, or the remarkable behavior during freezing where ice is less dense than liquid water. These properties “emerge” at the level of the molecule and are not derivable from the characteristics of the atoms individually.

The same principle applies to ant colonies: no single ant “knows” the colony architecture, yet the colony as a whole builds efficient tunnel systems, manages food distribution, and adapts its behavior at the population level. The collective behavior emerges from simple local interactions.

2.2 Two forms of emergence

In the scientific literature, a distinction is made between two variants:

| Variant | Definition and relevance for IT |
| --- | --- |
| Weak emergence | System properties are in principle derivable from the components, but only through complex simulation or by studying the system in actual operation. This is the relevant form for software architecture. |
| Strong emergence | System properties are fundamentally irreducible to the components. Philosophically contested and not applicable to software systems. |

In this document, “emergence” always refers to the weak variant. This is scientifically uncontested and directly applicable to distributed systems.
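The defining property of weak emergence, that system behavior is discoverable only by running the system, can be made concrete with a classic toy model (our own illustration, not from the source): Conway's Game of Life. Every cell follows the same trivial local rule, yet certain configurations, such as the "glider", move coherently across the grid. Nothing in the per-cell rule mentions movement; the glider exists only at the level of the whole.

```python
from collections import Counter

def step(live):
    """One Game of Life generation over a set of live (x, y) cells."""
    # Count how many live neighbors every nearby cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
# After four generations the same shape reappears, shifted one cell
# diagonally: "movement" is a property of the whole, not of any cell.
```

The displacement is derivable in principle (weak emergence), but only by simulating the system; no inspection of the single-cell rule reveals it.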

2.3 Why emergence is relevant for architecture

Software architecture is about designing systems that exhibit properties we want: resilience, scalability, maintainability, domain coherence. Emergence theory posits that these properties cannot simply be “built into” individual components; they are the result of the structure of interactions between components. This has direct architectural consequences, which we elaborate in the following sections.

3. P.W. Anderson: “More Is Different” (1972)

Philip Warren Anderson was one of the most influential physicists of the twentieth century. His article “More Is Different”, published in Science in August 1972, has become one of the most cited texts in the philosophy of science. Anderson worked at Bell Telephone Laboratories and was later professor at Princeton University; he shared the 1977 Nobel Prize in Physics.

3.1 The attack on reductionist constructionism

Anderson’s central argument is an attack on what he calls “reductionist constructionism”. This is the view, dominant in science at the time, that if you know the fundamental laws (the laws of elementary particle physics), you can in principle explain and construct everything. Chemistry is then “merely” applied physics, biology is “merely” applied chemistry, and so on.

Anderson distinguishes two dimensions of scientific inquiry: intensive research, which seeks ever more fundamental laws, and extensive research, which explains phenomena in terms of laws that are already known.

The error, Anderson argues, is to think that extensive research is by definition less fundamental or less creative. This is incorrect. At each level in the scientific hierarchy, properties appear that are fundamentally new and that are not derivable by simple extrapolation from the underlying level.

3.2 The hierarchy of the sciences

Anderson posits that the sciences are hierarchically organized: the entities at each level obey the laws of the level below. But—and this is his central thesis—that does not make the higher level “merely applied” lower science:

| Level | Obeys the laws of |
| --- | --- |
| Solid-state / many-body physics | Elementary particle physics |
| Chemistry | Many-body physics |
| Molecular biology | Chemistry |
| Cell biology | Molecular biology |
| Physiology | Cell biology |
| Psychology | Physiology |
| Social sciences | Psychology |

Anderson’s thesis: at each transition in this hierarchy, entirely new concepts, laws, and generalizations are necessary, requiring as much creativity and fundamental insight as the level below. Psychology is not “merely” applied biology. Chemistry is not “merely” applied physics.

3.3 Broken symmetry: the mechanism behind emergence

Anderson introduces the concept of “broken symmetry” as the mechanism that explains how new properties arise with increasing scale. A system need not have the symmetry of the laws that govern it; in fact, large systems typically do not.

The most accessible example is the crystal. A crystal is built from atoms that obey the homogeneous symmetry laws of space, but the crystal itself exhibits a particular, specific structure (cubic, hexagonal, and so on) that breaks the underlying symmetry. This structure is not derivable from the laws of the individual atoms; it emerges when the system is large enough and transitions to its lowest-energy state.

Anderson’s core principle: broken symmetry

Large systems spontaneously adopt the structure that corresponds to their lowest-energy state, even if that structure has less symmetry than the underlying laws. This structure is an emergent property: it is only visible and definable at the level of the whole, not at the level of the components.

3.4 The N → ∞ argument: scale as qualitative threshold

A central insight of Anderson is that in the transition to large systems (the so-called N → ∞ limit), systems undergo “mathematically sharp, singular phase transitions” to qualitatively new states. This is not a gradual change, but a discontinuous jump in which the system begins to exhibit fundamentally different behavior.

Superconductivity is Anderson’s prime example: all fundamental laws describing superconductivity had been known for thirty years. Yet it took thirty years before the phenomenon could be explained, because the explanation required a new conceptual framework at a higher level. Knowledge of the building blocks was not sufficient. Only analysis of the large system as a whole revealed the new order.

3.5 Analysis works; synthesis does not

One of Anderson’s most practically relevant theses is that the relationship between a system and its parts is intellectually a “one-way street”: reducing a system to its parts (analysis) works, but starting from the parts and predicting the behavior of the whole (synthesis) is all but impossible. Knowing the components does not mean you can foresee what they will do together.

This has direct consequences for software design: complex system properties (resilience, scalability, domain coherence) cannot be “built in” by specifying the right components. They require architectural conditions that make the emergence of those properties possible, and they require observation of the system in operation.
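A toy calculation (invented numbers, for illustration only) shows this for one such property, availability: the components stay identical, only the structure of their interactions changes, and the system-level figure moves substantially.

```python
# Toy numbers: ten sequential calls, each component 99% available.
p = 0.99        # availability of a single component
n = 10          # calls in one request chain

# Structure 1: a plain serial chain; every call must succeed.
serial = p ** n

# Structure 2: the same components, but each call is retried against
# two independent replicas, so a call fails only if both do.
per_call = 1 - (1 - p) ** 2
with_retries = per_call ** n

print(f"serial chain:  {serial:.4f}")        # roughly 0.90
print(f"with retries:  {with_retries:.4f}")  # roughly 0.999
```

No component got better; the change in the emergent property lives entirely in the wiring, which is why it must be observed at the system level rather than specified at the component level.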

4. Ilya Prigogine: dissipative structures and bifurcation

Ilya Prigogine was a Russian-born Belgian physical chemist who received the 1977 Nobel Prize in Chemistry for his work on dissipative structures: systems far from thermodynamic equilibrium that can spontaneously generate order. His major work Order Out of Chaos (1984, with Isabelle Stengers) extends this work to complex systems in biology, economics, and society.

4.1 The problem with classical thermodynamics

Classical thermodynamics describes closed systems that tend toward equilibrium. In a closed system, disorder (entropy) always increases: things fall apart, energy differences equalize. This seems to contradict reality: life, and the technology that humans build, continuously generates new structures and complexity.

Prigogine resolves this paradox by showing that classical thermodynamics applies only to closed systems at or near equilibrium. Systems that are open—that exchange energy and matter with their environment—and that operate far from equilibrium, behave fundamentally differently.

4.2 Dissipative structures: order through energy dissipation

Prigogine introduces the concept of the “dissipative structure”: a system that maintains its internal order by continuously dissipating energy (transferring it to the environment). The order is not static; it is actively maintained by the energy flow through the system.

Examples are plentiful: Bénard convection cells, in which a uniformly heated liquid spontaneously organizes into regular convection rolls; the Belousov-Zhabotinsky reaction, a chemical clock whose color oscillates; hurricanes, which keep their structure only as long as warm ocean water feeds them energy; and living cells, which maintain their internal organization only through continuous metabolism.

Prigogine’s core principle: the dissipative structure

A system far from equilibrium can spontaneously adopt an organized, stable structure, provided it continuously exchanges energy with its environment. That order costs energy: stop the energy flow, and the structure collapses. Order is not free.

4.3 Bifurcation points: when systems jump

Prigogine’s bifurcation theory describes what happens when the complexity pressure on a system—the so-called “flux” or energy flow—increases to a critical threshold. At that point, the bifurcation point, the existing structure becomes unstable. The system can no longer remain in its current configuration and faces a choice between two trajectories: chaos, or a new, qualitatively higher order.

This is not a linear transition but a discontinuous jump—a phase transition, in Anderson’s terminology. The new order is qualitatively different from the previous one: it has properties that the previous structure did not and could not have.
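A standard numerical illustration of such a jump (a textbook example, not one of Prigogine's chemical systems) is the logistic map: as the control parameter r crosses a critical value near r = 3, the single stable state becomes unstable and the long-run behavior splits, discontinuously, into a period-2 cycle.

```python
def attractor(r, x0=0.5, transient=1000, sample=64):
    """Long-run states of the logistic map x -> r * x * (1 - x)."""
    x = x0
    # Discard the transient so only the stable long-run behavior remains.
    for _ in range(transient):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(sample):
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return sorted(seen)

# Below the first bifurcation: one stable fixed point.
print(attractor(2.5))   # a single value near 0.6
# Just past it: a qualitatively new, period-2 order.
print(attractor(3.2))   # two alternating values
```

Below the threshold the system settles on one value; just past it, no amount of waiting brings it back to one. The new order is not a bigger version of the old one but a different kind of state.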

Historical examples of bifurcations range from the onset of convection rolls and chemical oscillations to, in Prigogine and Stengers’ broader reading, the origin of life and the reorganization of societies under rising pressure.

4.4 The energy cost of order, and the threshold value

A critical insight of Prigogine is that order has costs. A dissipative structure is only stable as long as the energy flow is high enough to maintain the order. If the cost of order is higher than the available energy flow, the structure cannot form, regardless of whether it is theoretically optimal.

This explains a paradox in IT history: nanoservice-like architecture was theoretically conceivable long ago (and was even described as an ideal in early SOA literature), but was practically unachievable. The reason: the “energy” needed to maintain nanoservices—orchestration, observability, CI/CD per service, service mesh management—exceeded what development teams could bear.

Critical insight

The feasibility of an architecture pattern depends not only on its theoretical merits, but on the ratio between the benefits it provides and the energy costs it demands. Nanoservices were always theoretically superior; they were practically unachievable as long as the energy costs were too high.

5. Emergence applied to nanoservice architecture

With the theoretical foundation of Anderson and Prigogine in hand, we can now rigorously build the argument for nanoservices. We do this in four steps: (1) IT systems as hierarchical complex systems, (2) broken symmetry and domain coherence, (3) the bifurcation point of AI-driven development, and (4) the consequences of Anderson’s analysis-synthesis asymmetry.

5.1 IT systems as hierarchical complex systems

Anderson’s hierarchy is explicitly domain-independent: he places psychology, social sciences, and biology alongside physics in his table. The criteria for his argument are structural: systems composed of components, where at each level new properties appear that are not derivable by extrapolation from the level below.

IT systems structurally satisfy these criteria. The technology stack is hierarchically layered:

| Level | Examples | Emergent property |
| --- | --- | --- |
| Physical | Transistors, memory | Logical switching |
| Logical | Instruction sets, registers | Computability |
| OS / runtime | Processes, threads | Concurrency, isolation |
| Application | Classes, modules | Business logic |
| Service | Microservices | Independent deployability |
| Nanoservice | Single capability | Domain isomorphism |
| Platform | Service ecosystem | System resilience |

At each level, properties appear that the level below does not and cannot possess. This is Anderson’s hierarchy principle directly applied.

5.2 Broken symmetry and domain coherence

Anderson’s broken symmetry describes how large systems spontaneously adopt the structure that corresponds to their lowest-energy state, even when that structure breaks the symmetry of the underlying laws.

In software architecture, this has a direct analogy in Domain-Driven Design (DDD, Eric Evans 2003): the “right” architectural structure is the structure that best corresponds to the structure of the domain. This is the lowest-friction state: the structure in which the software mirrors the business reality as directly as possible, without unnecessary translation layers.

A monolith breaks the symmetry of the domain: it forces all domain concepts—with their own behavioral rules, data models, and rates of change—into one shared structure. This creates friction: parts that should be independent are coupled through shared state, shared deployment, shared failure modes.

Nanoservices per business capability restore the symmetry: each capability gets the structure that best reflects its nature. This is not an aesthetic preference, but the architectural lowest-energy state: the state in which system complexity is minimal given the domain structure.

Anderson applied to DDD

Just as a crystal spontaneously adopts its lowest-energy structure, the optimal service structure emerges from thorough domain analysis as the configuration with the lowest architectural friction. Nanoservices per bounded context are the crystal structure of the domain.
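As an in-process sketch of what "each capability gets its own structure" means (all names are hypothetical; real nanoservices would sit behind network boundaries), two capabilities can each own exactly the model they need, sharing only an identifier:

```python
from dataclasses import dataclass

# Hypothetical bounded contexts: "customer" means something different in
# each, and neither shape is forced on the other.

@dataclass
class BillingCustomer:            # billing cares about payment terms
    customer_id: str
    payment_terms_days: int

@dataclass
class ShippingCustomer:           # shipping cares about the address
    customer_id: str
    address: str

class InvoicingService:
    """Single capability: produce an invoice."""
    def invoice(self, c: BillingCustomer, amount: float) -> dict:
        return {"customer": c.customer_id,
                "due_in_days": c.payment_terms_days,
                "amount": amount}

class DispatchService:
    """Single capability: create a shipment."""
    def dispatch(self, c: ShippingCustomer) -> dict:
        return {"customer": c.customer_id, "ship_to": c.address}

# The services share only the customer_id, never a model.
inv = InvoicingService().invoice(BillingCustomer("c-1", 30), 99.0)
shp = DispatchService().dispatch(ShippingCustomer("c-1", "Main St 1"))
```

In a monolith, a single shared Customer class would couple both capabilities' data models and rates of change; here each context's model can evolve independently, which is the "restored symmetry" of the paragraph above.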

5.3 The bifurcation point: AI as catalyst

Prigogine’s bifurcation theory states that a system bifurcates toward a new order when (a) the complexity pressure is high enough, and (b) the energy cost of the new order falls below a critical threshold.

Both conditions are currently met for nanoservice architecture:

| Condition | Current situation |
| --- | --- |
| Complexity pressure rises | Modern applications integrate more domains, more users, more data, more compliance requirements than ever. The pressure on service delineation increases. |
| Energy costs decline | AI-driven code generation, automated CI/CD pipelines, Kubernetes, service mesh technology (Istio, Linkerd), and observability tooling drastically lower the operational overhead of fine-grained services. |
| AI quality scales with granularity | Recent research (DORA 2025) shows that AI tools perform better with small, bounded codebases. The nanoservice is the architectural form that best aligns with the properties of AI as builder. |
| Bifurcation point reached | The system can no longer remain stable in the microservice configuration: the complexity pressure exceeds the capacity of coarse service delineation, while the tooling makes the new order feasible. |

This is Prigogine’s bifurcation mechanism precisely: the existing structure (microservices) becomes unstable under increasing pressure, and the system bifurcates toward a qualitatively higher order (nanoservices) as soon as the energy costs of that order become achievable.

5.4 The analysis-synthesis asymmetry: what this means for design

Anderson’s thesis that synthesis is all but impossible has a fundamental consequence for how nanoservice architecture must be designed and managed: the desired system properties (resilience, scalability, domain coherence) cannot be specified into existence upfront. The architect’s task is to create the conditions under which those properties can emerge, such as bounded contexts and independent deployability, and then to observe the running system, through observability and chaos engineering, to verify that they actually have emerged. This is the scientific rationale behind evolutionary architecture: not a blueprint, but a fitness function.

6. Literature and sources

The following sources underlie the theoretical grounding in this document:

| Source | Relevance |
| --- | --- |
| Anderson, P.W. (1972). More Is Different. Science, 177(4047), 393–396. | Primary source for emergence and hierarchical complexity; foundation for the IT hierarchy analogy. |
| Prigogine, I. & Stengers, I. (1984). Order Out of Chaos. Bantam Books. | Primary source for dissipative structures and bifurcation theory; foundation for the revival argument. |
| Ford, N., Parsons, R. & Kua, P. (2017). Building Evolutionary Architectures. O’Reilly. | Practical implementation of emergence principles in architecture design. |
| Evans, E. (2003). Domain-Driven Design. Addison-Wesley. | Foundation for bounded contexts and domain isomorphism as architectural principle. |
| Newman, S. (2021). Building Microservices (2nd ed.). O’Reilly. | Reference framework for microservice principles; contrast with nanoservice granularity. |
| DORA (2025). State of AI-Assisted Software Development. Google Cloud. | Empirical confirmation that AI tools perform better with small, bounded codebases. |

Frequently Asked Questions

Is emergence theory applicable to software, or is this just a metaphor?

Anderson’s hierarchy is explicitly domain-independent. The criteria are structural: systems composed of components where each level exhibits properties not derivable from the level below. IT systems satisfy these criteria directly. The application is not metaphorical but structural—the same formal principles that govern physical systems apply to the technology stack.

Why were nanoservices an anti-pattern if they were always theoretically superior?

Prigogine’s dissipative structures show that order has energy costs. A fine-grained architecture is only stable when the operational energy flow is high enough to maintain it. Before AI-driven tooling, the overhead of orchestration, observability, and CI/CD per service exceeded what teams could bear. The theory was sound; the energy budget was not.

What does “broken symmetry” mean in practical architecture terms?

A monolith forces all domain concepts into one shared structure, breaking the natural symmetry of the domain. Nanoservices restore that symmetry: each business capability gets the structure that reflects its own behavioral rules, data models, and rate of change. The result is minimal architectural friction—the lowest-energy state of the system.

Does this mean observability and chaos engineering are scientifically necessary?

Anderson’s analysis-synthesis asymmetry is unambiguous: you cannot predict emergent properties by assembling components. You can only discover them by studying the system in operation. Observability and chaos engineering are not operational luxuries; they are the methodological consequence of how complex systems behave.

If synthesis is impossible, how do you design a nanoservice architecture?

You don’t design it completely upfront—you grow it. The analysis-synthesis asymmetry means the desired system properties (resilience, domain coherence) cannot be specified into existence. They require architectural conditions that make their emergence possible, and continuous observation of the system in operation to steer toward them. This is what evolutionary architecture operationalizes: not a blueprint, but a fitness function.
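What such a fitness function can look like in code (a hypothetical sketch in the spirit of Ford et al.; the service metadata and thresholds below are invented): rather than prescribing the structure upfront, the check continuously flags services that drift outside the desired envelope.

```python
# Invented metadata that a real pipeline might collect per service.
SERVICES = {
    "invoice-issue":   {"loc": 180,  "sync_deps": ["customer-lookup"]},
    "customer-lookup": {"loc": 240,  "sync_deps": []},
    "report-builder":  {"loc": 2600, "sync_deps": ["invoice-issue",
                                                   "customer-lookup",
                                                   "payment-check"]},
}

def fitness_violations(services, max_loc=500, max_sync_deps=2):
    """Flag services whose observed shape drifts outside the envelope."""
    violations = []
    for name, meta in services.items():
        if meta["loc"] > max_loc:
            violations.append((name, "too large"))
        if len(meta["sync_deps"]) > max_sync_deps:
            violations.append((name, "too coupled"))
    return violations

print(fitness_violations(SERVICES))
```

Run in CI against the live system's metadata, a check like this steers the architecture toward the desired emergent shape without ever specifying that shape as a blueprint.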

How does this document relate to Towards an Autonomous Agent Architecture: Nanoservices?

This document provides the scientific grounding for Towards an Autonomous Agent Architecture: Nanoservices. That article describes what nanoservices are and why AI-first development requires them. This document explains why the transition is structurally inevitable from the perspective of complex systems theory—and why it was theoretically predictable long before AI made it practically achievable.