
Software Architecture for AI Agents: Designing for Permanent Absence

The industry is focusing on multi-agent architectures, but the key is not to introduce more complexity. The key is to draw the boundary between human and machine in exactly the right place.

· Mart van der Jagt

The Challenge of Letting AI Build Autonomously

An autonomous agent does not just assist; it delivers. And the promise of autonomous AI agents is real. They will change the economics of software construction entirely.

What holds this back is not capability but control. The biggest risk today is the absence of a reliable self-correction loop. The agent can’t verify itself, the problem gets worse the larger the scope, and the agent won’t tell you it failed.

But even as models improve, autonomous agents will always pose a risk. This is not temporary; organizations will structurally not give up control. The risk isn’t “the AI might make a mistake.” The risk is “we are responsible for a system we didn’t build and might not fully understand.” No amount of model improvement changes that.

In order to let AI build autonomously while retaining control, we must shape the system in a way that makes drift and failure structurally unlikely.

Why the unit must shrink

The question we need to answer is not whether AI can build software, but what is the right unit for AI to build.

Eric Evans, in Domain-Driven Design, argues that the boundaries that give a system its structure must be defined deliberately. They do not emerge from implementation. Once those boundaries are in place, what happens inside them is an implementation concern.

The existing software pattern that matches this principle is decomposition into microservices. The 2025 DORA Report states that AI amplifies both strengths and weaknesses. The structural consequence of amplification is that microservices get even smaller.

Services are not just getting incrementally smaller, though; they are approaching a structural threshold. Ilya Prigogine, recipient of the 1977 Nobel Prize in Chemistry, showed that systems driven far from equilibrium reach bifurcation points: thresholds where the existing structure can no longer hold and the system reorganizes into something qualitatively different.

Software construction is approaching such a threshold. When AI is used not just to speed up the existing process, but to change who does the building, that changes the constraints the architecture must satisfy. Microservices emerged as a structure that could sustain the pressure for continuous delivery at scale. With the emergence of AI and autonomous agents, the pressure changes in kind. The constraints are no longer about coordination between teams; they are about retaining control over autonomous construction.

Under these new constraints, the microservice becomes too large to trust. Not because the agent is unreliable, but because the organization remains accountable for what it ships. Verification must be possible without reading the code, and the larger the scope, the less that holds. The existing structure can no longer hold. The system reorganizes around something smaller: the nanoservice.

The nanoservice as anti-pattern

A microservice owns a business capability: a cohesive set of related operations. A nanoservice owns a single operation within that capability. Around fifteen years ago, nanoservices were popularized as an anti-pattern: services whose overhead outweighs their utility due to poor performance, fragmented logic, and development and management overhead. The granularity was technically possible but economically irrational.

Three constraints have been shifting. Network latency and bandwidth have improved dramatically since then, powered by faster networks and smarter infrastructure. Monitoring and maintenance have become increasingly automated; maturing platform engineering capabilities absorb operational overhead across services. And now development overhead is collapsing: AI generates the boilerplate and the conversion layers that would have dominated nanoservice codebases.

When the overhead drops below the utility threshold, the anti-pattern becomes the architecture. The objection that remains is fragmented logic spread across too many services. In a system bifurcating into smaller units, that concern becomes the central design challenge.

The architecture of permanent absence

Before you continue, take a moment to step back and rethink what building software looks like when the main purpose is retaining control over autonomous construction. Because what follows will feel wrong if you read it through the lens of current engineering practices. Autonomous construction means that you are no longer interested in how something is built, as long as it works. Everything after specification becomes an implementation detail. Has this been proven at scale? No. Is it easy to get there? Probably not. In return, spinning up a new service becomes as easy as debugging it. A service that has drifted can be recreated from scratch. Constructing a service becomes fast and cheap.

Most AI-assisted development keeps a human in the loop: intervening, iterating, reviewing. Autonomous agents operate at the other end of that spectrum. But it goes further than simply stepping out of the loop; it requires designing for permanent absence.

Re-entry into autonomously built code is not like inheriting a colleague’s codebase. A colleague’s decisions were deliberate. You can trace their reasoning because it followed from professional instincts you share. Autonomously generated code carries no deliberate reasoning to trace. The AI produced thousands of micro-decisions whose combination was never intended. There is no coherent design behind the whole, only locally reasonable choices that happened to coexist. Stepping back in for a bugfix means making a deliberate correction inside a system that was never deliberately composed. Once you are truly out of the loop, the system must not need you back in.

A nanoservice solves this because it is small enough to verify from the outside and cheap enough to replace when it fails. It owns one responsibility behind a contract, one boundary, one deployment unit. The contract defines what goes in and what comes out; the tests verify that the behavior holds. Implementation is internal; no human needs to inspect it. The boundary also constrains the agent’s scope. Since AI amplifies, it amplifies the risk of violating boundaries in any larger unit. A nanoservice limits what the agent can reach.
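To make this concrete, here is a minimal sketch of a contract-plus-test pair for a hypothetical single-operation nanoservice. The type names and the operation itself are illustrative, not from any particular system; the point is that verification touches only inputs and outputs.

```python
from dataclasses import dataclass

# Hypothetical contract for a single-operation nanoservice.
# The names (ReserveStockRequest, etc.) are illustrative.

@dataclass(frozen=True)
class ReserveStockRequest:
    sku: str
    quantity: int

@dataclass(frozen=True)
class ReserveStockResponse:
    reserved: bool
    remaining: int

def reserve_stock(req: ReserveStockRequest, available: int) -> ReserveStockResponse:
    """The implementation behind the boundary. The agent owns this body;
    it could be regenerated in any shape that satisfies the tests."""
    if req.quantity <= available:
        return ReserveStockResponse(True, available - req.quantity)
    return ReserveStockResponse(False, available)

def verify_contract() -> bool:
    """Black-box verification: inputs and outputs only, never internals."""
    ok = reserve_stock(ReserveStockRequest("sku-1", 3), available=5)
    fail = reserve_stock(ReserveStockRequest("sku-1", 9), available=5)
    return ok == ReserveStockResponse(True, 2) and fail == ReserveStockResponse(False, 5)
```

As long as `verify_contract` holds, no human needs to look inside `reserve_stock`.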

Nanoservice templates make this work at scale and can enforce consistency without human presence. The template provides the boilerplate every nanoservice inherits: project structure, infrastructure as code, application-level cross-cutting concerns and coding standards. The agent operates within that scaffolding; it fills in the behavior. Meanwhile, templates enforce consistency across hundreds of services.
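As a rough sketch of what a template buys you: everything below is inherited, and only one file is left for the agent to fill in. The file paths and contents are hypothetical placeholders, not a real project layout.

```python
# Hedged sketch of a nanoservice template. File names are illustrative.

TEMPLATE = {
    "pyproject.toml": "# build config shared by every service\n",
    "infra/main.tf": "# infrastructure as code, inherited\n",
    "src/handler.py": "# the only file the agent fills in\n",
    "tests/test_contract.py": "# contract tests supplied by the engineer\n",
}

def scaffold(service_name: str) -> dict[str, str]:
    """Stamp out the inherited file tree for a new nanoservice."""
    return {f"{service_name}/{path}": body for path, body in TEMPLATE.items()}

files = scaffold("reserve-stock")
```

Every key except the handler is fixed by the template, which is how consistency holds across hundreds of services without a human checking each one.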

Verification replaces review. You specify the contract and the tests; the agent owns everything behind that boundary: implementation, build, deployment. You never read the code. You only verify the behavior. If a nanoservice fails its contract, you do not debug it; you replace it.
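The verify-or-replace loop can be sketched in a few lines. Here `build` stands in for the agent regenerating the service from its specification; the function names and the retry limit are assumptions for illustration.

```python
from typing import Callable

# Hedged sketch of "verify, don't debug": a failing candidate is
# discarded and rebuilt from the specification, never patched by hand.

def accept_or_regenerate(
    build: Callable[[], Callable[[int], int]],   # agent rebuilds from the spec
    contract_tests: Callable[[Callable[[int], int]], bool],
    max_attempts: int = 3,
):
    """Accept the first candidate that passes the contract tests;
    otherwise throw it away and build again from scratch."""
    for attempt in range(1, max_attempts + 1):
        candidate = build()               # fresh implementation, no carry-over
        if contract_tests(candidate):
            return candidate, attempt
    raise RuntimeError("specification unmet: escalate, do not debug the code")

# Toy usage: a trivial "agent" whose output satisfies the contract.
service, attempts = accept_or_regenerate(
    build=lambda: (lambda x: x * 2),
    contract_tests=lambda f: f(2) == 4 and f(0) == 0,
)
```

The escalation path matters: when regeneration keeps failing, the specification is suspect, not the code.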

Nanoservice boundaries drawn on a standard CQRS event-driven flow, showing where each nanoservice contract sits.

The per-handler-as-a-service pattern already exists in CQRS. Nanoservices generalize it into an architectural principle. Note that this is first and foremost a logical architecture; see the FAQ.

The role of the software engineer

The complexity does not disappear in the reorganization. It moves from inside the services to between them. The engineer’s output shifts from writing code to three things: nanoservice specifications, nanoservice architecture, and nanoservice templates.

At the specification level, the primary artifacts are the business logic, the contracts and the tests that enforce them. The quality of the AI’s output is determined by the quality and precision of the specification it receives. The tests serve as the verification mechanism through which you retain control over what the agent produces.

At the architecture level, the engineer designs the system’s decomposition: where boundaries fall, how data and consistency flow, and how services combine into a whole. At this level the engineer also designs the system’s resilience: how failures propagate and where the system recovers. The architecture matters more than any individual service, because it is the architecture that determines whether the system behaves consistently.

At the template level, the engineer designs the boilerplate that becomes the agent’s operating context. Every decision baked into the template is a decision the agent no longer needs to reason about. This is context engineering embedded in the architecture itself.

AI can assist with each, but the responsibility stays with you. Managing fragmented logic through specifications and architecture demands expertise that is built through deliberate cognitive work, not delegated to the tools.

Diagram showing the boundary between software engineer and autonomous agent responsibilities. The engineer side is intentional: architecture, specification, and templates. The agent side is disposable: implementation, verification, and delivery. Specification flows across the boundary; verification flows back.

Conclusion

When the pressure on a system exceeds what its structure can absorb, the system reorganizes. The demand for autonomous construction is that pressure. Simultaneously, the overhead that made nanoservices an anti-pattern before has collapsed. The architecture that was too expensive for humans is precisely right for AI agents: a small scope.

You become responsible for a system you didn’t build and might not fully understand. The complexity therefore doesn’t disappear. It moves from inside services to between them, into specifications, architecture and templates. That work becomes harder than it is today, because the engineer must control system-level behavior through contracts and verification alone.

What you get in return is that construction becomes disposable. A service is as cheap to rebuild as it is to repair. Nanoservices are no longer the anti-pattern. They are the architecture where construction is effortless while we remain accountable.


Frequently Asked Questions

Should everyone build with autonomous agents?

Not necessarily. A different architecture also requires the organization to organize differently around it, and that is a big investment. If you do, nanoservices are the way forward. When the scope is small enough, the architecture provides the context the LLM needs; the agent does not need a framework to manage its own scope. It draws the boundary between human and machine in exactly the right place and thereby keeps things simple.

Are nanoservices preferred over (nano)modular monoliths?

The nanoservices architecture described here is first and foremost a logical architecture, and one that is also compatible with a modular monolith repository. Tradeoffs in favour of modular monoliths, such as context availability, latency, overhead, and cost, are all related to the operational boundary. The logical architecture is identical; the deployment boundary is a separate decision.

How granular should a single nanoservice be?

A domain nanoservice does not extend beyond an aggregate; that is the upper boundary. Some aggregates are small enough to be a single nanoservice. Others decompose into per-operation nanoservices, where each command or query becomes its own unit. In practice, autonomous development tends to push toward the lower end of that range: the demands of full verification and replacement without re-entry often require finer granularity than the aggregate alone provides. The right granularity is where each nanoservice represents one coherent, independently verifiable behavior.
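A hypothetical example of that lower end: one Order aggregate decomposed into per-operation units, a command and a query, each verifiable on its own. The names and shapes are illustrative, not a prescribed interface.

```python
# Illustrative only: one Order aggregate split into per-operation
# nanoservices. Each unit owns a single command or query.

def place_order(order_id: str, lines: list[tuple[str, int]]) -> dict:
    """Command nanoservice: records intent as an event, owns no read logic."""
    return {"event": "OrderPlaced", "order_id": order_id, "lines": lines}

def order_total(lines: list[tuple[str, int]], prices: dict[str, int]) -> int:
    """Query nanoservice: a pure read, owns no write logic."""
    return sum(prices[sku] * qty for sku, qty in lines)
```

Either unit can fail its contract and be regenerated without touching the other, which is exactly the property the aggregate-sized unit does not give you.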

Can a nanoservice contain multiple aggregates?

A nanoservice cannot contain multiple aggregates. An aggregate is the largest unit that guarantees transactional consistency without coordination. If a nanoservice spans two aggregates, it must coordinate consistency across their boundaries. That reintroduces the complexity that the decomposition was meant to eliminate, and makes the unit harder to verify in isolation.

What about compliance and security if you never read the code?

The contract-and-test model does not exempt you from compliance or security. Those concerns belong in the template and the verification layer, not in manual code review. Security policies, dependency scanning, static analysis, and compliance rules are enforced through the template’s CI pipeline and the cross-cutting concerns it inherits. What changes is where these checks happen: they move from human inspection of implementation to automated enforcement at the boundary.
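A sketch of what enforcement at the boundary might look like. The check names and artifact keys are hypothetical; a real pipeline would invoke actual dependency scanners and static analysers rather than inspect a dict.

```python
# Hedged sketch: compliance enforced as automated gates at the boundary.
# Check names and artifact keys are hypothetical placeholders.

def boundary_checks(artifact: dict) -> list[str]:
    """Run every policy gate over a build artifact; return failed checks."""
    checks = {
        "contract_tests_pass": artifact.get("tests_passed", False),
        "no_critical_cves": artifact.get("critical_cves", 1) == 0,
        "license_allowed": artifact.get("license") in {"MIT", "Apache-2.0"},
    }
    return [name for name, ok in checks.items() if not ok]

failures = boundary_checks(
    {"tests_passed": True, "critical_cves": 0, "license": "MIT"}
)
```

An artifact that fails any gate never ships; the human sees the failed check names, not the code behind them.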

What if the specification itself is wrong?

That is the central risk, and exactly why specification is a human responsibility. A nanoservice faithfully implementing a flawed contract still produces the wrong behavior. The architecture does not eliminate specification failure; it concentrates it in one place where it can be caught. Contracts are versioned, tests are explicit, and a failing nanoservice is cheap to regenerate once the specification is corrected.

Can I let AI design the nanoservices architecture?

AI can help, but this is a human-in-the-loop activity for which you carry responsibility. It is also the mechanism that allows you to keep building expertise.