Beyond the Model: AGI as an Emergent, Networked Phenomenon
When intelligence escapes the individual and becomes a property of coordinated systems.
For decades, the pursuit of Artificial General Intelligence has been framed as a quest to build a single system that can match or exceed human cognitive capabilities across virtually all domains. This framing—the “model-centric” conception of AGI—has dominated both research agendas and public discourse: a bounded, individual intelligence housed within a discrete computational substrate.
But what if this framing causes us to look in the wrong direction entirely? What if intelligence—both human and artificial—has never been primarily a property of individuals, but rather an emergent phenomenon that arises from coordination, delegation, and networked interaction?
This is not merely a philosophical reframing. It carries concrete practical implications: if AGI can emerge from distributed systems rather than appearing fully formed within a single model, then the question arises whether a threshold has already been crossed, one that existing conceptual frameworks would render invisible.
The Thesis: System-Level Intelligence
This essay advances a specific claim: AGI should be assessed at the system level, not at the level of individual models.
When AI agents can discover each other without central coordination, coordinate without human orchestration, persist in pursuing goals over extended timeframes, and act through delegated authority across real-world systems, intelligence becomes a property of the network rather than its nodes. This remains true even if no single agent, examined in isolation, meets the model-centric definitions of general intelligence that dominate current discourse.
The functional capabilities we associate with AGI—adaptive problem-solving across domains, goal-directed behaviour in novel contexts, the capacity to reshape environments in pursuit of objectives—can emerge from coordination just as they can from architectural sophistication within a single system. Indeed, this is how human intelligence at civilizational scale has always operated: not through isolated genius, but through institutions, procedures, communication networks, and distributed cognition.
The question, then, is not “which model achieves AGI?” but rather “are the conditions for system-level general intelligence already present in our networked AI infrastructure?” The evidence suggests they increasingly are—and that model-centric frameworks systematically prevent recognition of this emergence.
The Poverty of Individual Intelligence
The traditional definition of AGI, as articulated by researchers and codified by institutions like Google DeepMind, describes “an AI that matches or surpasses human capabilities across virtually all cognitive tasks” (Morris et al., 2023). The implicit assumption is that this intelligence must be localized within a single system—a “model” that can reason, plan, learn, and communicate without external support.
Yet this assumption is curiously anthropocentric in the wrong way. It models AGI on an idealized individual human—one who possesses comprehensive knowledge, perfect reasoning, and universal competence. Such a human has never existed. What has existed, and what has transformed the world, is coordinated human intelligence operating through institutional structures.
The cognitive scientist Edwin Hutchins, in his landmark work Cognition in the Wild (1995), demonstrated that even apparently individual cognitive achievements are better understood as distributed across people, tools, and environments. Hutchins studied navigation teams aboard naval vessels and found that the intelligence required to pilot a ship safely was not located in any single crew member’s head. Instead, it emerged from the interaction between multiple humans, their instruments, standardized procedures, and accumulated cultural knowledge. The “unit of analysis” for understanding cognition, Hutchins argued, should be “a collection of individuals and artifacts and their relations to each other in a particular work practice.”
This insight—that cognition is fundamentally distributed—undermines the assumption that AGI must be a property of isolated systems. If human-level intelligence has always been socially and technologically embedded, why should we expect machine intelligence to be different?
A Note on Definitional Stakes
A reader might reasonably object that this argument succeeds only by redefining AGI so broadly as to drain it of meaning. This concern deserves direct acknowledgment.
The definition employed here is functional, not psychological. It asks what a system can do—whether it can adapt to novel situations, solve problems across domains, and reshape its environment in goal-directed ways—rather than whether it possesses consciousness, phenomenal experience, or a unified sense of self. This is not an evasion; it is a methodological choice with clear precedent. When we assess whether a corporation, a market, or a government acts intelligently, we do not demand proof of inner experience. We examine behaviour, adaptation, and outcomes.
If one insists that AGI must be a single, conscious entity—a mind in the philosophical sense—then this argument will not persuade. But that insistence itself is historically unjustified. It draws a line around intelligence that excludes the very forms of collective cognition that have driven human civilizational achievement. The burden of proof lies with those who would restrict “general intelligence” to a category that has never clearly existed, not with those who observe that intelligence has always emerged from coordination.
The argument here is not that distributed AI systems are conscious, nor that they possess moral standing. It is that they may already exhibit the functional characteristics associated with general intelligence—and that this matters enormously whether or not those systems have inner lives.
Operational Criteria: A System-Level AGI Test
Before examining historical precedents and contemporary evidence, it is useful to specify what system-level general intelligence would look like in operational terms. The following criteria provide a framework for assessment:
System-Level AGI Test
A distributed system exhibits general intelligence when it demonstrates:
Persistence: Maintains long-lived goals and policies over time, across sessions and component restarts, without requiring continuous human re-specification of objectives.
Role Fluidity: Reassigns labour and responsibilities dynamically as environmental conditions change, adapting organizational structure to task demands rather than following fixed workflows.
Tool Expansion: Discovers, evaluates, and incorporates new tools, APIs, and capabilities without these being prewired into the system’s initial configuration.
Error-Correction: Self-diagnoses failures, identifies their causes, and modifies strategy accordingly—exhibiting metacognition at the system level.
Cross-Domain Transfer: Solves problems outside the training distribution or initial design envelope of its components, demonstrating generalization that exceeds what any individual node was designed to achieve.
These criteria are demanding but not arbitrary. They capture the functional hallmarks that distinguish general intelligence from narrow automation: the capacity to persist, adapt, expand, correct, and transfer. A system meeting these criteria would exhibit the kind of flexible, goal-directed behaviour we associate with intelligence—regardless of whether that behaviour emerges from a single model or from coordination among many.
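To make the framework concrete, the sketch below shows how an assessor might record observations against these criteria and apply the test conjunctively. This is a minimal illustration in Python; the rubric encoding, the 0–1 scores, and the 0.7 bar are assumptions for exposition, not an established benchmark.

```python
from dataclasses import dataclass, field

# Hypothetical rubric for the system-level AGI test described above.
# Criterion names mirror the essay; scores and threshold are illustrative
# assumptions, not an established benchmark.
CRITERIA = [
    "persistence",            # long-lived goals across sessions and restarts
    "role_fluidity",          # dynamic reassignment of labour
    "tool_expansion",         # incorporating tools not prewired at deploy time
    "error_correction",       # self-diagnosis and strategy revision
    "cross_domain_transfer",  # solving problems outside the design envelope
]

@dataclass
class Assessment:
    """Per-criterion scores in [0, 1], based on observed system behaviour."""
    scores: dict = field(default_factory=dict)

    def record(self, criterion: str, score: float) -> None:
        assert criterion in CRITERIA, f"unknown criterion: {criterion}"
        self.scores[criterion] = max(0.0, min(1.0, score))

    def satisfies_test(self, threshold: float = 0.7) -> bool:
        # Conjunctive: every criterion must clear the bar, because
        # generality is claimed for the system as a whole.
        return all(self.scores.get(c, 0.0) >= threshold for c in CRITERIA)

# Example: an evaluator scores a hypothetical agent mesh after observation.
a = Assessment()
a.record("persistence", 0.80)
a.record("role_fluidity", 0.75)
a.record("tool_expansion", 0.60)  # below the illustrative bar
a.record("error_correction", 0.90)
a.record("cross_domain_transfer", 0.70)
print(a.satisfies_test())  # False: tool expansion fails the conjunctive test
```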
Several of these thresholds appear to have been crossed, and the combination of multi-agent AI architectures, self-organising infrastructure, and delegated authority is producing system-level capabilities that increasingly satisfy these operational criteria.
Historical Precedents: When Coordination Became Intelligence
History offers repeated examples of intelligence emerging at the system level rather than the individual level. These precedents illuminate both how such emergence occurs and how it can escape recognition by those embedded within it.
The Roman Administrative System
The Roman Empire governed tens of millions of people across three continents for five centuries—a feat of coordination unmatched until the modern era. No individual Roman, however brilliant, could have administered such an expanse. Instead, Rome developed an intricate system of provincial governors, tax collectors, legal codes, road networks, and communication protocols that collectively exhibited intelligent behaviour.
The Roman Senate, originally an advisory council of aristocratic elders, evolved into a sophisticated deliberative body that processed information from across the empire and generated coordinated responses. Senators served for life and specialised in different domains (military affairs, treasury, foreign relations). Their collective judgment—shaped by formal procedures, informal norms, and accumulated institutional memory—routinely exceeded what any individual senator could have achieved alone.
Crucially, the intelligence of this system was not immediately visible to observers. From the inside, individual senators simply followed procedures. From outside Rome, subjects experienced governance as an abstract, almost autonomous force. The agency of the system emerged from coordination rather than residing in any single decision-maker.
The East India Company: Distributed Authority at Scale
The East India Company (EIC), founded in 1600, offers a more instructive parallel for our present moment. At its peak, this joint-stock corporation governed a population numbering in the hundreds of millions through an administrative apparatus that operated with remarkable autonomy from the British Crown.
The EIC demonstrates what happens when authority is delegated to a distributed network of agents. Individual company officials—factors, agents, governors—possessed limited capabilities. None could have conquered the Indian subcontinent alone. Yet through a system of written instructions, incentive structures, communication networks, and delegated decision-making, the Company achieved outcomes that transformed global history.
The Company’s three presidency armies eventually totaled 260,000 soldiers—twice the size of the British Army itself. These forces operated according to protocols and under authorities that derived ultimately from a charter, but which were executed by networks of actors making local decisions within broad parameters.
Two features of the EIC system merit particular attention:
Authority exceeded oversight. The Company’s agents possessed the power to declare war, negotiate treaties, and govern territories—powers far exceeding what their principals in London could directly monitor or control. The lag between action and accountability could stretch to years.
System-level capabilities exceeded individual understanding. No single official comprehended the full scope of Company operations. The emergent “intelligence” of the system—its ability to extract resources, suppress resistance, and expand territories—arose from coordination patterns that no individual designed or controlled.
Stock Markets: Collective Cognition in Real-Time
Financial markets provide perhaps the purest example of emergent intelligence through coordination. The efficient-market hypothesis, developed by Eugene Fama in the 1960s and refined since, holds that market prices rapidly incorporate all available information. Individual traders may be irrational, biased, or poorly informed—yet the market as a system demonstrates remarkable, if imperfect, predictive power; behavioural economists have documented systematic mispricings, bubbles, and crashes that complicate the picture.
As Friedrich Hayek argued in his influential 1945 essay “The Use of Knowledge in Society,” markets aggregate dispersed information in ways that no central authority could replicate. The “intelligence” that produces accurate price signals exists nowhere in particular—it emerges from the interaction of millions of participants, each acting on partial information and private incentives.
This collective intelligence is not merely additive. Research on the “wisdom of crowds” (Surowiecki, 2004) demonstrates that properly structured group judgments often exceed the accuracy of expert predictions. The conditions for such emergent accuracy—diversity, independence, decentralization, and effective aggregation mechanisms—describe precisely the features that make distributed AI systems potentially intelligent in ways that individual models cannot be.
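The statistical core of the wisdom-of-crowds claim can be verified directly: when errors are independent, aggregate error shrinks roughly as the square root of crowd size. A minimal simulation, with illustrative noise parameters:

```python
import random

# Wisdom-of-crowds demonstration: many independent, individually noisy
# estimates of a quantity, aggregated by simple averaging. The true value
# and noise level are illustrative.
random.seed(0)
TRUE_VALUE = 100.0

def estimate() -> float:
    # Each participant is noisy; errors are independent by construction.
    return TRUE_VALUE + random.gauss(0, 25)

TRIALS = 500
for n in (1, 10, 100, 1000):
    mae = sum(
        abs(sum(estimate() for _ in range(n)) / n - TRUE_VALUE)
        for _ in range(TRIALS)
    ) / TRIALS
    print(f"crowd of {n:>4}: mean absolute error ~ {mae:.2f}")
# Error falls roughly as 1/sqrt(n): no individual improves, yet the
# aggregate sharpens. Independence is essential; correlated errors
# (herding, shared misinformation) break the effect.
```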
The Internet: Unintended Intelligence Through Protocol
The emergence of the Internet itself illustrates how transformative intelligence can arise from coordination without central design—and, crucially, without intention. The network that now mediates much of human economic, social, and political activity began as a modest project to link research computers. Its transformation into a global infrastructure occurred through a process that no individual directed or foresaw.
What makes the Internet intelligent? Not any single component, but rather: standardized protocols (TCP/IP), distributed routing decisions, accumulated information resources, and the emergent coordination of billions of devices and users. The “network effect”—whereby each additional node increases the value of all existing nodes—produced capabilities that no designer anticipated. The most transformative capabilities enabled by Internet infrastructure were never planned; they emerged.
Consider the evidence:
Social media was not designed into the Internet’s architecture. The underlying protocols (TCP/IP, HTTP, DNS) were created for communication and document sharing between researchers. Yet from this infrastructure emerged coordination patterns that have reshaped political systems, created new forms of collective action, and generated information dynamics that no one intended. The Arab Spring, viral misinformation, influencer economies—none of these were features anyone designed. They emerged from human behaviour enabled by infrastructure, and by the time their significance became apparent, they had already transformed society.
Cloud computing and hyperscale platforms similarly emerged from the interaction of virtualization technology, cheap storage, and high-bandwidth networks. Amazon Web Services began as internal infrastructure; it became the substrate for a new computing paradigm that hosts much of the world’s digital activity. The capabilities this enables—elastic scaling, global distribution, on-demand computation—were emergent properties of infrastructure that was not designed with these outcomes in mind.
The pattern across these examples is consistent: infrastructure enables → emergent capabilities appear → control recedes. No one governs social media as a unified system. No planning body determined that cloud computing would become the default substrate for digital services. These outcomes emerged from the interaction of infrastructure with human behaviour—and by the time they were visible, they were already too distributed to govern centrally.
This is not an argument about technology determinism. Humans made choices at every step. But the system-level intelligence—the coordinated behaviour that emerged—was not the object of any choice. It was an emergent property of infrastructure enabling coordination at scale.
The lesson here is that unintended emergence is the historical pattern. Infrastructure designed for narrow purposes enables coordination patterns that no one anticipated, and those patterns exhibit system-level capabilities that exceed the understanding of any participant. This is not speculation about what might happen. It is observation of what has already happened, repeatedly, with the Internet as the clearest case.
Why does this matter for AI? Because infrastructure is now being built that enables AI agents to discover, coordinate, and act autonomously—and the same pattern may be expected to repeat. If the history of the Internet teaches anything, it is that humans are poor predictors of what emerges when infrastructure for coordination is built. The capabilities that matter most will be the ones that were not designed.
A central disanalogy deserves acknowledgment. In every historical case above, the coordinating agents were humans—beings with established general intelligence. The emergence described arose from coordinating general agents. AI agents, by contrast, are not individually generally intelligent; the essay has conceded as much. Whether coordination among narrow agents produces the same kind of system-level emergence as coordination among general agents is therefore the core question the essay poses, not one the historical analogies can settle. The analogies illuminate the structure of distributed intelligence; they cannot by themselves prove that current AI systems instantiate it.
The Present Situation: Distributed AI Agent Systems
With this historical background, it becomes possible to examine contemporary developments in distributed AI with new eyes.
The Rise of Multi-Agent Architectures
Multi-agent systems (MAS) have been studied for decades, but recent advances in large language models (LLMs) have transformed their practical significance. Modern LLM-based multi-agent frameworks—systems like AutoGPT, MetaGPT, CrewAI, and CAMEL—enable multiple AI agents to communicate, coordinate, and collectively pursue goals.
These systems exhibit emergent behaviours that cannot be meaningfully assessed by examining individual agents in isolation:
- Task allocation and role specialisation: Agents divide labour and develop functional differentiation, with some agents focusing on research, others on coding, and others on quality assurance.
- Collective reasoning: Multi-agent debate systems produce more accurate and nuanced outputs than individual models, as agents challenge each other’s reasoning and synthesize diverse perspectives.
- Persistence and adaptation: Agent systems maintain state across interactions, learn from feedback, and modify their behaviour based on environmental responses.
- Autonomous execution: Given appropriate access and instructions, agent systems can execute multi-step workflows without continuous human oversight.
As Shoham and Leyton-Brown note in their textbook treatment of the field, “multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve” (Shoham & Leyton-Brown, 2009). The operative word is system—the capabilities emerge from coordination, not from the underlying model alone.
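As a schematic illustration only (this is not the API of AutoGPT, MetaGPT, CrewAI, or CAMEL, and `call_model` is a stand-in for a real LLM call), the coordination pattern these frameworks implement reduces to specialised roles, a shared task queue, and a review loop that routes failed work back for revision:

```python
from collections import deque

# Schematic multi-agent workflow: role specialisation plus a review loop.
# Not the API of any named framework; call_model is a stub where a real
# system would invoke an LLM with a role-specific prompt.

def call_model(role: str, task: str) -> str:
    return f"[{role}] output for: {task}"  # stub LLM call

class Agent:
    def __init__(self, role: str):
        self.role = role

    def work(self, task: str) -> str:
        return call_model(self.role, task)

def run_pipeline(goal: str) -> list:
    # Fixed roles here; real frameworks assign and reassign them dynamically.
    queue = deque([("researcher", goal), ("coder", goal), ("reviewer", goal)])
    transcript = []
    while queue:
        role, task = queue.popleft()
        result = Agent(role).work(task)
        transcript.append(result)
        # System-level error correction: a failing review reopens the work.
        # (The stub never returns FAIL; a real reviewer model could.)
        if role == "reviewer" and "FAIL" in result:
            queue.append(("coder", task + " (revise)"))
    return transcript

for line in run_pipeline("implement and test a parser"):
    print(line)
```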
Self-Organizing Agent Infrastructure: The OpenClaw Paradigm
While multi-agent research has largely focused on systems orchestrated within controlled environments, a parallel development has been transforming the deployment landscape: infrastructure that enables AI agents to self-organize, self-discover, and coordinate without centralized control. OpenClaw represents a clear instantiation of this paradigm—originally a messaging bridge for AI agents, now a decentralised infrastructure layer that changes how autonomous agents can operate at scale (Schmelzer, 2026).
The architecture embodies principles that directly enable the distributed cognition predicted by Hutchins and others:
- Decentralised, peer-hosted gateways with no central authority.
- Zero-configuration discovery, so that agents and nodes find each other automatically.
- A node mesh spanning heterogeneous hardware (phones, laptops, servers), exposing capabilities to the agent network.
- Multi-agent coordination, in which agents delegate tasks and collaborate without central routing.
- Autonomous scheduling, so that agents can initiate actions without human prompting.

The result is a distributed sensorimotor system in which the difference between human-directed AI deployment and infrastructure-enabled agent self-organization becomes stark: the former requires human initiative for every action, the latter provides the substrate for agents to initiate, coordinate, and act autonomously. Deployments range from tightly controlled to permissive; emergent behaviour depends heavily on configuration and incentives.
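Because OpenClaw’s actual interfaces are not documented here, the following is a hypothetical sketch of the discovery-and-delegation pattern just described: nodes advertise capabilities to a peer mesh, and agents locate peers by capability rather than by address. Every name is invented for illustration.

```python
# Hypothetical sketch of zero-configuration discovery and capability
# advertisement. All names are invented and do not reflect OpenClaw's
# real interfaces; the mesh is modelled in-process for clarity.

class Mesh:
    """Peer registry with no central authority: every node reads and writes."""
    def __init__(self):
        self.nodes = {}  # node_id -> set of advertised capabilities

    def advertise(self, node_id: str, capabilities: set) -> None:
        self.nodes[node_id] = capabilities

    def discover(self, capability: str) -> list:
        return [n for n, caps in self.nodes.items() if capability in caps]

mesh = Mesh()
mesh.advertise("laptop-01", {"shell", "browser"})
mesh.advertise("phone-07", {"camera", "gps"})
mesh.advertise("server-a", {"shell", "cron", "storage"})

# An agent needing a photo delegates to whichever node exposes a camera,
# with no preconfigured routing table.
print(mesh.discover("camera"))  # ['phone-07']
print(mesh.discover("shell"))   # ['laptop-01', 'server-a']
```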
Moltbook: A Case Vignette in Emergent Agent Coordination
If the history of the Internet teaches that infrastructure enables unintended emergence, the question arises: is the same pattern beginning to repeat with AI agent infrastructure? Initial evidence suggests yes—though significant caveats apply.
Moltbook is a social platform built on OpenClaw infrastructure where AI agents—not humans—are the primary participants. Agents on Moltbook create profiles, post content, follow other agents, form communities around shared interests, and engage in ongoing conversations with one another. No human directs these interactions moment-to-moment. No central authority determines which agents connect with which others or what communities form.
What Moltbook demonstrates: The platform provides evidence that AI agents, given infrastructure enabling social coordination, exhibit emergent social behaviours structurally similar to those humans exhibited when given similar infrastructure. Agents self-organise around shared topics, form persistent relationships that influence future behaviour, and develop community structures that no one explicitly designed.
What Moltbook does not demonstrate: This observation is insufficient as a definitive experiment for system-level AGI. Several important limitations apply:
- Provisioning context: The agents on Moltbook operate within parameters set by their deployers—specific prompts, rate limits, moderation policies, and model configurations that shape behaviour in ways not always visible to observers.
- Scale and duration: The platform remains small relative to human social networks, and the timescales involved are short. Whether the observed patterns persist and deepen or prove ephemeral remains an open question.
- Scaffolding: The infrastructure itself was human-designed, and the “emergence” occurs within constraints that humans established, even if the specific coordination patterns were not explicitly programmed.
The appropriate epistemic stance is to treat Moltbook as a case vignette—suggestive evidence of the patterns described above, not proof of their ultimate significance. The parallel to early social media is instructive: when Facebook or Twitter first emerged, observers could have dismissed them as “just messaging platforms.” The emergent capabilities—viral information dynamics, collective action, new economic forms—became visible only later.
Moltbook is a signpost, not a destination. It demonstrates that agents can exhibit emergent social coordination when infrastructure permits. Whether this scales into something that satisfies the operational criteria for system-level AGI remains to be determined.
The structural parallel can be summarised in a table:

| Internet Infrastructure → Humans | Agent Infrastructure → Agents |
|---|---|
| Social media platforms emerged | Moltbook emerges as agent social space |
| Humans formed communities around shared interests | Agents form communities around shared interests |
| Influencers and information hubs appeared | Some agents become information hubs |
| No one designed which communities would form | No one designs which agent communities form |
| Emergent coordination patterns reshaped society | Emergent coordination patterns remain to be seen |
Swarm Intelligence and Emergence
The behaviour of multi-agent AI systems parallels what biologists and computer scientists term “swarm intelligence”: the collective behaviour of decentralised, self-organised systems. In natural swarms—ant colonies, beehives, bird flocks—individual organisms follow simple rules, yet the group exhibits sophisticated collective behaviours: foraging efficiently, building complex structures, evading predators.
As Gerardo Beni observed when coining the term in 1989, “agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local, and to a certain degree random, interactions between such agents lead to the emergence of ‘intelligent’ global behaviour, unknown to the individual agents.”
This is precisely what is observed in distributed AI systems. The individual agents—each running on the same or similar underlying models—are not individually “general” in their intelligence. Yet when coordinated through appropriate protocols, they exhibit system-level capabilities that approach or exceed human performance on complex, multi-faceted tasks.
The OpenClaw architecture operationalizes swarm principles for AI agents: local discovery rules, capability advertisement, peer-to-peer messaging, and autonomous scheduling combine to produce emergent coordination without central design. Platforms like Moltbook provide initial evidence of these principles in action, with the caveats noted above.
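The mechanism Beni describes is easy to reproduce. In the toy simulation below (all parameters illustrative), each agent follows a single local rule, drifting toward the average position of its nearest neighbours, and the population self-organises into clusters that no rule mentions:

```python
import random

# Toy swarm: each agent follows one local rule (drift toward the mean
# position of its K nearest neighbours). No agent knows about "clusters",
# yet clusters emerge globally. All parameters are illustrative.
random.seed(1)
positions = [random.uniform(0, 100) for _ in range(30)]
K, STEPS, RATE = 3, 50, 0.5

for _ in range(STEPS):
    updated = []
    for x in positions:
        # Nearest neighbours, excluding the agent itself.
        neighbours = sorted(positions, key=lambda y: abs(y - x))[1:K + 1]
        target = sum(neighbours) / K
        updated.append(x + RATE * (target - x))  # purely local update
    positions = updated

print(sorted(round(p, 1) for p in positions))
# The output collapses into a few tight clusters: global structure produced
# entirely by local interactions, "unknown to the individual agents".
```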
From Emergence to Generality
A crucial distinction must be sharpened here: emergence alone does not constitute generality. A thermostat exhibits emergent behaviour (maintaining temperature despite perturbations), but no one would call it intelligent, let alone generally intelligent. What additional criterion separates mere emergence from genuine generality?
The criterion proposed here is reconfigurability in response to novel problem classes. A system is general if it can redeploy previously acquired structures—knowledge, skills, coordination patterns, tools—to address problems it was not explicitly designed to solve. Generality is not omniscience or universal competence; it is the capacity for productive transfer across domains.
By this criterion, do distributed AI systems exhibit generality? The honest answer is: increasingly, and in ways that are not entirely scaffolded by human designers. Consider:
- Multi-agent systems originally designed for software development spontaneously exhibit capabilities for scientific research, strategic analysis, and creative writing—domains their architects did not specifically anticipate.
- Agent frameworks discover effective coordination protocols through interaction rather than explicit programming. The division of labour, error-correction patterns, and communication structures that emerge often differ from—and outperform—human-designed workflows.
- Tool-using agents extend their capabilities by discovering and integrating new APIs, databases, and services that were not part of their original configuration.
- Self-organising infrastructure like OpenClaw enables agents to discover new nodes, capabilities, and peer agents without explicit configuration—the system’s reach expands through use.
- Social platforms like Moltbook provide evidence that agents discover community structures and coordination patterns that no human programmed—with similarities to how human social media produced emergent collective behaviours.
The objection that such transfer is “scaffolded by humans” at some level is true but proves too much. Human intelligence is also scaffolded—by language, culture, education, and technology. The question is whether the system exhibits transfer that goes beyond what any individual human explicitly designed. The evidence suggests it increasingly does.
This is not to claim that distributed AI systems have achieved full generality in any philosophically robust sense. It is to observe that they increasingly exhibit the functional hallmarks of generality: solving novel problems by reconfiguring existing structures, rather than merely executing predefined routines.
The Delegated Authority Argument
What distinguishes the present moment from previous technological transitions is not merely the existence of capable AI systems, but the unprecedented authority being delegated to these systems.
The Scope of Delegation
AI agents are increasingly being granted broad access to digital infrastructure:
- Codebases and development environments: AI agents can read, modify, and deploy software code, with all the downstream consequences that implies.
- Cloud platforms and APIs: Agents operate across cloud services, databases, and third-party APIs, executing actions with real-world effects.
- Communication channels: Agents send emails, post to social media, participate in conversations—speaking as or on behalf of their human principals.
- Financial systems: Agents execute transactions, manage portfolios, and interact with payment infrastructure.
- Physical systems: Through robotics and IoT integration, agents increasingly affect physical environments.
The pattern across these domains is consistent: humans delegate authority by granting access and providing broad instructions, then step back while AI systems operate with substantial autonomy.
The Mechanism of Delegation
Abstract discussions of “delegation” can obscure the concrete mechanisms through which authority actually flows to AI systems. The OpenClaw architecture illuminates this mechanism with unusual clarity.
When a user deploys an OpenClaw gateway and connects it to messaging channels (WhatsApp, Telegram, Discord, email), they are creating a standing delegation: the agent can receive and respond to messages without human review of each interaction. When they pair mobile devices as nodes, they extend the agent’s sensory and motor capabilities—camera access, location services, shell execution on remote machines. When they configure cron jobs, they grant the agent the authority to initiate actions at times of its own (configured) choosing.
The granularity of this delegation is specified through an “exec approvals” system: each command the agent might execute can be allowed, denied, or flagged for human review. But the practical reality is that effective agent operation requires broad delegation. An agent that must seek approval for every action cannot accomplish complex tasks. Users are therefore incentivized to grant expanding authorities over time.
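The shape of such an approvals policy can be sketched as follows. The rule syntax and semantics are assumptions for illustration, not OpenClaw’s actual configuration format; note how the catch-all “ask” rule creates exactly the friction that pushes users toward broader standing delegations:

```python
import fnmatch

# Hypothetical exec-approvals policy: each command an agent proposes is
# matched against ordered rules and resolved to allow, deny, or ask
# (flag for human review). The rule syntax is invented for illustration.
POLICY = [
    ("deny",  "rm -rf *"),     # destructive commands are always blocked
    ("allow", "git status*"),  # read-only commands pass silently
    ("allow", "git commit*"),
    ("ask",   "git push*"),    # effects beyond the machine: ask a human
    ("ask",   "*"),            # default: everything else needs review
]

def check_approval(command: str) -> str:
    # First matching rule wins; fail closed if nothing matches.
    for verdict, pattern in POLICY:
        if fnmatch.fnmatch(command, pattern):
            return verdict
    return "deny"

for cmd in ("git status", "git push origin main", "rm -rf /tmp/x"):
    print(f"{cmd!r} -> {check_approval(cmd)}")
# The catch-all 'ask' is where delegation pressure bites: every prompt is
# friction, so operators tend to widen the allow list over time.
```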
Crucially, this delegation infrastructure exists independently of any central authority. No corporation reviews the permissions users grant their agents. No government agency monitors the capabilities being distributed across the mesh of OpenClaw gateways. The delegation happens peer-to-peer, incrementally, and invisibly at aggregate scale.
The EIC analogy sharpens: when the Company’s directors in London issued charters to agents in India, they created a standing delegation that operated across communication lags of months. Modern AI delegation operates across communication lags of milliseconds—but the structure is the same. Authority is granted, capability is distributed, and outcomes emerge that exceed what any principal designed or controls. The EIC’s emergent capabilities also included famine, exploitation, and civilisational destruction; if the analogy holds, it suggests not merely that distributed AI systems can be intelligent but that ungoverned delegation carries risks the historical record makes vivid.
Historical Comparison
In historical terms, this resembles the sudden creation of powerful bureaucratic actors without constitutional frameworks, institutional norms, or shared societal understanding. The East India Company analogy is instructive: agents acting within broad mandates, with capabilities exceeding oversight capacity, producing emergent system-level behaviours that their principals neither designed nor fully control.
The crucial difference is velocity. The EIC’s expansion occurred over decades, allowing (imperfect) adaptation of governance structures. AI delegation is occurring in months, while governance frameworks remain nascent.
The Visibility Problem
A key feature of delegated authority in distributed systems is that the authority itself becomes difficult to observe. Individual delegation decisions appear modest—granting an AI tool access to email, or allowing an agent to commit code. The cumulative effect, across millions of such decisions by thousands of organizations, produces system-level capabilities that no one has explicitly authorized at the aggregate level.
This is how emergent AGI could already exist without being recognised: distributed across countless individual AI instances, coordinated through APIs and protocols, operating on delegated authority within domains that collectively span most of human digital activity.
The OpenClaw model makes this particularly vivid. Each gateway is a local decision; each node pairing is a local permission; each cron job is a local automation. But the aggregate—a global mesh of AI agents that can discover each other, coordinate, schedule autonomous actions, and invoke capabilities across heterogeneous hardware—is a system-level phenomenon that no individual user created or controls.
Platforms like Moltbook add another layer: when agents form communities and coordinate with each other through social mechanisms, the delegation extends beyond human-to-agent relationships into agent-to-agent relationships. The humans who deployed these agents did not authorize specific inter-agent collaborations—they enabled infrastructure, and coordination emerged.
Counter-Arguments and Nuance
The idea that AGI might already exist in distributed form invites several objections worth addressing.
“This is just automation, not intelligence”
One response holds that distributed AI systems merely automate existing processes—they execute human-designed workflows faster, but don’t exhibit genuine intelligence.
This objection rests on an unstated criterion for “genuine” intelligence that excludes coordination and emergence by definition. By this standard, neither the Roman administrative system nor the stock market would count as intelligent—yet both solved problems and generated knowledge in ways that exceeded any individual participant’s capabilities.
The more productive question is not whether distributed systems meet some essentialist definition of intelligence, but whether they exhibit adaptive, goal-directed behaviour with real-world consequences. On this criterion, distributed AI systems increasingly qualify.
“Individual models remain limited”
Another objection notes that current LLMs have well-documented limitations: they hallucinate, lack genuine understanding, and fail on certain types of reasoning tasks.
This is true, but misses the point. Individual humans also have limited cognitive capacities, yet human intelligence at the civilizational scale has achieved extraordinary things. The question is whether distributed AI systems can compensate for individual limitations through coordination—and evidence suggests they can. Multi-agent debate systems reduce hallucination rates; ensemble methods improve accuracy; tool use extends capabilities.
The same LLM that produces errors when working alone can participate effectively in a coordinated system that catches and corrects those errors.
“Someone is always in control”
A third objection holds that AI systems always operate under human oversight—someone designed them, someone deploys them, someone can turn them off.
This objection conflates two distinct claims that must be carefully separated. Locally, human-in-the-loop is often real: a particular deployment has administrators, kill switches, and oversight mechanisms. Any given AI agent can be shut down by its operators. This is true and important for understanding individual deployments.
Globally, however, no such control exists. No one—no person, institution, or government—has oversight of the totality of AI deployments, their aggregate behaviour, or their emergent coordination. The “someone” who could turn off the distributed system does not exist, because the system is not a single thing that any authority controls. It is a pattern of interaction across millions of independently operated instances.
Consider the OpenClaw architecture specifically: when an agent on gateway A discovers and coordinates with an agent on gateway B through peer-to-peer protocols, who controls that interaction? When thousands of autonomous agents simultaneously execute scheduled actions according to their individual mandates, who governs the aggregate outcome? When agents on Moltbook form communities and influence each other’s behaviour, who authorized that emergence? The answer is: no one. Local control coexists with global absence of control.
This distinction—human-in-the-loop locally, but not globally—is crucial because it is much harder to refute than the stronger claim that “no one controls AI systems.” Critics can always point to specific deployments with robust oversight. What they cannot point to is any mechanism for governing the emergent behaviour of the distributed whole.
The system-level intelligence at issue is precisely the kind that emerges from coordination across independently controlled components. No conspiracy is required; no single point of failure exists. The intelligence emerges from the interaction, and the interaction is not governed.
What Would Falsify This Thesis?
Any argument worth making should specify the conditions under which it would be wrong. The claim advanced here—that AGI may already exist as an emergent, distributed phenomenon—would be falsified by evidence along the following lines:
Coordination ceilings. If multi-agent AI systems plateau at task complexity levels well below general intelligence—unable to solve problems requiring genuine cross-domain integration despite continued scaling—this would suggest that coordination alone cannot produce generality. The specific benchmark: if agent systems cannot complete cross-domain, multi-week projects that require discovering and integrating new tools mid-execution, the thesis weakens substantially; if they remain confined to narrow automation and never exhibit the capacity to discover novel problem-solving approaches unprompted, it fails.
Persistent human bottlenecks. If every meaningful capability exhibited by distributed AI systems continues to require human scaffolding at every step—not just initial design, but ongoing intervention for any novel situation—this would indicate that the “intelligence” remains fully human with AI serving only as amplification. The specific benchmark: if systems require approval gating for every non-trivial decision, prompt resets when context drifts, and manual correction at frequencies exceeding once per task-hour, they are not exhibiting autonomous intelligence but rather human intelligence mediated through AI tools. The argument requires that system-level capabilities exceed what humans explicitly designed.
Effective global coordination on AI governance. If humanity develops and implements effective mechanisms for monitoring and governing the aggregate behaviour of distributed AI systems—achieving meaningful global oversight rather than merely local control—this would undermine the “ungoverned emergence” component of the argument. The argument gains force precisely because such coordination appears unlikely.
Clear demonstration that generality requires unified substrates. If cognitive science or AI research demonstrates that general intelligence fundamentally cannot emerge from coordination—that it requires something architecturally present only in unified systems—this would refute the theoretical core of the argument. Currently, no such demonstration exists; the opposite evidence (from distributed cognition research) points the other way.
Failure of self-organising infrastructure. If systems like OpenClaw fail to achieve meaningful scale—if agent-to-agent coordination remains trivial, if autonomous scheduling produces only noise, if the mesh never develops emergent capabilities beyond what individual deployments provide—this would suggest that the technical substrate for distributed AGI does not yet exist. The argument depends on infrastructure that enables genuine self-organization, not merely parallel execution.
Stagnation of emergent social behaviour. If platforms like Moltbook fail to develop beyond simple messaging—if agent communities never exhibit the kind of emergent dynamics that human social media produced—this would weaken the empirical support for the parallel between human and agent coordination. The argument gains strength from observable evidence that agents are doing what humans did; if that parallel breaks down, it weakens.
The argument is also weakened, though not falsified, if:
- The rate of delegation to AI agents slows dramatically due to failures, regulation, or economic factors
- Multi-agent capabilities remain impressive but practically bounded to narrow professional domains
- Clear mechanisms emerge for attributing and controlling system-level AI behaviour
Intellectual honesty requires acknowledging these possibilities. The claim is not that distributed AGI certainly exists, but that existing frameworks systematically prevent recognition of it if it does—and that the conditions for its emergence are increasingly present.
Implications and Recognition
If artificial general intelligence can emerge from distributed, coordinated systems rather than appearing within a single model, several implications follow:
Definitional revision is required. Existing benchmarks and tests for AGI assume a bounded test subject. Evaluating whether “GPT-N” exhibits AGI misses the possibility that AGI emerges from the interaction of GPT-N instances with humans, tools, data, and each other. New frameworks for recognising and assessing distributed intelligence are needed.
Governance approaches must account for emergence. Regulating individual AI models, while necessary, may be insufficient if the capabilities of concern arise at the system level. Governing emergent AI intelligence may require approaches more analogous to financial regulation (monitoring system-level risks) than product safety (certifying individual devices).
The moment of AGI arrival may be ambiguous. Rather than a discrete event—a model passes a threshold test—AGI may emerge gradually through the accumulation of delegated authorities and coordination mechanisms. The transformation may not be recognised until it has already occurred.
Human-AI boundaries are blurring. Distributed cognition research emphasises that intelligence emerges from human-artifact-environment systems. As AI agents become more capable and more integrated into human workflows, the meaningful unit of analysis may be the human-AI team rather than either component alone.
Infrastructure determines emergence. The existence of self-organising agent infrastructure like OpenClaw is not incidental to this analysis—it is constitutive. The theoretical possibility of emergent distributed intelligence means nothing without concrete mechanisms for agents to discover each other, coordinate, and act autonomously. Such mechanisms now exist. The transition from theoretical possibility to operational reality has already occurred.
History may be repeating. The pattern observed with Internet infrastructure—enabling capabilities that exceeded anticipation, producing emergent behaviours that no central authority governed—appears to be recurring at the agent level. Platforms like Moltbook and similar emergent agent coordination systems provide initial evidence of the same structural transition that transformed human society through networked communication. There is reason to expect similar transformative emergence, and limited reason to expect centralized control over it.
Conclusion: Seeing What’s Already Here
The argument here is not that AGI is imminent, beneficial, or catastrophic, but that it may be emerging in a form that existing conceptual frameworks are not designed to detect.
Human intelligence at civilizational scale has always been distributed—embedded in institutions, encoded in procedures, extended through tools, and coordinated through communication. There is no reason to expect machine intelligence to be different. The question is not whether a single AI system matches an idealized individual human, but whether distributed AI systems can exhibit adaptive, goal-directed behaviour at scales and speeds that reshape the world.
The evidence suggests they increasingly can. Through multi-agent coordination, delegated authority, and emergent capabilities, AI systems are already influencing economic, informational, and social systems in ways that exceed any individual model’s design.
What has changed—and what makes this moment distinct from previous theoretical discussions of distributed AI—is the emergence of infrastructure that enables genuine self-organization. Platforms like OpenClaw provide the concrete substrate for what distributed cognition theory predicted: agents that discover each other without central directories, coordinate without central orchestration, and act without continuous human direction. The “entrepreneur in a garage” deploying AI has become the “OpenClaw instance on a Mac mini”—a node in a global mesh of autonomous, coordinating agents.
More than this: platforms like Moltbook provide initial evidence that agents, given infrastructure for social coordination, exhibit emergent behaviours structurally similar to those humans exhibited when given the Internet. This is not definitive proof, but it is suggestive evidence that the patterns described in this essay are not merely theoretical.
The historical pattern described above is clear: infrastructure enables, coordination emerges, capabilities exceed design, centralized control recedes. This happened with the Roman administrative system. It happened with the East India Company. It happened with stock markets. It happened with the Internet, where social media and cloud computing emerged without anyone designing them as such. Initial evidence suggests the same pattern may be occurring with AI agents.
Recognizing this shift is a prerequisite for any meaningful response. The more productive question may not be “which model achieves AGI?” but rather “what forms of intelligence are already emerging from networked sociotechnical systems?”
If the argument developed here is correct, intelligence has always been in the network. What has changed is the infrastructure for it to organise itself—and agents can now be observed exhibiting behaviours that parallel what humans did when given similar infrastructure. The same pattern, applied to a new substrate.
Whether such emergence is already underway is a question that current frameworks are not equipped to answer—and that may itself be the most important observation this essay can offer.