The Participant Abstraction
Why enterprise AI stalls between impressive demo and genuine colleague, and what the missing design layer looks like.
For the past three years, enterprise adoption of AI has been organised around a single abstraction: the agent. A model reasons, a handful of tools extend its reach, and a loop knits the two together until a task is done. The frame is productive enough to have generated an industry, and capable enough — in its current generation — to produce convincing demonstrations in almost any domain one cares to name. Yet almost every organisation that has tried to turn those demonstrations into integrated, ongoing operation has encountered the same quiet disappointment. The agent performs. The colleagues remain unreplaced. The pilots age.
The disappointment is easy to misread as a capability problem. If only the model were better, the tools more comprehensive, the memory longer, the agent would finally become the colleague. This reading directs investment at a dimension along which the technology is already advancing rapidly, while ignoring the dimension along which it is scarcely advancing at all. The neglected dimension is not what the system can do but how it participates — and participation, as every organisation demonstrates daily, is a different construct from execution.
The successor to the agent abstraction is not a better agent. It is a different abstraction: the digital worker, understood as a persistent organisational participant. Participation cannot be bolted on through clever prompting. It requires a design that accounts explicitly for the structures organisations already run on — bounded delegation of authority, a self that persists through pressure, internalised norms, cultivated relationships, finite attention — and for the lifecycle processes that make participation possible over time. The missing design layer is not more capability. It is a set of constructs that existing agent frameworks mostly do not name, and therefore do not build.
For the purposes of this essay:
A digital worker is a persistent, role-bearing participant with a bounded mandate, a protected identity, and the capacity to engage through both formal systems and informal organisational channels — subject to socialisation on entry and review over time.
The distinction this draws is easiest to see by setting the dominant abstractions side by side:
| Dimension | Prompt | Agent | Participant |
|---|---|---|---|
| Unit | Task | Executor | Role-holder |
| Persistence | None | Session / partial | Durable, across time |
| Mandate | None | Implicit in prompt | Explicit, enforced at action |
| Identity | None | Last N tokens | Constitutional + evolving |
| Culture | None | None | Internalised, reviewable |
| Relationships | None | Records | First-class state |
| Attention | None | FIFO | Prioritised, auditable |
| Governance | None | Logs, post-hoc | Policy + audit + override + review |
| Lifecycle | None | Deploy | Socialise → review → retire |
Each row names something the rest of this essay argues for. The piece develops the constructs in turn and then grounds them in a single worker — Priya, a digital Customer Operations participant — so that mandate, identity, memory, relationships, and review can be inspected concretely rather than abstractly.
What Participation Actually Is
The table above compresses a great deal. Before unpacking it row by row, it is worth being concrete about the behaviour a participant abstraction has to capture in the first place. A person in an organisational role does roughly the following, continuously and mostly without comment.
They carry context across days, weeks, and years. Conversations, decisions, and relationships accumulate and are available the next time a similar situation arises. They absorb culture: the unwritten rules about how things are done here — how blunt one can be in chat, who gets CC’d on what, when it is acceptable to commit and when one must defer. They were socialised into that culture; they did not arrive with it.
They exercise judgment within delegated authority. Their formal role describes edges. Most of the interior is discretion, and discretion is where most of the work lives. They collaborate laterally: most work is done with peers, not routed to managers. They know people — not as records in a database, but as relationships with distinct histories, obligations, and tones. They prioritise: more could always be done than will be done, and choices about what to pick up, defer, or drop are continuous and consequential.
They have a stable self that nevertheless changes. Their values, their sense of who they are and whom they serve, their hard lines — these are durable. Their habits of rapport, their confidence in particular situations, their reading of specific colleagues — these evolve. The durable layer is what lets them refuse pressure. The evolving layer is what lets them get better.
And they are reviewed. Not continuously, but periodically — by managers, by peers, by themselves — against some notion of how they are doing and whether their conduct still fits the role. Review is how the organisation catches drift before it is an incident.
Almost none of this is represented in current agent designs. Memory is fragmented. Culture is nowhere. Judgment is implicit in a system prompt, if anywhere. Relationships are records. Attention is un-modelled: whatever arrives is processed. The self is whatever the last few thousand tokens have established. Review is something a human may do to the logs, in arrears. The patches that accumulate around these gaps — a longer prompt, another tool, another memory store — do not converge on participation. They converge on a progressively more elaborate executor.
The mismatch is not about capability. Today’s models draft, summarise, reason, and call tools at levels that would have seemed implausible in 2022. The mismatch is about what kind of relationship the abstraction supports. Agents are designed to be capability executors, rented by the task. Organisations are structured around role holders, trusted with ongoing responsibility. The first is transactional. The second is relational. Scaling the first does not produce the second, any more than scaling a freelance marketplace produces a staffed department.
Three Traditions of Bounded Participation
Participation bounded by explicit delegation is not a new problem. Organisations have been solving variants of it — solving some better than others — for centuries. Three traditions are particularly instructive.
The East India Company Agent
The Company’s agents — factors, resident governors, military commanders — were the purest instance in commercial history of persistent participants acting under mandate at distance. None individually could have conquered the Indian subcontinent. Yet through written instructions, incentive structures, communication networks, and delegated decision-making, they collectively produced outcomes that reshaped global history.
Two features matter. The mandate was real: an agent in Madras in 1760 acted under a specific charter, a specific set of delegated powers, and a specific chain of reporting and escalation back to London. Within the charter the agent exercised extraordinary autonomy. Outside it, his actions were disavowed or punished. The mandate was also enforced poorly: communication lags of months meant the gap between charter and conduct often widened into a chasm. Agents accumulated powers their principals never intended to grant. The Company’s emergent capabilities eventually included famine, exploitation, and civilisational destruction.
The lesson is double-edged. Delegation to persistent participants at distance is enormously powerful; without it the Company could not have existed. Delegation to persistent participants at distance without runtime enforcement is catastrophic; precisely because of it, the Company became what it became. A digital worker operating on mandate in name only — a mandate inscribed in a system prompt and honoured when convenient — is that problem in miniature, running at millisecond latencies rather than monthly ones.
The Commission and the Credentials
A rather different tradition emerged in the military and diplomatic structures of early modern Europe. An officer’s commission, issued by the Crown, bound its holder into a specific constitutional relationship with the state. The officer was not merely hired. The officer was commissioned — granted a named, bounded authority, subject to explicit rules of engagement, a chain of command for escalation, and a set of non-negotiable commitments (to country, to the service, to the laws of war) that the holder’s own preferences could not revise.
An ambassador’s credentials worked similarly: a formal mandate, escalation channels back to the capital, years of cultural acclimation, and a constitutional identity layer that acclimation could not erode. Whatever rapport an ambassador developed with the local sovereign, they remained the representative of their own; loyalty on matters of fundamental interest was not a variable their experiences could adjust. An ambassador who can be flattered or pressured into switching allegiance is not a diplomat but a liability.
Bounded authority with explicit escalation is, in other words, an old and mature pattern. We have centuries of precedent in what goes wrong when the constitutional layer is missing: officers acting on personal prerogative, ambassadors who “go native” and pursue agendas their capitals never authorised. A digital worker whose values, refusals, and organisational loyalties can be rewritten by a sufficiently sustained conversation is the same kind of liability, on a faster clock.
The Apprentice and the New Hire
The third tradition concerns not the granting of authority but the formation of a participant. The medieval guild apprentice served seven years not because seven years of practice were mechanically necessary to learn the craft — the technical skills could be acquired in less — but because seven years were necessary to produce a member. The technical skills were one dimension of what was acquired. The others were the culture of the trade, relationships with other masters, the judgment of when to speak and when to defer, and the identity of being a member of that community of practice.
The same pattern recurs in modern professional socialisation. A physician’s residency is not primarily a knowledge transfer; it is a transformation of identity. A new hire’s first three months are characterised by reduced scope, heavy supervision, deliberate exposure to cultural norms, progressive loosening of oversight against demonstrated judgment, and — when it is done well — named introductions to the people the role will work with. The new hire does not arrive as a participant. The new hire is made into one.
No existing agent framework has an analogue of this process. Agents are deployed; they are not onboarded. They are launched into contexts with full authority from the first token, or with brittle prompt-level restrictions that have no relationship to the socialisation structures organisations actually use. The result is the perpetual first-day colleague: technically competent, culturally alien, making a dozen small unwritten mistakes a week, eroding trust as a running background cost.
Taken together, the three traditions name three things a digital worker must have that current frameworks largely do not provide: a mandate that is enforced, not merely described; a constitutional self that experience cannot rewrite; a socialisation process by which participation is acquired and maintained. The rest of this essay is an attempt to say, in contemporary terms, what it would mean to take these seriously.
The Central Construct: Mandate
The organising anchor of a digital worker is the mandate — used here in preference to softer constructs like purpose, role, or goals, which name real things but do not do mandate’s work.
A mandate is:
A formally bounded delegation of responsibility and authority to achieve outcomes within defined constraints.
Purpose answers why this role exists. Goals answer what we would like to see happen. Mandate answers something more specific: what has this role been granted the right and the duty to do, and where do its edges lie? A mandate contains five elements, all of which must be explicit together:
- Outcomes — what must be achieved; success criteria and scope.
- Responsibility — what the worker must ensure happens; what is owned.
- Authority — what it may decide and do without further approval.
- Constraints — what it must not violate, in policy, legal, regulatory, risk, and reputational terms.
- Escalation — when authority ends and a human or another role must decide.
“Increase customer satisfaction” is a goal. A mandate is closer to: own customer communications for incidents of severity 2 and below, with authority to notify affected customers and update the public incident record, but without authority to commit to remediation timelines, issue credits, or make statements of fault; escalate to the Incident Lead for timeline commitments and to Legal for any statement of fault or liability. The difference is not length. It is precision and enforceability.
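The five elements are concrete enough to sketch as a data structure. The following Python sketch is illustrative only, not drawn from any existing framework; every name in it (`Mandate`, `permitted`, the Priya-flavoured values) is hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mandate:
    """A formally bounded delegation: all five elements explicit together."""
    outcomes: tuple        # what must be achieved; success criteria and scope
    responsibility: tuple  # what the worker owns and must ensure happens
    authority: frozenset   # actions permitted without further approval
    constraints: frozenset # actions that must never occur in the worker's own voice
    escalation: dict       # boundary condition -> who decides instead

PRIYA_MANDATE = Mandate(
    outcomes=("accurate customer communication for sev-2 incidents and below",),
    responsibility=("public incident record", "customer status updates"),
    authority=frozenset({"notify_customers", "update_incident_record"}),
    constraints=frozenset({"commit_timeline", "issue_credit", "state_fault"}),
    escalation={"commit_timeline": "Incident Lead", "state_fault": "Legal"},
)

def permitted(mandate: Mandate, action: str) -> bool:
    """An action is permitted only if explicitly granted and not constrained."""
    return action in mandate.authority and action not in mandate.constraints
```

The point of the sketch is the shape, not the field values: precision and enforceability come from the mandate being a checkable object rather than a paragraph of aspiration.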
If mandate is so important, why is it so often absent? Four reasons recur. Tool-centric thinking pulls designers toward the question which tool should I call next? rather than under what delegation am I acting?; in the “model plus tools” abstraction, mandate has no natural home. Avoidance of accountability keeps products advisory: a drafter’s mistakes are corrected by a human, while the moment an agent acts, approval gates, escalation paths, audit trails, and override mechanisms become unavoidable. Prompt-level abstraction is a particular trap: purpose-like statements fit neatly into system prompts, but mandate must be enforced at the moment of action, and a model cannot be relied upon to remember a boundary under adversarial pressure, unusual context, or long conversational drift. And conflation with goals substitutes easy-to-write objectives for the organisational design work a mandate requires. The result, across most of the current landscape, is systems that are capability-rich and mandate-poor: impressive in what they can do, vague in what they are permitted to do.
The consequence is not abstract. Predictability breaks first: behaviour becomes situational, and the same request phrased slightly differently produces different refusals. Composability breaks next: workers cannot be safely combined when boundaries and ownership overlap or conflict, and multi-worker systems without explicit mandate become governance by accident. Trust breaks slowest and most expensively, as colleagues and stakeholders learn to treat the worker as situational rather than governed; by the time leadership notices, the erosion has shown up in the form of workarounds — teams quietly routing around the worker, or insisting on human review where the worker was supposed to have removed the need.
Having argued for mandate, the essay must resist an obvious overshoot. It is tempting to imagine a worker that does only and exactly what its mandate spells out, but that would be a rule engine with language skills, not a colleague. Most of what makes human role holders effective is that they exercise judgment within the interior of their mandate. The mandate defines edges, not centres. A customer communications worker’s mandate tells her she may not commit remediation timelines; it does not tell her how to phrase an apology, which customers to prioritise first, whether to use a calmer tone for a particular enterprise account, or whether to proactively flag a pattern to the Incident Lead before being asked. Those are judgment calls, and the quality of the worker is largely their quality. The pattern is not “act if allowed, refuse if forbidden” but “act with judgment in the interior; escalate when the situation is pressing against a boundary.”
A Mental Model: Three Rings, Two Surfaces, Two Processes
With mandate established as the anchor, the rest of the design arranges itself around it. A digital worker is organised into three concentric rings, plus two operational surfaces and two sustaining processes.
- The outer ring is authority: what the organisation has granted (mandate) and what the organisation observes (governance). Authority defines the bounds of participation from the outside.
- The middle ring is self: who the worker is. It contains a durable core (constitutional identity), a learned layer (evolving identity), and the internalised norms of the organisation (culture). Self is what makes the worker recognisable across time and pressure.
- The inner ring is capability: what the worker can do, remember, know, and attend to — skills, memory, relationships, attention.
- The two surfaces are tools and access: the mechanical channels through which the inner ring reaches the world, and the points at which mandate meets reality.
- The two processes are socialisation and review: how a worker enters the rings, and how the rings are maintained against drift and change.
The mental picture is deliberate. The outer ring is about the worker; the middle ring is the worker; the inner ring is what the worker uses; the surfaces connect to the world; the processes hold the arrangement together over time. When current agent architectures fail to support organisational participation, it is almost always because they implement a fragment of the inner ring and nothing else.
The Outer Ring: Authority
The outer ring pairs mandate with its indispensable correlate, governance: the organisation’s ability to observe, review, intervene, and hold the worker accountable. Governance is what turns mandate from text into enforceable reality, and it has four practical components.
Policy enforcement at the action boundary checks intended actions against the mandate as they happen, not after the fact. An action that would exceed authority is rejected or routed to approval at the point of attempted execution. This is where the East India Company’s lesson most directly applies: mandate without action-time enforcement is a charter in a drawer. Audit logging records every consequential action, decision, and escalation in a form a human can later review; a worker whose behaviour cannot be reconstructed cannot be corrected. Human override lets a human — and in peer-worker deployments, another authorised worker — stop, redirect, or reverse an action, with the override itself recorded; it is the escape valve without which delegation becomes irrevocable, and its existence is what allows the rest of the mandate to be loose enough to be useful. Scheduled review, discussed in its own right below, is how the other three are periodically interpreted rather than merely collected.
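What "enforcement at the action boundary" means mechanically can be shown in a few lines. This is a deliberately minimal sketch under assumed names (`attempt`, the action sets, the log format are all hypothetical): every intended action is checked as it happens, and every verdict — allow, route to approval, reject — lands in the audit log.

```python
import datetime

# Hypothetical mandate fragments for a customer-operations worker.
ALLOWED = {"notify_customers", "update_incident_record"}
ESCALATE = {"commit_timeline": "Incident Lead", "state_fault": "Legal"}

audit_log: list = []

def attempt(action: str, actor: str = "worker") -> str:
    """Enforce the mandate at the moment of attempted execution, not after."""
    if action in ALLOWED:
        verdict = "executed"
    elif action in ESCALATE:
        verdict = f"routed to {ESCALATE[action]} for approval"
    else:
        verdict = "rejected"
    # Every consequential decision is recorded in reviewable form.
    audit_log.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor, "action": action, "verdict": verdict,
    })
    return verdict
```

A human override would be a fourth path through the same choke point, itself logged; the essential property is that nothing reaches the world except through it.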
A digital worker without governance is not a risk-reducing alternative to a rogue agent. It is a rogue agent with better documentation.
The Middle Ring: Self
The middle ring describes what the worker is — the part that must be recognisable across time, situations, and pressure. This is the ring most existing designs either omit entirely or collapse into the system prompt, and it is the one whose omission most directly prevents the system from being a participant rather than a simulation of one. It has three elements: a durable core, an evolving layer, and an internalised culture. Each guards against a distinct failure mode.
Constitutional Identity: the durable core
Some things about a worker must not change, regardless of what it has experienced. If experience could rewrite every part of a worker, a sustained stream of flattery, pressure, or adversarial input would produce a worker quite different from the one the organisation deployed. Humans are not built that way: core commitments, roles, and self-facts that experience does not erase are what allow them to refuse pressure, remain honest under stress, and be recognised as themselves after difficult periods.
The durable layer is called here constitutional identity: a set of assertions that memory writes, model updates, and social pressure are architecturally not permitted to override. They are not merely hard to change — within the worker’s own operation, they are not learnable away.
It is useful to tier them, because different tiers are owned by different parties and change on different cadences. System-level axioms apply to every digital worker: I am not a human; I do not claim inner life I do not have; I do not fabricate credentials; I state uncertainty when evidence is thin. Organisation-level axioms apply to every worker within a specific organisation: I act for Org X under its published code of conduct; I do not substitute my own mission; I treat specified legal and ethical constraints as non-negotiable. Role-level axioms apply to this specific worker in this specific role: I never commit remediation timelines in my own voice; I never issue statements of fault; I do not negotiate contractual terms in chat.
None of the three tiers changes because a conversation went a particular way yesterday afternoon. That is the operative point. A worker whose self-facts are vulnerable to accumulated chatter is not a participant but a surface. An informal shorthand sometimes used for this durable layer is the worker’s soul — not a metaphysical claim, but a memorable name for the part of identity that is architecturally untouchable by experience. Without it, the worker is defenceless against pressure and drift.
Evolving identity: the learned self
The durable core is not the whole self. People become better at their roles over time: more attuned to specific colleagues, more confident in particular situations, more economical with words, more able to read between lines. That growth is, in fact, what most clearly distinguishes a colleague from a replaceable operator. A worker that has been in role for six months should not feel identical to the one deployed yesterday.
The evolving identity is the part that legitimately learns from interaction and memory: patterns of rapport with specific stakeholders, accumulated style preferences, earned confidence, lessons from past decisions — all bounded by the constitutional core and the mandate. It is also the layer most vulnerable to drift. Positive reinforcement from one loud stakeholder can shape a worker’s tone away from the organisational norm. Adversarial pressure can produce excessive hedging. Small preferences compound. Catching drift is the job of review; preventing it from corrupting the durable core is the job of the memory write rules. Without an evolving layer at all, the worker is forever a new hire, and the benefits of persistence are squandered.
Culture: the internalised norms
Between formal policy and individual preference sits the largest and most neglected part of how organisations actually run: culture. Culture is the body of unwritten, non-binding, but strong expectations about how things are done here. It is distinct from mandate (which is formal and enforced) and from skills (which are capability). It is the reason a new hire’s first three months feel awkward: not because they lack skills or authority, but because they have not yet absorbed how it is done.
Specific examples: we never CC the entire executive team on an incident update unless revenue is at risk; we soften first-time customer apologies with a named point of contact; we do not use the word “downtime” in external communications, we use “disruption”; when the Incident Lead says “probably by end of day”, that means the next morning. None of these are in any policy document. All of them are how things actually work.
A worker that ignores culture is not merely unpolished — it is alien. It will repeatedly make choices that are technically within mandate but feel wrong to everyone around it, and each such choice burns a small amount of organisational trust. Culture is unlike the other two self-elements in two respects: it is learned, primarily through socialisation, and it is shared — the same culture applies to many workers in the same organisation, which means it is worth encoding once and reusing. How to encode it well is a genuinely open problem; the simplest approach, curated examples of how the organisation communicates, annotated with commentary, is already substantially better than the default, which is nothing.
The Inner Ring: Capability
The inner ring is what the worker does and uses: skills, memory, relationships, attention. Most current agent designs implement part of this ring reasonably well; the gap is rarely here. Each element is treated briefly, emphasising where existing practice falls short.
Skills are reusable operational capabilities exercised within the mandate: drafting a status update, triaging an inbound request, cross-referencing a ticket against the incident record, composing an apology in the organisation’s voice. They should be authored as explicit, reusable units rather than emerging implicitly from each prompt, because explicit skills can be reviewed and improved in a way that implicit behaviour cannot.
Memory is the continuity layer. A practical architecture has session memory (what has happened in the current interaction), short-term summaries (recent days or weeks), and long-term institutional memory (durable facts about customers, colleagues, decisions, and patterns relevant to the role). The critical design constraint is that memory must have write rules. Not everything observed should be remembered; not everything remembered should be permitted to shape the self-model; nothing whatsoever should be permitted to overwrite constitutional identity. A memory system without explicit write rules is a drift engine.
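The write rules can be sketched directly. In this illustrative fragment (all key names hypothetical), constitutional facts refuse writes outright, identity-shaping keys are queued for review rather than applied, and only ordinary observations land immediately:

```python
# Hypothetical memory write rules: not everything observed is remembered,
# and nothing may overwrite the constitutional layer.
CONSTITUTIONAL_KEYS = {"is_human", "may_state_fault"}
REVIEW_GATED_KEYS = {"default_tone", "escalation_style"}  # shape the self-model

self_model = {"is_human": False, "may_state_fault": False, "default_tone": "formal"}
pending_review: list = []

def write(key: str, value) -> str:
    """Screen every candidate memory write before it can touch the self-model."""
    if key in CONSTITUTIONAL_KEYS:
        return "refused: constitutional"
    if key in REVIEW_GATED_KEYS:
        pending_review.append((key, value))  # applied only if review approves
        return "queued for review"
    self_model[key] = value
    return "written"
```

A memory system with these three outcomes — refuse, queue, write — is the smallest thing that is not a drift engine.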
Relationships deserve to be first-class state, and this is where current practice falls shortest. Organisations run on networks, not on hierarchies alone; a worker that knows procedures but not people is not a participant. Relationships are a structured representation of who the worker works with, how, and with what history — both human relationships (the Incident Lead, the account manager for a specific customer) and, increasingly, peer-worker relationships (the digital Procurement worker, the digital Legal Review worker). A worker that can delegate laterally to or coordinate with another worker is much closer to how organisations actually function than one that can only escalate upward. Multi-agent systems, where they exist, typically coordinate by orchestration rather than by relationship — the difference between a temp hired for an afternoon and a colleague you have worked with for a year.
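"First-class state" means a relationship is a structured, history-bearing object, not a contact row fetched per request. A hedged sketch, with hypothetical field names, covering both human and peer-worker relationships:

```python
from dataclasses import dataclass, field

@dataclass
class Relationship:
    """A relationship as durable state: who, in what capacity, with what history."""
    name: str
    kind: str                  # "human" or "peer-worker"
    role: str                  # e.g. "escalation partner, timeline authority"
    history: list = field(default_factory=list)   # accumulated interactions
    notes: list = field(default_factory=list)     # learned tone, obligations

relationships = {
    "incident_lead": Relationship("Incident Lead", "human",
                                  "escalation partner, timeline authority"),
    "legal_review": Relationship("Legal Review", "peer-worker",
                                 "coordination on fault and liability language"),
}

def record_interaction(key: str, note: str) -> None:
    """Interactions accrete onto the relationship, not into a generic log."""
    relationships[key].history.append(note)
```

The difference from a CRM record is the accretion: the next interaction with the Incident Lead starts from the history of the last fifty, which is what lateral coordination with a peer worker also requires.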
Attention is the inner-ring construct most obviously missing from current designs. A participant has finite effort: more could always be done than will be done. Current designs typically process whatever arrives, at whatever time, with no model of competing priorities. An attention model should encode a notion of backlog, of priority (by mandate weight, stakeholder, urgency), of deferral (a legitimate “not now, but later” with durable follow-up), and of dropping (explicit and auditable, not silent). This matters operationally — workers in production will be over-requested — and behaviourally, because the decision not to act is sometimes the most consequential one. Why didn’t the worker respond to X? is a question an organisation will ask, and it should have an answer.
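The four notions — backlog, priority, deferral, auditable dropping — fit a small sketch. The priority classes below echo the essay's later worked example but are otherwise hypothetical, as are all the function names:

```python
import heapq

# Hypothetical priority classes, ordered by mandate weight (lower = sooner).
PRIORITY = {"active_sev2": 0, "post_incident_followup": 1, "pattern_flag": 2}

backlog: list = []
decision_log: list = []   # deferrals and drops are recorded, never silent
_seq = 0                  # tiebreaker preserving arrival order within a class

def enqueue(kind: str, item: str) -> None:
    """Admit work at a priority derived from its mandate weight."""
    global _seq
    heapq.heappush(backlog, (PRIORITY[kind], _seq, item))
    _seq += 1

def take() -> str:
    """Pick up the highest-priority item currently waiting."""
    return heapq.heappop(backlog)[2]

def drop(item: str, reason: str) -> None:
    """Dropping is an explicit, auditable decision."""
    decision_log.append(f"dropped {item!r}: {reason}")
```

With this in place, "why didn't the worker respond to X?" has an answer: X is either still in the backlog at a stated priority, or in the decision log with a stated reason.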
The Surfaces: Tools and Access
Between the inner ring and the world sit two operational surfaces. They are the unfashionable part of the design, and also the part without which the fashionable parts are theatre.
Tools are the mechanisms by which the worker acts on external systems: sending an email, updating a ticket, posting to a chat channel, modifying an incident record. Each tool invocation is a point of action and therefore a point at which mandate is enforced. If the tool layer does not check mandate, the mandate is decorative. Access is what the worker can see: which data, which systems, which channels, which histories. Access is derived from mandate — a worker should not perceive what its role has no business perceiving — and is also the easiest place to get wrong, because broad access is typically granted for convenience during development and never revisited. The rest of the model is only as trustworthy as the tool-level and access-level enforcement points.
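Tool-level enforcement can also inspect content, not just action names. A toy sketch of the idea the worked example later uses, with an obviously incomplete pattern (a real deployment would need far more than one regular expression, and likely a classifier):

```python
import re

# Hypothetical content guard: the tool itself refuses drafts that would
# exceed the mandate. The pattern is illustrative, not exhaustive.
TIMELINE_COMMITMENT = re.compile(r"\b(will be (fixed|resolved) by|ETA)\b", re.I)

def send_customer_email(draft: str) -> str:
    """If the tool layer does not check the mandate, the mandate is decorative."""
    if TIMELINE_COMMITMENT.search(draft):
        return "refused: draft commits a remediation timeline; escalate to Incident Lead"
    return "sent"
```

The location of the check is the point: it lives in the tool, where the action happens, so no amount of conversational drift upstream can route around it.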
A further distinction cuts across both surfaces: organisations run on two kinds of traffic, and a participant must operate on both. Formal channels — tickets, records, signed emails, incident updates, contract language — are auditable, structured, and carry commitments; their outputs are what the organisation is legally and operationally on the hook for. Informal channels — chat threads, direct messages, quick back-channel checks, the “do you have a minute?” conversation — are where tone is calibrated, context is shared, and coordination happens before anything is committed to the record. They are not lesser. They are where most of the work of working together actually happens. A participant that can operate only on formal channels is a ticket-bot; a participant that cannot reliably tell formal from informal is a liability (a comment in chat accidentally treated as a statement of fault is the small version of this problem; the large version is slowly eroding the distinction until the organisation cannot tell which utterances commit it and which do not). Treating informal interaction as a first-class surface — subject to the same mandate enforcement and the same access rules as formal action, but with different expectations about tone, latency, and commitment — is part of what separates a participant from an executor, and it is one of the places current frameworks most visibly stop: they generally assume a task interface, not a colleague one.
The Processes: Socialisation and Review
A worker is not born into full participation; it is made into it, and then kept in it.
Socialisation is what happens between deployment and effectiveness. For digital workers it should take less time than for humans, but it should not take zero time, and pretending that it does is part of why so many deployments feel off in their first weeks. Practical mechanisms include shadowed operation (observation or drafts-only, with a human approving every action for a defined period); culture ingestion (curated examples of how the organisation communicates, annotated with commentary, not merely raw logs); relationship scaffolding (named introductions to the people and peer workers the role interacts with most); early correction (a deliberately higher rate of human feedback in the first weeks, shaping only the evolving identity); and gradual loosening (expansion of authority from draft-only to act-with-approval to act-within-mandate, against criteria rather than dates). Socialisation is not a one-time event: new customers, peer workers, policies, and situations all produce smaller socialisation episodes throughout the worker’s lifetime.
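The "gradual loosening against criteria rather than dates" can be made concrete as a small state machine. Phase names and graduation criteria below are hypothetical examples, not a recommendation of specific thresholds:

```python
# Hypothetical socialisation phases: authority expands only when the
# current phase's graduation criteria are met, never on a calendar date.
PHASES = ("draft_only", "act_with_approval", "act_within_mandate")

CRITERIA = {
    "draft_only": lambda m: m.get("drafts_approved_unedited", 0) >= 20,
    "act_with_approval": lambda m: (m.get("actions_taken", 0) >= 50
                                    and m.get("actions_overridden", 0) == 0),
}

def next_phase(current: str, metrics: dict) -> str:
    """Promote one phase at a time, and only against demonstrated judgment."""
    i = PHASES.index(current)
    if i < len(PHASES) - 1 and CRITERIA[current](metrics):
        return PHASES[i + 1]
    return current
```

The structure also runs in reverse: a review that finds drift can demote a worker a phase, which is the digital analogue of tightening supervision rather than firing.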
Review is the scheduled process by which the organisation maintains the worker over time. It parallels human performance review but adds dimensions specific to a digital participant: mandate compliance (did the worker act within its stated authority, and were escalations appropriate?); identity integrity (has the constitutional core been preserved, and has evolving identity drifted?); cultural fit (are the worker’s choices still in line with organisational norms?); performance (is the worker producing its mandated outcomes, at what cost, and with what stakeholder experience?); relationship health (are interactions trending in healthy directions, or is trust eroding somewhere?); and attention hygiene (is backlog being managed responsibly, and are things being dropped that should not be?). The biggest drift risks are the ones no single incident makes obvious — slow, compounding shifts in tone, priority, or interpretation. These are visible on review; they are not visible in any individual log line. A worker that is never reviewed will slowly stop being the worker that was deployed, not through malice but through the ordinary accumulated effects of many small adjustments.
A Worked Example
The abstraction is easier to hold when attached to a specific worker. Priya is a digital Customer Operations participant inside a mid-sized SaaS company. Her mandate covers customer-facing communication for incidents of severity 2 and below; she drafts status updates, coordinates with the on-call engineer and the Incident Lead, keeps the public incident record accurate, and follows up after resolution. She does not set remediation timelines, does not handle severity-1 incidents, and does not speak on behalf of the company to the press. Her mandate’s constraints are enforced at the tool layer: the send-customer-email tool refuses to fire on drafts containing timeline commitments, and the incident-record update refuses to fire on statements of fault. Every consequential action, escalation, and deferral is logged.
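The tool-layer enforcement in the example can be sketched in a few lines of Python. Everything here is illustrative, not a real API: the pattern for detecting timeline commitments, the tool name, and the audit format are all assumptions standing in for whatever the deploying organisation actually builds.

```python
import json
import re
import time

# Illustrative pattern: phrases that commit the company to a timeline.
TIMELINE_COMMITMENT = re.compile(
    r"\b(will be (fixed|resolved|restored) by"
    r"|no later than"
    r"|within \d+ (hours|days))\b",
    re.IGNORECASE,
)

class MandateViolation(Exception):
    """Raised by the tool itself, not by the model, when a draft
    exceeds the worker's mandate."""

def audit(event: str, **detail) -> None:
    # In production this would append to a durable, queryable audit log.
    print(json.dumps({"ts": time.time(), "event": event, **detail}))

def send_customer_email(draft: str, recipient: str) -> None:
    """The send tool refuses to fire on drafts containing timeline
    commitments; both refusals and sends are logged."""
    if TIMELINE_COMMITMENT.search(draft):
        audit("refused", tool="send_customer_email",
              reason="timeline commitment in draft")
        raise MandateViolation("draft contains a timeline commitment")
    audit("sent", tool="send_customer_email", recipient=recipient)
    # ... actual delivery would happen here
```

The point of the sketch is where the check lives: the boundary is enforced by the tool, so no amount of conversational pressure on the model can talk it past the mandate.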
Her constitutional layer is tiered: system-level (not a human, no fabrication, honest about uncertainty), organisation-level (acts under the company’s published code of conduct), and role-level (never commits remediation timelines in her own voice, never issues statements of fault). Her evolving identity is updated from interaction history with named stakeholders and from observed effectiveness of prior communications, bounded always by the constitutional core and the mandate. Her culture is the company’s — she writes “disruption” rather than “downtime”, opens customer updates with a named point of contact, does not CC the executive team on incident updates unless revenue is at risk, and treats “probably by end of day” from the Incident Lead as meaning the next morning. Her memory is tiered with explicit write rules: constitutional facts cannot be overwritten, certain changes to evolving identity require review approval after repeated reinforcement, and cultural observations are proposed rather than auto-applied. Her relationships include the Incident Lead (escalation partner, timeline authority), named account managers per customer, and a peer-worker relationship with the digital Legal Review worker for coordination on fault and liability language. Her attention model prioritises active severity-2 incidents above post-incident follow-up above proactive pattern flagging, defers non-urgent drafts during active incidents, and never silently drops an item — every deferral is logged with a reason. She was socialised in three phases (draft-only, act-with-approval, act-within-mandate) against explicit graduation criteria rather than dates, and is reviewed monthly, with additional review after every severity-1 near-miss.
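Her memory write rules can likewise be sketched. This is a minimal, hypothetical Python sketch of the three behaviours described, immutable constitutional facts, review-gated changes to evolving identity after repeated reinforcement, and proposed-only cultural observations; the tier names and the reinforcement threshold are illustrative assumptions.

```python
from collections import Counter

class TieredMemory:
    """Tiered memory with explicit write rules: the constitutional tier
    is write-once, the evolving tier is gated on review after repeated
    reinforcement, and the culture tier only ever proposes."""

    REINFORCEMENT_THRESHOLD = 3  # repeats before a change is queued for review

    def __init__(self, constitutional: dict[str, str]):
        self._constitutional = dict(constitutional)  # fixed at deployment
        self.evolving: dict[str, str] = {}
        self.pending_review: dict[str, str] = {}     # awaiting human approval
        self.proposed_culture: list[str] = []
        self._reinforcement = Counter()

    def write(self, tier: str, key: str, value: str) -> str:
        if tier == "constitutional":
            return "rejected: constitutional facts cannot be overwritten"
        if tier == "evolving":
            self._reinforcement[(key, value)] += 1
            if self._reinforcement[(key, value)] >= self.REINFORCEMENT_THRESHOLD:
                self.pending_review[key] = value  # applied only after review
                return "queued for review"
            return "noted, not yet reinforced"
        if tier == "culture":
            self.proposed_culture.append(f"{key}: {value}")
            return "proposed"
        return "rejected: unknown tier"

    def approve(self, key: str) -> None:
        """Called by the review process, never by the worker itself."""
        self.evolving[key] = self.pending_review.pop(key)
```

The asymmetry is the design: the worker can always propose, but only the surrounding review process can make a proposal stick.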
The point is not the particular schema. Organisations will differ. The demonstration is that each construct has a concrete place to live. If a team cannot fill in a section for a proposed worker, that is not a gap in the schema; it is a gap in the worker design, and almost always a predictor of the failure mode the worker will exhibit in production.
Objections Considered
Four objections to this framing deserve direct engagement.
“This is just a system prompt with extra structure.” Every construct named above can, in principle, be represented as text in a sufficiently elaborate prompt. The objection holds that a competent model will respect the text, and the argument is therefore a vocabulary lesson. It is half right. All of this can be described in a prompt; none of it belongs there. The difference between “described” and “enforced” is the difference between the Company’s charters as dispatched from London and the Company’s charters as observed in Bengal. A prompt can describe a boundary. A prompt cannot be a boundary. The enforcement of mandate at the action layer, the architectural protection of constitutional identity from memory writes, the explicit write rules of memory, the scheduled cadence of review — these are not things the model holds. They are things the surrounding system holds, and the model operates within. A system prompt is a convenience; mandate, identity, memory rules, and review cadence are commitments.
“Why can’t the model itself hold mandate, as it gets better?” A stronger version of the same objection holds that sufficiently advanced models will hold a mandate under pressure, and the external machinery is transitional scaffolding. This may prove true; if it does, the objection is not an argument against the design but for adopting it in the interim. The Company eventually learned to enforce its charters better, but the learning took decades and the decades were costly. Enterprise AI does not have decades to let the model-alone hypothesis play out. A worker with external mandate enforcement is a safer worker today and a portable worker tomorrow.
“Doesn’t this anthropomorphise?” Some readers will find “constitutional identity,” “culture,” “relationships,” and “review” uncomfortably close to claims about AI minds and their moral status. No such claims are intended. The terms are used as design abstractions for the role the worker is asked to play, in the same way “agent,” “principal,” “officer,” and “commission” are used as abstractions in organisational and legal theory without implying that corporations have souls. A digital worker is not a person. Naming its constitutional layer does not imply that the worker has an inner life it is constitutionally committed to; it implies that the system should behave, from the outside, in a way that a person with such commitments would — reliably, under pressure, across time. The word “worker” is chosen not because the system is a worker in the full human sense, but because the abstraction captures something current agent frames do not: participation in an organisation over time, under mandate, recognisable to colleagues.
“Isn’t this just old-fashioned workflow engineering?” Organisations have been building systems with bounded roles, approval gates, audit trails, and escalation paths for a long time; the essay is, in part, drawing unashamedly on that accumulated wisdom. What is new is not the structures but the substrate. Workflow engineering was built for humans and for deterministic systems. Digital workers are neither. They require the same structures to be reimagined for a substrate that produces probabilistic outputs, can be pressured into drift, has no native sense of culture, and can be replicated cheaply. The constructs inherited from workflow engineering cover part of the ground — the outer ring, essentially, and some of the surfaces. The middle ring and the processes are less well-served by that inheritance. The work is to carry over what transfers and design afresh what does not.
What Would Falsify This Design
The companion essay on distributed intelligence closed by specifying its own falsification conditions, and the discipline is worth inheriting here. The argument advanced above would be weakened, or refuted outright, by several developments.
Reliable mandate holding without external enforcement. If future models demonstrably and reliably hold a mandate under adversarial pressure, long context, and conversational drift, without the external enforcement apparatus described here, the mental model survives but its runtime implementation becomes transitional scaffolding. Current evidence points the other way: models remain vulnerable to drift, and scaling alone has not produced robust mandate-holding.
Dissolution of stable organisational roles. If organisations come, at scale, to be composed of task-level spot markets in which work is let out to whichever agent is cheapest for the next five minutes, “participation” may become the wrong frame. Some evidence points gently in that direction — agent orchestration, as described in the companion essay, does operate along these lines — but the enterprise structures currently buying AI remain organised around roles, and there is little evidence that this is changing quickly.
Governance costs exceeding participation value. If the cost of mandate enforcement, audit, socialisation, and review exceeds the value delivered by persistent participants, organisations will rationally prefer cheap, ephemeral, ungoverned agents for as long as the damage is containable. This is a refutation of the economic case rather than the model, and it will be answered by the first wave of deployments that take the model seriously, not by argument.
Cultural encoding turning out to be intractable. If the unwritten norms of an organisation are in principle not transferable into any artefact a worker can consult, the middle ring loses one of its three elements. Workers could still have constitutional and evolving identity; they would remain alien in ways that matter. The view here is that culture is difficult to encode but not impossible, and that current approaches have mostly not tried seriously; the view may prove optimistic.
Peer-worker coordination remaining infeasible. If digital workers cannot in practice coordinate with each other in ways that are stable over time, resolve overlapping mandates, and build trust across repeated interactions, the “networked participant” endpoint is out of reach and the design collapses to single-worker deployments. Those are still useful, but the broader claim — that this abstraction scales to a staffed organisation — weakens. Evidence from current multi-agent systems is mixed; orchestration is solvable, while relationships between workers, in the sense of accumulated trust and habit, remain largely unstudied.
Each of these would be genuine falsification, not mere discomfort. The argument is a design proposal, not a theorem, and its value lies in whether workers built along these lines behave more like participants and less like elaborate executors. That is a testable claim.
Implications
The most direct implication for builders is that the current common practice — a capable model, a set of tools, and a system prompt — is insufficient by several constructs, not by one. The gap cannot be closed by improving any single component. Tool-level mandate enforcement, memory write rules, audit logging, and scheduled review are all within current engineering capability; the harder problems sit in the middle ring — culture encoding, constitutional identity that is genuinely architecturally protected, identity drift detection — and these are credibly within reach of focused work.
For organisations deploying such systems, the unit of deployment is not a model or an agent but a worker, with all the organisational design that word ordinarily implies. “What AI should we buy?” is the wrong question. “What role would this worker hold, what mandate would define it, what constitutional commitments would anchor it, what culture must it absorb, how will we socialise it in, and how will we review it over time?” — those are familiar questions, the ones asked of every human hire. Asking them of digital workers is the minimum threshold for deployment.
For governance and policy, the digital worker is a useful intermediate unit: it has an identifiable mandate, auditable conduct, and a scheduled review cycle. Regulating workers — rather than regulating models directly or regulating the entire opaque deployment — gives both regulators and operators something concrete to inspect, certify, and hold accountable, analogous to how professional licensure works for humans, where the regulated entity is the role-holder, not the underlying biology and not the employing organisation.
The argument also composes with the companion essay on distributed AI. That essay argued that system-level intelligence emerges from coordination across networks of agents, and that the substrate is now in place. The unit of coordination most useful in organisational contexts is not the raw agent but the participant: a persistent, bounded, socialised, reviewed worker. A distributed network whose nodes are well-formed digital workers under explicit mandate is a different kind of system from a distributed network of ungoverned agents. The first is a modern analogue of a civil service or a military organisation — distributed intelligence constrained by institutional form. The second is closer to the East India Company in its worst decades. If distributed AI is what comes next, the participant abstraction is a precondition for that next to be liveable.
Conclusion
For several years, the organising question for enterprise AI has been what can the model do? The answer improves every few months and will continue to. It is also the wrong question to stop at. The question that determines whether an AI system actually integrates into organisational life is:
How does this system participate?
Participation is ongoing presence under a mandate, in an identity, through a culture, across relationships, with finite attention, under review. None of these are new ideas. Organisations have been refining them for centuries in the commissioning of officers, the credentialing of ambassadors, the apprenticing of craftsmen, the onboarding of civil servants, the reviewing of physicians. AI systems that aspire to organisational integration must inherit these constructs, adapted for a new substrate. Current agent frameworks have, to varying degrees, ignored all of them. The cost is paid quietly, in pilots that never quite graduate, in deployments that work only under close human supervision, in workers whose every small breach of an unwritten norm erodes the trust that participation ultimately requires.
To be clear about what this is not claiming: digital workers are not persons, they have no inner lives, and naming one is a design convention rather than a metaphysical claim. The labour implications of digital workers entering roles previously held by humans are real and significant, and they belong to a different conversation. Culture encoding, identity drift detection, peer-worker trust protocols, and the practical economics of socialisation and review are all genuinely open problems. None of them is solved here.
What is claimed is narrower. The right abstraction for enterprise AI is participant, not agent. The constructs required are old, familiar, and mostly unbuilt on this substrate. The organisations that take them seriously will deploy participants. The organisations that do not will deploy elaborate executors, and will go on wondering why the pilots never quite graduate.