Pattern 05 / oversight

Human-in-the-Loop AI Agents: Oversight Topologies (2026)

An agent’s actions are reviewed or approved by a human before they take effect. Three named variants: assistant, reviewer, arbiter.

[Diagram: human-in-the-loop oversight topology]
Human-in-the-loop oversight topology. An agent proposes an action. A human reviewer approves (the action proceeds to write to the system of record) or rejects (the agent retries). The reviewer node uses the warm-taupe accent on this site to mark a human role. Pattern documented in: LangGraph human-in-the-loop concepts, langchain-ai.github.io/langgraph/concepts/human_in_the_loop. Accessed 30 April 2026.

Three oversight variants

Assistant pattern (human leads, agent assists)

The human is the operator; the agent is a tool the human invokes. The agent surfaces suggestions, drafts, or completions; the human accepts, edits, or rejects them in real time. This is the structural pattern of most coding assistants (GitHub Copilot, Anthropic's Claude Code in interactive mode) and most writing assistants. The agent never writes to a system of record without the human's direct action.
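As a minimal sketch (all function names here are illustrative, not any particular assistant's API), the assistant pattern reduces to a loop the human drives: the agent returns a suggestion, and only an explicit human accept or edit commits anything.

```python
# Assistant pattern: the human drives the loop and the agent is a pure
# suggestion function. Nothing reaches the document (the "system of
# record" here) without an explicit human accept or edit.

def suggest_completion(prefix: str) -> str:
    """Stand-in for a model call; returns a suggested continuation."""
    return prefix + " -- suggested continuation"

def assistant_session(prefix: str, human_decision) -> str:
    """human_decision maps a suggestion to ('accept'|'edit'|'reject', text)."""
    document = prefix
    suggestion = suggest_completion(prefix)
    action, text = human_decision(suggestion)
    if action == "accept":
        document = suggestion   # the human's direct action commits the write
    elif action == "edit":
        document = text         # the human substitutes their own text
    # on "reject" the document is unchanged; the agent never wrote anything
    return document

# Example: the human edits the suggestion before committing it.
result = assistant_session("Draft intro", lambda s: ("edit", s.upper()))
```

The structural invariant is that `assistant_session` contains no write path that bypasses `human_decision`.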

Reviewer pattern (agent acts, human approves before write)

The agent does the bulk of the work autonomously, then pauses before any irreversible action and surfaces the proposed action for human approval. The human approves, edits, or rejects. LangGraph’s interrupt() primitive is the canonical implementation. Reference: the LangGraph human-in-the-loop concepts page at langchain-ai.github.io/langgraph/concepts/human_in_the_loop. Access date: 30 April 2026.
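The control flow can be sketched without any framework (the names below are illustrative, not LangGraph's API): the agent proposes, the loop pauses for approval before any write, and a rejection feeds back into a retry.

```python
# Reviewer pattern: no candidate action is written without approval,
# and rejections carry feedback back to the proposing agent.

def reviewer_loop(propose, approve, write, max_retries: int = 3):
    """propose(feedback) -> action; approve(action) -> (ok, feedback)."""
    feedback = None
    for _ in range(max_retries):
        action = propose(feedback)      # agent drafts a candidate action
        ok, feedback = approve(action)  # the loop "pauses" here for the human
        if ok:
            return write(action)        # only an approved action is written
    raise RuntimeError("approval not obtained within retry budget")

# Example: the reviewer rejects drafts until the refund is under the limit.
store = []
proposals = iter([{"refund": 500}, {"refund": 45}])
result = reviewer_loop(
    propose=lambda fb: next(proposals),
    approve=lambda a: (a["refund"] <= 100, "over limit"),
    write=lambda a: store.append(a) or a,
)
```

In a real LangGraph deployment the pause is the interrupt point and the resume carries the approver's response; this sketch only shows the shape of that control flow.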

Arbiter pattern (agent escalates ambiguous cases)

The agent operates autonomously on the cases it is confident in and escalates ambiguous cases to a human arbiter. The structural property is that the human is not in every loop; the human is in only the loops the agent flags. Klarna’s published customer-service reports describe this shape: tier-1 deflection by the agent, escalation to a human agent for complex cases. Reference: Klarna AI assistant performance summary (February 2024) at klarna.com press release. Access date: 30 April 2026.
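The routing rule can be sketched with a confidence threshold (the threshold value and field names below are assumptions for illustration): the agent resolves cases above the threshold on its own and escalates the rest to a human queue.

```python
# Arbiter pattern: the human is not in every loop, only in the loops
# the agent flags as ambiguous or sensitive.

CONFIDENCE_THRESHOLD = 0.8  # illustrative; tune against escalation outcomes

def route(case: dict, human_queue: list) -> str:
    if case["confidence"] >= CONFIDENCE_THRESHOLD:
        return f"auto-resolved: {case['id']}"
    human_queue.append(case)  # only flagged cases reach the human arbiter
    return f"escalated: {case['id']}"

queue = []
outcomes = [route(c, queue) for c in [
    {"id": "faq-1", "confidence": 0.95},
    {"id": "dispute-7", "confidence": 0.4},
]]
```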

When oversight is mandatory

When the agent has write access to a system of record. Customer database updates, CRM modifications, financial transactions, and externally sent calendar invitations are all candidates for the reviewer pattern. The cost of an erroneous write usually exceeds the cost of the human review.

When the action is irreversible. Sending money, sending email externally, deleting data, publishing to a public surface. Recovery from an erroneous irreversible action is expensive (apologies, refunds, legal exposure) compared to the cost of a five-second human approval gate.

When the cost of an error exceeds the cost of human review. Healthcare deployments universally follow this rule (see the healthcare industry page for the published clinical-agent oversight pattern).
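The three rules above reduce to an expected-cost comparison. A minimal sketch, with all numbers illustrative: gate an action class behind review when the expected cost of an unreviewed error exceeds the cost of the review itself.

```python
# Decision rule for mandatory oversight: expected error cost vs. review cost.

def requires_review(error_rate: float, error_cost: float, review_cost: float) -> bool:
    """True when expected error cost exceeds the cost of a human approval gate."""
    return error_rate * error_cost > review_cost

# A 2% error rate on $1,000 refunds vs. a $2 review: gate it (20 > 2).
gate_refunds = requires_review(0.02, 1000.0, 2.0)
# A 2% error rate on read-only lookups with ~$0.01 impact: don't.
gate_lookups = requires_review(0.02, 0.01, 2.0)
```

Irreversibility fits the same rule by inflating `error_cost` with recovery costs (apologies, refunds, legal exposure).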

When oversight is overkill

Read-only retrieval tasks. An agent that searches a corpus and returns matches does not need a reviewer; the human reads the result anyway.

Internal-tool workflows where errors are easily detected and easily corrected. A coding agent running tests in a sandbox is a paradigmatic case: the test suite is the reviewer, and a human gate per turn would defeat the purpose.

Tasks where the failure surface is contained. Generating a draft document for a human to read does not need approval-gating; the human will inspect the draft anyway.

Common anti-patterns

Approval theatre. The human rubber-stamps without reading. The remedy is to design the approval surface so that the agent presents the diff, not just the action; a non-trivial diff forces the human to actually read before approving.

Rate-limited oversight. The human becomes the bottleneck. If approvals queue faster than the human can review, throughput collapses to the human's review rate. The remedies are to reduce what requires approval (fewer high-stakes actions), to aggregate approvals (batch review), or to add a senior arbiter who can pre-approve standard classes of cases.

Loss of oversight skill. When the agent is right ninety-five percent of the time, the reviewer's skill at catching the other five percent atrophies. The remedy is to retain blind spot-checks (the agent routes a small fraction of cases to human review even when not flagged) and to maintain training data from caught-error cases.
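A blind spot-check is just a small random sample of unflagged cases routed to the reviewer anyway. A minimal sketch (the sampling rate is an assumption, not a recommendation):

```python
# Blind spot-check remedy: even unflagged cases reach human review with
# small probability, so the reviewer keeps exercising error-catching skill.
import random

SPOT_CHECK_RATE = 0.05  # illustrative: ~5% of unflagged cases still reviewed

def needs_human_review(flagged: bool, rng: random.Random) -> bool:
    return flagged or rng.random() < SPOT_CHECK_RATE

rng = random.Random(0)  # seeded for a reproducible example
# Over 10,000 unflagged cases, roughly 5% are sampled for review anyway.
sampled = sum(needs_human_review(False, rng) for _ in range(10_000))
```

Flagged cases always go to review; the sampling only adds coverage on the cases the agent was confident about, which is exactly where the skill atrophy occurs.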

Reference examples

[Diagram: LangGraph interrupt() human-approval topology]
LangGraph interrupt() reviewer pattern. The graph pauses at the interrupt() node, surfaces the proposed action for human approval, and resumes only when the approver responds. Reject branches loop back for the agent to retry. Source: LangGraph human-in-the-loop concepts page, langchain-ai.github.io/langgraph/concepts/human_in_the_loop; LangGraph interrupt API reference, langchain-ai.github.io/langgraph/reference/types. Accessed 30 April 2026.
[Diagram: Klarna customer-service tier-1 deflection topology]
Klarna's customer-service arbiter pattern. The tier-1 agent handles routine FAQ, returns, and refunds autonomously. Complex or sensitive cases escalate to a human agent. Klarna's published February 2024 report stated the AI assistant handled two-thirds of customer-service chats in its first month. Source: Klarna press release, "Klarna AI assistant handles two-thirds of customer service chats in its first month" (27 February 2024), klarna.com. Accessed 30 April 2026.

Related on this site

For the process-flow view of a human gate, see agenticswimlanes.com.