The human-in-the-loop problem nobody solved.

The AI agent industry has exactly two positions on human involvement: remove humans entirely, or make them approve everything.

The first camp builds fully autonomous agents. Set it and forget it. The pitch is freedom — your agents run while you sleep. The reality is anxiety. Nobody trusts an agent to send emails to customers, update CRM records and file Jira tickets without anyone watching. And the moment something goes wrong, there's no one in the loop to catch it.

The second camp builds approval workflows. Every action requires sign-off. The pitch is control. The reality is a full-time job clicking "approve" on things you stopped reading three hours ago. The agent does the work, but you're still bottlenecked on reviewing all of it.

Both camps are solving the wrong problem. The question was never "should humans be in the loop?" The question is: when should they be, and how?


Autonomy is the default. Oversight is the exception.

Think about how delegation works with people. When you hand a task to a trusted employee, you don't review every email they draft, every spreadsheet they open, every Slack message they send. You review the output. And even then, only when it matters — when it's customer-facing, high-stakes or irreversible.

The rest? They handle it. That's what trust looks like.

Agents should work the same way. Autonomous by default. Pausing for human input only at the moments that actually matter. Not every step. Not every tool call. The specific ones where human judgment adds value.

This is harder to build than either extreme. Full autonomy is easy — just let the agent run. Full approval is easy — just gate everything. The hard part is building a system where humans and agents collaborate fluidly, where the agent knows when to proceed and when to pause, and where the human's attention is spent on decisions, not rubber stamps.

That's what we built in Agentican. And once you see how it works, the approve-everything and trust-everything models both look broken.


Three ways humans and agents work together

Agentican has three collaboration mechanisms. Each serves a different purpose. All three surface in a single Inbox — one place for everything that needs your attention.

1. Step approval: review the work before it moves forward

This is the most intuitive one. An agent completes a step — drafts an email, writes a report, builds a recommendation — and pauses for you to review it before downstream work begins.

You read the output. If it's good, you approve and the task continues. If it's not, you reject with feedback explaining what needs to change. The agent re-executes the step with your input incorporated.

The key design decision: step approval is configured per step, not globally. When you build a plan, you choose which steps matter enough to gate. The step that drafts a customer email gets an approval gate. The step that researches the customer's account doesn't. You're reviewing decisions, not busywork.

And if you don't respond within 24 hours, the step auto-approves. Tasks don't stall because someone's on PTO. The timeout is configurable — tighten it for urgent work, loosen it for async workflows.
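A per-step approval policy like the one described above could be sketched as follows. This is a minimal illustration, not Agentican's actual API; the `Step` class, field names and defaults are all assumptions.

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical sketch of a plan with per-step approval gates.
# Names and fields are illustrative, not Agentican's actual API.

@dataclass
class Step:
    name: str
    requires_approval: bool = False                    # gate only the steps that matter
    approval_timeout: timedelta = timedelta(hours=24)  # auto-approve after this

plan = [
    Step("research_customer_account"),                     # runs freely
    Step("draft_customer_email", requires_approval=True),  # pauses for review
    Step("send_follow_up",
         requires_approval=True,
         approval_timeout=timedelta(hours=2)),            # tighter for urgent work
]

def needs_review(step: Step) -> bool:
    return step.requires_approval

# Only the gated steps wait for a human.
assert [s.name for s in plan if needs_review(s)] == [
    "draft_customer_email", "send_follow_up"
]
```

The point of the sketch is the shape of the decision: the gate and its timeout live on the step, not on the agent or the task.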

2. Tool approval: gate the action, not just the output

Step approval gates what the agent produced. Tool approval gates what the agent is about to do.

When an agent decides to call a tool — create a contact in HubSpot, send an invoice in Stripe, update a record in Salesforce — you see exactly what it intends to do and with what parameters before it happens. You approve and the tool executes. You reject with feedback and the agent adapts in real time — it might adjust its approach, try different parameters or move on.

This is configured per tool, not per agent. When you connect a toolkit, you decide which tools require approval. "Read contact" runs freely. "Create contact" pauses for review. "Delete contact" always pauses. You're gating actions by risk level, not by agent.

The critical difference from step approval: rejection doesn't restart anything. The agent is mid-execution. It receives your feedback as a result and keeps working — the same way a colleague would adjust if you told them to take a different approach mid-task.
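One way to picture per-tool gating and rejection-as-result is the sketch below. The policy table, function names and return shape are assumptions for illustration, not the platform's API.

```python
# Hypothetical sketch of per-tool gating. On rejection the agent is not
# restarted: the human's feedback comes back as the tool's result, and the
# agent keeps working with it. All names here are illustrative.

APPROVAL_POLICY = {
    "read_contact": "auto",      # runs freely
    "create_contact": "review",  # pauses for approval
    "delete_contact": "review",  # always pauses
}

def call_tool(name, params, ask_human, execute):
    if APPROVAL_POLICY.get(name, "review") == "auto":
        return execute(name, params)
    decision, feedback = ask_human(name, params)  # human sees the exact params
    if decision == "approve":
        return execute(name, params)
    # Rejection: feedback is returned as the result so the agent can adapt
    # mid-execution instead of restarting the step.
    return {"rejected": True, "feedback": feedback}

# Example: a rejected create_contact comes back as feedback, not an error.
result = call_tool(
    "create_contact",
    {"email": "jane@example.com"},
    ask_human=lambda n, p: ("reject", "Use the work email on file instead."),
    execute=lambda n, p: {"ok": True},
)
assert result["rejected"] and "work email" in result["feedback"]
```

Note that the policy keys are tool names, not agent names: risk lives with the action.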

3. Ask user: the agent starts the conversation

This one is fundamentally different. The agent isn't waiting for you to review its work. It's asking you a question.

Maybe the instructions were ambiguous and it wants clarification. Maybe it found two valid approaches and wants you to choose. Maybe it needs information that isn't in the task context or any connected tool. Maybe it's about to do something unusual and wants confirmation before proceeding.

The agent asks. You answer. It continues with your input incorporated.

There's no approve or reject. It's a conversation. And if you don't respond, the agent doesn't treat your silence as approval — it simply learns that you didn't answer and decides how to handle that. It might proceed with a default assumption. It might note the gap in its output. It might skip the dependent work entirely.

As far as I know, no other agent platform does this. Every orchestration framework, every agent builder, every workflow tool treats agents as executors — they take instructions and run. If the instructions are unclear, the agent guesses. If context is missing, the agent hallucinates. If there are multiple valid approaches, the agent picks one and hopes for the best.

That's not how a good employee works. A good employee asks.
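The ask-user flow described above, a question, an answer, and a no-auto-approval fallback when the human stays silent, could be sketched like this. The function name and return shape are illustrative assumptions, not the platform's API.

```python
# Hypothetical sketch of the ask-user mechanism. There is no approve/reject:
# the agent asks a question, and if no answer arrives it learns that fact
# and decides for itself how to proceed. Names are illustrative.

def ask_user(question, wait_for_answer, default=None):
    answer = wait_for_answer(question)  # None means the human never replied
    if answer is not None:
        return {"answered": True, "answer": answer}
    # No auto-approval: the agent just learns the question went unanswered.
    return {"answered": False, "assumed": default}

# The agent might proceed with a default assumption...
r = ask_user("Formal or casual tone?", wait_for_answer=lambda q: None,
             default="formal")
assert r == {"answered": False, "assumed": "formal"}

# ...or incorporate the answer when one arrives.
r = ask_user("Formal or casual tone?", wait_for_answer=lambda q: "casual")
assert r == {"answered": True, "answer": "casual"}
```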


Ask user is knowledge transfer in real time

I wrote recently about how agent management is the real challenge — that agents need knowledge transfer the same way employees do. Not just facts and data, but preferences. Standards. The subjective stuff that lives in your head and never gets documented.

But here's the thing about knowledge transfer: it's not something you do once. You can't sit down, dump everything you know into an agent and expect it to handle every situation perfectly. That's no more realistic than onboarding a new hire in a single afternoon and expecting them to never ask a question.

Knowledge transfer is continuous. It's spontaneous. It happens in the moments where work meets ambiguity — when the task at hand surfaces a question nobody anticipated.

A new employee handling their first customer escalation doesn't have all the context they need. They might know the product, know the process, know the tone of voice. But this specific customer has a history. This specific issue has political implications. This specific situation calls for a judgment that wasn't in the playbook. So they ask their manager. And the answer doesn't just resolve the immediate question — it becomes part of how that employee handles the next escalation.

Ask user gives agents the same ability. When an agent encounters ambiguity — unclear instructions, missing context, a decision that could go either way — it doesn't guess. It asks. And your answer doesn't just unblock the current task. It becomes part of the knowledge that shapes how the agent works over time.

This is agent management happening in real time, embedded in the work itself. Not a configuration exercise. Not a prompt engineering session. A conversation between a manager and an agent, triggered by the work, at the moment it matters most.

Every question an agent asks is a gap in its understanding that you're filling. Every answer you provide is knowledge transfer that compounds. Over time, the agents ask fewer questions — not because they're configured to ask less, but because they know more. The same way a new hire grows into a trusted team member who no longer needs to check in on every decision.


They compose

These three mechanisms aren't isolated features. They layer together within a single step.

An agent working on a customer outreach might ask you which tone to use. Then it drafts the email and attempts to send it through a gated tool — you see the exact message and recipient before it goes. Then the step completes and presents the full output for your review before the next step begins.

Three touchpoints. Three different purposes. Clarification, then action gating, then output review. Each one is optional. Each one is configured independently. The agent works autonomously between them.

This is what real collaboration looks like. Not "approve everything" and not "trust everything." A system that knows when to proceed, when to pause and when to ask — the same way a good colleague does.


Your Inbox, not your bottleneck

Every approval, every tool gate, every question — all of it surfaces in one place: the Inbox.

Tasks needing your attention appear as cards. You click a pending step and the system shows you the right interface — a step review, a tool approval or an agent question — based on what the step is waiting for. You respond and move on.

This matters because the failure mode of human-in-the-loop isn't the loop itself — it's the overhead. If reviewing agent work requires hunting through dashboards, switching between tools and tracking which tasks need attention, you'll stop doing it. The overhead will kill the habit before the value has a chance to compound.

One Inbox. Everything that needs you. Nothing that doesn't.


The collaboration gap

The agent industry talks a lot about autonomy and a lot about control. It rarely talks about collaboration.

Autonomy without collaboration is reckless. You're hoping the agent gets it right and finding out after the damage is done.

Control without collaboration is exhausting. You're reviewing everything, learning nothing and wondering why you have agents at all.

Collaboration is the middle ground that neither camp wants to admit is necessary — because it's harder to build. It requires agents that know when to pause, when to proceed and when to ask for help. It requires a platform that routes the right decisions to the right humans at the right time. It requires an experience that makes staying in the loop feel effortless instead of burdensome.

And it requires something the industry hasn't built for at all: the ability for agents to initiate conversations with the humans they work for. Not just execute instructions, but participate in the ongoing knowledge transfer that makes them better at their job over time.

That's what we built. Not because we think humans should approve everything. And not because we think agents should run unsupervised.

Because the best work happens when humans and agents actually work together. Agents handle the execution. Humans provide the judgment. The platform makes sure the right person sees the right thing at the right moment. And every interaction — every approval, every rejection, every question answered — makes the partnership stronger.

Nothing moves without you. But nothing should have to wait for you either.


Task an agent free →

The platform for human-agent collaboration.
