AI Operating Models · May 12, 2026 · 10 min read
AI Agent Readiness: What To Define Before Giving An Agent Tools
An AI agent is an actor inside a workflow, not the workflow itself. This briefing covers the seven questions, permission tiers, and approval points to define before giving an AI agent tool access.
TL;DR
An AI agent is not a workflow. It is an actor inside a workflow. Do not give an agent tools until the workflow is bounded. Before tool access, define the business process it supports, the systems it may touch, the actions it may take, what it must ask before doing, whose identity and permissions it uses, how its actions are logged, and who owns exceptions. An agent should have only the permissions one task needs, a visible owner, auditable actions, and a human approval point before any customer-visible, financial, legal, or record-changing action.
What should we define before giving an AI agent tools?
Answer seven questions in writing before any connection is made:
1. What workflow does the agent support?
2. What systems can it access, and at what scope?
3. What actions can it take versus only propose?
4. What must it ask a person before doing?
5. Whose identity and permissions is it using?
6. How are its actions logged and attributed?
7. Who owns exceptions when it is wrong or blocked?
If these are unanswered, the agent is not ready for tools. It is ready for a scoping conversation.
Why do agents fail when the workflow is unclear?
An agent amplifies whatever process it is dropped into. If the workflow has no defined trigger, evidence, owner, or stop condition, the agent does not supply that structure; it acts confidently without it. The failure is rarely the model. It is an unbounded actor given real tools inside an undefined process, with no clear record of what it did or why.
What is the difference between an assistant, automation, and an agent?
- An assistant responds when a person asks and takes no independent action.
- Automation runs fixed rules on a trigger with no judgment.
- An agent chooses actions toward a goal and can use tools to act on systems.
The risk rises across that list because the agent decides and acts. That is why bounding it matters more, not less.
Decision guide: permissions by risk level
- Low risk, read and prepare only: the agent reads approved sources and drafts output. No write access, no external action.
- Medium risk, internal write under review: the agent updates internal records or creates tasks, with changes logged and an owner reviewing exceptions.
- High risk, customer, financial, legal, or record-changing: the agent may prepare the action but a named person must approve before it executes.
- Not approved: autonomous external commitments, irreversible deletions, access provisioning, or anything touching protected data in unapproved tools.
Grant the lowest tier that still lets the agent do one useful task.
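The tiering above can be sketched as a simple default-deny classifier. This is a minimal illustration, not a real policy engine: the tier names mirror the list above, and the action flags (`customer_visible`, `moves_money`, and so on) are hypothetical field names invented for the example.

```python
from enum import Enum

class RiskTier(Enum):
    READ_ONLY = "low"            # read approved sources, draft output only
    INTERNAL_WRITE = "medium"    # logged internal updates, owner reviews exceptions
    APPROVAL_REQUIRED = "high"   # agent prepares; a named person must approve
    NOT_APPROVED = "blocked"     # never granted, regardless of approvals

def tier_for_action(action: dict) -> RiskTier:
    """Map a proposed action to the lowest tier that covers it.

    Checks run from most to least restrictive, so an action that is
    both internal-write and money-moving lands in the higher tier.
    """
    if action.get("irreversible") or action.get("provisions_access"):
        return RiskTier.NOT_APPROVED
    if (action.get("customer_visible") or action.get("moves_money")
            or action.get("changes_record_of_truth")):
        return RiskTier.APPROVAL_REQUIRED
    if action.get("writes_internal"):
        return RiskTier.INTERNAL_WRITE
    return RiskTier.READ_ONLY
```

The ordering is the point: a check for the most dangerous properties runs first, so a single risky flag pulls the whole action into the stricter tier.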
Where are the human approval points?
Place a required approval before any action that contacts a customer, moves money, changes a contract or legal language, alters a system-of-record field, grants access, or makes an external commitment. The agent can assemble the evidence and the proposed action; the accountable person decides. Approval points are not a lack of trust in the model; they are how an action becomes governable.
What are good first agent use cases?
Start where the agent prepares work that a person finalizes: internal research briefs, drafting from approved sources, triaging and routing inbound work, or assembling a summary across systems for a human decision. These build the audit trail and exception habits before any autonomous write or external action is considered.
Connectors are not governance: the permission envelope
Connecting an agent to a tool through a connector or protocol such as MCP grants access. It does not decide what the agent should be allowed to do with that access. Those are different problems, and the second one is governance. Before any tool is connected, write the agent's permission envelope:
- Actor identity: the agent runs under its own distinct identity, not a shared human login.
- Allowed reads: the specific sources and records it may read, and nothing wider.
- Allowed writes: the specific fields or objects it may change, scoped to one task.
- Approval-required actions: the actions it may prepare but never execute without a named person approving.
- Forbidden actions: the actions it must never take under any condition.
- Logging: every read and write is logged and attributable to the agent identity.
- Pause and revoke owner: the named person who can suspend the agent or pull its access immediately.
The envelope is the unit you govern, not the connector. A connector with no envelope is unbounded access with a friendly name.
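The envelope fields above translate naturally into a small default-deny structure. This is a sketch of the shape, not a reference implementation: the field names follow the list above, and the `check` method is a hypothetical classifier assumed to run before any tool call.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PermissionEnvelope:
    agent_identity: str                # distinct agent identity, never a shared human login
    allowed_reads: frozenset[str]      # specific sources it may read, nothing wider
    allowed_writes: frozenset[str]     # specific fields/objects, scoped to one task
    approval_required: frozenset[str]  # actions it may prepare but never execute alone
    forbidden: frozenset[str]          # actions it must never take under any condition
    pause_owner: str                   # named person who can suspend or revoke access

    def check(self, action: str, target: str = "") -> str:
        """Classify a proposed action as 'allow', 'needs_approval', or 'deny'."""
        if action in self.forbidden:
            return "deny"
        if action in self.approval_required:
            return "needs_approval"
        if action == "read" and target in self.allowed_reads:
            return "allow"
        if action == "write" and target in self.allowed_writes:
            return "allow"
        return "deny"  # default-deny: anything not listed is out of scope

envelope = PermissionEnvelope(
    agent_identity="agent-triage-01",
    allowed_reads=frozenset({"crm.tickets", "kb.articles"}),
    allowed_writes=frozenset({"crm.ticket_tags"}),
    approval_required=frozenset({"send_customer_email"}),
    forbidden=frozenset({"delete_record", "grant_access"}),
    pause_owner="ops.lead",
)
```

Making the envelope frozen and default-deny mirrors the governance point: anything not written down is not permitted, and widening scope means writing a new envelope, not quietly mutating the old one.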
What should not be connected yet?
Do not connect billing, payments, contract systems, production infrastructure, access management, deletion or merge operations, or public communication channels until the workflow is bounded, the permission envelope is written, and logging is in place. Agent identity is a known gap: many organizations cannot reliably distinguish agent actions from human actions, which makes unlogged, broadly permissioned agents hard to audit after the fact.
What not to do
- Do not give an agent broad credentials "to be flexible."
- Do not deploy an agent before the workflow has a trigger, owner, and stop rule.
- Do not allow customer-visible or money-moving actions without a human approval point.
- Do not run an agent under a shared identity with no action attribution.
- Do not skip logging because the pilot is "just a test."
Related field reports
- The Difference Between AI Adoption and AI Deployment
- What Human Review Points Are Needed In AI Workflows?
Where to go next
The AI workflow automation and AI implementation services pages explain how ADA bounds an agent inside a defined workflow. The AI readiness assessment helps check whether identity, permissions, and audit are ready before any tool access. To write a permission envelope for a specific agent use case, request an implementation review.
FAQ
Is an AI agent the same as an AI workflow?
No. The workflow is the bounded process. The agent is an actor inside it that can choose actions and use tools. The workflow must be defined first.
What permissions should an AI agent have?
Only the permissions one task requires. Grant the lowest risk tier that still lets the agent complete its single job, and require approval for risky actions.
When does an agent need human approval?
Before any customer-visible, financial, legal, access-granting, or record-changing action. The agent prepares it; a named person approves it.
Why is agent identity a problem?
Many organizations cannot clearly distinguish agent actions from human actions, which makes unlogged or broadly permissioned agents difficult to audit.
What is a safe first agent use case?
Work where the agent prepares and a person finalizes: research briefs, drafting from approved sources, and triage or routing of inbound work.
References
- Cloud Security Alliance on organizations unable to distinguish AI agent from human actions: https://cloudsecurityalliance.org/articles/more-than-two-thirds-of-organizations-cannot-clearly-distinguish-ai-agent-from-human-actions
- Cloud Security Alliance on the AI agent identity crisis: https://cloudsecurityalliance.org/blog/2026/04/20/who-s-behind-that-action-the-ai-agent-identity-crisis
- NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
- OpenAI ChatGPT agent controls and compliance logging: https://help.openai.com/en/articles/11752874-chatgpt-agent
Research Standard
AI Deployment Authority briefings are built to help operators make deployment decisions, not to summarize the AI conversation.
For new briefings and major updates, we review the search landscape around the topic: current results, common vendor claims, buyer objections, related workflows, and the practical questions the top pages often leave unanswered. We then compare the topic against ADA's workflow framework: trigger, evidence, owner, review point, risk boundary, stop rule, and measurable result. From that review, each briefing sets out:
- What the market usually says
- What operators still need to decide
- Where AI can prepare work safely
- Where a person still needs to review
- What evidence the workflow requires
- What should stop or stay manual
- Which workflow, briefing, or service page should come next
Some pages are more mature than others. We update the library as better examples, stronger source material, and clearer operating patterns become available.