First Workflow Selection Rubric
A rubric for choosing the first AI workflow without chasing the loudest idea or most exciting demo.
Direct Answer
The best first workflow is frequent, valuable, easy to review, low enough risk to test safely, and close to a bottleneck the business already feels.
How to use this
- Step 1: Write down every candidate workflow in one sentence.
- Step 2: Remove items that are actually broad goals, tools, or departments rather than single workflows.
- Step 3: Score each remaining candidate on frequency, value, reviewability, evidence, risk, and time to value.
- Step 4: Choose a first workflow that is useful, boring, and reviewable.
- Step 5: Create a backlog for the exciting ideas that need better evidence or ownership.
Selection rubric
- Frequency: Does the workflow happen weekly or more often?
- Value: Would improvement reduce delay, missed revenue, rework, customer friction, or owner time?
- Reviewability: Can a human owner judge the output quickly?
- Evidence: Does the source material already exist?
- Risk: Can the first version stay away from final decisions, legal exposure, and customer commitments?
- Time to value: Can the first useful version be tested in 30 days?
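The rubric above can be sketched as a simple scoring pass. This is a minimal illustration, not a prescribed tool: the 0-2 scale, the `Candidate` shape, and the example workflow names are all assumptions; note that a higher "risk" score here means the workflow answers the risk question with a clearer "yes, it can stay away from final decisions."

```python
from dataclasses import dataclass
from typing import Dict, List

# Criterion names mirror the rubric above; the 0-2 scale
# (0 = no, 1 = partly, 2 = clearly yes) is an assumption for illustration.
CRITERIA = ["frequency", "value", "reviewability", "evidence", "risk", "time_to_value"]

@dataclass
class Candidate:
    name: str
    scores: Dict[str, int]  # criterion -> 0, 1, or 2

    def total(self) -> int:
        return sum(self.scores[c] for c in CRITERIA)

def rank(candidates: List[Candidate]) -> List[Candidate]:
    """Highest rubric total first; ties keep input order (stable sort)."""
    return sorted(candidates, key=lambda c: c.total(), reverse=True)

# Hypothetical example: a reviewable, evidence-rich workflow outranks a risky one.
routing = Candidate("lead routing summary",
                    dict(frequency=2, value=2, reviewability=2,
                         evidence=2, risk=2, time_to_value=2))
pricing = Candidate("autonomous pricing",
                    dict(frequency=1, value=2, reviewability=0,
                         evidence=1, risk=0, time_to_value=1))
print(rank([pricing, routing])[0].name)  # lead routing summary
```

A spreadsheet does the same job; the point is only that every candidate gets scored on the same six questions before anyone argues for a favorite.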
Worksheet prompts
- Good first workflow: Frequent, evidence-rich, reviewable, close to a bottleneck, and valuable enough to measure.
- Weak first workflow: Rare, political, data-poor, high-risk, or dependent on a large system migration.
- Best early AI role: Preparing work for a human owner (summarize, classify, compare, check, draft, route, or flag).
- Worst early AI role: Final decision-maker for pricing, legal language, customer promises, account ownership, or sensitive data actions.
- Selection meeting output: One selected workflow, one owner, one baseline metric, and one reason the next two workflows are not first.
- Decision rule: If two workflows tie, choose the one with cleaner evidence and lower customer impact.
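The decision rule above can be made explicit. A minimal sketch, assuming the 0-2 rubric scores from the scoring step, where a higher "risk" score means the workflow more clearly avoids customer impact; the dict keys and example names are hypothetical.

```python
def pick_first(a: dict, b: dict) -> dict:
    """Return the workflow to run first. On a tied total, prefer cleaner
    evidence; if evidence also ties, prefer the safer (higher) risk score."""
    if a["total"] != b["total"]:
        return a if a["total"] > b["total"] else b
    if a["evidence"] != b["evidence"]:
        return a if a["evidence"] > b["evidence"] else b
    return a if a["risk"] >= b["risk"] else b

# Hypothetical tie: equal totals, but lead routing has cleaner evidence.
lead = {"name": "lead routing", "total": 9, "evidence": 2, "risk": 2}
brief = {"name": "reporting briefs", "total": 9, "evidence": 1, "risk": 2}
print(pick_first(lead, brief)["name"])  # lead routing
```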
What most teams choose instead
Teams often choose the most visible AI demo. That usually creates excitement, not operating value. A first workflow should be boring enough to run and important enough to matter.
ADA's default recommendation
Start with a workflow where AI prepares work for review: lead routing, proposal checks, onboarding missing-item review, reporting briefs, or support escalation summaries.
What should be parked
Park workflows that require final judgment, sensitive data, legal approval, pricing decisions, or customer-facing promises before the review process exists.
Quality bar
- The first workflow can be explained without mentioning a model name.
- The selected workflow creates an observable before/after difference.
- A human owner can review normal cases and edge cases.
- The first test can run with real work within 30 days.
- The chosen workflow teaches the company how to deploy the next one.
Research basis
- NIST AI RMF Playbook: Supports mapping intended use, context, measurement, and risk management before rollout.
- ISO/IEC 42001: Frames AI as a management system with policies, objectives, processes, and continuous improvement.
- OECD AI Principles: Supports oversight and safeguards appropriate to the business context.
FAQ
- Why does the first workflow matter so much?: The first workflow sets the operating pattern for AI: evidence, owner, review point, stop rule, and metric.
- Should we start with the biggest problem?: Not always. Start with a meaningful problem that can be reviewed safely and tested quickly.
- What is a poor first workflow?: A poor first workflow is high-risk, hard to review, data-poor, or owned by nobody.