A.D.A.


Function: Delivery operations

Quality Assurance Review

Deployment Brief

Start with one pre-client QA checklist. Have AI prepare pass/fail notes, missing evidence, and rework items, then require owner approval before delivery.


Quick Answer

A quality assurance review workflow checks a deliverable against acceptance criteria before the client sees it. AI can compare the work to a checklist, find missing evidence, and draft rework notes, but a person should approve final quality, exceptions, brand-sensitive details, and anything released to a client.


What is quality assurance review?

Quality assurance review is the operating step that checks a finished deliverable against its acceptance criteria and turns the result into a clear action path. The useful version is not a generic summary. It names the source evidence, shows what is missing, identifies the owner, and makes clear which decisions need approval before the work moves forward.

Who is this workflow for?

This workflow is for service businesses, agencies, consultants, implementation teams, support teams, and small internal operations groups where client work moves through multiple people. It is especially useful when requests, status changes, reviews, or handoffs happen across email, Slack, project boards, documents, and calls.

What breaks in the manual process?

Manual delivery operations usually break because the signal is scattered. One person remembers the client promise, another owns the task, and a third sees the blocker too late. The result is delay, rework, unclear ownership, or a customer update that sounds confident but is missing the facts behind it.

The fix is not more reporting for its own sake. The fix is a simple evidence path: what happened, who owns it, what is blocked, what decision is needed, and what should not move without review.

How does the AI-enabled process work?

AI gathers the relevant project, request, ticket, review, or handoff evidence and prepares a structured draft. The draft should be useful enough for an owner to review quickly, but it should not become the final decision-maker. The owner still approves anything involving priority, scope, timing, budget, client expectations, quality release, or escalation.

The workflow should pause when evidence is missing, stale, contradictory, or tied to a customer-visible commitment.
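The pause conditions above can be sketched as a simple guard that runs before AI drafts anything. This is a minimal sketch: the field names on the evidence record and the 14-day staleness window are illustrative assumptions, not part of any specific tool.

```python
from datetime import datetime, timedelta

# Hypothetical staleness threshold; tune to your delivery cadence.
STALE_AFTER = timedelta(days=14)

def should_pause(evidence: dict, now: datetime) -> list[str]:
    """Return the reasons the workflow must pause before drafting output."""
    reasons = []
    if not evidence.get("sources"):
        reasons.append("missing evidence")
    last_updated = evidence.get("last_updated")
    if last_updated is None or now - last_updated > STALE_AFTER:
        reasons.append("stale evidence")
    if evidence.get("contradicts_project_records"):
        reasons.append("contradictory evidence")
    if evidence.get("customer_visible_commitment"):
        reasons.append("customer-visible commitment")
    return reasons  # an empty list means the draft step may proceed
```

Any non-empty result routes the item to the owner instead of producing a draft.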

What does this look like in practice?

Example scenario: A report is ready for client delivery. The workflow checks the statement of work, acceptance criteria, version history, brand rules, data source notes, open comments, and QA checklist. It finds that the executive summary is complete, but two charts lack source labels and one recommendation references an unapproved claim. It prepares a rework list and blocks release until the owner approves the corrected version.

What decision rules should govern this workflow?

  • Review against explicit acceptance criteria, not general quality language.
  • Require evidence links for pass/fail items that matter to the client.
  • Separate defects, preferences, missing evidence, and scope changes.
  • Block release when required criteria, version control, or source evidence is missing.
  • Route final release, exceptions, and subjective quality calls to the QA owner.
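These rules can be expressed as a short evaluation pass over the checklist. The item fields below (`name`, `passed`, `required`, `client_facing`, `evidence_link`) are hypothetical names chosen for illustration; the blocking logic follows the rules above, and the final release call still belongs to the QA owner.

```python
def evaluate_checklist(items: list[dict]) -> dict:
    """Apply the decision rules: pass/fail per item, block release on gaps."""
    failures, missing_evidence = [], []
    for item in items:
        # Require evidence links for items that matter to the client.
        if item["client_facing"] and not item.get("evidence_link"):
            missing_evidence.append(item["name"])
        # A failed required criterion always blocks release.
        if item["required"] and not item["passed"]:
            failures.append(item["name"])
    return {
        "failures": failures,
        "missing_evidence": missing_evidence,
        # Blocked status is a routing signal, not an approval decision.
        "release_blocked": bool(failures or missing_evidence),
    }
```

A blocked result routes to the QA owner with the specific gaps named, which keeps defects, missing evidence, and approval authority separate.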

What are the implementation steps?

1. Trigger: A deliverable moves to internal review, pre-client delivery, milestone approval, final QA, or release-ready status.
2. Inputs collected: Collect the required records, owner notes, client context, current status, and approved rules before AI prepares the output.
3. AI/system action: Summarize the evidence, classify the work, flag missing context, suggest the owner or next step, and prepare the draft output.
4. Human review point: The delivery or QA owner approves final release, exceptions, subjective quality calls, client-facing notes, regulated claims, and any deliverable that fails a required checklist item.
5. Output generated: Create the approved update, triage note, routing recommendation, QA note, or handoff packet.
6. Follow-up or next action: Assign the owner, log the decision, track unresolved blockers, and measure whether the workflow reduced delay or rework.

Required inputs

  • Deliverable link, version, and owner
  • Scope, acceptance criteria, and client requirements
  • QA checklist, brand rules, and compliance requirements
  • Known defects, previous feedback, and unresolved comments
  • Release channel, client approver, and due date
  • Review owner and final release authority

Expected outputs

  • QA review note with pass/fail checklist items
  • Defect list with severity and rework owner
  • Missing-evidence flag
  • Release approval task
  • Measurement log for defects, rework, and approval time
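The outputs above can be carried as one structured record so the QA note, defect list, and approval status travel together. This is a sketch only; the field names are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Defect:
    description: str
    severity: str          # e.g. "blocker", "major", "minor"
    rework_owner: str

@dataclass
class QAReviewNote:
    deliverable: str
    version: str
    checklist_results: dict    # item name -> pass/fail
    defects: list = field(default_factory=list)
    missing_evidence: list = field(default_factory=list)
    release_approved: bool = False   # only the QA owner sets this

    @property
    def release_blocked(self) -> bool:
        """True when any gap exists that must be resolved before release."""
        return (not all(self.checklist_results.values())
                or bool(self.missing_evidence)
                or any(d.severity == "blocker" for d in self.defects))
```

Keeping `release_approved` as a separate, owner-set field makes it impossible to confuse "nothing is blocked" with "the owner approved release."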

Human review point

The delivery or QA owner approves final release, exceptions, subjective quality calls, client-facing notes, regulated claims, and any deliverable that fails a required checklist item.

Risks and stop rules

  • Treating checklist completion as quality approval
  • Missing client acceptance criteria
  • Releasing work with unresolved defects
  • Letting AI judge subjective quality without reviewer signoff
  • Sending the wrong version to the client

Stop the workflow when source evidence is missing, ownership is unclear, status conflicts with project records, a client-visible promise is involved, or the suggested action would change scope, timing, budget, quality release, escalation, or support responsibility.

Best first version

Start with one pre-client QA checklist. Have AI prepare pass/fail notes, missing evidence, and rework items, then require owner approval before delivery.

Advanced version

The advanced version connects the workflow to project records, client records, ticket history, documented rules, owner capacity, and reporting. It can suggest trends and recurring issues, but it still needs approval for decisions that affect a client, a deadline, a price, a scope boundary, or a release.


Measurement plan

  • Defects caught before client delivery
  • Client revision requests
  • Release blocks
  • QA cycle time
  • Rework owner completion
  • Recurring defect categories

What not to automate

  • Do not let AI approve final release.
  • Do not treat brand, legal, compliance, or strategic judgment as fully automated checks.
  • Do not ignore missing evidence because the deliverable sounds polished.
  • Do not overwrite reviewer notes or client comments.

FAQ

What is a quality assurance review workflow?

It checks a deliverable against scope, acceptance criteria, checklist items, and known requirements before the work is sent to a client or stakeholder.

What should AI check in QA review?

AI should check acceptance criteria, missing evidence, version, open comments, defect categories, brand rules, and whether required review steps are complete.

What should stay under human review?

Final release, exceptions, brand-sensitive judgment, regulated claims, client-facing notes, and subjective quality calls should stay under human review.

What is the simplest first version?

Start with a pre-client checklist that produces pass/fail notes, defect list, rework owner, evidence link, and release approval status.

How should QA review be measured?

Track defects caught before delivery, client revisions, QA cycle time, release blocks, recurring defect categories, and rework completion.