For each task you plan to improve, be clear about what AI will do, what people will do, and how outputs will be checked (Table 1). Start small and be specific.
For example:
- Customer emails – AI drafts the email; people review and approve before sending.
- Support tickets – AI sorts and prioritises tickets; people handle edge cases and exceptions.
- Data – AI extracts and structures key data from documents; people check accuracy and decide what to do next.
- Customer support – AI suggests responses; people select or edit before using.
Any workflow that uses AI needs checkpoints. Choose an oversight pattern that suits the task and decide:
- what triggers a check
- who can escalate issues
- who can pause or stop AI use if needed.
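The three decisions above can be recorded per task in a simple structure. A minimal sketch, assuming hypothetical names (`Checkpoint`, `trigger`, `escalation_contact`, `pause_authority`) that are illustrative only, not part of any standard:

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    """Records the oversight decisions for one AI-assisted task.

    All field names are illustrative; adapt them to your own workflow.
    """
    task: str                # the workflow step the checkpoint covers
    trigger: str             # what prompts a human check
    escalation_contact: str  # who can escalate issues
    pause_authority: str     # who can pause or stop AI use if needed

    def describe(self) -> str:
        # Summarise the checkpoint in one line for documentation or audits.
        return (f"{self.task}: check when {self.trigger}; "
                f"escalate to {self.escalation_contact}; "
                f"{self.pause_authority} may pause AI use")

# Example: the customer-email workflow from the list above.
email_checkpoint = Checkpoint(
    task="Customer emails",
    trigger="every AI draft, before sending",
    escalation_contact="team lead",
    pause_authority="service manager",
)
print(email_checkpoint.describe())
```

Writing checkpoints down like this keeps the who-does-what explicit and makes the decisions easy to review when the workflow changes.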
Table 1: Examples of when and how to check AI outputs
| Oversight pattern | When to use | How it works |
| --- | --- | --- |
| Check before acting (human-in-the-loop) | High business impact | A person approves every AI output before using it |
| Watch while acting (human-on-the-loop) | Medium business impact, high volume | AI acts automatically; a person checks and steps in when needed |
| Spot-check | Low business impact, very high volume | A person reviews a sample of AI outputs |
| Exception handling | Predictable edge cases | AI handles routine cases and flags exceptions for a person’s review |
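The patterns in Table 1 can be sketched as simple routing functions. This is a minimal illustration, assuming a hypothetical `review()` step that stands in for a person checking an output; none of these function names come from any library.

```python
import random

def review(output):
    # Placeholder for a person checking, editing, or approving an output.
    return output

def check_before_acting(outputs):
    # Human-in-the-loop: a person approves every AI output before use.
    return [review(o) for o in outputs]

def spot_check(outputs, sample_rate=0.1, rng=None):
    # A person reviews a random sample; all outputs are still used.
    rng = rng or random.Random()
    for o in outputs:
        if rng.random() < sample_rate:
            review(o)
    return outputs

def exception_handling(outputs, is_exception):
    # AI handles routine cases; flagged exceptions go to a person.
    routine = [o for o in outputs if not is_exception(o)]
    flagged = [review(o) for o in outputs if is_exception(o)]
    return routine + flagged
```

For example, a ticket triage workflow might use `exception_handling(tickets, is_exception=lambda t: t.priority == "urgent")` so that only urgent tickets reach a person while routine ones flow through automatically.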
Why this matters
Clear roles and careful checks by people help prevent errors in AI outputs from scaling. They also make it clear who’s responsible for the final outcome at each step.