AI Guardian – UX Specification
Related Technical Authority: AI Guardian – Technical Specification
1. Purpose
This UX specification governs the staff-facing experience of AI Guardian, the Intelligence Suite module that continuously audits operational signals across Primoro and converts detected gaps or risks into governed, human-owned findings. It defines how Findings are surfaced, actioned, escalated, and resolved by managers and staff, and how the interface keeps AI reasoning transparent and human authority unambiguous. The primary roles it serves are practice managers and clinical team leads who need a clear, auditable view of outstanding operational gaps without being overwhelmed by noise.
2. Core UX Principles (Non-Negotiable)
These principles take precedence over visual preferences. If a design choice conflicts with a principle below, the principle wins.
- Action-first — users see the action they need next, not abstract status displays.
- Governance always visible — when AI is involved, users always know what AI did and what they're confirming.
- No dead toggles — every UI control either does something or doesn't appear.
- Calm by default — the interface gets out of the way; alerts are reserved for things that genuinely need attention.
- Progressive disclosure — advanced detail is one click away, not always-on.
- Human authority is unambiguous — the UI must make it impossible to mistake an AI suggestion for a committed action; every AI-generated output is visually and semantically distinct from a confirmed human action. Inferred from the technical spec's §7 AI Boundaries, which prohibit autonomous resolution of any Finding.
- Closed-loop accountability — no Finding disappears silently; the interface must always show a path to resolution and make it clear when a Finding has not yet been acted upon. Inferred from the technical spec's §3.2 state machine rule that a Finding MUST NOT remain in Detected without an associated action.
- Severity is meaningful — the three-tier severity scale (informational / warning / critical) must be represented consistently and distinctly throughout every surface, so that managers can triage at a glance without opening individual records. Inferred from the technical spec's §3.1 and §13.5 stability contract for the severity scale.
3. Design Philosophy
AI Guardian's mental model is that of a diligent background analyst who flags issues clearly and then waits for a human decision. The interface should feel like a well-organised briefing, not a flood of alerts. Key stances:
Empty states are positive. An empty Findings list is the best possible state — it means no gaps have been detected. The empty state should communicate this explicitly and calmly, not leave the user wondering whether the system is working. Inferred from the module's purpose of continuous audit; if the list is empty, the audit is clean.
Error states name the source. Because Findings are derived from signals across multiple integrated modules, an error state must indicate which upstream source is affected (e.g. Appointment Manager signals unavailable) so that managers can act on the right module, not AI Guardian itself. Inferred from the technical spec's §14 reliability requirement and §15 open question on availability.
AI suggestions are always provisional. Any AI-generated reasoning text, suggested task, or proposed alert is presented in a visually distinct "pending review" treatment until a human explicitly accepts or rejects it. Accepted suggestions are then shown in the standard confirmed-action style. Inferred from the technical spec's §7, which states AI MAY suggest tasks or alert content for human approval before those outputs are committed.
Multi-step flows are linear and reversible where possible. Resolve, Escalate, and Dismiss are consequential actions that require a confirmation step. Dismissal additionally requires a stated reason before the action can be committed. These flows should be short (two steps maximum) and clearly indicate what will happen after confirmation. Inferred from the technical spec's §3.2, §4.2, and §9, which mandate audited, reason-bearing dismissal and manager-only resolve/dismiss/escalate.
Read-only surfaces are clearly signalled. The tablet surface is read-only by design. Any component that appears on tablet but is not actionable there must carry an unambiguous read-only treatment — not a greyed-out button that looks broken, but a deliberate non-interactive presentation. Inferred from the technical spec's §5.2 and §13.2 rule 9.
Undo is not available for resolved or closed Findings. Because the audit trail is immutable and Finding state cannot regress past Action Created, the interface must not offer an undo affordance for resolution or closure. The confirmation step before those actions is the only guard. Inferred from the technical spec's §3.2 enriched rule and §8 immutability requirement.
4. Primary Surfaces
4.1 Web Portal
Who uses it: practice managers and clinical team leads who need full Finding lifecycle management — view, triage, action, escalate, resolve, and dismiss. Inferred from the technical spec's §5.1 and §9, which restrict resolve/dismiss/escalate to manager role or above.
Key tasks performed here:
- Browse and filter the Guardian Findings list by severity, state, owning role, source module, and date range. Inferred from the technical spec's §5.1 and §13.4.
- Open a Finding detail panel to review source signals, AI-generated reasoning, the linked entity (appointment, task, patient, diary day), and the full audit history. Inferred from the technical spec's §5.1.
- Accept or reject an AI-suggested task before it is committed to Task Manager. Inferred from the technical spec's §7, which requires human approval before AI outputs are committed.
- Accept or reject an AI-suggested alert before it is committed to Communication Hub. Inferred from the technical spec's §7.
- Execute Resolve, Escalate, or Dismiss actions on an individual Finding, with mandatory confirmation and (for Dismiss) a mandatory stated reason. Inferred from the technical spec's §5.1, §4.2, and §9.
- View the immutable audit trail for a Finding, including all state transitions, escalation events, and AI suggestion acceptance/rejection events. Inferred from the technical spec's §8.
- Access practice-level configuration (enable/disable AI Guardian, detection thresholds, severity mappings) via Admin Control Plane, reached from a settings entry point in the portal. Inferred from the technical spec's §13.3.
Layout pattern: List-detail. The primary view is a filterable Findings list; selecting a Finding opens a detail panel (split-pane on wide viewports, full-page navigation on narrower breakpoints). Inferred from the technical spec's §5.1 description of a Findings list and a Finding detail panel.
4.2 Tablet App
Who uses it: clinical staff and reception staff during active sessions who need to be aware of critical operational gaps without leaving their primary workflow. Inferred from the technical spec's §5.2, which limits tablet to read-only notifications for critical Findings only.
Key tasks performed here:
- View read-only notifications for critical-severity Findings. Inferred from the technical spec's §5.2.
- Tap a notification to see a summary of the Finding (severity, type, linked entity, owning role) without the ability to take action. Inferred from the technical spec's §5.2 and §13.2 rule 9.
Touch ergonomics: tap targets must be ≥48 px to accommodate glove-friendly use in clinical areas. The read-only constraint means there are no destructive tap targets on this surface, reducing the risk of accidental commitment. Inferred from the technical spec's §5.2 and the platform's clinical context.
Resolution, dismissal, and escalation actions are absent from the tablet surface. If a staff member needs to act on a Finding, the interface should indicate that the action must be taken in the web portal, not present a greyed-out control. Inferred from the technical spec's §5.2 and §13.2 rule 9.
4.3 Mobile App (Patient or Staff)
AI Guardian has no patient-facing surface. No Guardian Finding or underlying signal is exposed to patients on any mobile surface. Inferred from the technical spec's §5.3 and §11.
There is no staff mobile surface defined for AI Guardian in the technical specification. If a future staff-mobile notification surface is added, it would need a new section in both the technical and UX specifications.
4.4 Smart Dashboards Integration (Future Release)
Smart Dashboards (a separate Intelligence Suite module) reserves an integration path for AI Guardian signals. When AI Guardian is enabled alongside Smart Dashboards, Guardian Findings will be surfaced on staff dashboards as AlertSignals — a signal category that Smart Dashboards treats as distinct from standard system-generated operational signals. Inferred from Smart Dashboards §3 and §11, which specify that AI Guardian signals arrive as AlertSignals and must be visually distinguished when the module is active.
The following conventions MUST be observed when this integration is active:
- Guardian AlertSignals on the Smart Dashboards surface must carry the same severity badge treatment (informational / warning / critical) and AI-origin indicator used within AI Guardian itself, so that the platform-wide grammar of AI transparency is consistent across surfaces.
- AlertSignals on a dashboard widget are read-only representations; they are not actionable from within Smart Dashboards. A clear navigation affordance must link from the dashboard widget to the relevant Finding detail panel in the AI Guardian web portal, where action controls are available to authorised managers.
- A dashboard summary widget showing Finding counts by severity and state (consistent with the engagement signals described in §12) serves as the entry point; clicking or tapping any severity tier navigates into the AI Guardian Findings list pre-filtered to that severity. This preserves the action-first principle without duplicating action controls outside their governed surface.
- Because this integration is planned for a future release, detailed widget layout and data-binding specifications will be produced as a joint addendum between the AI Guardian and Smart Dashboards UX specifications. No existing Smart Dashboards component may be extended in a way that bypasses the AI-origin indicator or the confirmation step requirements described in §11 of this specification.
(Needs product decision) The exact release milestone at which this integration is activated, and whether AlertSignals from AI Guardian appear on all dashboard profiles or only manager-role dashboards, must be confirmed before detailed widget design begins.
5. Interaction Model
5.1 Primary Flows
Flow 1 — Triage and action a new Finding (web portal, manager)
Inferred from the technical spec's §5.1, §3.2 state machine, §4.2, §7, and §13.2 rules 2–3.
1. Manager opens Findings list (default view: all active Findings,
sorted by severity descending, then by age ascending).
2. Manager spots a critical Finding in Detected state.
3. Manager selects the Finding → detail panel opens.
4. Panel shows: severity badge, finding type, linked entity,
AI reasoning text, source signal(s), owning role, created
timestamp.
5. AI-suggested task is displayed in "pending review" treatment.
6. Manager reviews reasoning and suggested task.
Branch A — Accept suggested task:
6a. Manager selects Accept.
6b. Confirmation step shown: "Create this task in Task Manager?"
*(needs UX writer input — confirmation modal copy)*
6c. Manager confirms → task committed to Task Manager →
Finding transitions to Action Created.
Branch B — Reject suggested task and create a custom task:
6a. Manager selects Reject → rejection recorded in audit trail.
6b. Manager uses inline task-creation control to define a
custom task → task committed to Task Manager →
Finding transitions to Action Created.
Branch C — Escalate without creating a task:
6a. Manager selects Escalate → confirmation step shown.
*(needs UX writer input — escalation reason/confirmation copy)*
6b. Manager confirms → Finding transitions to Escalated →
escalation event logged with actor and timestamp.
7. Finding no longer appears in the Detected filter; it is now
visible under Action Created or Escalated.
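The default ordering in step 1 (severity descending, then age ascending) can be expressed as a single comparator. A minimal sketch with hypothetical field names — the actual Finding data model lives in the technical spec, and "age ascending" is read here as smallest age (most recently raised) first:

```typescript
// Hypothetical Finding row shape — field names are illustrative, not from the spec.
type Severity = "informational" | "warning" | "critical";

interface FindingRow {
  id: string;
  severity: Severity;
  createdAt: Date;
}

const SEVERITY_RANK: Record<Severity, number> = {
  critical: 2,
  warning: 1,
  informational: 0,
};

// Severity descending (critical first); within equal severity,
// age ascending, read as most recent createdAt first.
function compareFindings(a: FindingRow, b: FindingRow): number {
  const bySeverity = SEVERITY_RANK[b.severity] - SEVERITY_RANK[a.severity];
  if (bySeverity !== 0) return bySeverity;
  return b.createdAt.getTime() - a.createdAt.getTime();
}
```

Keeping the ordering in one comparator means the list, any saved views, and the dashboard entry point (§4.4) cannot drift apart in how they rank Findings.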
Findings linked to Digital Forms submissions
Where a Finding's source signal originates from a submitted Digital Forms record (for example, a flagged inconsistency or AI-surfaced risk indicator within form data), the Finding detail panel MUST make the form-submission origin clear. Inferred from Digital Forms §4.2 and §13.5, which describe AI-surfaced risk indicators and flagged inconsistencies arising from submitted form data, and from the governance-always-visible principle in §2.
The following rules apply to form-linked Findings throughout the triage flow above:
- The linked entity in step 4 MUST identify the specific form submission (form type, submission date, and patient or appointment reference where applicable) rather than displaying only a generic entity label. This ensures managers can contextualise the risk without navigating away from AI Guardian.
- The source signal description within the AI reasoning block MUST describe the nature of the flagged inconsistency or risk in plain language. It must not reproduce raw form field values in a way that could expose personally identifiable data outside its governed context.
- The AI-origin indicator on the reasoning block is especially important for form-linked Findings because the risk detection itself is AI-generated; managers must be able to distinguish the AI's interpretation of the form data from the form data itself. The AI reasoning block and any AI suggestion card carry the standard AI-origin visual treatment described in §6 and §11.
- If the manager needs to inspect the original form submission in full (for example, to verify the AI's interpretation before accepting a suggested task), a secondary navigation affordance to the relevant Digital Forms record MUST be available in the Finding detail panel. This is a secondary, contextual link — not a primary action — and does not leave the manager stranded outside the Finding lifecycle flow.
- No changes to the confirmation or audit trail requirements apply specifically to form-linked Findings; the standard two-step confirmation and mandatory dismissal reason rules from §3 and §11 apply in full.
(Needs product decision) The exact set of form-level signal types that AI Guardian will ingest from Digital Forms, and the mapping of those signal types to Guardian severity tiers, must be confirmed with the Digital Forms team before form-linked Finding labels and reasoning templates can be finalised.
Flow 2 — Resolve a Finding (web portal, manager)
Inferred from the technical spec's §3.2, §4.2, §8, and §9.
1. Manager opens a Finding in In Progress state.
2. Linked Task Manager task is shown as completed (or manager
is reviewing for explicit dismissal).
Branch A — Task-completion resolution:
3a. Task Manager signals task completion → Finding
automatically eligible for resolution.
3b. Manager reviews and selects Resolve.
3c. Confirmation step shown.
*(needs UX writer input — resolve confirmation copy)*
3d. Manager confirms → Finding transitions to Resolved →
audit event logged → related alerts cleared via
Communication Hub.
Branch B — Explicit dismissal:
3a. Manager selects Dismiss.
3b. Dismissal reason field shown (mandatory, free-text with
optional pre-set reasons).
*(needs UX writer input — dismissal reason field labels
and pre-set option copy)*
3c. Manager submits → Finding transitions to Resolved →
dismissal reason and actor recorded in audit trail →
related alerts cleared.
4. Finding moves to Resolved state; it is readable but no
further actions are available.
5. A separate Close action moves the Finding to Closed (final
state; no further transitions permitted).
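The mandatory-reason rule in Branch B is a hard gate, not a soft prompt, so it belongs at the form-validation boundary. A minimal sketch, with hypothetical type and function names (the real payloads are defined by the technical spec):

```typescript
// Hypothetical resolution payloads — names are illustrative.
type ResolutionRequest =
  | { kind: "resolve" }                    // task-completion resolution (Branch A)
  | { kind: "dismiss"; reason: string };   // explicit dismissal (Branch B)

// Returns a user-facing error message, or null when the request
// may proceed to the confirmation step.
function validateResolution(req: ResolutionRequest): string | null {
  if (req.kind === "dismiss" && req.reason.trim().length === 0) {
    return "A dismissal reason is required before this Finding can be dismissed.";
  }
  // Audit recording and the state transition happen only after confirmation.
  return null;
}
```

Validating before the confirmation step keeps the two-step flow short: the manager never reaches a confirmation they cannot legally commit.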
Flow 3 — Reviewing a Finding on tablet (read-only)
Inferred from the technical spec's §5.2 and §13.2 rule 9.
1. Critical Finding notification appears in tablet notification
area.
2. Staff member taps notification → summary view opens:
severity, type, linked entity, owning role.
3. No action controls are present.
4. *(needs UX writer input — copy for "action required in portal"
guidance message shown in read-only view)*
5. Staff member dismisses summary → returns to previous screen.
5.2 State Machines (Mirror of Technical Spec §3.2)
The following state treatments are inferred from the technical spec's §3.2 state machine and §3.1 severity field. Specific colour values are not specified here; semantic colour roles are used.
| State | Visual treatment | Entry condition visible to user | Confirmation required for transition |
|---|---|---|---|
| Detected | High-prominence badge; warning or critical colour role depending on severity | Finding raised by system; no action yet taken | N/A (system-set) |
| Action Created | Active badge; neutral-positive colour role | Task or alert linked to Finding | Confirmation on task acceptance (see Flow 1) |
| In Progress | Active badge; neutral colour role | Linked task is open in Task Manager | No confirmation to enter; transitions out require confirmation |
| Escalated (optional) | Elevated badge; warning colour role | Manager explicitly escalated; escalating actor shown | Confirmation step + optional reason (needs UX writer input) |
| Resolved | Subdued badge; success colour role | Task completed or explicit manager dismissal with stated reason | Confirmation + (for dismissal) mandatory reason field |
| Closed | Archived badge; muted/inactive colour role | Manager explicitly closed a Resolved Finding | Confirmation step (needs UX writer input) |
The transition from Detected to Action Created must not be possible without a task or alert being linked. The UI must prevent the Resolve and Close actions from being reachable on a Finding that has not met the preceding state's entry conditions. Inferred from the technical spec's §3.2 rules and §13.2 rule 2.
A Finding cannot return to Detected from any later state. The UI must not present any control that would enable this. Inferred from the technical spec's §3.2 enriched rule.
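Taken together, the table and the two rules above imply a small forward-only transition map the UI can consult before rendering any action control. The sketch below is illustrative — the state names come from the technical spec's §3.2, but the exact set of legal transitions (particularly into and out of Escalated) is an assumption to be confirmed against that spec:

```typescript
type FindingState =
  | "Detected" | "Action Created" | "In Progress"
  | "Escalated" | "Resolved" | "Closed";

// Illustrative forward-only map: Closed is terminal, and no state
// may ever transition back to Detected.
const ALLOWED: Record<FindingState, FindingState[]> = {
  "Detected":       ["Action Created", "Escalated"],
  "Action Created": ["In Progress", "Escalated"],
  "In Progress":    ["Resolved", "Escalated"],
  "Escalated":      ["Action Created", "In Progress", "Resolved"],
  "Resolved":       ["Closed"],
  "Closed":         [],
};

function canTransition(from: FindingState, to: FindingState): boolean {
  return ALLOWED[from].includes(to);
}
```

If a control's target state fails `canTransition`, the control is absent (never greyed out), consistent with the no-dead-toggles principle in §2.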
5.3 Empty / Loading / Error / Offline States
Inferred from the technical spec's §3, §14 reliability and availability requirements, and the module's continuous-audit purpose.
Findings list — empty state: Displayed when no Findings match the current filter, or when no Findings exist at all. The two cases are distinct: a filter-empty state should offer a clear-filters affordance; a genuinely-empty system state should communicate that the audit is active and no gaps have been detected. Neither state should resemble an error. (needs UX writer input — empty state heading and supporting copy for both cases)
Findings list and detail panel — loading state: A skeleton layout matching the list and detail panel structure is shown while data loads. Spinners are used only for discrete actions (e.g. committing a task), not for full-page loads. Inferred from the platform's progressive-loading convention and the need not to alarm staff with blank screens while audit data resolves.
Findings list — error state: If signal ingestion from one or more source modules is temporarily unavailable, an inline banner identifies which source module is affected and indicates that new Findings from that source may be delayed. Existing Findings already in the system remain fully actionable. The banner must not block the list. (needs UX writer input — error banner copy for partial and total signal-source outages)
Offline state: The web portal requires connectivity; fully offline operation is not supported. If connectivity is lost, an offline state banner is shown and all action controls (Resolve, Dismiss, Escalate, Accept/Reject task) are disabled with a visible explanation. Previously loaded Finding data may remain readable. The tablet notification surface should also show an offline indicator if it cannot reach the service. (needs UX writer input — offline banner copy)
6. Component Inventory
New components introduced or extended by this module:
- Guardian Finding card — compact list-row representation of a single Finding, showing severity badge, finding type, linked entity name, owning role, state badge, and age. Appears in the Findings list. Inferred from the technical spec's §5.1 Findings list requirement and §3.1 Finding fields.
- Finding detail panel — full-detail view of a single Finding, including source signals, AI reasoning text, linked entity, audit history timeline, and action controls (Resolve / Escalate / Dismiss / Accept or reject AI suggestion). Inferred from the technical spec's §5.1.
- AI reasoning block — a visually distinct, read-only block that presents AI-generated reasoning text alongside the source signal(s) that triggered the Finding. Must carry an AI-origin indicator so users know this is AI-generated, not human-entered. Inferred from the technical spec's §7 (AI MAY generate reasoning text) and §13.2 rule 7.
- AI suggestion card — a provisional, visually distinct card presenting an AI-suggested task or alert for human review, with Accept and Reject controls. Becomes a standard confirmed-action record after acceptance. Inferred from the technical spec's §7 (AI MAY suggest tasks/alerts for human approval).
- Dismissal reason modal — a two-step confirmation modal requiring a stated reason (mandatory free-text or pre-set selection) before a Dismiss action is committed. Inferred from the technical spec's §4.2, §8, and §9.
- Finding state badge — a compact, colour-coded label showing the current state of a Finding (Detected / Action Created / In Progress / Escalated / Resolved / Closed). Used on both the card and the detail panel. Inferred from the technical spec's §3.2 state machine.
- Severity badge — a consistent, colour-coded label for informational / warning / critical severity, used wherever a Finding is shown. The three-tier scale is a stable contract surface. Inferred from the technical spec's §3.1 and §13.5.
- Findings filter bar — a persistent filter control above the Findings list supporting filter by severity, state, owning role, source module, and date range. Includes a saved-views affordance. Inferred from the technical spec's §5.1 and §13.4.
- Audit history timeline — a chronological, read-only list of all state transitions, escalation events, task linkage events, and AI suggestion acceptance/rejection events for a single Finding, shown within the detail panel. Inferred from the technical spec's §8.
- Tablet Finding notification — a read-only notification card for critical Findings, showing severity, type, linked entity, and owning role. No action controls. Inferred from the technical spec's §5.2.
Reused from the design system:
- Confirmation modal (used for Resolve, Escalate, Close, and Accept-task flows)
- Inline banner (used for signal-source error states and offline state)
- Toast notification (used for action-committed confirmations — see §12)
- Filter/search bar (extended for Finding-specific filter dimensions)
- Skeleton loader
- Role/permission indicator in header (surfacing current user's role, per governance principle)
7. Visual Design Notes
- Typography: heading scale, body scale, and monospace usage follow the platform design system. Monospace is appropriate for Finding IDs and audit log timestamps.
- Colour: semantic colour roles only — success (Resolved/Closed states), warning (Detected/Escalated states, warning severity), error/critical (critical severity), info (informational severity), neutral (In Progress/Action Created states). Specific hex values are not specified here.
- Severity must always be communicated with both colour and a text label or icon — never by colour alone. This is required for accessibility (colour-blind users) and because severity is a primary triage signal. Inferred from the §3.1 severity field importance and WCAG 2.2 AA requirements.
- The AI reasoning block and AI suggestion card must use a visually distinct treatment (for example, a consistent "AI" badge and a differentiated background or border) that persists at all zoom levels and in high-contrast mode. Inferred from the technical spec's §7 and the governance-always-visible principle.
- Iconography: icon set and sizing follow the platform design system. Icons are never used alone without a visible label or programmatic accessible name.
- Motion: transitions are used sparingly. State-change transitions (e.g. a Finding card updating its badge after an action) may use a brief, purposeful transition. Nothing is animated purely for decoration. All motion must respect prefers-reduced-motion.
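The never-colour-alone rule for severity can be encoded directly in the badge component's data, so every surface pairs a semantic colour role with a visible text label. A sketch with illustrative role and property names (concrete values belong to the design system):

```typescript
type Severity = "informational" | "warning" | "critical";

// Semantic colour roles only — hex values live in the design system tokens.
// Every badge pairs a colour role with a visible text label (WCAG 1.4.1).
const SEVERITY_BADGE: Record<Severity, { colorRole: string; label: string }> = {
  informational: { colorRole: "info",     label: "Informational" },
  warning:       { colorRole: "warning",  label: "Warning" },
  critical:      { colorRole: "critical", label: "Critical" },
};

function badgeProps(severity: Severity) {
  const { colorRole, label } = SEVERITY_BADGE[severity];
  // The aria-label restates the tier so assistive technology announces
  // severity even if the visible label were ever compacted to an icon.
  return { colorRole, label, "aria-label": `Severity: ${label}` };
}
```

Because every consumer (Finding card, detail panel, tablet notification, dashboard widget) reads the same record, the three-tier scale cannot drift between surfaces.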
8. Accessibility & Inclusivity
The module MUST meet WCAG 2.2 AA. Specifically:
- Text contrast ≥4.5:1 (normal) / ≥3:1 (large).
- All interactive controls reachable via keyboard.
- Focus states visible.
- Form fields have programmatic labels.
- ARIA used only where native semantics are insufficient.
- Touch targets ≥44×44 px on mobile/tablet (the tablet surface applies the stricter ≥48 px target from §4.2).
- Motion can be reduced via prefers-reduced-motion.
- Screen reader tested on NVDA (Windows), VoiceOver (macOS/iOS), and TalkBack (Android).
- Severity and state information must not be conveyed by colour alone. Each severity badge and state badge must include a visible text label. Inferred from WCAG 2.2 AA success criterion 1.4.1 (Use of Colour) and the centrality of severity triage to this module's purpose.
- The AI reasoning block and AI suggestion card must be announced appropriately by screen readers, including their AI-origin status, so that screen reader users have the same governance transparency as sighted users. Inferred from the governance-always-visible principle and WCAG 2.2 AA.
- The dismissal reason field (mandatory for Dismiss actions) must have a clear programmatic label and, where a character minimum or maximum applies, this must be surfaced to assistive technology as well as visually. Inferred from the technical spec's §4.2 and §9 mandatory-reason requirement.
- The audit history timeline must be navigable via keyboard and announced in logical order by screen readers, so that governance reviewers using assistive technology can audit a Finding fully. Inferred from the technical spec's §8 audit requirements.
9. Internationalisation
- Locale-aware date/time/number formatting throughout. Audit timestamps and Finding ages must respect the practice's configured locale.
- All user-facing strings externalised to localisation files. This includes AI-reasoning block labels, severity and state badge labels, and all action button labels.
- Layouts tolerant of 30% string-length growth (to accommodate German, French, and other languages).
- RTL support: required, to be consistent with the platform standard.
- The dismissal reason free-text field must accept Unicode input to support multilingual reason entries. Inferred from the technical spec's §4.2 mandatory dismissal reason and the platform's multilingual context.
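Locale-aware formatting of audit timestamps and Finding ages can lean on the standard Intl APIs rather than hand-rolled formats. A minimal sketch — the function names are illustrative, and the practice's configured locale is assumed to be passed in:

```typescript
// Format an audit timestamp for the practice's configured locale.
function formatAuditTimestamp(when: Date, locale: string): string {
  return new Intl.DateTimeFormat(locale, {
    dateStyle: "medium",
    timeStyle: "short",
  }).format(when);
}

// Express a Finding's age relative to now (e.g. "2 days ago"),
// again respecting the configured locale.
function formatFindingAge(ageInDays: number, locale: string): string {
  return new Intl.RelativeTimeFormat(locale, { numeric: "auto" })
    .format(-ageInDays, "day");
}
```

Using Intl keeps date order, separators, and relative-time phrasing correct per locale without shipping per-language format strings in the localisation files.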
10. Cross-Module UX Touchpoints
All touchpoints inferred from the technical spec's §6 integration contracts and §10 integration summary.
- Task Manager — when a manager accepts an AI-suggested task or creates a custom task from a Finding, they are creating a record in Task Manager. The transition should feel seamless: the task appears linked within the Finding detail panel, and the Finding state updates to Action Created. When a linked task is completed in Task Manager, the Finding becomes eligible for resolution in AI Guardian. The user should not need to navigate away from AI Guardian to observe this status change; Task Manager completion events should be reflected in the Finding detail panel in near-real time.
- Communication Hub — when a manager accepts an AI-suggested alert, it is committed to Communication Hub for delivery. The Finding detail panel should confirm that the alert was dispatched and by whom. When a Finding is resolved or closed, related alerts are cleared; this clearance should be reflected in the Finding's audit trail.
- Access Manager — role-gating is enforced throughout. Staff without manager-level access will see Findings (per practice configuration) but will not see Resolve, Dismiss, or Escalate controls at all — those controls are absent rather than disabled. The current user's role is visible in the portal header. MFA prompts (where Access Manager mandates them, e.g. for bulk dismissal of critical Findings) appear as Access Manager–owned overlays.
- Audit & Compliance — the audit trail within each Finding detail panel is the staff-facing representation of the immutable log that AI Guardian emits to Audit & Compliance. A link or affordance to the fuller Audit & Compliance view (for export or compliance inspection) should be available from the Finding detail panel for users with appropriate access.
- Financial Insights — AI Guardian consumes aggregated signals from Financial Insights but does not link out to Financial Insights surfaces directly. Where a Finding's linked entity or source signal references a financial anomaly, the Finding detail panel should describe the signal in plain language without embedding Financial Insights UI components.
- Aftercare Manager — Findings sourced from Aftercare Manager signals are presented identically to other Findings, but should display the source module clearly in the Finding card and detail panel so that the owning team can contextualise the risk. When AI Guardian is enabled, Aftercare Manager surfaces relevant Guardian Findings alongside Aftercare Instruction records within its own detail view (per Aftercare Manager §4.1). From the AI Guardian perspective, this means that a manager viewing a Finding whose linked entity is an Aftercare Instruction record should find a secondary navigation affordance to that Aftercare Instruction in the Finding detail panel, consistent with the same pattern used for appointment-linked Findings. The Finding remains the authoritative actionable artefact; the Aftercare Instruction view is secondary context. Escalation flows initiated from within Aftercare Manager's interface are handled by the same Guardian escalation mechanics described in §5.1 Flow 1, Branch C, and the resulting audit events are recorded in AI Guardian's audit history timeline in the same way regardless of which surface initiated the escalation.
- AI Quality Monitor — quality findings ingested from AI Quality Monitor are treated as operational signals that produce Guardian Findings. The source is identified in the Finding detail panel. Staff should understand that the Guardian Finding is the actionable artefact, and that the originating AI Quality Monitor output is traceable through the source signals field.
- Admin Control Plane — practice-level Guardian configuration (enable/disable, detection thresholds, severity mappings) is accessed via Admin Control Plane. The entry point from the AI Guardian web portal should be a clear settings or configuration affordance that navigates into Admin Control Plane rather than hosting configuration UI within AI Guardian itself.
- Appointment Manager — Findings sourced from appointment events display the linked appointment as the linked entity in the Finding detail panel. Navigation to the appointment record in Appointment Manager (for context) should be available as a secondary action, not the primary focus of the Finding view.
- Digital Forms — Findings whose source signal originates from a submitted Digital Forms record (for example, a flagged inconsistency or AI-surfaced risk indicator within form data) display the relevant form submission as the linked entity, identifying the form type, submission date, and associated patient or appointment reference. A secondary navigation affordance to the original Digital Forms record is available in the Finding detail panel for managers who need to verify the AI's interpretation against the raw submission. This link is contextual and does not transfer action responsibility outside AI Guardian; all resolution, dismissal, and escalation actions remain within AI Guardian's governed workflow. The source module label for these Findings MUST clearly identify Digital Forms so that managers can orient the risk without ambiguity.
- Smart Dashboards — when Smart Dashboards is active alongside AI Guardian (see §4.4), Guardian Findings are surfaced on staff dashboards as AlertSignals. The source-module label and severity badge used on dashboard widgets must match the treatment defined in this specification, maintaining a consistent visual grammar of AI transparency across surfaces. Dashboard widgets are read-only; the primary action path always routes back to the AI Guardian web portal.
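To make the labelling rules above concrete, the data a Finding card and detail panel would need could be sketched as follows. This is a non-normative illustration: every type and field name here is hypothetical and not part of any defined contract.

```typescript
// Hypothetical view-model for a Finding card; names are illustrative only.
type SourceModule =
  | "Aftercare Manager"
  | "AI Quality Monitor"
  | "Appointment Manager"
  | "Digital Forms";

type Severity = "low" | "medium" | "high" | "critical";

interface LinkedEntity {
  module: SourceModule;        // where the linked record lives
  recordId: string;            // opaque identifier of the linked record
  label: string;               // human-readable label, e.g. form type + submission date
}

interface FindingCard {
  id: string;
  sourceModule: SourceModule;  // always displayed, on card and in detail panel
  severity: Severity;
  summary: string;
  linkedEntity?: LinkedEntity; // secondary navigation affordance, never the primary action
}

// Because the source-module label is mandatory, rendering can rely on it:
function cardHeading(card: FindingCard): string {
  return `${card.sourceModule} · ${card.severity.toUpperCase()}`;
}
```

The optional `linkedEntity` mirrors the rule that entity navigation is contextual: the Finding stays the actionable artefact whether or not a linked record exists.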
UX consistency rules:
- Action controls (Resolve, Escalate, Dismiss, Accept, Reject) are always located in a consistent position within the Finding detail panel — bottom or trailing edge on web. On tablet, no action controls appear at all. Inferred from the platform-wide convention and the technical spec's §5.1–5.2 surface definitions.
- The source module of every Finding is always displayed — on the card and in the detail panel — so that staff can orient the Finding within their operational context without guesswork. Inferred from the technical spec's §13.4 source-module filter requirement.
- AI-origin indicators (badge + visual treatment for AI reasoning blocks and AI suggestion cards) are used consistently across AI Guardian and any other Intelligence Suite module that surfaces AI-generated content, to build a platform-wide grammar of AI transparency.
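The consistency rules above imply that which action controls render is a pure function of role and surface: tablet always yields an empty set, and web yields a role-dependent set. A minimal sketch, assuming hypothetical role names and an illustrative role-to-action mapping (the real mapping is governed by the technical spec, not this sketch):

```typescript
type Surface = "web" | "tablet";
type Role = "practice-manager" | "clinical-lead" | "staff";
type FindingAction = "Resolve" | "Escalate" | "Dismiss" | "Accept" | "Reject";

// On the tablet surface the Finding view is read-only: no controls render at all
// (absence of controls, not disabled buttons). On web, visibility depends on role.
function visibleActions(role: Role, surface: Surface): FindingAction[] {
  if (surface === "tablet") return [];
  switch (role) {
    case "practice-manager":
      return ["Resolve", "Escalate", "Dismiss", "Accept", "Reject"];
    case "clinical-lead":
      return ["Resolve", "Escalate", "Accept", "Reject"];
    default:
      return []; // in this sketch, other staff view Findings read-only on web too
  }
}
```

Keeping this a single pure function also satisfies the no-dead-toggles principle: a control either appears in the returned set or is never rendered.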
11. Governance & Auditability
All items below are inferred from the technical spec's §7, §8, §9, and the technical spec's §3.2 state machine rules.
- AI suggestions (reasoning text, suggested tasks, suggested alerts) are visually distinct from human-confirmed actions at all times. The AI reasoning block and AI suggestion card use a dedicated AI-origin visual treatment (badge plus differentiated surface) that is never reused for human-entered content.
- Every audit-significant action — Accept task, Reject task, Resolve, Dismiss, Escalate, Close — is preceded by a confirmation step that shows the user what will be recorded in the audit trail. The confirmation step is not skippable.
- Dismissal requires a stated reason before the confirmation step is reachable. The reason is shown in the audit history timeline after submission.
- The current user's role and identity are visible in the portal header at all times, consistent with the platform governance standard. This is especially important in AI Guardian because role determines which action controls are visible.
- Read-only states are visually distinct from editable states. On the tablet surface, the entire Finding view is read-only; this is communicated by the absence of action controls and a clear read-only indicator, not by disabled buttons.
- The audit history timeline within the Finding detail panel is always visible (not hidden behind a disclosure) for Findings in Resolved or Closed states, so that compliance reviewers can inspect the full lifecycle without extra navigation.
- All AI suggestion acceptance and rejection events appear in the audit history timeline, including the actor identity and timestamp, so that the governance record shows not just what was done but what AI proposed and whether the human agreed.
- Deletion of Findings is not available in the UI. The interface must not present a delete or remove control at any point in the Finding lifecycle.
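To ground the governance rules above, an audit-history entry could carry roughly the following fields. This is a sketch assuming a flat event record; the field names are assumptions, not the technical spec's actual schema.

```typescript
type AuditAction =
  | "AcceptTask" | "RejectTask" | "Resolve" | "Dismiss" | "Escalate" | "Close";

interface AuditEvent {
  findingId: string;
  action: AuditAction;
  actorId: string;     // always a human actor; AI never commits an action itself
  actorRole: string;
  occurredAt: string;  // ISO-8601 timestamp
  aiProposed: boolean; // true when the action accepted or rejected an AI suggestion
  reason?: string;     // mandatory for Dismiss, per the rule above
}

// A dismissal without a stated reason must never reach the audit trail:
function validateAuditEvent(e: AuditEvent): boolean {
  if (e.action === "Dismiss" && (!e.reason || e.reason.trim() === "")) return false;
  return true;
}
```

Note the `aiProposed` flag: recording it per event is one way to satisfy the requirement that the governance record shows what AI proposed and whether the human agreed.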
12. Notification & Communication Patterns
All patterns below are inferred from the technical spec's §5, §6.2, §13.2 rule 5, and §13.2 rule 10.
- In-app banner — used on the web portal to communicate signal-source availability issues (e.g. a source module temporarily unavailable, potentially delaying new Finding detection) and to communicate the offline state. Banners are non-blocking and dismissible where the condition is informational; persistent and non-dismissible where the condition affects the user's ability to act.
- Toast — used on the web portal to confirm completed actions: task committed to Task Manager, alert dispatched via Communication Hub, Finding resolved, Finding closed. Toasts are transient (auto-dismiss after a short interval) and non-blocking. They are not used for critical Findings that require human attention — those use the Findings list itself.
- Push notification (via Communication Hub — NOT directly) — AI Guardian does not send push notifications directly. All staff alert and summary notifications are emitted to Communication Hub as outbound events, and Communication Hub owns delivery to the appropriate channel. The AI Guardian UX has no direct notification composition surface for push. Inferred from the technical spec's §6.2 outbound contract and §13.2 rule 5.
- Email / SMS (via Communication Hub — NOT directly) — As with push, email and SMS alerts are committed to Communication Hub as structured outbound events. AI Guardian staff-facing UI shows the status of dispatched alerts (sent, acknowledged) within the Finding detail panel, but does not compose or send email/SMS directly. Inferred from the technical spec's §6.2 outbound contract.
- Tablet notifications — critical-severity Findings are surfaced as read-only notifications on the tablet surface. These are in-app notifications delivered within the Primoro tablet app, not operating-system push notifications, and they are populated by the same Communication Hub outbound event path. Inferred from the technical spec's §5.2 and §6.2.
- Dashboard engagement signals — Finding counts by severity and state are surfaced as summary indicators on staff dashboards. SLA breach records (where configured) are surfaced as engagement signals for practice managers. These are passive informational displays, not alerts, and follow the calm-by-default principle. Inferred from the technical spec's §5.4 engagement signals.
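Because AI Guardian never sends notifications directly, every staff alert in the patterns above reduces to a structured outbound event handed to Communication Hub, which owns channel selection and delivery. A hedged sketch of such an event follows; all field names are assumptions, and the real contract is the technical spec's §6.2.

```typescript
type Channel = "push" | "email" | "sms" | "tablet-in-app";
type DispatchStatus = "sent" | "acknowledged";

// Illustrative outbound event shape; Communication Hub owns actual delivery.
interface GuardianOutboundEvent {
  eventType: "staff-alert" | "summary";
  findingId: string;
  severity: "low" | "medium" | "high" | "critical";
  requestedChannels: Channel[];
  body: string;
}

// Guardian's UI only reflects the dispatch status reported back by
// Communication Hub in the Finding detail panel; it never composes or sends.
function statusLabel(status: DispatchStatus): string {
  return status === "acknowledged" ? "Acknowledged" : "Sent";
}
```

This one-way shape is the point: the Guardian UX has no composition surface, only a read-only status reflection of what Communication Hub reports.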
13. Open Questions
The following UX decisions must be resolved before this spec is promoted from draft to published.
- (Needs product and UX writer input) Empty state copy and illustration — what is the correct tone and message for a genuinely clean Findings list (no gaps detected)? This state is positive but the copy must not be complacent or suggest the module is inactive.
- (Needs product decision + UX writer input) Escalated state trigger and UI — the technical spec marks Escalated as optional and does not fully define the conditions under which a Finding may be escalated, or the full escalation flow (does escalation notify another role? does it require a reason?). The UX for this state cannot be fully specified until the product decision in Technical Spec §15, open question 6 is resolved.
- (Needs product decision) Recall and follow-up signals — the technical spec lists recall and follow-up logic as a signal source (§4.1) but does not attribute it to a named module. Until the owning module is confirmed and the integration contract is defined, the source module label for these Findings in the UI cannot be finalised.
- (Needs product decision + legal/privacy input) GDPR erasure in the audit timeline — if a patient exercises a right-to-erasure request, how should Findings whose audit trail references that patient appear in the UI? The tension between immutable audit and erasure rights (Technical Spec §15, open question 4) must be resolved before the Finding detail panel and audit history timeline can be fully specified for this case.
- (Needs product decision) Detection threshold configuration UX — the technical spec states that thresholds are configurable per signal type via Admin Control Plane (§13.3), but does not define what configuration options exist, what the defaults are, or how many threshold parameters there are. The configuration surface within Admin Control Plane cannot be designed until these defaults and options are defined (Technical Spec §15, open question 5).
- (Needs product decision) Saved views scope — the technical spec states that saved views are configurable per user (§13.4). Should saved views be private to the individual user, shareable within a practice, or both? This affects the filter-bar component design.
- (Needs product decision + legal input) Data retention and visible history — how long do resolved and closed Findings remain visible in the Findings list versus being archived or hidden by default? Until the retention policy is defined (Technical Spec §15, open question 3), the default list scope and archive/retrieval UX cannot be specified.
- (Needs product decision) Bulk actions — the technical spec references MFA requirements for bulk dismissal of critical Findings (§9), implying that bulk actions are anticipated. The UX for bulk selection, bulk dismissal (with a shared or per-Finding reason?), and the MFA confirmation flow has not been specified and requires a product decision before it can be designed.
- (Needs engineering clarification) Near-real-time Task Manager status reflection — Flow 1 and Flow 2 above assume that Task Manager task completion is reflected in the Finding detail panel without requiring a page reload. The latency and mechanism for this update need to be confirmed by engineering before the interaction model for that step can be finalised.
- (Needs product decision) Smart Dashboards AlertSignal release milestone and role scope — as noted in §4.4, the release milestone at which AI Guardian signals are activated within Smart Dashboards, and whether AlertSignals appear on all dashboard profiles or only manager-role dashboards, must be confirmed before detailed widget design can begin.
- (Needs product decision) Digital Forms signal type mapping — the specific form-level signal types that AI Guardian will ingest from Digital Forms, and the mapping of those signal types to Guardian severity tiers, must be agreed with the Digital Forms team before form-linked Finding labels and AI reasoning templates can be finalised.