AI Quality Monitor – UX Specification

Related Technical Authority: AI Quality Monitor – Technical Specification

1. Purpose

This UX specification governs the staff-facing, clinician-facing, and patient-facing surfaces of the AI Quality Monitor module — a governed quality-assurance and clinical decision-support system that surfaces findings, draft outputs, and remediation prompts for human review. It defines the interaction model, surface breakdown, and governance-visibility patterns for the roles that consume and act on the module's outputs: practice owners, clinical governance leads, clinicians, and compliance staff. The spec ensures that every AI-generated output is visually distinguishable, review-first, and auditable, and that no surface enables the module to be used as a performance-surveillance tool.

2. Core UX Principles (Non-Negotiable)

These principles take precedence over visual preferences. If a design choice conflicts with a principle below, the principle wins.

  • Action-first — users see the action they need next, not abstract status displays.
  • Governance always visible — when AI is involved, users always know what AI did and what they're confirming.
  • No dead toggles — every UI control either does something or doesn't appear.
  • Calm by default — the interface gets out of the way; alerts are reserved for things that genuinely need attention.
  • Progressive disclosure — advanced detail is one click away, not always-on.
  • Review-first, always — no AI-generated finding, draft, or summary is ever presented as final or committed without an explicit human action. The interface must make the review step unavoidable and never skippable. Inferred from the technical spec's §3.4 rule that a Draft Output Artefact "MUST NOT be auto-finalised or auto-committed under any circumstance" and §7 AI Boundaries.
  • Anti-surveillance by design — team-level aggregations are the default view; named individual data is never surfaced to general staff dashboards. No screen may imply a performance-ranking or individual-scoring purpose. Inferred from the technical spec's §3.2 prohibition on individual performance scoring and §9 access table.
  • Explainability at every finding — the UI always surfaces the reason a finding was generated, in plain language, before asking the user to act on it. Inferred from the technical spec's §14.2 rule 1 and the ExplainabilityNote field in §3.1.

3. Design Philosophy

AI Quality Monitor operates mostly in the background; the interface should reflect this. The dominant mental model is an exception-and-review inbox: the system does work silently and only surfaces items that genuinely require a human decision. Inferred from the technical spec's §1 statement that the module "operates in the background."

Empty states are positive — an empty findings list or an empty draft-review queue means the system has found nothing requiring attention, and this should be communicated as a reassuring signal rather than a broken state. Inferred from the module's background-operation posture and the fact that no finding is generated without a derived signal trigger (§5.1).

Error states must never obscure governance obligations — if a finding or draft cannot be loaded, the UI must explain that an item exists and is inaccessible (rather than silently disappearing it), so that governance completeness is not undermined. Inferred from the immutable audit-trail requirement in §3.1 and §8.

AI suggestions are always labelled and distinct — every AI-generated finding, draft, or suggestion carries a persistent visual marker indicating its AI origin, the derived signal type that triggered it, and its current review state. This is not a badge that disappears after first view. Inferred from the ExplainabilityNote requirement (§3.1) and the governance-visibility principle in §7.

Read-only and editable surfaces are visually unambiguous — evidence clips are always read-only and must never resemble an editable transcript. Draft output artefacts in Pending or InReview state are editable; in Accepted, Rejected, or Expired states they are read-only and archived. Inferred from §5 evidence access rules and §3.4 state machine.

Multi-step flows for irreversible actions — dismissing a finding, rejecting a draft, and approving a patient-safe summary for release to the patient app are all irreversible in their downstream effects and each requires a confirmation step with a mandatory reason or explicit acknowledgement. Inferred from §3.2 rule that Dismissed requires a mandatory reason field and §14.2 rule 12 on patient-safe summary release.

No undo for committed drafts — once a draft is accepted or edited and committed to the PMS workflow, the UI must communicate that the action is final and direct the user to the PMS for any subsequent correction. Inferred from §11.3 PMS boundary rule that "AI Quality Monitor owns the draft; the PMS owns the record."

4. Primary Surfaces

4.1 Web Portal

Who uses it: practice owners, clinical governance leads, designated compliance roles, and authorised clinical/operational staff. Inferred from the §12.1 surface description and §9 access control table.

Key tasks performed here:

  • View the quality dashboard: zone summaries, finding lists, and telemetry indicators, filtered by zone, finding type, state, and date range. Inferred from §12.1 and §14.4 filtering requirements.
  • Open and triage quality findings — read the explainability note, review context linkage, and progress a finding to Remediated, Resolved, or Dismissed (with mandatory reason). Inferred from §3.2 state machine and §12.1.
  • Access evidence clips for findings that require human validation, restricted to governance-authorised roles only, with MFA gate on access. Inferred from §5.3 and §9 access table, and the enriched MFA note in §9.
  • Review and action the draft output review queue — clinical summaries, treatment plan structures, care plan suggestions, hygiene plan suggestions, and patient-safe summaries — with accept, edit, or reject disposition. Inferred from §12.1 and §3.4.
  • Configure zones and signal types: enable or disable zones, activate or deactivate individual signal types per zone, with all changes audited. Restricted to practice owner and compliance roles. Inferred from §4.1 and §9.
  • View the immutable audit log and export it for CQC, DSPT, or UK GDPR inspection purposes. Inferred from §8 and §9 read-only audit log access row.
  • Manage evidence retention extensions via a logged governance decision flow. Inferred from §5.4 and §9.

Layout pattern: the quality dashboard follows a list-detail pattern — a filterable findings list on the left, a detail panel on the right for finding content, explainability note, context linkage, and available actions. The draft-review queue is a separate list-detail view. Zone configuration uses a form/wizard pattern. Inferred from the filtering and view requirements in §14.4 and the multi-type configuration surfaces in §14.3.

4.2 Tablet App

Who uses it: clinicians in surgery, reviewing AI-generated draft outputs at point of care during or after an appointment. Inferred from §12.2 which states clinician-facing draft outputs are surfaced in the "clinician's appointment / day-list view on the tablet, enabling review and approval at point of care."

Key tasks performed here:

  • View pending draft outputs linked to the current or recent appointment — clinical summary draft, treatment plan structure, care plan suggestion, hygiene plan suggestion — and accept, edit, or reject each one. Inferred from §12.2 and §6.1.
  • Review and approve patient-safe summaries before they are released to the patient app. Inferred from §6.1 and §14.2 rule 12.
  • View finding alerts relevant to the current appointment context (e.g., a documentation gap flagged for the active patient), with a prompt to act or defer. Inferred from §6.1 and the appointment-context linkage in §4.3.

Touch ergonomics: all interactive controls (accept, edit, reject, approve) must meet a minimum touch target of 48 × 48 px. Dismiss and approve actions for patient-safe summaries must not be placed adjacent to one another, to prevent accidental mis-taps. The review flow must be completable one-handed. Inferred from the clinical point-of-care context described in §12.2 and general tablet ergonomics principles.

4.3 Mobile App (Patient)

Who uses it: patients, receiving their approved appointment summary after the clinician has explicitly approved and released it. Inferred from §12.3 and §6.1.

Key tasks performed here:

  • View the patient-safe appointment summary after it has been approved and released by the clinician. The patient has no ability to edit, dismiss, or act on this content — it is a read-only informational output. Inferred from §12.3 and the review-first release gate in §14.2 rule 12.

The patient surface is intentionally minimal: a single read-only summary view per appointment. No quality findings, evidence, or AI attribution metadata are ever visible to the patient. Inferred from §12.3 and the access control restrictions in §9.

4.4 Zone Scope — Monitored Output Sources

The following clarifies which modules' AI-generated outputs are in scope for zone-based monitoring, to inform both the zone configuration surface (§4.1) and the signal-type inventory surfaced in the findings list.

Clinical zones monitor AI-generated drafts produced during or after appointments: clinical summaries, treatment plan structures, care plan suggestions, and hygiene plan suggestions. These are the primary draft output artefact types described in §3.3 of the technical spec.

Back-office and communication zones may extend monitoring to other AI-generated content where a zone has been explicitly enabled by a practice owner or compliance role. Campaign Manager is a relevant example: it introduces AI-generated draft content (email bodies, audience segment logic, and campaign messaging) that is subject to its own internal review gate before activation. Whether AI Quality Monitor observes Campaign Manager builder outputs as an additional zone is a configuration and scoping decision for the practice owner — it is not enabled by default.

If such a zone is enabled, Campaign Manager draft findings would appear in the findings list, carry AI origin badges, and follow the standard finding triage flow (§5.1, Flow 1). The zone configuration surface (§4.1) MUST present Campaign Manager as a named, configurable zone option if the platform-level integration is available, with clear labelling distinguishing it from clinical monitoring zones. Inferred from the Campaign Manager module's introduction of Aiden-generated draft content with explicit review gates, and the extensible zone model in technical spec §4.1.

No zone — clinical or back-office — may be enabled without the governance gate confirmation step described in Flow 4 (§5.1). The scope of monitored output sources is always visible in the zone configuration summary and in the quality dashboard's zone filter, so that governance leads can audit exactly what the module is and is not observing.

5. Interaction Model

5.1 Primary Flows


Flow 1: Quality finding triage (web portal — governance role)

Inferred from the Quality Finding state machine in §3.2, the filtering requirements in §14.4, and the access control table in §9.

1. User arrives at the quality dashboard (filtered to open findings by zone — default saved view).
2. User selects a finding from the list → finding detail panel opens; state transitions to `UnderReview`.
3. User reads the explainability note and reviews context linkage (zone, appointment, timestamps, attribution status).
4. User chooses one of three actions:
   a. Create remediation task → Finding moves to `Remediated`; Task Manager receives outbound event.
   b. Resolve → Finding moves to `Resolved`; confirmation step required.
   c. Dismiss → Mandatory reason field presented; user must complete before dismissal is accepted;
      finding moves to `Dismissed`; dismissal logged as audit event.
5. Where evidence is attached and user holds evidence access role:
   a. MFA prompt before evidence panel unlocks.
   b. Evidence clip plays read-only within the panel; no download or export control present.
   c. Evidence access logged as audit event.
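
A minimal sketch of these transition rules, assuming a TypeScript front end. The state names mirror the technical spec's §3.2; the action shapes, function name, and guard details are illustrative assumptions (Expired is system-enforced and so has no user action here):

```typescript
// Minimal sketch of the Flow 1 transition rules. State names mirror the
// technical spec's §3.2; the action shape and guards are illustrative.
type FindingState =
  | "Generated" | "UnderReview" | "Remediated"
  | "Resolved" | "Dismissed" | "Expired";

type FindingAction =
  | { kind: "open" }                                   // step 2
  | { kind: "createRemediationTask"; taskId: string }  // step 4a
  | { kind: "resolve"; confirmed: boolean }            // step 4b
  | { kind: "dismiss"; reason: string };               // step 4c

function transition(state: FindingState, action: FindingAction): FindingState {
  switch (action.kind) {
    case "open":
      return state === "Generated" ? "UnderReview" : state;
    case "createRemediationTask":
      return state === "UnderReview" ? "Remediated" : state;
    case "resolve":
      // Resolve requires the confirmation step before the transition commits.
      return state === "UnderReview" && action.confirmed ? "Resolved" : state;
    case "dismiss":
      // Dismissal is refused outright while the mandatory reason is empty.
      return state === "UnderReview" && action.reason.trim() !== "" ? "Dismissed" : state;
  }
}
```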

Flow 2: Draft output review (web portal or tablet — clinician)

Inferred from the Draft Output Artefact state machine in §3.4, §12.1, §12.2, and §6.1.

1. Clinician sees a badge or count indicator on their draft-review queue (web portal) or
   appointment day-list (tablet) showing pending drafts.
2. Clinician opens a draft → state transitions to `InReview`.
3. AI origin badge, draft type label, and explainability context are visible throughout.
4. Clinician reads the proposed content.
5. Clinician chooses:
   a. Accept → confirmation step ("You are approving this draft for commitment"); state → `Accepted`;
      content committed to PMS workflow; audit event logged.
   b. Edit → inline editor opens with diff tracking; on save, confirmation step presented;
      state → `Edited`; edited content committed; diff logged in audit trail.
   c. Reject → reason prompt *(needs UX writer input — e.g. label and placeholder for rejection reason field)*;
      state → `Rejected`; content not committed; audit event logged.
6. Queue view updates to remove actioned draft.
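
A sketch of the disposition step under the same assumptions; the confirmation, PMS commit, and audit interfaces (`confirm`, `commitToPms`, `logAudit`) are hypothetical stand-ins for platform services, not the platform's actual API:

```typescript
// Illustrative disposition handler for Flow 2. The confirmation, PMS commit,
// and audit interfaces are hypothetical stand-ins for platform services.
type Disposition =
  | { kind: "accept" }
  | { kind: "edit"; editedContent: string }
  | { kind: "reject"; reason: string };

interface Draft {
  id: string;
  content: string;
  state: "InReview" | "Accepted" | "Edited" | "Rejected";
}

interface ReviewDeps {
  confirm(message: string): boolean;   // the confirmation modal (step 5)
  commitToPms(content: string): void;  // the PMS owns the record after commit
  logAudit(event: object): void;       // immutable audit trail
}

function disposeDraft(draft: Draft, d: Disposition, deps: ReviewDeps): Draft {
  switch (d.kind) {
    case "accept":
      if (!deps.confirm("You are approving this draft for commitment")) return draft;
      deps.commitToPms(draft.content);
      deps.logAudit({ draftId: draft.id, action: "accept" });
      return { ...draft, state: "Accepted" };
    case "edit":
      if (!deps.confirm("Commit your edited version?")) return draft;
      deps.commitToPms(d.editedContent);
      // The diff between original and edited content is logged (step 5b).
      deps.logAudit({ draftId: draft.id, action: "edit",
                      diff: { before: draft.content, after: d.editedContent } });
      return { ...draft, state: "Edited", content: d.editedContent };
    case "reject":
      deps.logAudit({ draftId: draft.id, action: "reject", reason: d.reason });
      return { ...draft, state: "Rejected" };  // content is never committed
  }
}
```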

Flow 3: Patient-safe summary release (tablet — clinician)

Inferred from §6.1, §14.2 rule 12, and the patient app surface in §12.3.

1. After appointment, clinician sees a patient-safe summary draft in their pending review queue.
2. Clinician reviews the plain-language summary content.
3. Clinician must explicitly approve (accept or edit-then-accept) before the summary is released.
4. A confirmation modal presents: "This summary will be sent to [patient name]'s app."
   *(needs UX writer input — exact modal copy and confirmation button label)*
5. On confirmation: state → `Accepted`; patient app surface unlocked for that appointment.
6. If clinician rejects: state → `Rejected`; patient sees no summary for that appointment.
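
A minimal sketch of the release gate in steps 5 and 6; the state union mirrors §3.4, and per Flow 3 an edit-then-accept also lands in `Accepted`. Types and the helper name are illustrative:

```typescript
// Minimal sketch of the Flow 3 release gate; per Flow 3, an edit-then-accept
// also lands in Accepted. Types are illustrative.
type SummaryState = "Pending" | "InReview" | "Accepted" | "Rejected" | "Expired";

function patientCanSeeSummary(state: SummaryState | undefined): boolean {
  // An absent, rejected, or expired summary is never shown to the patient.
  return state === "Accepted";
}
```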

Flow 4: Zone configuration (web portal — admin / compliance role)

Inferred from §4.1 zone configuration rules and §14.3 configuration surfaces.

1. Admin navigates to zone configuration screen.
2. Existing zones are listed with their current enablement state (enabled / disabled) and active signal types.
3. Admin selects a zone to configure or creates a new zone.
4. Admin enables or disables the zone — a governance gate is presented:
   "Enabling this zone activates monitoring. Confirm that staff policy and patient signage
   records have been updated." *(needs UX writer input — exact gate copy)*
5. Admin selects which signal types are active for the zone (each is an explicit toggle).
6. On save: all changes are written to the audit log with actor identity and timestamp.
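
A sketch of the save step, assuming a TypeScript client; `ZoneConfig`, the gate flag, and the audit payload shape are illustrative, not the platform contract:

```typescript
// Sketch of the Flow 4 save step. ZoneConfig, the gate flag, and the audit
// payload are illustrative shapes, not the platform contract.
interface ZoneConfig {
  zoneId: string;
  enabled: boolean;
  activeSignalTypes: string[];  // each signal type is an explicit toggle (step 5)
}

function saveZoneConfig(
  config: ZoneConfig,
  gateAcknowledged: boolean,    // the governance gate confirmation (step 4)
  actorId: string,
  logAudit: (event: object) => void,
): boolean {
  // Enabling a zone is blocked until the governance gate is acknowledged.
  if (config.enabled && !gateAcknowledged) return false;
  // Every saved change is written to the audit log with actor and timestamp (step 6).
  logAudit({
    type: "zone-config-saved",
    ...config,
    actorId,
    timestamp: new Date().toISOString(),
  });
  return true;
}
```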

Flow 5: Evidence retention extension (web portal — practice owner / governance lead)

Inferred from §5.4 and §9, and the MFA requirement for evidence-sensitive actions.

1. Governance user views a finding whose evidence clip is approaching expiry.
2. User selects "Extend retention" — MFA challenge presented.
3. User completes MFA and provides a documented reason for the extension.
4. Extension decision is logged as an immutable governance action.
5. New expiry date is displayed on the finding detail panel.

Flow 6: Smart Treatment Proposals draft promotion (web portal or tablet — clinician / governance role)

Inferred from Smart Treatment Proposals technical spec §9.4, which defines AI Quality Monitor as the notification and promotion pathway for STP draft proposals, and from the STP UX spec §4.1 task "Review AI Quality Monitor draft proposals and promote them to Presented after editing."

1. An STP-originated draft proposal appears in the draft-review queue alongside other Draft Output
   Artefacts. It is labelled with its draft type (e.g., "Treatment Proposal — AI draft") and an
   AI origin badge; its source module context ("Generated via Smart Treatment Proposals") is
   surfaced in the explainability note.
2. The clinician or governance reviewer opens the draft → state transitions to `InReview`.
3. The AI origin badge, STP source context, and explainability note are visible throughout.
4. Reviewer reads the proposed treatment content and any supporting rationale surfaced by STP.
5. Reviewer chooses:
   a. Accept → confirmation step explicitly states that the proposal will be promoted to
      `Presented` state within the Smart Treatment Proposals module and committed to the PMS
      workflow; state → `Accepted`; STP module receives the promotion event; audit event logged.
   b. Edit → inline editor opens with diff tracking; on save, confirmation step presented,
      stating that the edited proposal will be promoted to `Presented`; state → `Edited`;
      diff logged in audit trail; STP module receives the promotion event.
   c. Reject → reason prompt presented; state → `Rejected`; proposal is not promoted to
      `Presented`; STP module receives the rejection event; audit event logged.
6. The confirmation modal for accept and edit actions MUST make explicit that promotion to
   `Presented` in STP is the downstream effect, and that correction after promotion must be
   made via the PMS or STP module directly.
7. Queue view updates to remove the actioned proposal.

The UX treatment for STP draft proposals is identical to other Draft Output Artefact types — the same review card, state machine, AI origin badge, and confirmation gate apply. The only distinction is in the explainability note (which references the STP signal source) and the confirmation modal copy (which names the Presented promotion as the downstream consequence). This ensures governance consistency regardless of which AI module originated the draft.
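
A possible shape for the event emitted to STP in steps 5a–5c. Every field name here is an assumption, since the binding contract is defined by the STP technical spec (§9.4):

```typescript
// Illustrative payload for the promotion/rejection event emitted to Smart
// Treatment Proposals in Flow 6. Field names are assumptions; the binding
// contract lives in the STP technical spec (§9.4).
type StpReviewOutcome =
  | { kind: "promoted"; finalState: "Accepted" | "Edited" }  // STP moves the proposal to Presented
  | { kind: "rejected"; reason: string };                    // proposal is not promoted

interface StpReviewEvent {
  draftId: string;
  sourceModule: "smart-treatment-proposals";
  reviewerId: string;
  outcome: StpReviewOutcome;
  occurredAt: string;  // ISO 8601
}
```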


5.2 State Machines (Mirror of Technical Spec §3)

Quality Finding states

Inferred from the Quality Finding state machine in §3.2.

| State | UI treatment | Entry condition (visible to user) | Confirmation pattern |
|---|---|---|---|
| Generated | Neutral badge; item appears in open findings list | System-generated; explainability note present | None |
| UnderReview | Active/highlighted badge; detail panel open | Triggered by user opening the finding | Implicit (opening the finding records the transition) |
| Remediated | Progress badge; linked task reference shown | Remediation task created and linked | Task creation confirmation |
| Resolved | Muted/closed badge; moved to resolved list | Explicit user action | Confirmation step required |
| Dismissed | Muted/closed badge; dismissal reason shown | Explicit user action with mandatory reason | Reason field mandatory before submission |
| Expired | Archived indicator; content and evidence no longer accessible | System-enforced on retention window end | None — system action; audit entry visible |

Findings in Expired state are visible in the audit log but their content is not surfaced in the active findings list. Inferred from §3.2 rule that expiry cannot be reversed and from the immutable audit trail requirement in §8.


Draft Output Artefact states

Inferred from the Draft Output Artefact state machine in §3.4.

| State | UI treatment | Entry condition (visible to user) | Confirmation pattern |
|---|---|---|---|
| Pending | AI badge; queued in review list | System-generated; draft type and context shown | None |
| InReview | Active indicator; editor/reader open | Triggered by user opening the draft | Implicit |
| Accepted | Accepted badge; read-only; committed indicator | Explicit accept action by authorised user | Confirmation modal before commit |
| Edited | Edited badge; diff reference link available | Explicit save-after-edit action | Confirmation modal; diff shown before commit |
| Rejected | Rejected badge; reason shown; read-only | Explicit reject action | Reason prompt required |
| Expired | Archived; content not surfaced downstream | System-enforced on retention window end | None — system action; visible in audit log |

The Accepted and Edited confirmation modals must make explicit that the action commits the content to the PMS workflow and cannot be undone from within this module. Inferred from §11.3 PMS boundary rule.
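
A small helper capturing the editability rule the table implies (state names from §3.4; the helper itself is illustrative):

```typescript
// Sketch of the editability rule: drafts are editable only while Pending or
// InReview; every later state is read-only and archived.
type DraftState = "Pending" | "InReview" | "Accepted" | "Edited" | "Rejected" | "Expired";

const isDraftEditable = (state: DraftState): boolean =>
  state === "Pending" || state === "InReview";
```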


5.3 Empty / Loading / Error / Offline States

All states below are inferred from the module's background-operation posture (§1), the exception-and-review inbox mental model, and the audit/governance obligations in §8.

Quality dashboard — findings list

  • Empty state: *(needs UX writer input — reassuring message confirming no open findings require attention)* — displayed with a calm, neutral treatment (no warning iconography).
  • Loading state: Skeleton rows matching the expected list density; zone filter and date filter controls render immediately so the user can adjust scope while content loads.
  • Error state: Inline error message explaining that findings could not be loaded, with a retry action. The total count of open findings (from a lightweight count endpoint) should remain visible if available, so the user knows items exist even if the list has failed to render.
  • Offline state: A persistent banner indicates loss of connectivity. The findings list renders from the last cached state (clearly timestamped as stale). No state transitions (resolve, dismiss, remediate) are permitted offline — action controls are disabled with a tooltip explaining why. Inferred from the immutable audit-trail requirement, which requires server-side logging of all transitions.
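
A sketch of how the offline rule above might be derived in the client; names are illustrative:

```typescript
// Sketch of the offline rule: transitions need a server-side audit write,
// so action controls derive their enabled state from connectivity.
interface ActionAvailability {
  enabled: boolean;
  tooltip?: string;
}

function findingActionAvailability(online: boolean): ActionAvailability {
  return online
    ? { enabled: true }
    : {
        enabled: false,
        tooltip: "Unavailable offline: finding transitions must be recorded in the audit trail.",
      };
}
```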

Draft-review queue

  • Empty state: *(needs UX writer input — message confirming no drafts are pending review)* — positive framing.
  • Loading state: Skeleton cards.
  • Error state: Inline error with retry; draft count badge on the queue entry point remains visible if available.
  • Offline state: Draft content may be read from cache; accept, edit, and reject actions are disabled until connectivity is restored.

Evidence clip viewer

  • Empty state: Not applicable — the evidence panel only appears when a clip is attached to a finding.
  • Loading state: Spinner within the evidence panel; clip metadata (duration, timestamp) renders first.
  • Error state: Inline message that the clip could not be loaded, with a retry. If the clip has expired, a specific message must distinguish expiry from a load failure (the audit record of the clip remains accessible).
  • Offline state: Evidence clips are not cached locally; the panel shows an offline notice and disables playback.

Zone configuration

  • Empty state: A prompt to add the first zone, with a clear call to action. *(needs UX writer input — empty-state heading and action label)*
  • Loading state: Skeleton form fields.
  • Error state: Form-level error if a save fails; field-level validation errors inline.
  • Offline state: Configuration changes cannot be submitted offline; controls are disabled with explanation.

6. Component Inventory

New components introduced or extended by this module:

  • Quality Finding card — compact list item showing finding type, zone, state badge, attribution status, and age. Appears in the findings list on the quality dashboard. Inferred from §12.1 and §14.4 filter/view requirements.
  • Finding detail panel — split-pane detail surface showing explainability note, context linkage (zone, appointment, patient reference, timestamps, attribution), state history, attached evidence indicator, and available action controls. Appears in the web portal. Inferred from §12.1 and §3.1 minimum fields.
  • AI draft review card — review surface for a single Draft Output Artefact, showing draft type label, AI origin badge, proposed content, and accept / edit / reject controls. Appears on the web portal review queue and the tablet appointment view. Inferred from §12.1, §12.2, and §3.3.
  • Evidence clip viewer — read-only, in-panel media player with no download or export controls, time-bounded to the clipped excerpt. Accessible only to governance-authorised roles after MFA. Inferred from §5.3 and §9.
  • Dismissal reason modal — modal dialog with a mandatory free-text reason field and confirmation/cancel controls. Appears when a user attempts to dismiss a finding. Inferred from §3.2 dismissal rule and §14.2 rule 8.
  • Zone configuration form — multi-step form allowing zone creation, signal type activation per zone, and enablement/disablement with a governance gate confirmation step. Appears in the web portal admin/compliance area. Inferred from §4.1 and §14.3.
  • Governance gate modal — confirmation dialog presented before any irreversible governance action (zone enablement, evidence retention extension, patient-safe summary release). Contains a structured acknowledgement prompt. Inferred from §4.1, §5.4, and §14.2 rule 12.
  • Attribution status indicator — inline badge within a finding card or detail panel showing Named, Rota-attributed, or Unattributed. Named attribution only visible to governance roles. Inferred from §4.4 and §9.
  • Audit log viewer — paginated, filterable, read-only table of immutable audit events with export action. Accessible to authorised compliance roles. Inferred from §8 and §9.
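
As an illustration, a hypothetical props shape for the Quality Finding card listed above; field names are assumptions derived from the §3.1 minimum fields and §14.4 filters, not an agreed component API:

```typescript
// Hypothetical props for the Quality Finding card; field names are assumptions
// derived from the §3.1 minimum fields and §14.4 filters, not an agreed API.
interface QualityFindingCardProps {
  findingId: string;
  findingType: string;
  zone: string;
  state: "Generated" | "UnderReview" | "Remediated" | "Resolved" | "Dismissed" | "Expired";
  attributionStatus: "Named" | "Rota-attributed" | "Unattributed";
  ageInDays: number;
  aiOrigin: true;                       // the AI origin badge is never optional
  onOpen: (findingId: string) => void;  // opening records the UnderReview transition
}
```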

Reused from the design system:

  • State badge / status chip — for both Quality Finding and Draft Output Artefact states.
  • Filter bar — zone, finding type, state, date range, attribution status filters as described in §14.4.
  • Saved views selector — for the default saved views specified in §14.4.
  • MFA challenge modal — platform-level component invoked before evidence access and retention extension actions.
  • Toast notification — for non-blocking confirmations (e.g., task created, draft accepted).
  • Inline error message — for form validation and load failures.
  • Skeleton loader — for list and form loading states.

7. Visual Design Notes

  • AI origin badge — all AI-generated content (findings, draft artefacts, explainability notes) must carry a persistent visual marker identifying the content as AI-generated. This marker must be present in both the list view and the detail view and must not disappear after the item has been reviewed. Inferred from §7 AI Boundaries and the governance-visibility principle.
  • State colour semantics — state badges should use the platform's semantic colour system: open/active states use an attention-appropriate treatment; resolved/dismissed/expired states use a muted/neutral treatment; Accepted and Edited draft states use a positive treatment; Rejected uses a neutral-negative treatment. Exact colour values are to be defined by the design system team — this spec does not prescribe hex values.
  • Read-only surfaces are visually distinct — evidence clips, committed drafts, and expired artefacts must use a distinct visual treatment (e.g., a background tint or border style from the design system's read-only token) to prevent any ambiguity about editability. Inferred from §5.3 evidence access rules and §3.4 state machine.
  • Iconography — icon-only controls must not be used for governance-critical actions (dismiss, accept, reject, approve for patient release). Every such control must carry a text label. Inferred from the governance-visibility principle and WCAG 2.2 AA requirements.
  • Dashboard telemetry — zone-summary indicators and quality metric charts on the dashboard should use calm, neutral data-visualisation treatments. Red/amber/green traffic-light patterns must not be applied to zone summaries in a way that implies individual performance ranking. Inferred from §3.2 prohibition and the anti-surveillance design principle.
  • Typography, specific colour tokens, and motion/animation specifics: *(needs UX writer input — to be defined by design system in alignment with platform design language)*

8. Accessibility & Inclusivity

The module MUST meet WCAG 2.2 AA. Specifically:

  • Text contrast ≥4.5:1 (normal) / ≥3:1 (large)
  • All interactive controls reachable via keyboard
  • Focus states visible
  • Form fields have programmatic labels
  • ARIA used only where native semantics are insufficient
  • Touch targets ≥44×44 px on mobile/tablet
  • Motion can be reduced via prefers-reduced-motion
  • Screen reader tested on NVDA / VoiceOver / TalkBack
  • The evidence clip viewer must expose playback controls to keyboard and screen-reader users; the read-only nature of the clip must be communicated programmatically (e.g., via aria-readonly or equivalent). Inferred from §5.3 evidence access rules and the tablet/web delivery surfaces in §12.
  • The mandatory dismissal reason field must carry an explicit programmatic label and an inline error message when submitted empty, as this field is a governance-critical control. Inferred from §14.2 rule 8.
  • AI origin badges must not rely on colour alone to convey AI provenance — an accessible text label or icon-with-label pattern is required. Inferred from §7 AI Boundaries and WCAG 1.4.1 (use of colour).
  • The technical spec references WCAG 2.1 AA (§13); this UX spec upgrades the target to WCAG 2.2 AA in line with the UX template's platform default, pending confirmation that this does not conflict with any engineering constraint. Inferred from §13 and the UX template canonical standard.
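
A DOM-level sketch of the evidence-viewer requirement above; the helper name and markup are illustrative, and `controlslist` is a browser hint rather than a guarantee:

```typescript
// Illustrative sketch: expose the evidence viewer's read-only, governed
// status programmatically while keeping native playback keyboard-operable.
function markEvidenceViewerReadOnly(panel: HTMLElement, video: HTMLVideoElement): void {
  video.controls = true;                             // keyboard-operable native controls
  video.setAttribute("controlslist", "nodownload");  // suppress the download control where supported
  // Communicate the governed, read-only status to assistive technology.
  const note = document.createElement("p");
  note.id = "evidence-readonly-note";
  note.textContent = "Governed evidence clip. Read-only; cannot be downloaded or exported.";
  panel.prepend(note);
  video.setAttribute("aria-describedby", note.id);
}
```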

9. Internationalisation

  • Locale-aware date/time/number formatting
  • All user-facing strings externalised
  • Layouts tolerant of 30% string-length growth (German, French)
  • RTL support: requirement not explicitly addressed in the technical spec. *(needs UX writer input — confirm whether RTL is required for this module's planned deployment markets.)*
  • Audit log timestamps must be displayed in the user's local timezone with an explicit timezone label, given that clinical and governance records may be reviewed across locations in multi-site deployments. Inferred from §4.1 multi-site, multi-zone, multi-tenant requirement in §13 and the governance/CQC export obligation in §8.
  • Evidence clip timestamps and retention expiry dates must use unambiguous date formats (ISO 8601 or locale-aware long-form) to avoid misreading in governance contexts. Inferred from §5.4 retention and expiry rules.
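
A sketch of the timezone-labelled timestamp rendering described above, using the standard `Intl.DateTimeFormat` API; the locale and zone would come from the user session:

```typescript
// Locale-aware audit timestamp with an explicit timezone label.
function formatAuditTimestamp(iso: string, locale: string, timeZone: string): string {
  return new Intl.DateTimeFormat(locale, {
    dateStyle: "medium",
    timeStyle: "long",  // "long" includes an explicit timezone label
    timeZone,
  }).format(new Date(iso));
}

// e.g. formatAuditTimestamp("2026-05-14T09:30:00Z", "en-GB", "Europe/London")
//   -> "14 May 2026, 10:30:00 BST"
```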

10. Cross-Module UX Touchpoints

All touchpoints below are inferred from the integration summary in §10 and §11 of the technical spec.

  • Task Manager — when a governance user creates a remediation task from a quality finding (transition to Remediated), the task is handed off to Task Manager. The finding detail panel should surface a reference link to the created task so that the user can navigate to it without losing their place in the findings view. The handoff is one-way at creation; Task Manager owns the task lifecycle thereafter.
  • Communication Hub — the module emits alerts and governance review notifications to Communication Hub on finding state changes. These appear in the user's notification surface as delivered by Communication Hub; AI Quality Monitor does not own or render the notification UI. In-app banners and push notifications originating from this module are delivered through Communication Hub's standard patterns.
  • AI Guardian — zone-level quality indicators and finding telemetry are emitted to AI Guardian for higher-level operational awareness. The quality dashboard's zone-summary telemetry is the origin of this data; no direct navigation between AI Quality Monitor and AI Guardian is required at the finding level, but a contextual link from the quality dashboard to the relevant AI Guardian view may be appropriate *(needs UX writer input — confirm whether a contextual navigation link is required)*.
  • Access Manager — all role and permission states that control which controls are visible (zone configuration, named attribution, evidence access) are resolved via Access Manager at session time. The UI must not render controls that the user's current role cannot activate. Role and permission context must be visible in the session header.
  • Appointment Manager — finding and draft context linkage (zone, patient, practitioner, appointment timestamps) is resolved via a sync lookup to Appointment Manager. The appointment reference is surfaced as a non-editable context field in every finding detail panel and draft review card, with a navigation link to the appointment record where the user's role permits.
  • AI Concierge — automated-action events from AI Concierge (forms sent, tasks created, call recovery outcomes) are ingested as system-event signals and logged as attribution events within AI Quality Monitor. These appear in the audit log and may surface as attribution context within findings, but the user has no direct action to take on the AI Concierge origin — it is contextual information only.
  • PMS (Dentally and others) — when a clinician accepts or edits a draft output artefact, the committed content flows to the PMS workflow. The confirmation modal for acceptance must explicitly state that the content will be committed to the patient record via the PMS, and that corrections after commitment must be made in the PMS directly. AI Quality Monitor surfaces a read-only record of the committed draft; it does not provide an edit path post-commitment.
  • Smart Treatment Proposals — STP-originated draft proposals are surfaced in the AI Quality Monitor draft-review queue as standard Draft Output Artefacts (see Flow 6, §5.1). When a reviewer accepts or edits an STP draft, AI Quality Monitor emits a promotion event to STP, which transitions the proposal to Presented state within the STP module. The confirmation modal for STP draft acceptance must name this downstream effect explicitly. Rejection events are similarly passed to STP. No direct navigation from AI Quality Monitor to the STP proposal builder is required; the STP source context is surfaced as read-only information in the explainability note. Inferred from Smart Treatment Proposals technical spec §9.4 and STP UX spec §4.1.
  • Digital Forms / Aftercare / Care Plans — completion and delivery status signals from these modules are consumed as cross-verification inputs and may appear as context in documentation-gap findings. Users seeing such a finding should be able to view the linked status (e.g., "Aftercare not delivered for appointment X") as part of the explainability note, with a contextual link to the originating module where role permits.

UX consistency rules:

  • Action buttons for governance-critical transitions (dismiss, accept, reject, approve for patient release) always appear in a consistent position within their respective panels — bottom-right on web portal, full-width at the bottom of the panel on tablet — to build muscle memory and reduce mis-tap risk. Inferred from the clinical point-of-care tablet context in §12.2 and multi-surface consistency requirements.
  • AI origin badges use the same visual treatment and label across all surfaces (web portal, tablet, patient app context indicators) so that users never encounter an unlabelled AI output. Inferred from §7 AI Boundaries.
  • State badge labels for Quality Findings and Draft Output Artefacts use identical terminology across the quality dashboard, finding detail panel, draft review queue, and audit log, matching the canonical state names in §3.2 and §3.4.

11. Governance & Auditability

  • All AI-generated findings and draft output artefacts carry a persistent AI origin badge and a visible explainability note throughout their lifecycle — from the list view through to the archived audit record. This treatment is never removed after review. Inferred from §7 AI Boundaries and the ExplainabilityNote field in §3.1.
  • Every governance-critical action (dismiss a finding, accept/edit/reject a draft, approve a patient-safe summary for release, extend evidence retention, enable a zone) presents a confirmation step before the action is committed. Irreversible actions state explicitly that they cannot be undone within this module. Inferred from §3.2 dismissal rule, §3.4 commit rules, §5.4 retention extension, and §4.1 enablement rules.
  • The Dismissed finding state surfaces the mandatory reason field's content in the finding detail panel, in the audit log, and in any export — it is never hidden after submission. Inferred from §3.2 and §14.2 rule 8.
  • Named attribution data (where surfaced) is accompanied by a visible indicator that access to this view has been logged. This is not a warning — it is a calm, persistent governance label. Inferred from §4.4 and the full-audit requirement on named attribution access in §9.
  • The current user's role and active permission context are visible in the session header on all surfaces. Controls that the user's role cannot activate are not rendered (not disabled-and-greyed, which would imply they are temporarily unavailable rather than role-gated). Inferred from the RBAC enforcement model in §9 and the no-dead-toggles principle.
  • Evidence clips are presented in a read-only viewer that carries a persistent label indicating that the clip is governed evidence and cannot be downloaded, exported, or used outside the originating finding. Inferred from §5.3 evidence access rules.
  • Evidence clip retention and expiry policy coordination — evidence clip retention windows and expiry policies are subject to configuration by authorised administrators via the Security and Privacy module's web portal interface (Security and Privacy UX §4.1, §13.3). The evidence retention extension flow (Flow 5, §5.1) and the expiry dates displayed in finding detail panels must reflect the authoritative retention policy as configured in the Security and Privacy module; AI Quality Monitor does not independently define retention durations. Any admin-initiated change to the evidence clip expiry policy for a zone MUST be reflected in the finding detail panel's displayed expiry date without requiring the governance user to take a separate action. The audit log entry for evidence clip access — including MFA-gated access by governance-authorised roles — is written by AI Quality Monitor and MUST align with the audit trail schema expected by the Security and Privacy module's compliance export, so that a single export can satisfy both CQC and UK GDPR inspection requirements. Inferred from Security and Privacy UX §4.1, §4.7, and §13.3, and from the immutable audit-trail requirement in §8.
  • The audit log view is accessible to authorised compliance roles and is filterable by event type, actor, and date range. Audit log entries are presented as read-only records; no editing or deletion controls are present. The export action (for CQC, DSPT, UK GDPR inspection) is a primary action on the audit log surface. Inferred from §8 and §9.
  • No quality dashboard view, zone summary, or telemetry display may present data in a format that implies individual staff performance ranking or scoring. Zone-level and team-level aggregations are the maximum granularity on general staff dashboards. Inferred from §3.2 prohibition, §9 access table, and the anti-surveillance design principle.
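
A possible audit event record, shaped so a single export can serve both this module and the Security and Privacy compliance export; all field names are assumptions pending the schema alignment flagged in Open Questions:

```typescript
// Assumed audit event shape; entries are append-only, mirroring the read-only
// audit log viewer described above.
interface AuditEvent {
  readonly eventId: string;
  readonly eventType:
    | "evidence-access" | "retention-extension" | "finding-transition"
    | "draft-disposition" | "zone-config-change";
  readonly actorId: string;
  readonly actorRole: string;
  readonly mfaVerified: boolean;  // true for MFA-gated evidence actions
  readonly subjectId: string;     // finding, draft, or zone identifier
  readonly occurredAt: string;    // ISO 8601, stored in UTC
  readonly detail: Readonly<Record<string, string>>;
}
```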

12. Notification & Communication Patterns

All patterns below are inferred from the module's outbound integrations with Communication Hub (§10, §11.2) and the audit events in §8. AI Quality Monitor does not deliver notifications directly — all user-attention requests are routed via Communication Hub.

  • In-app banner — displayed when zone monitoring activates for the first time in a session (to orient the user that the module is active), and when a governance action has been submitted and is pending a background process (e.g., evidence retention extension in progress). Banners are calm and informational; they do not persist once dismissed.
  • Toast — displayed on successful completion of non-critical single actions: draft accepted, finding resolved, remediation task created, zone configuration saved. Toasts are brief and do not block the UI. They confirm what happened and who acted, supporting the governance-always-visible principle.
  • Push notification (via Communication Hub) — used for: a new quality finding generated that requires governance review; a draft output artefact pending clinician review linked to an upcoming or recent appointment; an evidence clip approaching expiry. All push notifications are routed via Communication Hub; AI Quality Monitor emits the event and Communication Hub governs delivery, channel selection, and escalation threading.
  • Email / SMS (via Communication Hub) — used for: governance escalation notifications when a finding remains in Generated or UnderReview state beyond a defined threshold (escalation timing to be defined — see Open Questions); evidence retention expiry notifications to governance leads; patient-safe summary availability notification to the patient (where the patient app is enabled and the summary has been approved). All email and SMS are delivered exclusively via Communication Hub.
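
A sketch of the emit-and-delegate pattern these rules imply: the module raises an event and Communication Hub decides channel, timing, and escalation. Event names and the hub interface are illustrative assumptions:

```typescript
// AI Quality Monitor never renders or sends a notification itself; it
// publishes an event and Communication Hub governs delivery.
interface QualityEvent {
  kind: "finding.generated" | "draft.pendingReview" | "evidence.nearExpiry";
  subjectId: string;
  zoneId?: string;
  occurredAt: string;  // ISO 8601
}

interface CommunicationHub {
  publish(event: QualityEvent): Promise<void>;
}

async function notifyGovernance(hub: CommunicationHub, findingId: string, zoneId: string) {
  await hub.publish({
    kind: "finding.generated",
    subjectId: findingId,
    zoneId,
    occurredAt: new Date().toISOString(),
  });
}
```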

13. Open Questions

UX decisions to resolve before this spec is promoted from draft to published.

  • Evidence retention window display — the technical spec's open question §17.1 (default retention window of "e.g. 7–14 days") is unresolved. The UI must display a specific, authoritative retention window and expiry date on every evidence clip. This value cannot be finalised until the technical spec open question is resolved.
  • Performance SLA and loading state thresholds — the technical spec's open question §17.2 notes that finding-generation and draft-output latency SLAs are undefined. Loading state treatments and "still generating" UI patterns for the tablet draft-review surface (which must not disrupt appointment flow) cannot be fully specified until SLA targets are set.
  • AI Concierge integration contract — the technical spec's open question §17.3 notes ambiguity between an event push and a shared event stream for AI Concierge signals. The audit log attribution display for AI Concierge events may differ depending on the contract type (e.g., event source label). Pending resolution.
  • Patient-safe summary — practice-level opt-in default — the technical spec's open question §17.4 asks whether the patient-safe summary feature defaults to enabled or disabled at platform level. This affects whether clinicians see a patient-safe summary draft in their review queue on day one, and what the empty state of that queue communicates. Pending resolution.
  • Sentiment flag signal basis — the technical spec's open question §17.5 notes that the basis and permissible signal types for "comfort/sentiment flags" in clinical zones are undefined. The UX treatment of sentiment flags in the finding detail panel and draft context cannot be specified until the signal basis is scoped — specifically, these must not be presented in a way that implies individual judgement. Pending resolution.
  • WCAG target version — the technical spec references WCAG 2.1 AA (§13); this UX spec targets WCAG 2.2 AA. Confirm with engineering that WCAG 2.2 AA is achievable within the build plan before promoting this spec.
  • RTL support requirement — RTL layout support is not addressed in the technical spec. Confirm deployment market scope to establish whether RTL is required.
  • Escalation notification timing — the threshold after which an unactioned finding triggers a governance escalation notification via Communication Hub is not defined in the technical spec. This must be defined before the notification pattern for governance escalations can be fully specified.
  • Navigation link from quality dashboard to AI Guardian — confirm whether a contextual navigation link from the zone-summary telemetry view to the relevant AI Guardian view is required, or whether these surfaces are intentionally separate.
  • Tablet offline behaviour for draft review — confirm whether tablet caching of draft content (for reading offline) is within the engineering scope of this module, and define the maximum staleness window for cached draft data.
  • Campaign Manager zone integration — confirm whether the platform-level integration enabling Campaign Manager as a configurable monitoring zone is within the scope of this module's first release, or whether it is deferred to a subsequent phase. Until this is resolved, the zone configuration surface should be designed to accommodate back-office zones without requiring rework if Campaign Manager is added.
  • Security and Privacy audit log schema alignment — confirm with the Security and Privacy module team that the audit event schema written by AI Quality Monitor for evidence clip access, retention extension, and governance actions is compatible with the Security and Privacy compliance export format, so that a unified export satisfies CQC, DSPT, and UK GDPR inspection requirements without requiring post-hoc data transformation.