Performance Dashboards – UX specification
Related technical authority: Performance Dashboards – Technical Specification
1. Purpose
This UX specification governs the staff-facing experience of the Performance Dashboards module — the primary surface through which practice staff and group managers gain live, role-appropriate visibility into what is happening across a practice or group. It covers the web portal and tablet app surfaces, the interaction model for viewing, filtering, drilling through, and exporting metrics, and the governance patterns that keep aggregated data trustworthy and traceable. The module serves all authenticated staff roles, from front-of-house receptionists to group owners, each receiving a view scoped precisely to their permitted data.
Boundary with Smart Dashboards: Smart Dashboards (a separate module) is oriented around what is happening today and in the immediate week — its primary view deliberately avoids a historical or trend-analysis stance. That module's design explicitly routes historical context and trend analysis to Performance Dashboards. Consequently, when a user in Smart Dashboards needs to understand patterns over time, compare periods, or examine aggregated outcomes across domains, they are directed — via a clearly labelled navigation action — to Performance Dashboards. Performance Dashboards should therefore treat incoming navigation from Smart Dashboards as an expected, common entry point, and the destination view on arrival should reflect the metric domain or time-range context that Smart Dashboards passed. This boundary must be understood by anyone designing either module; neither module should replicate the other's primary stance.
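To make the hand-off concrete, the sketch below shows one plausible shape for the context Smart Dashboards might pass on navigation. The actual mechanism is undecided (see §13, Smart Dashboards hand-off landing view), so every identifier here is a hypothetical illustration, not a defined contract.

```typescript
// Hypothetical hand-off payload from Smart Dashboards. The real mechanism is
// an open question (§13); all names below are illustrative assumptions.
type MetricDomain =
  | "operational" | "clinical" | "financial" | "engagement"
  | "compliance" | "subscriptions" | "commerce" | "communications-oversight";

interface SmartDashboardsHandoffContext {
  sourceModule: "smart-dashboards";          // provenance of the navigation event
  metricDomain?: MetricDomain;               // domain the user was viewing
  dateRange?: { from: string; to: string };  // ISO dates implied by the source view
  siteId?: string;                           // site scope, if narrower than the user's default
}

// On arrival, Performance Dashboards would pre-filter the role-bound view to
// reflect whatever context the hand-off carried; absent context means the
// full role-bound grid.
function resolveLandingFilters(ctx: SmartDashboardsHandoffContext) {
  return {
    domain: ctx.metricDomain ?? null,
    dateRange: ctx.dateRange ?? null,
    siteId: ctx.siteId ?? null,
  };
}
```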
2. Core UX principles (non-negotiable)
These principles take precedence over visual preferences. If a design choice conflicts with a principle below, the principle wins.
Action-first — users see the action they need next, not abstract status displays.
In Performance Dashboards this means attention indicators always link to a drill-down or a Task Manager workflow, never terminating as dead-end callouts. Inferred from technical spec §3.3 — "Indicators MAY link to a drill-down or to a Task Manager workflow."
Governance always visible — when AI is involved, users always know what AI did and what they're confirming.
No AI is embedded in this module directly; however, metrics originating from AI Concierge MUST be presented with a clear provenance label so users know the signal's source. Inferred from technical spec §7 — "metric data originating from AI Concierge is consumed as a pre-aggregated signal; Performance Dashboards renders it as a read-only metric."
No dead toggles — every UI control either does something or doesn't appear.
Filter controls (date, site, provider) MUST only appear when the underlying metric supports that dimension. Export controls MUST be hidden entirely for roles that do not hold export permission, not merely disabled. Inferred from technical spec §13.2 rule 5 — "Export capability MUST be suppressed for roles that do not hold the export permission."
Calm by default — the interface gets out of the way; alerts are reserved for things that genuinely need attention.
Attention indicators are non-blocking visual cues. They MUST NOT interrupt workflow with modals or alarms. Inferred from technical spec §3.3 — "An Attention Indicator is a non-blocking visual cue."
Progressive disclosure — advanced detail is one click away, not always-on.
Drill-through to underlying data and Task Manager workflows is always one deliberate navigation step from the dashboard surface; raw source data is never surfaced on the dashboard itself. Inferred from technical spec §1 — "direct drill-through to the source workflows that own them."
Data currency always explicit — every metric displays its freshness label so staff can judge whether the figure they are reading is live, near-live, or end-of-day. Inferred from technical spec §13.2 rule 3 — "Every metric MUST display its freshness label so the user can assess data currency."
Read-only is inviolable — the dashboard surfaces information only; no user action taken on the dashboard surface can mutate data in any source module. This must be perceptible in the UI: no editable fields, no confirmation flows that imply action. Inferred from technical spec §3.4 and §13.2 rule 2.
Anti-surveillance by design — metrics that aggregate team or individual performance data MUST be presented at team or role-group level by default. No named individual scoring, ranking, or comparative grading of individual staff members may appear on any dashboard surface. This constraint applies universally, including to metrics sourced from AI Quality Monitor, HR & People Manager, and any other module that emits individual-level signals. Consistent with AI Quality Monitor Core UX Principle §2 (anti-surveillance by design).
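The anti-surveillance constraint can be made structural rather than procedural. A minimal TypeScript sketch, with assumed type names, shows how a metric value type could make individual-level aggregation unrepresentable:

```typescript
// Illustrative only: encoding the anti-surveillance principle in the type
// system so individual-level aggregation cannot be expressed. Names assumed.
type PermittedAggregation = "team" | "role-group" | "location";

interface AggregatedMetricValue {
  aggregationLevel: PermittedAggregation; // "individual" is deliberately unrepresentable
  groupLabel: string;                     // e.g. "Front-of-House", never a person's name
  value: number;
}
```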
3. Design philosophy
Performance Dashboards is a read-only observation layer. Its mental model is a control room, not a command console — staff arrive here to understand what is happening, then navigate elsewhere to act. The design must reinforce this consistently. Inferred from technical spec §1 — "Dashboards inform and highlight; they do not judge, enforce, or take action."
Empty states: When a metric has no data for the selected scope (e.g. no appointments today at a newly-onboarded site), the empty state must explain why — showing the metric name, its data source, and a plain-language explanation that no data is available for the current filter. It must not imply a system error. Inferred from technical spec §4.1 update model structure and §13.2 rule 7.
Error / degraded states: When a source module feed is unavailable, the metric MUST continue to render using its last known value, with a clearly labelled staleness indicator. The design must distinguish between "live" and "last known" values so staff are never misled. Inferred from technical spec §13.2 rule 7 and §14 reliability requirement.
AI suggestions: No AI-generated suggestions are present in this version. If future versions introduce AI-generated summaries, they must be visually distinct from raw metric values and carry a clear AI-origin label, per platform-wide AI Boundaries policy referenced in technical spec §7.
Multi-step flows: The primary flow is intentionally shallow — view, optionally filter, optionally drill through. Export is the only flow requiring a confirmation step (for audit reasons). No wizard-style multi-step flows are needed. Inferred from technical spec §3.4 state machine.
Undo / redo: Not applicable. The dashboard is read-only; clearing a filter is the functional equivalent of "undo" and requires no special pattern. Inferred from technical spec §3.4 — "A View returns to Viewed state when the user clears filters or navigates back."
Read-only vs editable: The entire dashboard surface is read-only. Layout configuration (where enabled) is the only user-initiated mutation, and it affects only the user's own view, never source data. Inferred from technical spec §13.3 configuration surfaces.
Alignment with Financial Insights: Financial Insights is a separate module that presents financial metrics with its own Insight Card components, progressive-disclosure detail panes, and action-gating patterns. Performance Dashboards consumes financial headline aggregates from Financial Insights as read-only batch signals and renders them through its own Metric Card component. To avoid presenting users with two visually inconsistent paradigms for similar data, the Metric Card's drill-through treatment for financial domain metrics should align with Financial Insights' progressive-disclosure conventions — specifically, consequential navigation into financial statements or forecasting must remain within Financial Insights, and the Performance Dashboards surface must not duplicate or shadow those views. Where a user needs more than a headline figure, the drill-through link takes them to Financial Insights rather than attempting to reproduce its content.
4. Primary surfaces
4.1 Web portal
Who uses it: all authenticated staff roles — practitioners, front-of-house staff, treatment coordinators, managers, and group owners. Each role receives the Dashboard View bound to their role. Inferred from technical spec §5.1 and §13.4.
Key tasks performed here:
- View a role-appropriate dashboard on login, showing live or near-live performance metrics for their permitted scope. Inferred from technical spec §5.1.
- Apply date-range, site, and provider filters to narrow the metric view. Inferred from technical spec §13.4 filtering.
- Drill through from an attention indicator or metric to the underlying detail or a Task Manager workflow. Inferred from technical spec §3.3 and §6.2 outbound.
- Export a PDF or CSV summary of the current view, where the user's role permits. Inferred from technical spec §2.1 and §9.
- Access a group or multi-site comparison view, where explicitly granted by Access Manager. Inferred from technical spec §13.4 — "Group / Multi-Site View (RBAC-gated)."
Layout pattern: dashboard — a metric-card grid with a persistent filter bar, supporting drill-through navigation to a detail pane or external module. Inferred from technical spec §3.2 Dashboard View structure and §13.4.
The web portal is the primary and most fully-featured surface; all five role-bound views (Practitioner, Front-of-House, Treatment Coordinator, Manager/Owner, Group/Multi-Site) are available here. Inferred from technical spec §13.4.
4.2 Tablet app
Who uses it: in-practice staff accessing the dashboard from a shared or personal tablet during the working day — likely front-of-house, treatment coordinators, and managers. Inferred from technical spec §5.2.
Key tasks performed here:
- View the same role-bound metric set as the web portal, adapted for a touch-first, smaller-viewport context. Inferred from technical spec §5.2 — "MUST surface the same role-bound metric set as the web portal, adapted for touch interaction and smaller viewport."
- Apply standard filters and drill through to underlying detail, subject to the same RBAC scope as the web portal. Inferred from technical spec §5.2.
Touch ergonomics: all tap targets must be ≥48 px to accommodate reliable touch interaction in a clinical or front-of-house environment. Filter controls and attention indicator links must be reachable with one hand. Inferred from technical spec §5.2 and §14 accessibility requirement (WCAG 2.1 AA).
(needs UX writer input — which of the five role-bound views are available on tablet, and whether any view is web-only; see §13 open question 2)
4.3 Mobile app (patient or staff)
Performance Dashboards does not surface any content in the patient mobile app. This module is staff-only. Inferred from technical spec §5.3 — "Performance Dashboards do not surface any content in the patient mobile app."
No mobile app surface is defined for this module.
5. Interaction model
5.1 Primary flows
Flow 1: View dashboard on login
Inferred from technical spec §5.1, §3.4, and §9. The user's role determines which Dashboard View is presented; no selection step is required on login.
User authenticates (via Access Manager)
→ Role binding evaluated at session start
→ Correct Dashboard View rendered (Viewed state)
→ Each metric displays value + freshness label
→ Attention indicators visible where trigger conditions are met
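A minimal sketch of the session-start binding in Flow 1, assuming hypothetical type and function names (the real Access Manager API is defined elsewhere):

```typescript
// Sketch of login-time view binding per Flow 1. The five view names come from
// §4.1; the function and types are assumptions, not the real API.
type RoleBoundView =
  | "practitioner" | "front-of-house" | "treatment-coordinator"
  | "manager-owner" | "group-multi-site";

interface SessionBinding {
  view: RoleBoundView;      // resolved once at session start; no user selection step
  permittedSites: string[]; // scope granted by Access Manager
  canExport: boolean;       // decides whether the export control renders at all
}

function bindDashboardView(
  role: RoleBoundView,
  grants: { sites: string[]; export: boolean },
): SessionBinding {
  // The role fully determines the Dashboard View presented on login.
  return { view: role, permittedSites: grants.sites, canExport: grants.export };
}
```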
Flow 2: Apply a filter
Inferred from technical spec §3.4 state machine and §13.4 filtering.
User is in Viewed state
→ Selects date range / site / provider from filter bar
→ View transitions to Filtered state
→ Metrics re-render for the new scope
→ Filter event written to audit log
→ User clears filter → returns to Viewed state
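The audit event schema is owned by the technical spec; the sketch below illustrates one plausible shape for the filter event written in this flow, with all field names assumed:

```typescript
// Hypothetical filter audit event for Flow 2. Field names are illustrative;
// the authoritative schema belongs to the technical spec (§8).
interface FilterAuditEvent {
  eventType: "dashboard.filter.applied"; // illustrative event name
  userId: string;
  view: string;                          // the role-bound Dashboard View in use
  filters: {
    dateRange?: { from: string; to: string };
    siteId?: string;
    providerId?: string;
  };
  occurredAt: string;                    // ISO 8601 timestamp
}
```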
Flow 3: Drill through to detail
Inferred from technical spec §3.4, §6.2 outbound, and §13.2 rule 9.
User is in Viewed or Filtered state
→ Selects an attention indicator or metric drill-through link
→ View transitions to Drilled-Into state
→ Drill-through event written to audit log
→ Detail pane or Task Manager workflow opens
(if Task Manager: user-initiated navigation only; no task auto-created)
→ User navigates back → returns to Viewed or Filtered state
Flow 4: Export
Inferred from technical spec §3.4, §2.1, §9, and §13.2 rule 5.
User is in Viewed, Filtered, or Drilled-Into state
[Export permission check — if role does not hold export permission,
export control is not visible]
→ User selects export format (PDF / CSV)
→ Confirmation step shown (audit-significant action)
→ Export generated for current scope
→ Export event written to audit log
→ User returned to their prior state
(needs UX writer input — exact copy for the export confirmation step, including scope summary shown to user before they confirm)
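A hedged sketch of the two gates in Flow 4, suppression of the control for non-permitted roles and the pre-generation confirmation step; names and signatures are illustrative assumptions, not the real implementation:

```typescript
// Sketch of the export flow's gates. Only the behaviour is taken from the
// spec (§13.2 rule 5, §3.4); all identifiers here are assumptions.
type ExportFormat = "pdf" | "csv";

interface ExportScope {
  dateRange: { from: string; to: string };
  siteIds: string[];
  domains: string[]; // data domains included in the current view
}

// Rule: the control is absent, not disabled, when permission is missing.
function shouldRenderExportControl(canExport: boolean): boolean {
  return canExport;
}

async function runExportFlow(
  format: ExportFormat,
  scope: ExportScope,
  confirmWithUser: (scope: ExportScope) => Promise<boolean>, // confirmation step
  writeAuditEvent: (event: object) => Promise<void>,
): Promise<void> {
  const confirmed = await confirmWithUser(scope); // scope summary shown before generation
  if (!confirmed) return;                         // user backs out; no export, no export event
  // ...generate the export for the confirmed scope here...
  await writeAuditEvent({ eventType: "dashboard.export.generated", format, scope });
}
```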
5.2 State machines (mirror of technical spec §3.4)
The following UI treatments correspond to the Dashboard View Interaction State Machine defined in technical spec §3.4.
| State | Entry condition visible to user | Visual treatment | Confirmation pattern |
|---|---|---|---|
| Viewed | Dashboard loaded; role and scope confirmed | Full metric grid rendered; freshness labels on each metric; no filters active | None required |
| Filtered | Active filter(s) shown in persistent filter bar | Filter chips or tokens displayed; metrics scoped to filter; filter event logged | None required (reversible — clear filter to return) |
| Drilled-Into | User has selected a metric or attention indicator link | Detail pane or external module view opened; breadcrumb or back-navigation visible | None required (navigational, not mutating) |
| Exported | Export control selected; permission confirmed at export time | (needs UX writer input — loading/progress indicator during generation) | Confirmation step required — shows scope of export before generation |
No state transition may mutate data in any source module; this is non-negotiable and must be reflected in the absence of any editable fields or action-triggering controls on the dashboard surface. Inferred from technical spec §3.4 rule — "No transition from any Dashboard View state may mutate data in any source module."
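The state machine in the table above can be expressed as a small pure function. This is an illustrative mirror of technical spec §3.4 with identifiers assumed; note the deliberate absence of any mutating event:

```typescript
// Mirror of the §3.4 interaction state machine. State and event names follow
// the table above; the function and identifiers are assumptions.
type DashboardState = "viewed" | "filtered" | "drilled-into" | "exported";

type DashboardEvent =
  | "APPLY_FILTER" | "CLEAR_FILTER" | "DRILL_THROUGH"
  | "NAVIGATE_BACK" | "EXPORT_CONFIRMED" | "EXPORT_COMPLETE";

// Every transition is navigational or read-only: there is deliberately no
// event in this machine that could mutate data in a source module.
function transition(
  current: DashboardState,
  event: DashboardEvent,
  hasActiveFilter: boolean,
): DashboardState {
  const base: DashboardState = hasActiveFilter ? "filtered" : "viewed";
  switch (event) {
    case "APPLY_FILTER":     return "filtered";
    case "CLEAR_FILTER":     return "viewed";
    case "DRILL_THROUGH":    return "drilled-into";
    case "NAVIGATE_BACK":    return base;       // back to Viewed or Filtered
    case "EXPORT_CONFIRMED": return "exported";
    case "EXPORT_COMPLETE":  return base;       // user returns to their prior state
    default:                 return current;
  }
}
```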
5.3 Empty / loading / error / offline states
The following requirements are inferred from technical spec §13.2 rules 3 and 7, §14 reliability, and the read-only nature of the module.
Empty state
When no data exists for the current scope and filter combination (e.g. no appointments at a site today), the metric card displays the metric name, its data source, its freshness label, and a plain-language message indicating no data is available for the selected period. The card must not be removed from the layout; its position must be held to preserve layout stability. (needs UX writer input — empty state message copy)
Loading state
On initial load and after filter changes, metric cards must use a skeleton loading pattern (content-shaped placeholder) rather than a spinner, to minimise perceived latency and preserve layout stability. Real-time metrics that update in-place should animate their value change subtly, respecting prefers-reduced-motion.
Error state — source feed unavailable
When a source module feed is unavailable, the affected metric card must display its last known value with a clearly labelled staleness indicator (the freshness_label value from the metric definition), visually distinct from a live or near-live value. A non-blocking inline message on the affected card explains that the feed is temporarily unavailable. The rest of the dashboard continues to function normally. Inferred from technical spec §13.2 rule 7 and §14 reliability.
(needs UX writer input — staleness indicator label copy and inline message copy)
Error state — permission denied (cross-site view)
If a user attempts to access a cross-site or group view without the required Access Manager grant, they receive a clear permission-denied message on the view, not a blank page or silent failure. The attempt is still written to the audit log. Inferred from technical spec §13.2 rule 6.
(needs UX writer input — permission-denied message copy)
Offline state
Performance Dashboards is a read-only, server-rendered aggregation surface. Offline functionality is not a primary requirement; if connectivity is lost, the interface should display a non-blocking connectivity banner and continue to show the last rendered state of the dashboard rather than a blank screen. Filters and drill-through navigation should be disabled while offline, with a clear indication of why. Inferred from the real-time and near-real-time feed model in technical spec §4.1.
(needs UX writer input — offline banner copy)
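Taken together, the states in this subsection suggest a single per-card render union. The sketch below is an assumption about component shape, not a design-system contract:

```typescript
// Per-card render states per §5.3, as a discriminated union a card component
// might consume. All field names are assumptions. The offline condition is
// handled at dashboard level (connectivity banner), not per card.
type MetricCardState =
  | { kind: "loading" }                                  // skeleton placeholder, not a spinner
  | { kind: "live"; value: string; freshnessLabel: string }
  | { kind: "empty"; reason: string }                    // plain-language, never implies an error
  | { kind: "stale"; lastKnownValue: string; freshnessLabel: string; feedMessage: string }
  | { kind: "permission-denied"; message: string };      // logged to audit, never a blank page

// Every state renders into the same grid slot: the card is never removed,
// preserving layout stability as the empty-state requirement demands.
```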
5.4 Domain-specific metric rendering notes
The following notes govern how metrics from specific source modules are rendered and interacted with, supplementing the general interaction model above.
Subscription analytics (Hygiene Subscriptions)
Hygiene Subscriptions emits operational analytics including active subscriber count, churn rate, subscription revenue, fulfilment timeliness, and payment failure rates. These signals MUST be surfaced in the Manager/Owner and Group/Multi-Site Dashboard Views as a distinct Subscriptions domain group within the metric grid. Subscription metrics are batch-aggregated (not real-time); their freshness labels must reflect this. Churn and payment failure figures are operationally significant and MAY qualify as attention indicator triggers when they exceed practice-configured thresholds. Drill-through from subscription metric cards leads to the Hygiene Subscriptions module where individual subscription records can be managed; this navigation is user-initiated only. Inferred from Hygiene Subscriptions technical spec §14.
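Since threshold configurability is still an open question (§13), the following is a purely hypothetical shape for a practice-configured attention threshold, shown only to clarify what "practice-configured thresholds" would need to capture:

```typescript
// Hypothetical threshold configuration. Whether thresholds are platform
// defaults, practice-configurable, or both is an open question (§13).
interface AttentionThreshold {
  metricId: string;               // e.g. "subscriptions.churn-rate" (illustrative id)
  comparator: "above" | "below";  // direction in which the value becomes significant
  value: number;                  // threshold that raises the attention indicator
  triggerLabel: string;           // plain-language trigger condition shown on the indicator
}
```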
Commerce metrics (Product Shop)
Product Shop emits commerce performance signals: revenue totals, order volume, recommendation-to-purchase conversion rate, fulfilment exception queue depth, and refund volume with reason-code breakdown. These MUST appear in the Manager/Owner Dashboard View under a Commerce domain group. Fulfilment exception queue depth and refund volume with reason-code breakdown are operationally significant and SHOULD surface as attention indicators when exception or refund counts exceed configured thresholds. Drill-through from commerce metric cards leads to the Product Shop module. The conversion rate metric (recommendation-to-purchase) MUST be labelled clearly as a recommendation-driven figure so staff understand it reflects AI-assisted recommendation outcomes; a provenance label consistent with the AI-origin labelling pattern (§11) MUST accompany it. (needs UX writer input — confirm drill-through destination for Product Shop metrics)
Loyalty cohort and retention metrics (Loyalty Insights)
Loyalty Insights emits practitioner-linked cohort metrics and retention trend signals, including referral effectiveness and reward-impact analytics. These are advisory, non-deterministic outputs — they MUST NOT be presented as point-in-time scorecards or individual performance scores. On the dashboard surface, Loyalty cohort metrics must carry a tooltip or inline label that makes the advisory, trend-based nature of the signal explicit (e.g. indicating the cohort window and the non-deterministic derivation method). Drill-through from Loyalty metric cards leads to the Loyalty Insights module for full cohort analysis. The mid-session enable/disable behaviour for Loyalty Insights is an open question; see §13 open question 6.
HR operational efficiency metrics (HR & People Manager)
HR & People Manager emits operational efficiency signals for manager-level views: approval speed, overdue task count, onboarding duration, and workflow throughput. These signals MUST be surfaced in the Manager/Owner Dashboard View under an HR Efficiency domain group, displayed at team or role-group level only — never at named individual level, consistent with the anti-surveillance principle (§2). Drill-through from HR efficiency metric cards leads to HR & People Manager. These metrics arrive as batch signals; freshness labels must reflect batch cadence. (needs UX writer input — confirm drill-through destination and batch update frequency)
Staffing coverage and rota availability (Rota Manager)
Where the Manager/Owner Dashboard View surfaces coverage or utilisation metrics, rota availability patterns sourced from Rota Manager MUST be used as the foundational input for coverage-by-role and coverage-by-location reporting. This ensures that coverage figures displayed on the Performance Dashboard are consistent with what Rota Manager shows for the same period. Coverage metrics must be labelled with their data source (Rota Manager) so staff understand the provenance of the figure. Discrepancies between rota-planned coverage and actual utilisation (sourced from Appointment Manager) may surface as attention indicators. (needs UX writer input — confirm coverage metric card design and attention indicator threshold for under-coverage)
Task Manager SLA and workload metrics
Task Manager emits SLA state, overdue task counts, and workload-by-role/location summaries as dashboard KPIs. Where Performance Dashboards surfaces these signals, the visual treatment MUST align with Task Manager's governance-always-visible and ownership-explicit principles: every task-derived metric must display which role group or location owns the tasks in the aggregation, and the SLA state label must use vocabulary consistent with Task Manager's own SLA status labels (to avoid staff encountering two different terms for the same state). Drill-through from task workload metrics leads to the Task Manager module. Task metrics MUST NOT imply named individual ownership on the dashboard surface; aggregation is always at role-group or location level. Inferred from Task Manager §4.1.
Communication Hub audit and oversight metrics
Communication Hub emits audit and oversight signals relevant to manager and admin roles: SLA health for message handling, AI suggestion acknowledgement rates (AI suggestions shown vs. dismissed), message-to-work conversion events, blocked group-chat attempt counts, and SLA breach counts. These signals MUST be surfaced in the Manager/Owner Dashboard View under a Communications Oversight domain group, visible only to roles with the relevant Access Manager permission. AI suggestion acknowledgement rates MUST carry an AI-origin provenance label consistent with §11. SLA breach counts SHOULD surface as attention indicators. Drill-through from communication oversight metrics leads to the Audit & Compliance module or Communication Hub depending on the metric type. (needs UX writer input — confirm per-metric drill-through destination)
6. Component inventory
New components introduced by this module:
- Metric card — the primary display unit for a single Performance Metric; shows metric name, current value, domain badge, freshness label, and optional attention indicator. Appears in all Dashboard Views. Inferred from technical spec §3.1 and §3.2.
- Attention indicator — a non-blocking visual cue (badge + icon + optional colour treatment) attached to a metric card when a trigger condition is met; includes a link to drill-through or Task Manager. MUST NOT rely on colour alone. Inferred from technical spec §3.3 and §14 accessibility.
- Freshness label — an inline label on each metric card indicating the update model and data currency (e.g. "live", "updated 5 min ago", "daily summary"). Inferred from technical spec §3.1 `FreshnessLabel` field and §13.2 rule 3.
- Dashboard filter bar — a persistent control area containing date-range, site, and provider filter controls; shows active filter chips; visible only when the underlying metric supports the filter dimension. Inferred from technical spec §13.4.
- Dashboard View container — the role-bound grid layout that holds metric cards and the filter bar; supports a configurable layout where the practice has enabled that capability. Inferred from technical spec §3.2.
- Export control — a control (suppressed entirely for non-permitted roles) that initiates the export flow and triggers the confirmation step. Inferred from technical spec §2.1 and §13.2 rule 5.
- Staleness indicator — a visually distinct treatment on a metric card when its source feed is unavailable, showing the last known value and a plain-language feed-unavailable message. Inferred from technical spec §13.2 rule 7.
- Domain badge — a labelled badge on each metric card identifying the data domain (Operational / Clinical / Financial / Engagement / Compliance / Subscriptions / Commerce / Communications Oversight). Colour MUST NOT be the sole differentiator; each badge carries a text label. Domain badges enable users to assess a metric's provenance at a glance and support screen-reader comprehension when programmatically associated with the metric value.
- Provenance label — an inline label or tooltip attached to metrics whose source is an AI-derived or non-deterministic signal (e.g. AI Concierge call-handling metrics, Product Shop recommendation-conversion metrics, Loyalty Insights cohort outputs). Visually distinct from the metric value; carries an AI-origin or advisory-signal indicator as appropriate. Inferred from technical spec §7 and platform-wide AI Boundaries policy.
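To show how several of the new components above compose on a single card, here is a hedged sketch of a Metric Card contract; the real component API belongs to the design system, so every name is an assumption:

```typescript
// Illustrative Metric Card props, composing the freshness label, domain
// badge, attention indicator, provenance label, and staleness indicator
// described in the inventory above. Names are assumptions.
interface MetricCardProps {
  metricName: string;
  value: string;                          // pre-formatted, locale-aware
  domain: string;                         // drives the Domain badge (text label + icon)
  freshnessLabel: string;                 // always visible, never progressive-disclosure
  attention?: {
    triggerCondition: string;             // shown inline or in a tooltip (§11)
    drillThroughHref: string;             // indicators always link to a destination
  };
  provenance?: "ai-origin" | "advisory";  // Provenance label, where applicable
  stale?: { lastKnownValue: string; message: string }; // Staleness indicator state
}
```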
Reused from the design system:
- Date-range picker
- Filter chip / token
- Skeleton loader
- Toast notification
- In-app banner
- Confirmation modal
- Breadcrumb navigation
- Permission-denied state panel
7. Visual design notes
- Typography: (needs UX writer input — heading scale and body scale decisions for metric values, labels, and domain badges)
- Colour: Semantic colour is used for attention indicators and domain badges (Operational / Clinical / Financial / Engagement / Compliance / Subscriptions / Commerce / Communications Oversight). Colour MUST NOT be the sole differentiator for attention states — an icon or text label MUST accompany any colour-coded indicator. Inferred from technical spec §14 accessibility — "Attention indicators MUST not rely on colour alone to convey meaning." (needs UX writer input — specific semantic colour mappings, including new Subscriptions, Commerce, and Communications Oversight domains)
- Iconography: Each attention indicator and domain badge requires an accompanying icon so meaning is never conveyed by colour alone. Icons must be labelled (visually or via ARIA) and never used icon-only. Inferred from technical spec §14 accessibility requirement. (needs UX writer input — icon set selection, including icons for Subscriptions, Commerce, and Communications Oversight domain badges)
- Motion: Subtle in-place value transitions for real-time metrics are permitted; all motion MUST respect `prefers-reduced-motion`. Skeleton loaders on initial render. No decorative animation. Inferred from technical spec §14 WCAG 2.1 AA and the real-time update model in §4.1.
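A minimal sketch of gating value-change animation on the user's motion preference, using the standard matchMedia API (the CSS class name is assumed):

```typescript
// Gate in-place value transitions on the user's motion preference, per the
// motion note above. matchMedia is a standard Web API; the class is assumed.
const prefersReducedMotion =
  window.matchMedia("(prefers-reduced-motion: reduce)").matches;

function animateValueChange(el: HTMLElement, next: string): void {
  if (prefersReducedMotion) {
    el.textContent = next;                // swap instantly, no transition
    return;
  }
  el.classList.add("value-transition");   // hypothetical CSS class for the subtle fade
  el.textContent = next;
}
```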
8. Accessibility & inclusivity
The module MUST meet WCAG 2.1 AA (per technical spec §14). Specifically:
- Text contrast ≥4.5:1 (normal) / ≥3:1 (large)
- All interactive controls reachable via keyboard
- Focus states visible
- Form fields have programmatic labels
- ARIA used only where native semantics are insufficient
- Touch targets ≥44×44 px on mobile/tablet
- Motion can be reduced via `prefers-reduced-motion`
- Screen reader tested on NVDA / VoiceOver / TalkBack
Module-specific accessibility requirements:
- Attention indicators MUST include an icon or text label in addition to any colour treatment, so the indicator is meaningful to users who cannot perceive colour differences. Inferred directly from technical spec §14 — "Attention indicators MUST not rely on colour alone to convey meaning; they MUST include an accompanying icon or text label."
- Freshness labels must be programmatically associated with their metric card so screen reader users understand the data currency of the value they are reading. Inferred from technical spec §13.2 rule 3 and the WCAG requirement for accessible name associations.
- Drill-through links must have descriptive accessible names that include the metric name, so screen reader users understand where the link leads without relying on visual context. Inferred from the drill-through interaction model in technical spec §3.4.
- The staleness indicator state (last known value, feed unavailable) must be announced to screen reader users when the state changes, using a live region or equivalent ARIA pattern. Inferred from technical spec §13.2 rule 7 and the need for non-sighted users to be equally informed of data degradation.
- Domain badges and provenance labels must be programmatically associated with their metric card values so that screen reader users receive full context (metric name, domain, provenance, freshness) when navigating the metric grid.
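One way to satisfy the live-region requirement above, sketched with an assumed element id and placeholder copy (final wording needs UX writer input):

```typescript
// Announce a staleness change via an ARIA live region, per the requirement
// above. The element id is an assumption; the markup would carry
// <div id="dashboard-announcements" aria-live="polite" role="status">.
const liveRegion = document.getElementById("dashboard-announcements");

function announceStaleness(metricName: string): void {
  if (!liveRegion) return;
  // Polite announcement: informs screen reader users without interrupting.
  // Placeholder copy only; the final string needs UX writer input.
  liveRegion.textContent =
    `${metricName}: showing last known value; live feed temporarily unavailable.`;
}
```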
9. Internationalisation
- Locale-aware date/time/number formatting
- All user-facing strings externalised
- Layouts tolerant of 30% string-length growth (German, French)
Metric values that include currency figures (consumed from Financial Insights) must use locale-aware currency formatting consistent with the practice's configured locale. Inferred from the Financial domain metric set in technical spec §4.2.
Freshness labels and staleness indicators must be fully internationalised strings, as they appear on every metric card and carry operational meaning. Inferred from technical spec §3.1 FreshnessLabel field.
Domain badge labels (including Subscriptions, Commerce, and Communications Oversight) and provenance label copy must be externalised and internationalised on the same basis as freshness labels.
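Both the currency and freshness requirements above map directly onto the standard Intl API. A short sketch with example locale and currency values (the real values come from practice configuration):

```typescript
// Locale-aware currency and relative-time formatting via the standard Intl
// API. The locale and currency below are examples, not configured values.
const locale = "de-DE";  // the practice's configured locale (example)
const currency = "EUR";  // example currency code

const money = new Intl.NumberFormat(locale, { style: "currency", currency });
console.log(money.format(1234.5)); // "1.234,50 €"

const relative = new Intl.RelativeTimeFormat(locale, { numeric: "auto" });
console.log(relative.format(-5, "minute")); // localised "vor 5 Minuten"
```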
RTL support: (needs UX writer input — confirm whether any target market requires RTL layout)
10. Cross-module UX touchpoints
All touchpoints below are inferred from integration contracts declared in technical spec §6 and §10, supplemented with obligations identified from consuming-module specifications.
- Task Manager — when a user selects a drill-through link attached to an attention indicator, they may be navigated into a Task Manager workflow. This is a user-initiated, one-way navigation; the user completes or abandons the task in Task Manager and returns to the dashboard via browser/app back-navigation. No task is created automatically by the dashboard. The handoff must preserve the user's active filter context so they return to the same scoped view. Task-derived metrics (SLA state, overdue task counts, workload by role/location) must use vocabulary and status labels consistent with Task Manager's own labels, and must aggregate at role-group or location level only — never surfacing named individual task ownership. Inferred from Task Manager §4.1.
- Appointment Manager — real-time utilisation, cancellation, and DNA metrics originate here. No direct UX navigation; the Appointment Manager is a data source only. Drill-through from diary-gap metrics may link to the Appointment Manager surface (needs UX writer input — confirm drill-through destination for diary/utilisation metrics).
- Integrated Payments — payment and outstanding balance metrics originate here. Drill-through from financial headline metrics may surface detail in Integrated Payments (needs UX writer input — confirm drill-through destination).
- Financial Insights — financial domain metrics are consumed as batch headline aggregates from this module. The dashboard renders them read-only. Drill-through from financial metric cards leads to Financial Insights rather than reproducing its content on the Performance Dashboards surface, ensuring alignment between the two modules' progressive-disclosure conventions. No duplication of Financial Insights' statement or forecasting views is permitted here.
- AI Concierge — call handling statistics are consumed as pre-aggregated real-time signals. No direct UX navigation; metrics derived from AI Concierge should carry a provenance label indicating their origin.
- Lab Manager — lab turnaround and overdue case metrics originate here. Drill-through destination (needs UX writer input — confirm).
- Rewards Manager — reward, referral, and charity metrics originate here. Drill-through destination (needs UX writer input — confirm).
- HR & People Manager — staffing, absence, onboarding metrics, and operational efficiency signals (approval speed, overdue task count, onboarding duration, workflow throughput) originate here as batch signals. These are displayed at team or role-group level only, never at named individual level. Drill-through destination (needs UX writer input — confirm).
- Rota Manager — rota availability and coverage-by-role/location data originate here and serve as the foundational input for coverage and utilisation metrics on the Manager/Owner view. Coverage figures must be labelled with Rota Manager as their source. Discrepancies with actual utilisation (from Appointment Manager) may trigger attention indicators. Drill-through destination (needs UX writer input — confirm).
- Hygiene Subscriptions — active subscriber count, churn rate, subscription revenue, fulfilment timeliness, and payment failure rates originate here as batch-aggregated signals. These appear in the Manager/Owner and Group/Multi-Site views under the Subscriptions domain group. Drill-through leads to the Hygiene Subscriptions module. Inferred from Hygiene Subscriptions technical spec §14.
- Product Shop — commerce metrics (revenue totals, order volume, recommendation-to-purchase conversion rate, fulfilment exception queue depth, refund volume with reason-code breakdown) originate here. These appear in the Manager/Owner view under the Commerce domain group. Recommendation-conversion metrics carry an AI-origin provenance label. Drill-through leads to the Product Shop module. (needs UX writer input — confirm drill-through destination)
- Loyalty Insights — recall compliance trends, churn risk alerts, referral effectiveness, reward-impact analytics, and practitioner-linked cohort views originate here, conditionally. Cohort and retention metrics are advisory and non-deterministic; they MUST NOT be presented as point-in-time scorecards. The dashboard must handle the case where Loyalty Insights is enabled or disabled mid-session gracefully (see open questions).
- Communication Hub — SLA health, AI suggestion acknowledgement rates, message-to-work conversion events, blocked group-chat attempt counts, and SLA breach counts are emitted as oversight signals for manager and admin roles. These appear under the Communications Oversight domain group. AI suggestion acknowledgement metrics carry an AI-origin provenance label. Drill-through leads to the Audit & Compliance module or Communication Hub depending on metric type. (needs UX writer input — confirm per-metric drill-through destination)
- AI Quality Monitor — any metrics sourced from or influenced by AI Quality Monitor MUST be aggregated at team or role-group level by default. Individual-level performance data MUST NOT surface on any dashboard view. No named individual scoring, ranking, or comparative grading is permitted. This is a hard constraint derived from AI Quality Monitor Core UX Principle §2 (anti-surveillance by design) and applies regardless of the requesting user's role.
- Smart Dashboards — Performance Dashboards is the designated destination for historical and trend analysis when users navigate outward from Smart Dashboards. Entry points from Smart Dashboards should land on a contextually appropriate Dashboard View where possible (e.g. pre-filtering to the relevant metric domain or date range implied by the Smart Dashboards context). (needs UX writer input — confirm landing-view logic for Smart Dashboards hand-offs)
- Access Manager — governs all role binding, permission scope, and export control. No direct UX navigation from the dashboard; Access Manager is consulted at session start and at export time. Permission-denied states must be handled gracefully on the dashboard surface.
- Audit & Compliance — all access, filter, drill-through, and export events are emitted here. No direct UX surface for the end user on the dashboard; audit log inspection is handled in the Audit & Compliance module. Practice owners and group managers may access engagement signals (dashboard adoption) via that module.
UX consistency rules:
- Export controls must appear in a consistent location across all Dashboard Views, suppressed entirely (not disabled) for non-permitted roles. Inferred from technical spec §13.2 rule 5.
- Drill-through navigation must always be user-initiated — no automatic redirection from the dashboard to another module. Inferred from technical spec §13.2 rule 9.
- Filter state must persist during drill-through so that navigating back returns the user to their scoped view. Inferred from technical spec §3.4 state machine — "returns to Viewed state when the user clears filters or navigates back to the top level."
- Freshness labels must use a consistent vocabulary and visual treatment across all metric cards in all Dashboard Views. Inferred from technical spec §3.1 `FreshnessLabel` field and §13.2 rule 3.
- Domain badge labels and provenance labels must use consistent vocabulary across all Dashboard Views and all metric domains.
11. Governance & auditability
The following governance patterns are derived from technical spec §8, §9, and §13.2.
- Read-only surface: The entire dashboard UI is read-only. No control on the dashboard surface initiates a mutation in any source module. This must be visually apparent — no editable fields, no save buttons, no action-triggering controls except drill-through navigation links and the export control. Inferred from technical spec §3.4 and §13.2 rule 2.
- Freshness labels always visible: Every metric card displays its freshness label (live / near-live / daily summary) at all times. This is not progressive-disclosure content — it must be visible without interaction, so staff can always assess data currency. Inferred from technical spec §13.2 rule 3.
- Attention indicator provenance: Each attention indicator must display its trigger condition (the reason it has been raised) in a tooltip or inline label accessible without drill-through, so staff can judge whether the indicator is relevant to them before navigating away. Inferred from technical spec §3.3 — "An Attention Indicator MUST declare its trigger condition."
- Export confirmation: Export is an audit-significant action. The UI must show a confirmation step before generating an export, summarising the scope (date range, site, data domains) that will be included. This gives the user a final check before a permanent audit record is created. Inferred from technical spec §8 export events and §13.2 rule 5.
- Permission scope visible: The user's active permission scope (which sites and data domains they are viewing) must be visible on the dashboard surface — for example, in a persistent header or filter bar indicator — so staff always know the boundaries of the data they are seeing. Inferred from technical spec §9 and §3.2 `PermissionScope` field.
- Cross-site access labelled: When a group or multi-site view is active, the dashboard must make this explicit — labelling the view as a cross-site or group view and listing the sites in scope. Inferred from technical spec §8 — "cross-site view access: any access to a group or multi-site comparison view, with the sites in scope."
- AI-origin metrics labelled: Metrics derived from AI Concierge, Product Shop recommendation conversions, Communication Hub AI suggestion signals, or any other AI-influenced source must carry a provenance label. If future AI-generated summaries are introduced, they must be visually distinct (e.g. a clear AI-origin badge) and must not be presented as equivalent to raw metric values. Inferred from technical spec §7.
- Advisory-signal metrics labelled: Metrics derived from non-deterministic or advisory sources (e.g. Loyalty Insights cohort outputs) must carry a provenance label or tooltip that makes the advisory nature and derivation method explicit, so staff do not misinterpret trend-based signals as definitive point-in-time facts.
- Anti-surveillance enforcement: No metric card, attention indicator, or drill-through surface may present named individual performance scores, rankings, or comparative grading. Aggregation is at team, role-group, or location level only. Consistent with AI Quality Monitor Core UX Principle §2.
- Metric definition audit trail: Changes to metric definitions (calculation source, visibility rules, update model) are version-controlled and auditable. The UI does not expose this audit trail directly in dashboard views; it is available via Audit & Compliance. Inferred from technical spec §3.1 `AuditTrail` field and §8.
12. Notification & communication patterns
The following patterns are inferred from the module's non-blocking attention model (technical spec §3.3), its audit event emission (technical spec §8), and the explicit non-goal of automated action (technical spec §11).
- In-app banner: Used for connectivity loss (offline state) and source feed unavailability affecting multiple metrics. Non-blocking; does not prevent dashboard use. Dismissible once the condition is resolved. Inferred from technical spec §13.2 rule 7 and §14 reliability. (needs UX writer input — banner copy for feed-unavailability and offline states)
- Toast: Used to confirm that an export has been successfully generated and is available for download. Brief, auto-dismissing. Inferred from the export flow in technical spec §3.4 and §2.1. (needs UX writer input — export success toast copy)
- Attention indicators (in-dashboard, non-blocking): The primary notification surface for this module. Rendered as badges, colour cues, and icons on metric cards when trigger conditions are met. Not push notifications — they are visible only when the user is actively viewing the dashboard. Inferred from technical spec §3.3.
- Push notification: Performance Dashboards does not emit push notifications directly. If a future version requires staff to be alerted to critical metric thresholds outside of an active session, that capability must be routed via Communication Hub — not implemented directly by this module. Consistent with technical spec §11 non-goal — "dashboards surface exceptions and provide drill-through; they do not initiate tasks, send communications."
- Email / SMS: Not emitted by this module directly. Export delivery via email, if required in a future version, would be routed via Communication Hub. Inferred from the non-goal of automated communication in technical spec §11.
13. Open questions
(The following open questions are carried forward from technical spec §15, supplemented with UX-specific questions identified during synthesis.)
- Export role permissions: Which staff roles are permitted to export, and is this configurable per practice or fixed at platform level? This directly affects whether the export control appears in the UI for a given role, and what the permission-denied state looks like. Carried from technical spec open question 1.
- Tablet surface scope: Are all five role-bound Dashboard Views available on the tablet app, or is a subset defined for tablet only? This affects the tablet layout design and whether any views require a "web only" indicator on tablet. Carried from technical spec open question 2.
- Dashboard layout configurability: Who decides whether per-user layout configuration is enabled — is it a practice-level Admin Control Plane setting? Which roles can change it? This affects whether a layout-edit mode needs to be designed and for whom it is visible. Carried from technical spec open question 3.
- Metric definition ownership: Can practice admins create custom metrics, or is the metric set entirely platform-defined? This determines whether any metric-definition UI is needed within this module. Carried from technical spec open question 4.
- Saved filter configurations: Are named/saved filter sets in scope for this build? If so, a save-filter and manage-saved-filters UI pattern is needed. Currently deferred per technical spec §13.4. Carried from technical spec open question 5.
- Loyalty Insights mid-session disable: If Loyalty Insights is disabled mid-session, does the dashboard re-render automatically or require a page reload? The UX treatment for a metric that disappears mid-session (card removed, or card shown in an empty/unavailable state?) needs to be defined. Carried from technical spec open question 6.
- Drill-through destinations: The technical spec confirms that drill-through links exist from metric cards to source modules, but does not specify which module is the destination for each metric domain. UX patterns for the drill-through handoff (e.g. does the destination open in a new tab, a side panel, or full navigation?) need to be confirmed per destination module. (needs UX writer input and product alignment)
- Attention indicator trigger copy: Each attention indicator must display its trigger condition. The vocabulary and copy pattern for trigger-condition labels across all metric domains needs to be defined. (needs UX writer input)
- Freshness label vocabulary: The technical spec defines `freshness_label` as a human-readable staleness indicator but does not define the vocabulary. A canonical set of freshness label strings needs to be agreed and externalised as internationalised strings. (needs UX writer input)
- Permission scope display: The user's active permission scope (sites and data domains in view) must be visible on the dashboard surface. The precise location and component pattern for this indicator has not been designed. (needs UX writer input)
- Smart Dashboards hand-off landing view: When a user navigates from Smart Dashboards to Performance Dashboards, should the destination view be pre-filtered to the metric domain or time-range context implied by the Smart Dashboards surface? If so, what is the mechanism by which Smart Dashboards communicates that context? (needs product alignment between Smart Dashboards and Performance Dashboards teams)
- Subscription and commerce attention indicator thresholds: Hygiene Subscriptions churn rate, payment failure rate, Product Shop fulfilment exception count, and refund volume are operationally significant. What are the configurable threshold values that trigger attention indicators for these metrics — are they platform defaults, practice-configurable, or both? (needs product decision)
- Communication Hub per-metric drill-through destinations: Communication oversight metrics (SLA health, AI suggestion acknowledgement rates, conversion events, blocked group-chat attempts, SLA breach counts) may drill through to either Communication Hub or Audit & Compliance depending on the metric. The per-metric routing needs to be agreed between the Communication Hub and Performance Dashboards teams. (needs product alignment)
- Advisory-signal tooltip copy: Loyalty Insights cohort outputs and other non-deterministic signals require a tooltip or inline label explaining the advisory nature of the figure. The copy pattern for this label needs to be defined so it is consistent across all advisory-signal metric cards. (needs UX writer input)