Returning to Principles
Earlier this year, I wrote about design principles as intervention levers - exploring what principles can and cannot achieve in constrained environments. The reflection was sobering: principles create shared vocabulary and support evaluation, but they cannot guarantee consistency, overcome platform constraints, or enforce anything without governance structures to back them. I also developed a first version of design principles organised by agency level, distinguishing between things we control, things we influence, and things we can only aspire to. That framework helped me be honest about constraints rather than write aspirational documents that ignore reality.
That work was somewhat abstract. Two months into the performance reporting programme, it has had to become concrete. What principles actually apply here? What has the research taught us that generic principles miss? And how do you create accountability for design quality when you are working across teams with different capabilities and priorities?
What the Research Revealed
Two streams of work have shaped how I am thinking about principles for this programme. User research into the transparency hypothesis revealed that the public cannot find the dashboards, and NHS staff rely heavily on intermediaries to interpret them. The assumption that publishing data leads directly to informed choices does not hold, and this has implications for what principles should prioritise: if most users encounter the data through intermediaries, perhaps we should optimise for intermediary workflows rather than end-user self-service.
Conversations with the analytics team revealed that executives consume performance data primarily through printed board packs, not screen-based dashboards. The team also articulated an aspiration for visual brand identity - wanting outputs to be recognisably professional, like Health Foundation publications - and described wanting to move from "playing back numbers" to providing genuine insight. These findings do not invalidate generic design principles; clarity, hierarchy, and accessibility still matter. But they add specificity. Print-first is not a generic principle; it is a response to how this particular audience actually consumes information.
From Five Principles to Eight
The first version of the principles document had five principles, derived from literature review and general good practice. After the research, it has eight. The additions are not arbitrary; each responds to something we learned.
The Original Five
The first principle, start with decisions not data, establishes that dashboards exist to support action rather than to display information comprehensively. Every metric should map to a specific decision and decision-maker: "A&E four-hour performance below 85% triggers COO activation of surge protocol". If you cannot articulate what decision a metric supports, it probably should not be on the dashboard. This draws on Penin's (2018) observation that data is noise if we cannot decide what matters, and on the trajectory overlay pattern the analytics team already uses - showing performance against plan with management information as a leading indicator, directly answering "are we veering off track?".
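Because the principles end up embodied in production code (more on that below), this one can be expressed as a data shape rather than just prose. A minimal sketch in TypeScript - the interface and field names are illustrative, not the programme's real schema:

```typescript
// A sketch of the decision-mapping idea; the interface and field
// names are illustrative, not the programme's real schema.
interface DecisionMapping {
  metric: string;        // what is measured
  decisionMaker: string; // who acts on it
  trigger: string;       // the condition that demands action
  action: string;        // the decision the metric supports
}

// The principle's own example, expressed in that shape. If a metric
// cannot be written like this, it probably should not be on the dashboard.
const aeFourHour: DecisionMapping = {
  metric: 'A&E four-hour performance',
  decisionMaker: 'COO',
  trigger: 'performance below 85%',
  action: 'activate surge protocol',
};
```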
The second, ruthless prioritisation, addresses cognitive overload. Research on dashboard effectiveness consistently finds that simpler designs outperform comprehensive ones; Bach et al.'s (2023) analysis of 144 dashboards recommends starting with "simple designs that spotlight just the most important data". The CEO Email - which contains just four metrics - exists for a reason. Even the analytics team's "fan favourite" top-ten tables were questioned in our conversations; the team recognises that less may be more. The principle establishes thresholds: five to eight metrics for daily operational views, fifteen to twenty for weekly review, with progressive disclosure for deeper analysis.
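The thresholds can be made checkable rather than aspirational. A sketch, where the view names and the review mechanism are hypothetical but the limits come from the principle itself:

```typescript
// A sketch of the prioritisation thresholds as a checkable budget.
type ViewCadence = 'daily-operational' | 'weekly-review';

const METRIC_BUDGET: Record<ViewCadence, { min: number; max: number }> = {
  'daily-operational': { min: 5, max: 8 },
  'weekly-review': { min: 15, max: 20 },
};

// Returns warnings a design review (or CI step) could surface.
function checkMetricBudget(view: ViewCadence, metricCount: number): string[] {
  const { min, max } = METRIC_BUDGET[view];
  const warnings: string[] = [];
  if (metricCount > max) {
    warnings.push(
      `${view}: ${metricCount} metrics exceeds the budget of ${max}; ` +
        'move the rest behind progressive disclosure',
    );
  }
  if (metricCount < min) {
    warnings.push(
      `${view}: only ${metricCount} metrics; check nothing decision-critical is missing`,
    );
  }
  return warnings;
}
```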
The third, signal versus noise, pushes visualisations from descriptive ("87.2% performance") toward analytical ("three trusts below threshold requiring immediate intervention"). It mandates statistical context - funnel plots to distinguish meaningful outliers from natural variation, SPC charts to identify special cause versus common cause variation. A metric that never changes or never triggers action is just noise.
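To make the statistical context concrete, here is a sketch of one common SPC construction, the XmR (individuals) chart, where natural process limits sit at the mean plus or minus 2.66 times the mean moving range. The function names are mine; a production version would also apply run rules and handle proportion data properly:

```typescript
// A sketch of XmR (individuals) control limits. The 2.66 constant is
// the standard XmR scaling factor.
interface SpcLimits {
  centre: number;
  upper: number;
  lower: number;
}

function xmrLimits(values: number[]): SpcLimits {
  if (values.length < 2) {
    throw new Error('Need at least two data points for an XmR chart');
  }
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  // Moving ranges: absolute differences between consecutive points.
  const movingRanges = values.slice(1).map((v, i) => Math.abs(v - values[i]));
  const meanMovingRange =
    movingRanges.reduce((sum, r) => sum + r, 0) / movingRanges.length;
  return {
    centre: mean,
    upper: mean + 2.66 * meanMovingRange,
    lower: mean - 2.66 * meanMovingRange,
  };
}

// Points outside the limits suggest special cause variation worth
// investigating; points inside are common cause noise.
function specialCausePoints(values: number[]): number[] {
  const { upper, lower } = xmrLimits(values);
  return values.filter((v) => v > upper || v < lower);
}
```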
The fourth, design for scanning not reading, reflects how executives actually engage with reports. They have seconds, not minutes; reports get scanned quickly, often while walking between meetings or preparing for a board discussion. This principle emphasises preattentive processing - using colour, size, and position to enable rapid detection of what matters - and draws on the Gestalt principles of proximity, similarity, and enclosure. The test: if understanding the current state takes more than five seconds, the design is too complex.
The fifth, build to learn, mandates iterative development rather than specifying everything upfront. It requires shipping an MVP with five to eight metrics, instrumenting everything to track what gets clicked and printed, and planning iteration based on behaviour rather than opinions. The analytics team noted that there are "different opinions" about what to display, with SPC charts being a specific example; you cannot standardise without real-world testing.
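The "instrument everything" requirement maps onto standard browser APIs. A sketch - the /analytics endpoint and event names are hypothetical, but beforeprint, sendBeacon, and delegated click handling are real DOM mechanisms:

```typescript
// A sketch of click and print instrumentation.
function sendEvent(name: string, detail: Record<string, string>): void {
  // sendBeacon survives navigation and tab closes better than fetch.
  navigator.sendBeacon(
    '/analytics',
    JSON.stringify({ name, ...detail, at: new Date().toISOString() }),
  );
}

// Printing a board pack is a usage signal, not a failure mode - track it.
window.addEventListener('beforeprint', () => {
  sendEvent('print', { page: window.location.pathname });
});

// Track which metrics get clicked, so iteration follows behaviour
// rather than opinion. Assumes tiles carry a data-metric attribute.
document.addEventListener('click', (event) => {
  const tile = (event.target as Element | null)?.closest('[data-metric]');
  if (tile instanceof HTMLElement && tile.dataset.metric) {
    sendEvent('metric-click', { metric: tile.dataset.metric });
  }
});
```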
Added After Research
The user research and analytics team conversations revealed three gaps in the original principles - aspects of this particular context that generic dashboard guidance does not address.
The sixth principle, design for multiple output formats, emerged directly from the analytics team meeting. "A lot of the time, I will be honest, they print them out", one team lead said. "In this job I had to learn how to use the printer first". Print is not a secondary output format; it is the primary one for executive consumption. Board packs get printed, reports get photocopied, and outputs need to work in grayscale on a black-and-white laser printer. The principle establishes a print-first checklist: test in grayscale, ensure high contrast (WCAG AA minimum), check readability at actual print size, verify line weights are distinguishable, use patterns and shapes alongside colour. It also addresses multi-format output - the same underlying data optimised differently for print (high contrast, 10-12pt text), screen (interactive, hover states), and slide (simplified, one message per visual).
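One way to hold the multi-format requirement in code is to make the output format an explicit parameter rather than an afterthought. A sketch with a hypothetical component; the per-format values follow the checklist above - high contrast and 10-12pt for print, interactivity on screen, one simplified message per slide:

```tsx
import React from 'react';

type OutputFormat = 'print' | 'screen' | 'slide';

const FORMAT_STYLES: Record<OutputFormat, React.CSSProperties> = {
  print: { color: '#000000', background: '#ffffff', fontSize: '11pt' },
  screen: { color: '#212b32', fontSize: '16px' },
  slide: { color: '#000000', fontSize: '28px' },
};

// The same underlying data, optimised differently per format.
export function MetricValue(props: {
  label: string;
  value: string;
  format: OutputFormat;
}) {
  const { label, value, format } = props;
  return (
    <p style={FORMAT_STYLES[format]}>
      <strong>{label}</strong> {value}
    </p>
  );
}
```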
The seventh, support narrative not just numbers, responds to the analytics team's aspiration: "As analysts we often get stuck in that space of playing back numbers, but we really want to get into the more interesting insights space". This mandates annotation capabilities - key event markers ("winter pressure period"), anomaly highlights ("bank holiday effect"), inflection points ("performance began declining here") - along with temporal context showing trajectory over time rather than just current rank. The goal is moving from "Trust A: 78% performance" to "Trust A: 78% (12 points below target, declining for six consecutive weeks, requires investigation)".
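The annotation capabilities imply a small data model that visualisation components can share. A sketch - the type names and dates are illustrative; the labels come from the principle's own examples:

```typescript
type AnnotationKind = 'key-event' | 'anomaly' | 'inflection-point';

interface ChartAnnotation {
  kind: AnnotationKind;
  date: string;  // ISO date the annotation attaches to
  label: string; // the narrative the chart should carry
}

const annotations: ChartAnnotation[] = [
  { kind: 'key-event', date: '2024-12-01', label: 'Winter pressure period' },
  { kind: 'anomaly', date: '2024-12-26', label: 'Bank holiday effect' },
  {
    kind: 'inflection-point',
    date: '2025-01-13',
    label: 'Performance began declining here',
  },
];
```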
The eighth, establish visual brand identity, responds to another direct observation: "When you go on the Health Foundation website or the Nuffield Trust, you can tell it's one of their graphs right away. They have their colours, their style, and it's very blatant". The aspiration was recognisability - outputs that look distinctively NHS, that signal quality and authority. This principle establishes the NHS colour palette (with grayscale-safe alternatives), typography standards, and spacing systems. It addresses the risk that outputs become indistinguishable from any other business intelligence tool. Brand identity is not decorative; it is about trust and credibility.
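Brand identity becomes enforceable when the palette, typography, and spacing live as shared tokens. NHS Blue (#005EB8) and the Frutiger/Arial stack come from the public NHS identity guidelines; the grayscale ramp, type scale, and spacing values below are illustrative, not the programme's agreed standards:

```typescript
export const tokens = {
  colour: {
    nhsBlue: '#005EB8',
    // Grayscale-safe ramp for black-and-white laser printers.
    grayscale: ['#000000', '#4c4c4c', '#919191', '#d8d8d8'],
  },
  typography: {
    fontFamily: '"Frutiger W01", Arial, sans-serif',
    bodyPrintPt: 11, // within the 10-12pt print range
  },
  // A simple 4px-based spacing scale.
  spacing: [0, 4, 8, 16, 24, 32],
} as const;
```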
A Different Kind of Agency
The performance reporting work operates differently from the product design I wrote about earlier. There, the challenge was lack of authority - design sat adjacent to decision-making, and principles became suggestions without governance to enforce them. Here, the challenge is different: we have substantial agency to shape these products, but we are operating without formalised process in a top-down, politically driven environment. Nobody started with user needs. The oversight framework exists because ministers mandated league tables. The executive dashboards exist because senior leaders wanted consolidated views. The asks arrived as solutions - "build a dashboard" - not as problems to be understood. Design input happens after the initial build, retrofitting user-centred thinking onto products that were commissioned without it.
Within that constraint, though, there is more direct influence than in the earlier work. I am writing code that goes into production, shaping the public dashboard homepage, and being brought in to improve what the vendor built on a new prototype. The principles are not suggestions I hope someone follows; they are guidance for work I am directly involved in.
What we control includes direct design input into products, implementation of patterns through code contributions, visual specifications and component design, and iteration based on feedback from analytics teams. What we influence includes whether formal process gets established (planned for the new year), how the vendor interprets design requirements on future builds, and the balance between speed and rigour as the programme matures. What we are working against includes top-down requirements that do not start with user needs, compressed timelines set by political and governance cycles, the retrofit problem of improving products after architectural decisions are made, and vendor reporting lines running directly to executives that bypass design input.
Design Quality Without Formal Process
In orthodox user-centred design, there is a methodology: discovery, definition, development, delivery. Research precedes design; design precedes development; testing validates assumptions. The process creates natural checkpoints where principles get applied. In rapid delivery mode, those checkpoints do not exist. Products get built to political timelines, and design input happens when there is an opportunity rather than when methodology prescribes. Principles need to work in that reality.
The approach I am exploring involves going where the work is. Rather than producing specifications and hoping they get implemented, I am writing code directly, and the React components I build embody the principles: developers using them get consistent components, visual hierarchy, and NHS brand identity without needing to remember the rationale. Building credibility through delivery matters in an environment that values speed; being able to contribute code rather than just critique creates different conversations with developers, and it allows iteration based on real-world feedback rather than hypothetical scenarios. The principles are not theoretical ideals - they are baked into the products we are shipping, where we can see how they perform in practice and adjust as needed.
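As a concrete illustration of what "baked in" means, here is a sketch tying several of the earlier fragments together - brand colour for identity, a data-metric attribute feeding the click instrumentation, and a required decision-maker prop so no metric ships without an owner. The component and prop names are hypothetical:

```tsx
import React from 'react';

interface MetricTileProps {
  metric: string;        // instrumentation key (data-metric)
  label: string;
  value: string;
  decisionMaker: string; // required: forces "who acts on this?"
}

export function MetricTile({ metric, label, value, decisionMaker }: MetricTileProps) {
  return (
    <div data-metric={metric} style={{ borderLeft: '4px solid #005EB8', padding: 8 }}>
      <strong>{label}</strong> {value}
      <div>
        <small>Owner: {decisionMaker}</small>
      </div>
    </div>
  );
}
```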
References
Bach, B., Freeman, E., Abdul-Rahman, A., Turkay, C., Khan, S., Fan, Y. and Chen, M. (2023). Dashboard design patterns. IEEE Transactions on Visualization and Computer Graphics, 29(1), 342-352.
Penin, L. (2018). An Introduction to Service Design: Designing the Invisible. Bloomsbury.