Principles as Intervention Lever

Systems Thinking and Leverage Points

In Part 1, I described how our design systems working group was stopped by a colleague's observation that we needed to establish fundamental principles before rushing into patterns and components. In The Limits of Design Guides, I reflected on the design guide that previous team members had created, why documentation alone had not solved the consistency problem, and how guides compare to principles as systems interventions. That post ended with a distinction worth carrying forward: guides tell you what to do; principles tell you why. Both are weak interventions, but principles might be necessary groundwork for stronger ones. This post picks up from there, framing principles through the lens of systems thinking.

Donella Meadows identified twelve places to intervene in a system - twelve "leverage points" where a small shift can produce large changes (Meadows, 2008). These range from relatively weak interventions, such as adjusting parameters, to increasingly powerful ones: changing the rules of the system, its structure, and ultimately the paradigm from which the system arises. Dan Hill, in his work on mission-oriented innovation, applies this thinking to design practice, noting that "Meadows' 12 intervention points suggest how a lever can be exerted to produce systemic change" (Hill, 2022). The Systemic Design Association's intervention strategy tools help design teams identify "the various areas where leverage might induce significant effects in interventions" (Jones and Van Ael, 2022). The key insight across all three sources is that not all interventions are equal, and being realistic about the power of a chosen intervention is essential for setting appropriate expectations.

Where Principles Sit as a Lever

If we map design principles onto Meadows' framework, they sit somewhere in the middle - more powerful than tweaking parameters, but less powerful than changing system structure or goals. Principles attempt to influence the information flows and rules of a design system. They provide vocabulary for critique ("this violates our progressive disclosure principle"), which shapes the information that flows between team members; they also function as implicit rules, though crucially rules without enforcement.

The most immediate benefit is shared vocabulary. When reviewing a design, instead of saying "I don't like how busy this looks", we can say "this violates our progressive disclosure principle - too much information is visible by default". The vocabulary makes critique more precise and less personal. Principles also support evaluation: as I noted in Part 1, heuristics are better for evaluation than generation, so given a design we can systematically assess it against each principle. They document intent, even where platform constraints prevent achieving the principle in practice - we are saying "this is what good looks like, and here is why we didn't achieve it". And they provide ammunition for stakeholder conversations, offering principled justification where otherwise we would be relying on professional assertion alone.

The limitations are equally real. A team can have excellent principles and still produce inconsistent work, because principles only operate if people remember them at the moment of decision and have the time and freedom to apply them. Principles cannot overcome platform constraints: if the platform does not support semantic HTML for accessibility, a principle stating "design for accessibility" will not change that. Principles describe what good looks like; shared components embody it - when a team uses a well-designed component, the principle is encoded in the component itself and does not need to be remembered at all. We do not have shared components, so every team must re-implement every principle every time. And we have no design authority; teams can read our principles, disagree, and do something else.

Accepting the Limitations

Having spent time developing principles, articulating them clearly, and grounding them in the academic literature, it is uncomfortable but essential to then acknowledge their weakness as an intervention. As Cababa (2023) notes, "the first limitation is ignorance. When you are designing solutions, you often don't know what you don't know". Part of doing good work is being honest about its limits.

Acknowledging that principles are a weak intervention does not make them worthless. It means setting appropriate expectations with ourselves and stakeholders, pursuing stronger interventions in parallel rather than relying on principles alone, being realistic about timescales since weak interventions take longer to produce change, and documenting what would need to change for principles to gain more purchase.

Stronger Levers

If principles are a weak lever, stronger ones in our context would include a shared component library that encodes principles directly in code so teams do not need to remember them; building in the open across programmes so that problems are solved once rather than reinvented in silos, following the government-as-a-platform vision where "platforms enable citizens and private organizations, as well as all levels of government, to interact" (Peters and Fontaine, 2022); platform advocacy that influences the vendor to improve accessibility support, customisation options, and defaults aligned to NHS standards; and design authority that mandates review before products launch, converting principles from suggestions to enforceable rules.

None of these are currently available to us. They represent what would be needed for design quality to be more than a matter of individual team commitment.

Why Proceed Anyway

Given all this, we proceeded with developing principles for three reasons. First, it is the lever we can pull. We do not control the component library, we do not have platform influence, and we do not have design authority; what we do have is the ability to write documents, facilitate discussions, and create shared vocabulary. Systemic change often requires working at multiple levels simultaneously, and as Metcalf (2014) notes of sociotechnical systems, "the consequence of this limitation is to eliminate any possibility of changing everything in lockstep". We cannot wait for perfect conditions.

Second, principles are foundational for stronger interventions. Even if we later build a shared component library, we will need to know what principles those components should embody; even if we gain design authority, we will need to know what standards to enforce. Principles are necessary groundwork - not sufficient on their own, but required for anything else to work.

Third, the process of developing principles - discussing them as a team, grounding them in evidence, debating edge cases - has intrinsic value. It builds shared understanding, surfaces disagreements that would otherwise emerge messily during project work, and creates intellectual infrastructure that supports future conversations.

What We Developed

In practical terms, we developed fifteen principles organised by agency level. Seven are principles we control: visual hierarchy, progressive disclosure, task conformance, content and terminology, emotional tone, data provenance, and responsiveness. These are areas where we have genuine decision-making power within the platform's constraints. Three are principles we influence through coordination: cross-product consistency, "be consistent not uniform", and coherent visual language. These require collaboration with other teams and cannot be achieved by any one team alone. Five are principles we aspire to: accessibility, user control and freedom, error prevention, design for interruption, and familiarity with NHS patterns. These describe good design that the platform's architecture makes difficult or impossible to achieve fully. Accessibility sits in this category despite being a legal requirement for the NHS as a provider of public services - and therefore an area where we might expect more leverage - because platform limitations still constrain what we can directly deliver.

The categorisation by agency is what prevents the document from being aspirational fiction. We are not claiming we will achieve perfect accessibility when the platform's architecture prevents it. We are documenting what good looks like, what we can achieve, and what would need to change for us to achieve more.

Toward an Evaluation Focus

Given the limitations discussed, framing principles as "evaluation vocabulary" may be more honest than presenting them as design guidance. The document is a shared vocabulary for design critique, a framework for systematic evaluation, a communication tool for stakeholder discussions, and documentation of intent including where constraints prevent achievement. It is not a solution to platform constraints, a substitute for shared components, an enforcement mechanism, or a guarantee of consistency. The practical draft that follows this post still calls them "principles", but the evaluation framing may become more central as we learn what actually works.

The Longer Game

Principles as a weak intervention does not mean principles are pointless - it means they are part of a longer game. The sociotechnical systems literature emphasises that "users of systems 'interpret it, amend it, massage it and make such adjustments as they see fit and/or are able to undertake'" (Walker and Stanton, 2016, citing Clegg). Change happens through accumulation of small shifts rather than through single dramatic interventions. Over time, vocabulary spreads and evaluation becomes habitual; documented gaps support arguments for platform change; principles guide what to build if resources for shared components materialise; and individual designers who internalise principles produce better work even without enforcement. These are modest outcomes, but they are real and achievable.

The colleague who stopped us to ask about fundamental principles was right. We needed to establish them before rushing into patterns. But having established them, we also need to be clear-eyed about what they can and cannot do - and keep advocating for the stronger interventions that would make good design more than a matter of hope and individual commitment.

References

Cababa, S. (2023). Closing the Loop: Systems Thinking for Designers. Rosenfeld Media.

Hill, D. (2022). Designing Missions: Mission-oriented innovation in Sweden. Vinnova.

Jones, P.H. and Van Ael, K. (2022). Design Journeys Through Complex Systems. BIS Publishers.

Meadows, D. (2008). Thinking in Systems: A Primer. Chelsea Green Publishing.

Metcalf, G.S. (2014). Social Systems and Design. Springer.

Peters, B.G. and Fontaine, G. (2022). Research Handbook of Policy Design. Edward Elgar Publishing.

Walker, G.H. and Stanton, N.A. (2016). Command and Control: The Sociotechnical Perspective. CRC Press.