Governance, Legibility, and What Programmes Cannot See

Governance as design material

This series has been circling around the concept of governance for some time without confronting it directly. The first post described the epistemological tension between design and programme management, and noted that programme management cultures are shaped by real institutional pressures - political accountability, clinical safety, audit - that make governance not a bureaucratic preference but a structural necessity. The second post argued that design brings programme management a cross-cutting view and the ability to surface invisible decisions; the third examined the translation work required to make design insights legible within governance structures. But in each case, governance appeared primarily as the context within which design operates - the landscape to be navigated.

What I want to explore here is a different possibility: that governance might be better understood not as the landscape but as part of the material. The shift is from treating governance - approval steps, consent architecture, access controls, audit requirements - as a set of obstacles to navigate around or fixed criteria to be met, toward treating these structures as design inputs: things to work with rather than friction to minimise. But to work with governance as a material, one must first understand its properties - why it is shaped the way it is, what it can see, and what it structurally prevents from being known.

The legibility problem

Scott (1998) provides the foundational account. His analysis of how states render complex social realities governable identifies a mechanism that applies with equal force to programme management: legibility requires simplification. The state, Scott argues, takes "exceptionally complex, illegible, and local social practices" and creates "a standard grid whereby it could be centrally recorded and monitored" (p. 3). The resulting simplifications function as "abridged maps" that "did not successfully represent the actual activity of the society they depicted, nor were they intended to; they represented only that slice of it that interested the official observer" (p. 3). This is not an accusation of malice or incompetence; it is a description of what governance at scale necessarily requires. A national programme coordinating change across dozens of provider organisations cannot operate on the basis of situated, particular knowledge about each organisational context. It needs standardised categories - products deployed, organisations onboarded, milestones achieved - that are portable enough to travel from local delivery team to regional board to national programme report without requiring the reader to understand any particular context.

Shore and Wright (2024, p. 13) name this mechanism explicitly: governance operates by "imposing standardised taxonomies and measurements to make the world legible to the state so that it can be governed at a distance". The distance is not incidental; it is constitutive of the governance relationship. A programme board sitting in London reviewing a highlight report about a data platform deployment in a regional trust is governing at distance by definition, and the reporting infrastructure must produce categories that work across that distance. The categories that survive the journey are those that can be standardised, counted, and aggregated: numbers of deployments, percentages of adoption, RAG-rated risks. What cannot survive is the particular - the situated knowledge of how a specific team in a specific trust actually uses a specific product, whether their workflow has genuinely changed, whether the data quality has improved in ways that affect clinical decisions.

Strathern (2003, p. 5) identifies a further mechanism. In her account of how audit practices reshape organisations, she observes that "what is being assured is the quality of control systems rather than the quality of first order operations. In such a context accountability is discharged by demonstrating the existence of such systems of control, not by demonstrating good teaching, caring, manufacturing or banking". The observation is not that audit is fraudulent; it is that the audit relationship systematically privileges second-order evidence (does the organisation have processes for ensuring quality?) over first-order evidence (is the service delivery actually good?). A programme can demonstrate comprehensive governance while producing services that clinicians cannot use, patients cannot navigate, and operational staff work around rather than with. The assurance operates at the level of the control system, and the first-order operations remain, in an important sense, ungoverned - not because nobody cares about them, but because the governance infrastructure was not designed to see them.

Hunter (2015, p. 3) extends the analysis from what audit sees to what it produces. Audit technologies are not merely observation instruments; they are "a means of governing subjects; of making them more governable by constituting them as the sorts of subjects demanded by the programmatic ambitions of government". The programme manager who organises their work around highlight reports, risk registers, and RAG ratings is being constituted as a particular kind of professional subject - one whose competence is legible through the metrics the governance apparatus can register. The programme manager is not choosing to ignore situated, qualitative evidence about whether services work; they are operating within an accountability structure that recognises certain kinds of evidence and not others.

Decomposition as the mechanism


The mechanism through which legibility requirements produce specific blindnesses in programme management is decomposition. Suoheimo and Jones (2025, p. 21) identify the pattern directly: "contemporary governance models tend to deconstruct problems into smaller sub-problems but then isolate them into separate units", resulting in "specialised silos without horizontal information flow". A complex, cross-cutting challenge - redesigning how health data flows between primary and secondary care, or building a platform that must serve clinical, operational, and analytical users simultaneously - is decomposed into workstreams because that is what programme governance requires. Each workstream needs a lead, a plan, a set of deliverables, a risk register, and a reporting line. The decomposition makes the challenge legible to the programme board: instead of an irreducible complexity, the board sees a set of manageable components, each with its own status and trajectory.

The problem is not that decomposition is wrong; it is that the categories of decomposition determine what the programme can subsequently perceive. When a programme decomposes a challenge by product or technology domain, each workstream generates reporting about its own deliverables. What no workstream reports on, because no workstream owns it, is the user journey that crosses workstream boundaries, the service quality that emerges from integration rather than from any component individually, or the behaviour change in clinical practice that the programme was ostensibly commissioned to produce. Scott's (1998) distinction between epistemic knowledge and metis - the practical, situated knowledge that comes only from direct engagement with a particular context - maps precisely onto this structural gap. The programme's reporting infrastructure captures epistemic knowledge: counts, categories, status indicators that can be codified and transmitted across distance. What it cannot capture is the metis of actual service use - the workarounds clinicians develop, the informal practices that emerge around a poorly designed data entry process, the difference between an organisation that has technically adopted a platform and one where the platform has genuinely changed clinical practice.

When constraints are generative

If these are the properties of governance as a material, the question is what follows for design. One response - the instinct trained into designers by the tradition of digital service design, with its emphasis on speed, iteration, and the removal of friction - is to design the service first and accommodate governance afterwards. In low-stakes contexts this ordering does relatively little harm. But in healthcare, or other contexts where services handle sensitive personal data and where decisions affect clinical outcomes, something about this ordering starts to feel wrong. The approval steps, the consent architecture, the access controls, the audit trails: for the people who use these services, these structures are not peripheral to the experience. They may, in important respects, be the experience.

Metcalf (2014, p. 8) describes formal and informal governance systems as working together to "constrain or liberate" what design can do. The phrasing is suggestive: constrain or liberate, not constrain and then, separately, liberate. There is an implication that constraints do not only close down options; they might also create the conditions for trust, which in turn opens up possibilities that would not exist without them. Participants share whole genomes because they trust the governance. Researchers get access to sensitive datasets because the access controls are robust. Clinicians adopt new tools in part because the clinical safety processes give them confidence. If this holds, then weakening governance does not merely create risk; it erodes the foundation on which participation and adoption depend. Cooper (2019) frames something similar at an institutional level: governance provides stability, and that stability is what allows innovation to occur within a trusted framework.

Vogd and Knudsen (2014, p. 17) identify a related point: constraints can have "enabling effects for organisations in terms of how they take account of ethics". In healthcare, the clinical safety requirements that designers sometimes experience as bureaucratic obstacles are the mechanisms through which the organisation discharges its duty of care. Reading them this way - as expressions of institutional ethics rather than as administrative overhead - changes the design problem. The task becomes not to streamline the clinical safety process but to design a service that makes the clinical safety process work well for the clinicians who depend on it. The dashboard work from the parallel series illustrates this concretely. The distinction between measurement for improvement and measurement for accountability, and the assumptions embedded in transparency as a policy doctrine, both point to the same observation: the same data, governed differently, serves fundamentally different purposes. The governance model is not a wrapper around the data; it constitutes the meaning of the data.

Whether governance functions generatively presumably depends on the quality of the governance; the argument is for thoughtful constraint, not for constraint as such. And the distinction matters, because the legibility analysis above shows precisely how governance can become decoupled from its protective function.

Mismeasurement, ceremony, and issue bias

Christensen and Lægreid (2007, p. 14) give this decoupling a name: "mismeasurement happens when less important but quantifiable aspects of organizational activities are reported, whereas more crucial but non-quantifiable aspects remain unreported". The problem is not that nothing is measured, but that the wrong things are measured, and the measurement creates the appearance of knowledge where there is in fact ignorance. A programme that reports 85% deployment across target organisations appears to know something about its progress; what it does not know - and what its reporting infrastructure cannot tell it - is whether the deployed product is being used, whether the use has changed anyone's practice, or whether the change in practice has produced the outcome the programme was commissioned to deliver.

The consequences of mismeasurement extend beyond ignorance. Shore and Wright (2024) document how performance indicators, once established, reshape the organisations they measure. When a programme is judged by deployment percentages, organisational energy flows toward deployment and away from the situated work of understanding whether deployed products serve their intended users. This is not gaming in the cynical sense; it is a rational response to the incentive structure that the measurement apparatus creates. The governance apparatus produces evidence about what it can see, and what it can see is determined by the categories of legibility it was designed to operate with.

This is the point at which Meyer and Rowan's (1977) analysis of ceremony becomes concrete. When governance structures decouple from the activities they nominally govern - when the formal structure exists for legitimation rather than effect - the performance-and-substance gap opens. A programme can demonstrate that it has a benefits realisation framework without demonstrating that any benefits have been realised. It can document a theory of change without testing whether the theory holds. It can report stakeholder engagement metrics without establishing whether anyone's understanding of the domain has actually deepened. In each case, the governance apparatus provides assurance about the control system while the first-order operations remain unexamined.

Parkhurst (2016) offers a sharper vocabulary for the epistemic mechanism. Where Meyer and Rowan describe the what - governance adopted for legitimation rather than effect - Parkhurst names the how: what he calls issue bias, in which a supposedly evidence-based argument is made by reference to a body of evidence that "only represents a limited number of relevant social concerns", converting a contested value judgement into an apparently technical finding and foreclosing harder questions. A recent example illustrates the mechanism. A programme benefits toolkit required product teams to produce an attribution estimate - a percentage (5-50%) expressing how much of any measured improvement could be credited to the product. This convention derives from Full Business Case methodology, where structured expert estimates are a defensible approach to investment decisions under uncertainty. But the toolkit asked the same estimate to serve as post-pilot evidence that benefits were actually realised - a different epistemic task that the estimate cannot perform: it has no comparison group, no mechanism specification, no counterfactual. The percentage converts a governance judgement into a technical-looking finding that satisfies programme board reporting while foreclosing the questions a genuine evaluation would require. The toolkit's structural position - as a compliance checkpoint rather than an early-stage design instrument - produces this outcome almost inevitably. Governance designed as a gate will be treated as a gate.

Healthcare data as social contract

In health data contexts, the stakes of getting governance wrong become particularly visible. Health data often cannot be anonymised in any reliable sense, and the consequences of governance failure extend beyond individual harm to institutional trust. If a health service loses public trust through a data breach or a consent violation, the damage is not limited to the affected individuals; it undermines the willingness of future participants to contribute their data, which undermines the research that depends on large-scale participation, which undermines the clinical applications that depend on the research.

This suggests that the governance architecture of a health data service might be better understood not as a compliance layer but as the infrastructure of a social contract between participants and the service. Every consent step, every access control, every audit trail is a promise made to participants about how their data will be used. If that framing holds, then part of the designer's job is to make those promises legible - to design experiences in which participants can understand what they are consenting to and feel confident that the consent will be honoured.

Working this way requires understanding why each governance element exists - not just that it is required, but what trust relationship it sustains. The boundary objects analysis from the previous post becomes concrete here. The governance documents - the data protection impact assessments, the information governance frameworks, the clinical safety cases - are boundary objects that coordinate between regulatory, clinical, technical, and design communities. Reading these documents as design specifications, seeing in the access control requirements the shape of a user experience, translating consent requirements into interaction patterns: this is what treating governance as a design material looks like in practice.

What this means for design's position

The argument of this post is not that programme governance is illegitimate or unnecessary. The pressures that produce it - political accountability, clinical safety, public spending scrutiny - are real, and the structures that respond to those pressures serve genuine functions. The argument is that these structures have specific properties: they determine what the programme can see, and what they render invisible is precisely the kind of knowledge that design practices produce.

Design's cross-cutting perspective - the user journey that traverses workstream boundaries, the service experience that emerges from integration, the behaviour change that requires situated understanding - is structurally excluded not because anyone decided to exclude it but because the categories of legibility through which programme governance operates do not have a place for it. The programme can see products but not services; outputs but not outcomes; deployment but not adoption; adoption but not practice change. Each of these distinctions corresponds to a transition between layers in the five-layer model this series has been developing.

If governance is a design material, then designers are in a position to contribute to the design of governance itself - not by arguing against governance in principle, but by applying the same user-centred lens to governance processes that they apply to services. Who uses this approval process? What decisions does it support? What happens to the documentation it produces? Is the governance achieving what it was designed to achieve, or has it become ceremony - adopted for legitimation rather than effect? The question of whether a benefits framework could be redesigned to function as a genuine design input - shaping decisions about what to build, how to measure it, and what would constitute evidence that it worked - is as much a design problem as a governance one. The later post on value hypotheses examines what that redesign might require.

The practical strategies that the next post develops - finding insertion points in existing governance rhythms, speaking the language of risk, packaging design evidence in programme-legible forms - are strategies for operating within a system whose properties this post has tried to make explicit. They are strategies for making design's cross-cutting knowledge visible within an infrastructure that was not built to carry it; for creating what Scott might call legible representations of metis, without losing the situated particularity that gives metis its value. Whether that translation is possible without the knowledge being fundamentally transformed in the process is the tension that runs through the remainder of this series.

References

Christensen, T. and Lægreid, P. (2007) Organization Theory and the Public Sector: Instrument, Culture and Myth. London: Routledge.

Cooper, S. (2019) Are We There Yet?: The Digital Transformation of Government and the Public Sector. Canberra: Department of the Prime Minister and Cabinet.

Hunter, S. (2015) Power, Politics and the Emotions: Impossible Governance? London: Routledge.

Metcalf, G.S. (2014) Social Systems and Design. Tokyo: Springer.

Meyer, J.W. and Rowan, B. (1977) 'Institutionalized Organizations: Formal Structure as Myth and Ceremony', American Journal of Sociology, 83(2), pp. 340-363.

Parkhurst, J. (2016) The Politics of Evidence: From Evidence-Based Policy to the Good Governance of Evidence. Abingdon: Routledge.

Scott, J.C. (1998) Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven: Yale University Press.

Shore, C. and Wright, S. (2024) Audit Culture: How Indicators and Rankings Are Reshaping the World. London: Pluto Press.

Strathern, M. (ed.) (2003) Audit Cultures: Anthropological Studies in Accountability, Ethics and the Academy. London: Routledge.

Suoheimo, M. and Jones, P.H. (2025) Systemic Service Design. London: Springer.

Vogd, W. and Knudsen, M. (2014) Systems Theory and the Sociology of Health and Illness. Abingdon: Routledge.