In the previous post I described three kinds of artefact from the algorithm archaeology: concept maps that synthesised the Pathway Generator's structure and performed a gap analysis against literature-informed models of occupational health rehabilitation and of typical machine learning development processes; typed interface definitions that specified the algorithm's data structures variable by variable; and architecture diagrams that showed what the system could look like, and how it could function, as an implemented whole. Each operated at a different level of specificity, and each forced different kinds of question. This post is about the middle layer - the typed interfaces - and why the discipline of type specification turned out to be the most epistemically productive method I used at SCÖ.
The observation is straightforward, even banal, from a software engineering perspective. When you type a data structure, you force every element to declare what it is, what values it can take, and what it depends on. In frontend development this is increasingly routine; the value of strong typing is not that it makes code run differently but that it forces the developer to be explicit about what the code expects. The application to the Pathway Generator was the same method applied to a different object: instead of typing a UI component's properties, I was typing the data contracts between an algorithm and the service context it claimed to operate within. The question each type declaration posed was not "how should this work?" but "what does this require to exist?"
What I think is worth articulating for a service design audience is that the method produces knowledge as a side effect of specification. You do not set out to discover what is missing; you set out to specify what exists, and the gaps declare themselves. As Jackson (2021) observes of formal specification more generally, "the very activity of writing [formal specifications] revealed inconsistencies and confusions in the intended behaviour". The specification is not documentation of a known system; it is a method for discovering what a system actually requires.
There is a broader point here about where the design inquiry starts. Service design's instinct is to follow the user - their needs, journeys, experiences - and this is often right. But there are situations where the material you need to understand first is not the user but the data: what data exists, what infrastructure produces it, what governance frameworks surround it, what technical contracts would need to hold for any of the promised tools to function or user-facing functionality to be delivered. At SCÖ, the algorithm required specific data in specific formats from specific institutional processes. Understanding those requirements - following the data, not the user - was the necessary precondition for understanding whether the service could ever work. This is not a rejection of user-centredness; it is a recognition that in data-dependent service contexts, the material technical reality is itself design material, and understanding it is prerequisite to serving users at all.
What typing does
In loosely-typed software, a function accepts whatever it is given and fails at runtime if the input is wrong. The failure is often silent, delayed, or difficult to trace back to its cause. In strongly-typed software, the type system forces you to declare what a function expects and what it returns before the code runs. If you declare that a function takes a RehabilitationAssessment and that a RehabilitationAssessment must contain a physicalHealth score of type number, a protocol identifier of type string, and a dateAssessed of type Date, then anything that claims to be a RehabilitationAssessment must supply all three. The compiler checks this before the program runs. The value is not the types themselves but the questions the type system forces you to answer: what shape is this data? What can be absent? What happens at the boundaries between systems that exchange it?
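Sketched in TypeScript, the example above looks like this. The field names mirror the ones in the paragraph; the values are illustrative, not drawn from any real assessment:

```typescript
// The shape the function declares it expects.
interface RehabilitationAssessment {
  physicalHealth: number; // score from a validated instrument
  protocol: string;       // identifier of the gathering protocol
  dateAssessed: Date;
}

// The compiler rejects anything that claims this type but omits a field:
// const bad: RehabilitationAssessment = { physicalHealth: 62 };
// -> error: 'protocol' and 'dateAssessed' are missing.

// A value that satisfies the contract (illustrative data).
const assessment: RehabilitationAssessment = {
  physicalHealth: 62,
  protocol: "WAI-v2", // hypothetical protocol identifier
  dateAssessed: new Date("2023-03-01"),
};
```

The commented-out assignment is the point: the failure happens at compile time, before any runtime behaviour exists to go wrong.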
The distinction matters for service design because it illustrates the difference between describing a system and constraining it. A description says "the caseworker gathers assessment data". A type constraint says "this variable requires a numeric score between 0 and 100, derived from a validated instrument, administered by a qualified practitioner, under a specific governance protocol, and recorded in a system that exposes it through a defined interface". The description communicates; the constraint commits. And the commitment is where the epistemic work happens, because every commitment is a claim that can be tested against material conditions.
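The difference between communicating and committing can be made concrete in code. A description is a comment; a constraint is a function that refuses invalid values. A minimal sketch - the 0-100 range and the branded type are illustrative, not the project's actual definitions:

```typescript
// A branded type: a plain number cannot pass as a ValidatedScore,
// so downstream code can rely on the range having been checked.
type ValidatedScore = number & { readonly __brand: "ValidatedScore" };

// The constraint commits: out-of-range values are rejected, not recorded.
function toValidatedScore(raw: number): ValidatedScore {
  if (Number.isNaN(raw) || raw < 0 || raw > 100) {
    throw new RangeError(`score out of range: ${raw}`);
  }
  return raw as ValidatedScore;
}
```

Here `toValidatedScore(72)` succeeds and `toValidatedScore(120)` throws - the claim "a numeric score between 0 and 100" has become testable rather than merely stated.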
Applying this to the Pathway Generator
The Pathway Generator's Python code defined variables - mean_physical, stddev_physical, bullying, empowerment - but left their upstream dependencies implicit. The code assumed values would arrive; it did not specify where they came from, how they were gathered, or what institutional processes produced them. At the level of the code, the algorithm was complete. At the level of the service, it was a set of as-yet-unfulfilled promises. I used these variables as the starting point for enquiring into the algorithm's data requirements. In the typed interfaces, each variable was declared with a type - number, boolean, string - and each type declaration was a question: what does it take for this variable to exist, and does that thing actually exist in the service context? What human labour produces it? What technical infrastructure generates it? What governance arrangements apply to it? What data-sharing agreements allow it to be shared? What interfaces allow it to be accessed or entered?
Writing typed interface definitions for these variables was not a software engineering task in the conventional sense, although my hope was that, in time, they might at least become a prototype we could test. It was a design inquiry method. Each type declaration was a question posed to the project: mean_physical: number - measured by whom? Using which instrument? Validated against what norms? Gathered through which protocol? Governed by what data-sharing agreement? Stored in what system? Accessible through what interface? The type demanded an answer to each of these questions, and the project could not supply them.
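A reconstruction in the spirit of those interfaces - not the project's actual file - shows how the method works. Each comment is one of the questions the type declaration posed:

```typescript
// Reconstruction of the style of the interface definitions (illustrative).
interface PathwayGeneratorInputs {
  mean_physical: number;   // measured by whom? which instrument? validated against what norms?
  stddev_physical: number; // derived from which population? over what period?
  bullying: boolean;       // a flag for self-identified victimhood, or narrative history?
  empowerment: number;     // what protocol is used to assess this?
}

// Each comment above marks a dependency the project could not yet specify.
const openQuestions: Record<keyof PathwayGeneratorInputs, string> = {
  mean_physical: "instrument, norms, protocol, storage, and interface all unspecified",
  stddev_physical: "source population and aggregation process unspecified",
  bullying: "semantics (flag vs narrative) and assessment protocol unspecified",
  empowerment: "assessment protocol and governance unspecified",
};
```

The `openQuestions` record is the analytical residue: four typed variables, four unanswered dependency chains.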
In the algorithm archaeology post, I described how Burns and Hajdukiewicz's (2017) abstraction hierarchy provided the analytical framework. The concept maps operated at the level of Abstract Function and Generalised Function - they showed what the system needed to do, in terms that allowed stakeholders to nod along and confirm relevance without committing to specifics. The typed interfaces operated at Physical Function and Physical Form - they specified what concretely would have to exist. The concept maps held the project together; the interfaces pulled it apart, because at the level of concrete specification the gaps became undeniable.
The comments I wrote in the interface definitions - // unclear what protocol for gathering, // Hard to interpret this... a boolean on whether they consider they are a victim of being bullied, or a string containing narrative about such histories?? - what protocol is used to assess this? - were not annotations added after analysis. They were the analysis. Each comment marks a point where the type system demanded specificity and the project, at that point in time, could not supply it.
Data contracts and the spaces between systems
In software architecture, a data contract is a formal agreement between systems about what data will be exchanged, in what format, with what guarantees. Microservices depend on this discipline: if Service A promises to send Service B a RehabilitationAssessment with specific fields, both teams can build independently as long as the contract holds. The contract is the boundary specification - it defines what each system owes to the other, and what each can assume.
The same logic applies to service systems, even when the services are not software. When an algorithm claims to support caseworker decision-making, it is implicitly claiming a data contract with the service context: "give me these variables, in this format, gathered through these protocols, and I will return a recommendation". The Pathway Generator's data contracts had never been specified. The prototype algorithm existed; the proposed service context existed; but the boundary between them - the specification of what each owed the other - was undefined. The typed interfaces were an attempt to make those implicit contracts explicit, and in doing so to discover whether the contracts could be fulfilled.
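In TypeScript terms, making that implicit contract explicit might look like the sketch below. The names and fields are illustrative; the point is that the boundary between algorithm and service context becomes a checkable artefact rather than an assumption:

```typescript
// What the service context owes the algorithm.
interface PathwayRequest {
  assessment: { mean_physical: number; stddev_physical: number };
  gatheredUnderProtocol: string; // which data-sharing agreement covers this?
}

// What the algorithm owes the service context in return.
interface PathwayRecommendation {
  pathwayId: string;
  rationale: string;
}

// A stub that honours the contract; a real algorithm would replace the
// threshold logic, but could not change the boundary specification.
function recommendPathway(req: PathwayRequest): PathwayRecommendation {
  return {
    pathwayId: req.assessment.mean_physical >= 50 ? "standard" : "intensive",
    rationale: `based on mean_physical=${req.assessment.mean_physical}`,
  };
}
```

Once the contract is written down, either side can be tested against it independently - which is exactly what the Pathway Generator's undefined boundary made impossible.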
Data has its own lifecycle - creation, collection, governance, storage, processing, presentation - and in public sector contexts much of this infrastructure is fixed, legacy, or constrained by regulation in ways that cannot be designed away. Following the data through that lifecycle, tracing its material conditions, is not a detour from design work; in data-dependent service contexts it is the design work, or at least its necessary precondition. The typed interfaces were a way of following the data: not asking "what do users need?" but asking "what does this data require to exist, and does the institutional infrastructure to produce it actually exist?" The answer, at SCÖ, was consistently no - and that answer was more informative about the service's actual constraints than any amount of user research could have been at that stage.
A state space requires that the objects in a domain be defined - their properties, possible values, and valid transitions. Typing those objects is how you discover whether the domain definition is coherent. If you cannot type a variable - if you cannot say what values it takes, what produces it, what depends on it - then the state space that includes it is not yet defined. The typed interfaces were, in effect, a partial state-space specification for the rehabilitation domain, and the gaps they revealed were gaps in the domain definition, not merely gaps in the data.
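To make "valid transitions" concrete, a state space can itself be typed. The states below are invented for illustration - the project never defined them, which is precisely the gap the paragraph describes:

```typescript
// Hypothetical states in a rehabilitation pathway (invented for illustration).
type PathwayState = "referred" | "assessed" | "in_rehabilitation" | "returned_to_work";

// The domain definition: which transitions are valid from each state.
const validTransitions: Record<PathwayState, PathwayState[]> = {
  referred: ["assessed"],
  assessed: ["in_rehabilitation"],
  in_rehabilitation: ["returned_to_work", "assessed"], // re-assessment allowed
  returned_to_work: [],
};

function canTransition(from: PathwayState, to: PathwayState): boolean {
  return validTransitions[from].includes(to);
}
```

Writing even this small table forces the domain questions: is re-assessment allowed? Is return to work terminal? A domain that cannot answer does not yet have a defined state space.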
Why service design does not do this
Service design's standard representations - journey maps, blueprints, system maps - operate at levels of abstraction that do not demand this kind of specificity. A journey map can show "caseworker assesses client" without specifying what the assessment measures, what instruments it uses, what data it produces, or what downstream systems consume that data. The representation works because it is loosely typed - it communicates flow and experience without committing to the contracts between systems. This is often appropriate; not every design situation needs formal specification.
But when a project claims to be building data-driven tools - when the programme logic depends on data flowing between systems in specific formats - the loosely typed representations cannot do the epistemic work the situation demands. They show the shape of the service without testing whether the interfaces between its parts are coherent. In Harel's (1987) terms, they represent the system's structure but not its behaviour - the static arrangement of components but not the dynamic requirements that determine whether those components can actually transact with one another. Behaviour is where the inconsistencies live.
There is a methodological gap here that matters beyond SCÖ. Public sector services increasingly depend on data infrastructure - shared records, algorithmic decision support, cross-agency data sharing. Designing these services requires understanding the material technical reality: what data exists, in what systems, governed by what agreements, flowing through what pipelines. Much of this infrastructure is inherited, constrained, or simply absent; it cannot be wished into existence by a sufficiently ambitious funding application. A design approach that starts and ends with user needs will miss these material conditions, because users experience the service but do not see the infrastructure. Following the data - tracing what it requires to exist and what produces it - is a form of design inquiry that service design's standard methods do not currently support, and that engineering methods like type specification are well suited to.
Strong typing is not a technical preference. It is a design method for discovering what a system actually requires by forcing every element to declare its dependencies. It embraces the material technical reality that software and data engineering work within daily, and which service design - when it operates in data-dependent contexts - needs methods to engage with rather than abstract away.
What came next
The typed interfaces did their epistemic work. They surfaced what was missing, documented what would need to exist, and made the gap between aspiration and infrastructure legible. What they could not do was make the project act on what they revealed. The specificity that the type system demanded was precisely what the consortium's political arrangements required to remain unspecified; the data contracts that would need to hold were exactly the commitments nobody was prepared to make. That is the subject of the next post.
References
- Burns, C.M. and Hajdukiewicz, J. (2017). Ecological Interface Design. CRC Press.
- Harel, D. (1987). Statecharts: A Visual Formalism for Complex Systems. Science of Computer Programming, 8(3), pp. 231-274.
- Jackson, D. (2021). The Essence of Software: Why Concepts Matter for Great Design. Princeton University Press.