Forward Deployed Design

Designing in Code - But Whose Code?

Last week I was asked to scope what it would take to build a React component library for a national healthcare data platform. As I've been writing the scoping document, I keep returning to a tension that's been present throughout my career: the relationship between designing and building.

I'm a designer who codes. I've built React applications, iterated designs directly in the browser, and I believe in the power of working prototypes over static specifications; Brad Frost (2016) captures this well when he argues that "once the designs are in the browser, they should stay in the browser" (p. 9). That's been my practice for years.

So when I reflect on the "just build it" methodology of the platform vendor, I'm not arguing against designing in code. I'm arguing about whose code, whose design sensibility, and whose problems get centred when we skip the conversation about what should be built, and how possible it actually is to conduct this code-based iterative design work in a genuinely collaborative way.

The vendor's approach to building software is called Forward Deployed Engineering - a model pioneered by Palantir in the early 2010s that claims lineage from military design thinking, a field I've studied as part of my doctoral research and encountered professionally during my time at FOI, the Swedish Defence Research Agency. Having spent time with both the practice and the wider literature around military design thinking, I'm increasingly convinced that the FDE model has inherited a preoccupation with the alleged "need for speed" of military operations while discarding the representation and reflection that make military design thinking actually work in practice - an absence that matters all the more in non-military contexts like healthcare.

The Forward Deployed Engineer

The Forward Deployed Software Engineer (FDE) concept is straightforward: instead of building software in an office and shipping it to customers, you embed engineers directly with customers to build software in situ. Palantir (2022) describe the role as a software engineer who "embeds directly with our customers to configure [the vendor's] existing software platforms to solve their toughest problems". The pitch is compelling: rather than the "gulf between user and developer" that plagued traditional defence contractors, FDEs work alongside the people who will use the software.

Karp and Zamiska (2025) tell the origin story through the lens of Afghanistan. In 2011, soldiers were being killed by IEDs while the Army's incumbent software couldn't help them analyse the intelligence they needed. The problem, Karp argues, was distance: "those designing the army's software system at the time... were too far and too disconnected from the actual users of the software, the soldiers and intelligence analysts, in the field" and "the gulf, between user and developer, had grown too wide to sustain any sort of productive cycle of rapid iteration and development" (p. 22). The solution was proximity - engineers sent to Kandahar, software built alongside soldiers, feedback loops compressed from months to hours.

This is a powerful critique of traditional enterprise software development; the procurement processes, the layers of subcontractors, the separation of specification from implementation really do produce software that fails its users. Scaman (2025) captures the FDE ethos: engineers "go into client organisations and build fast, hacky, whatever-works solutions". The approach has genuine strengths, but it also has structural absences that become visible when set against the military design thinking tradition it claims to draw upon.

What Military Design Thinking Actually Says

The FDE model draws on military imagery - "forward deployed" is itself a military term meaning positioned at the front lines - and the emphasis on speed, iteration, and user proximity echoes themes from military design thinking literature.

During my time at FOI, I became familiar with how Scandinavian defence research approaches complex operational environments. Later, through my doctoral studies, I have engaged more deeply with the academic literature on military design thinking - a field that emerged in response to the recognition that modern warfare is complex and adaptive. What that literature describes, however, is quite different from the FDE model.

Wrigley et al.'s (2021) extensive review of military design thinking identifies "conception of the environment as a system" as a core characteristic, noting that "systems thinking, and the representation of systems using diagrams, is a common element of military design" (p. 12). The emphasis is mine, but the point is theirs: representation matters. Military design thinking emerged from a recognition that you can't plan your way to victory using traditional analytical methods; the response wasn't to abandon planning but to change how planning works. Design became about "framing" problems, developing shared mental models, and - crucially - representing those models in forms that could be examined, critiqued, and revised.

Zweibelson (2023), drawing on Donald Schön's work, distinguishes between "knowing in action" and "reflecting on action"; reflective practitioners don't just do, they think about what they're doing while they're doing it. This requires slowing down enough to represent what you're learning, moving away from what Zweibelson calls "what-centric descriptions that reinforce legacy sanctioned activities" toward a practice where "reflective practitioners consider 'knowing in action'" (p. 9). The US Army's doctrinal approach to design includes "the development of environment and problem frames to ensure adequate understanding" (Jackson, 2019, p. 9). Frames are representations - diagrams, maps, models, ways of making complexity legible so it can be discussed.

This is the opposite of "if the problem could have been solved with a requirements document, it would have". Military design thinking says: yes, actually, you need the document. Not as a contract, but as a shared representation that enables collective sensemaking.

The Divergence

The FDE model's divergence from its claimed military heritage reflects a broader pattern. The US Department of Defense's own acquisition reform, as Hobson (2023) documents, was driven by a "particularly keen focus on engaging with, and emulating, the apparent agility, speed and boldness of private sector innovation - especially that of 'move fast and break stuff' start-up culture, Silicon Valley and Venture Capital" (p. 85). The FDE model emerged from this nexus, and it carries the same selective inheritance: speed, iteration, customer proximity, outcome focus - the bits that suit a commercial software company - while leaving behind deliberation, representation, reflection, and shared mental models. Bailey (2021) identifies the same logic operating in UK public sector design, where speed "is reflected in the nomenclature - 'rapid' prototyping, lateral thinking 'sprints', hackdays and 'jams'" and design "proposes itself as a light-footed and entrepreneurial catalyst of change" in contrast to "the supposed inertia of the bureaucratic machine" (p. 179). As Holliday (2022) warns, the technology and innovation mantra of "move fast and break things" risks genuinely bad outcomes in public service contexts, where the consequences of building the wrong thing fall on people who depend on services they didn't choose (p. 15).

Part of the divergence is the nature of the product itself. The vendor's platforms are "ontology-oriented" in their language: they model the customer's world in software. As Palantir (2024) describe it, the ontology is "a technical solution to this linguistic challenge: instead of treating each different interface as a distinct language, an ontology represents a single language capable of being expressed in graphical, verbal and programmatic forms". The vendor's instinct is to build rather than to discuss; to model the customer's world directly in the platform's own object and conceptual structures rather than to develop shared representations that might precede or challenge those structures. From the vendor's perspective, why spend time drawing a systems map or debating what the right solution might be when you can build a working version shaped by - and, crucially, constrained to - what the platform already supports?

Part of it is competitive advantage. Speed is a moat: if you can build overnight what your competitors take months to specify, you win deals. Karp and Zamiska (2025) celebrate this; their engineers in Afghanistan built software that the Army's procurement process couldn't deliver in years. There is a cost, however. I watched a senior platform engineer describe the vendor building a React application overnight to justify drag-and-drop tiles, only for a stakeholder to request the feature be removed two weeks later because "we want it to look the same for everyone". The speed was real; the iteration was real. But the sensemaking never happened. Nobody asked what problem was actually being solved - velocity without direction. This focus on speed, and on what individual (usually very senior) "users" request at a given moment, risks creating a patchwork of features that don't fit together, that don't meet users' actual needs, and that accumulate technical and design debt that slows down future development. It also risks over-indexing on what one or two users expressed at a given moment rather than on the broader user base's needs and on what it would take to deliver products that successfully scale. The FDE model optimises for speed, but in contexts like healthcare we need to optimise for velocity and quality - which, even with a focus on 'speed to value', means, when considering the wider health system we are delivering for, taking the time to think, to represent, to reflect, and to iterate with intention rather than just with speed in mind.

Contrast with GDS and Agile Approaches

The FDE model isn't just different from traditional waterfall development; it's also different from the agile and GDS-influenced approaches that have shaped UK public sector digital practice.

The Government Digital Service established a delivery model with explicit phases: Discovery, Alpha, Beta, Live. Each phase has a purpose - discovery for understanding the problem before committing to a solution, alpha for prototyping and testing hypotheses, beta for building and iterating with real users. As Kimbell (2015) observes, "the idea of prototyping is already familiar to some people within government", noting that "GDS has a clearly defined delivery life cycle for digital projects, including discovery, alpha, beta and live phases" (p. 27). This isn't bureaucracy for its own sake; it's a structured approach to ensuring that representation - in the form of research findings, prototypes, and tested hypotheses - precedes and helps risk-assess commitment.

The key difference is where design happens in the process. In GDS-influenced approaches, design happens before and during development; user research informs design decisions, prototypes are tested before code is committed, and there's a "double diamond" of divergent and convergent thinking with explicit space for exploring the problem before jumping to solutions. In the FDE model, design happens through development; the artefact is the software itself, and feedback comes from usage rather than from pre-implementation review. Herbert (2023) notes that "with the advent of GDS came a brave new world of user-centrism, where the requirement to consider user needs became the dominant focus" (p. 24); the FDE model shares this focus on users but inverts the methodology, building first and then, in theory at least, observing whether needs are met rather than researching needs before building.

Both approaches can work, but they require different conditions. The FDE model works when you have embedded engineers with strong product intuition, when the problem space is relatively well-understood, and when iteration cycles can be genuinely rapid. The GDS model works when problems are contested, when multiple stakeholders need to align, and when the cost of building the wrong thing is high. Healthcare, for the most part, particularly when working in a national health system, is the latter: clinical users have complex, often conflicting needs; accessibility requirements are non-negotiable; the consequences of poor design - interrupted workflows, missed information, excluded users - can affect patient care.

Where This Leaves Public Sector Design

I'm working in a context where two models collide daily. Public sector user-centred design is representation-heavy: research, synthesis, prototyping, testing; journey maps and service blueprints; design principles and pattern documentation. These are artefacts that exist to be examined, critiqued, and revised, in some cases before anything gets built. My own practice - prototyping in code, drawing on software development capabilities built over the last fifteen years and on my work exploring technical probes in safety-critical workplaces and settings - has always focused on building working "functional artefacts" and quickly testing assumptions. But even my relatively novel practice of designing in code is still representation-heavy: I build prototypes to explore and communicate design decisions before committing to production code, and the prototype is a form of representation that can be tested and debated. These prototypes are also built in open design systems, so that the full range of creative possibilities can be explored and tested before committing to a particular design direction - one that isn't just constrained to what the drag-and-drop interface of the vendor's platform supports.

One of the key problems is power asymmetry. The vendor's forward-deployed engineers have platform access; they can build overnight. Public sector designers have guidelines and, in some cases, limited access to the platform's configuration tools, but in general we can recommend without being able to iterate the prototype ourselves.

Design Systems as Frozen Representation

This is why building a design system matters - and why it needs to be done deliberately, not in the FDE style.

I'm not opposed to designing in code; my career has involved exactly that: building React applications, iterating in the browser or in native code, using prototypes as the primary design artefact. Sometimes it helps to put a provotype in front of people to get feedback, but the point is that it's a prototype - a representation that exists to be examined, critiqued, revised, and tested at increasing scale, not just at one client site, before the product is launched or made "generally available".

Without representation - without diagrams, prototypes, specifications - contests over what should be built happen implicitly, in code, where they're invisible to anyone who can't read TypeScript. A vendor with a dogmatic preoccupation with speed can build overnight, but without the conversation about what you're building, why, and how it might be wrong, you're just building faster without necessarily building better.

A design system is an attempt to create artefacts that exist between "nothing" and "shipped software" - to force the conversation that the FDE model assumes is unnecessary. It enables faster iteration by reusing proven components that themselves have been designed and tested and will become familiar to users, whilst still allowing enough flexibility to iterate anew when a task doesn't fit existing patterns. It's a way of freezing representation in a form that can be examined, critiqued, and revised, but that also runs, so it can be tested and iterated in the browser in production-quality code.
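To make the "frozen representation" idea concrete, here is a minimal sketch in TypeScript; every name in it is hypothetical, not the project's actual code. The point is that a design system encodes decisions (allowed variants, accessibility defaults) as typed data that can be examined and critiqued in review, rather than as ad-hoc styles scattered through individual applications:

```typescript
// Hypothetical design-system token layer. The union type is itself a
// representation: adding a new variant forces a visible, reviewable change
// here, rather than a one-off style buried in a single product's code.
type ButtonVariant = "primary" | "secondary" | "warning";

const variantClasses: Record<ButtonVariant, string> = {
  primary: "ds-btn ds-btn--primary",
  secondary: "ds-btn ds-btn--secondary",
  warning: "ds-btn ds-btn--warning",
};

// Pure helper that resolves a variant and disabled state to class names.
// Because it is plain data plus a function, it can be unit-tested and
// debated without rendering anything.
function buttonClasses(variant: ButtonVariant, disabled = false): string {
  const classes = [variantClasses[variant]];
  if (disabled) {
    classes.push("ds-btn--disabled");
  }
  return classes.join(" ");
}
```

A React component built on top would then accept only `ButtonVariant` as a prop, so a product team cannot invent a one-off style without first having the conversation about extending the shared type - which is exactly the conversation the FDE model tends to skip.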

The Politics of Speed

There's a question I keep returning to: who benefits when there's no time to think?

Karp and Zamiska (2025) frame speed as a virtue; soldiers were dying while the procurement process deliberated, and the moral case for velocity is clear. But even if that was true then - and I can't help but feel the benefits and reliability that this preoccupation with speed claims were likely overstated even in those situations - British healthcare isn't warfare in Afghanistan. Clinical users are under pressure, not under fire. The urgency is real - patients wait, staff burn out, systems fail - but it's not the kind of urgency that justifies skipping sensemaking. Much of what the current platform supports constitutes an additional tab or app for users to navigate to in an existing workflow, or a new operational dashboard for managers; the consequences of bad design are real, but they're not the existential consequences the FDE model's origin story invokes to justify its approach.

When the vendor builds overnight, the people who benefit are the vendor (who demonstrates capability), the client sponsor (who sees rapid progress), and sometimes the one or two end users or organisations who directly informed and specified what got built (who get working software faster, tailored to their quickly specified demands). The people who lose are the end users who get the wrong software faster; the designers who never got to contribute, and who, along with the accessibility specialists, will have to remediate later to bring the quality up to standard and into alignment with the organisation's wider design patterns; the product and delivery managers who struggle to understand or regulate what is getting built and when; and the organisation itself, which accumulates technical and design debt as features built one week in response to one user's feedback are reworked a few weeks later in response to another's, and as a host of products built last year to one provider's specification struggle to scale to system-wide design patterns or data infrastructures.

The FDE model isn't wrong; it's incomplete. It works when the problem is clear and localised, when data sources are clean and readily available, and when iteration can be genuinely informed by user feedback. It fails when the problem is contested, when users can't articulate what they need, or when the systemic challenges the software is intended to address extend to parts of the system - data pipelining, unreliable source systems - that no amount of quickly iterated, drag-and-drop-builder applications can solve. The key distinction is between speed, velocity, and quality: speed is how fast you can build; velocity is how fast you can build the right thing; quality is how well the thing you built actually works for users. The FDE model optimises for speed, but in contexts like the NHS - where legacy data infrastructure, analogue processes, and unreliable source systems mean that the front-end application is only ever as good as the data and workflows feeding it - we need to optimise for velocity and quality. That means taking the time to think, to represent, to reflect, and to iterate with intention and with acknowledgement of the wider human systems we are working within, rather than with speed alone.

What This Means for the Scoping Work

I'm writing a document that says: here's the team we need, here's the timeline, here's the approach. It's a representation. It exists to be examined before resources are committed.

The FDE approach would be different: just start building, see what works, iterate. The representation is the software itself. I understand the appeal; scoping documents can become bureaucratic cover for inaction, substituting for doing, and in the time it takes to write, review, and approve a scoping document, a good developer could have built something real.

But I've seen what happens when you skip this step. You build the wrong thing fast. Each product develops its own design patterns, its own technical debt. Patterns propagate across a platform before anyone has asked whether they're the right patterns, and the cumulative user experience is a patchwork of inconsistent interactions that frustrate users and slow down workflows - before you even consider the data pipeline challenges that also affect product viability.

This post is part of a series documenting the development of a design system for a national healthcare data platform. Previously: The Commission.

References

Bailey, J. A. (2021). Governmentality and power in 'design for government' in the UK, 2008-2017: an ethnographic study [Doctoral thesis]. Lancaster University.

Frost, B. (2016). Atomic design. Brad Frost. https://atomicdesign.bradfrost.com/

Herbert, J. (2023). Lessons learnt from government digital transformation. [Working paper].

Hobson, T. (2023). Sociotechnical imaginaries, the future and the Third Offset Strategy [Doctoral thesis]. Lancaster University.

Holliday, B. (2022). Multiplied: How the best companies create breakthrough products through collaborative multidisciplinary design. FutureGov.

Jackson, A. P. (2019). A brief history of military design thinking. Journal of Military and Strategic Studies, 19(3), 1-25.

Karp, A., & Zamiska, N. (2025). The technological republic. Crown.

Kimbell, L. (2015). Applying design approaches to policy making: Discovering policy lab. [Report]. University of Brighton.

Palantir. (2022, June 6). A day in the life of a Palantir Forward Deployed Software Engineer. Palantir Blog. https://blog.palantir.com/a-day-in-the-life-of-a-palantir-forward-deployed-software-engineer-45ef2de257b1

Palantir. (2024, January 23). Ontology-oriented software development. Palantir Blog. https://blog.palantir.com/ontology-oriented-software-development-68d7353fdb12

Scaman, Z. (2025, December). The Palantir model. Substack. https://zoescaman.substack.com/p/the-palantir-model

Wrigley, C., Mosely, G., & Mosely, M. (2021). Defining military design thinking: An extensive, critical literature review. She Ji: The Journal of Design, Economics, and Innovation, 7(1), 104-143.

Zweibelson, B. (2023). Beyond the pale: Designing military decision-making anew. Palgrave Macmillan.