Mapping a Technoimaginary: When There Is No 'As-Is' and the 'To-Be' Doesn't Exist

In previous posts, I've described what federated learning is and what it would actually require. Now I want to describe what happened when I tried to do the most basic thing a designer does when entering a new project: map the terrain.

Just Doing the Job

As a designer entering a complex multi-stakeholder project, my instinct was standard. Make things visible. Create shared representations that people can respond to. Understand what we're building before we try to build it.

I set out to answer what seemed like a foundational question: What do people mean when they talk about "data science" or "AI" or "machine learning" in this project?

The project involves multiple organisations - a Swedish coordination association, a Swedish university, a UK university, various municipal and regional partners - all collaborating on "data-driven" approaches to vocational rehabilitation. The Pathway Generator algorithm from Iceland. The promise of federated learning. An ESF-funded programme with milestones to hit.

Surely, I thought, there must be some shared understanding of what we're doing, even if the details vary.

So I did what designers do. I reviewed project documentation. I attended meetings. I talked to people individually. I asked variations of the same questions: What data exists? What questions might it answer? What systems capture information about rehabilitation? What outputs do you imagine?

Then I synthesised what I heard into concept maps. I adapted Schoppe's (2020) model of ML development into a process diagram showing the steps from understanding context through to deploying a working system. I wrote TypeScript interface definitions specifying the data structures the Pathway Generator would require. I drew architecture diagrams showing what federated learning would look like in the Swedish context.

None of this was provocative or adversarial. It was the kind of design work I'd been trained to do and hired to do. Understand the problem. Make the understanding visible. Share it with stakeholders.

What I didn't anticipate was that doing this competently would become a problem.

What the Mapping Found

The concept mapping didn't reveal a coherent shared understanding with variations at the edges. It revealed that "data science" was functioning as a floating signifier - a term that different stakeholders filled with entirely different content, enabling apparent agreement without actual alignment.

For some stakeholders, "data science" meant sophisticated machine learning: predictive models, algorithm-guided decisions, AI-powered tools. The Pathway Generator. Federated learning.

For others, it meant basic analytics: dashboards, reports, visualisations of service delivery patterns. Being able to see what they couldn't currently see.

For others still, it meant data collection: getting caseworkers to record information systematically. Having a database at all.

For a few, it seemed to mean something closer to "the digital transformation we keep hearing about but don't quite understand."

These aren't points on a spectrum. They're different things entirely. The gap between "we need a database" and "we need federated learning" isn't a gap in ambition - it's a gap in what world we're describing.

More significant than the divergence was what no one could articulate clearly:

  • What data actually exists - people knew their organisations collected some information, but couldn't specify what, where, or in what format.

  • What questions the data would answer - "better outcomes" was invoked, but which outcome, measured how, for whom?

  • What would change if "data science" succeeded - the operational implications stayed vague.

The concept map, rather than documenting what people agreed about, documented an absence.

The Moment of Recognition

This is when something shifted for me - though I couldn't fully articulate it at the time.

Service designers have two standard modes of mapping. We map the as-is: the current state of a service as it exists today, documented through research and observation. And we map the to-be: a projected future state, a vision of how a service could or should work once redesigned. Kalbach (2016) describes this as the foundational movement of experience mapping. Flowers and Miller (2023) are blunt about the sequence: understand current state first, then use that understanding to innovate.

Both modes presuppose something: that there is a current reality to document, and that the envisioned future bears some traceable relationship to it.

What I was doing was neither. There was no AI-supported rehabilitation service to map as-is. But I also wasn't designing a to-be in any conventional sense - I wasn't envisioning a future service grounded in research about current needs and capabilities. Instead, I was trying to make specific something that had already been promised but didn't exist. Something that multiple stakeholders talked about in the present tense despite its complete absence from material reality.

I've started thinking of this as a technoimaginary - borrowing from Jasanoff and Kim's (2015) work on sociotechnical imaginaries. A collectively performed vision of a technological future that has become organisationally real through funding bids, job descriptions, and consortium agreements, even though it has no material substrate whatsoever.

And here's the thing I keep returning to: nobody asked me to expose this. I was trying to help. I was trying to understand what we were building so I could contribute to building it. The exposure was a side effect of competence, not its goal.

How Specificity Dissolved the Imaginary

Looking back at the different artefacts I produced, I'm noticing a pattern. The level of specificity seems to determine whether the artefact supports the project or undermines it. And this wasn't a choice I made consciously - it was just what happened when I tried to do thorough design work at different scales.

At high abstraction - the job description, the ESF application, the consortium agreements - the project sounds coherent. "The tool uses machine learning and artificial intelligence to make predictions about the most favourable choices along the way, based on historical data." At this level, "historical data" is unspecified. "Machine learning" is invoked without requirements. Everything holds together because nothing forces the question of whether any of it is grounded.

At medium abstraction - the concept maps - categories of data become visible (psychological health, life situation, employment status) without yet forcing questions about how each would be gathered in Sweden. Stakeholders can engage with the structure. They can point at categories and say "yes, that's relevant" without having to answer whether the data exists. Things start to thin, but they hold.

At low abstraction - the TypeScript interface definitions - every variable forces a material question:

// Physical Health - unclear what protocol for gathering
mean_physical: number;
stddev_physical: number;
range_physical: number;

The comment "unclear what protocol for gathering" appears repeatedly throughout the interface - for physical health, psychological health, social health, financial situation, self-discipline, self-reflection, creativity, technical skills, time management, empowerment, and resourcefulness. Each comment marks a place where a concrete assessment protocol should be and isn't.

// Hard to interpret this...
// a boolean on whether they consider they are a victim of being bullied??
// - what protocol is used to assess this?
bullying: boolean;

Even understanding the existing Icelandic system requires answering questions about assessment validity, clinical protocols, and ethical implications that no one here has addressed.
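To make the pattern concrete, here is a minimal sketch of the shape those fragments imply - eleven dimensions, three summary statistics each, and the same unanswered protocol question attached to every one. The names are illustrative, following the dimensions listed above, not the project's actual interface definitions.

```typescript
// Illustrative sketch only - these names follow the dimensions listed
// in the text, not the Pathway Generator's actual interface.
const dimensions = [
  "physical", "psychological", "social", "financial",
  "self_discipline", "self_reflection", "creativity",
  "technical_skills", "time_management", "empowerment",
  "resourcefulness",
] as const;

type Dimension = (typeof dimensions)[number];

// Each dimension requires mean, stddev, and range statistics,
// as in the interface fragments quoted above.
type DimensionStats = Record<`${"mean" | "stddev" | "range"}_${Dimension}`, number>;

// Every dimension carried the comment "unclear what protocol for
// gathering" - which is to say, all of them.
const missingProtocols: readonly Dimension[] = dimensions;
```

Thirty-three required numeric fields, and not one with a defined way of producing its value.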

And then the architecture diagrams. When I mapped the Icelandic federated learning implementation - a working system with database, server, model, user feedback loop - and then drew the same diagram for the Swedish context, the Swedish version had question marks everywhere. "Sweden? Östergötland? Linköping?" as location headers. Database marked with "?". Model marked with "?". Users marked with "?". A crossed-out arrow between local implementation and federated aggregation.

That diagram isn't a technical architecture. It's a visualisation of material absence. And I drew it not to make a point, but because it was the honest answer to the question "what would this look like here?"

I didn't set out to produce artefacts at different levels of abstraction as some kind of methodological strategy. I was just doing different kinds of design work - concept maps for stakeholder alignment, TypeScript for technical specification, architecture diagrams for system understanding. Standard practice. But each successive layer of specificity peeled away more of the ambiguity that was holding the project together. The more specific I got, the more visible the absence became. And I couldn't be less specific without being dishonest.

A Workshop Diagnosis

During a project planning workshop, the group - a mix of Swedish practitioners and UK academics - was asked to articulate the challenges facing the collaboration.

What emerged was striking. Participants wrote things like:

  • "If a project is a group of people coming together to collaborate on a fixed problem for a fixed amount of time - is this really a project?"

  • "It is hard to see the shared objectives of the group"

  • "Any mention of 'real data' seems to provoke anxiety and stress"

  • "We are operating a 'technology-push' model that is not necessarily grounded in real needs"

  • "Is a coordination association actually a 'product-oriented' organisation?"

They named precisely what my concept mapping was revealing: no clear shared objective, no data to ground the work, a technology being pushed into a context that might not be able to receive it.

What strikes me is the self-awareness. The stakeholders know. They can articulate the problems when given the right format. But naming the problems doesn't seem to change anything. The workshop produced Post-it notes. Whether it will produce a change in direction, I can't yet tell.

The Planning Tool That Became an Audit

The "Extended and Idealised Model of a Design-Led Machine Learning Development Process" was meant to help. I adapted Schoppe's (2020) model to include design-led activities - identification of needs, frames and values, consequence scanning - that I thought should precede and accompany ML development. It was supposed to be a roadmap.

But each step in the process is a step that can't be taken without infrastructure, data, and capacity that I'm increasingly unsure exists here. "Collect Data" presupposes data to collect. "Create Data Models" presupposes data to model. "Evaluate Models" presupposes models to evaluate.

I built a planning tool. It functions as an audit.
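The audit logic is almost mechanical, and can be sketched in a few lines. The step names follow the adapted process diagram; the dependency labels and the function itself are my own illustration, not part of the project's artefacts.

```typescript
// Toy sketch: each step in the ML process presupposes an artefact
// that an earlier step (or the surrounding context) must supply.
type Step = { name: string; presupposes: string | null };

const process: Step[] = [
  { name: "Identify Needs, Frames and Values", presupposes: null },
  { name: "Collect Data", presupposes: "data sources" },
  { name: "Create Data Models", presupposes: "collected data" },
  { name: "Evaluate Models", presupposes: "trained models" },
];

// Returns the first step whose precondition is unmet - the point
// where the roadmap stops being a plan and becomes a gap analysis.
function firstBlockedStep(steps: Step[], available: Set<string>): string | null {
  for (const step of steps) {
    if (step.presupposes !== null && !available.has(step.presupposes)) {
      return step.name;
    }
  }
  return null;
}
```

With nothing available - `firstBlockedStep(process, new Set<string>())` - the roadmap is blocked at its second step, "Collect Data". Which is where this project stands.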

This keeps happening. Every artefact I produce in good faith - trying to understand, trying to plan, trying to help stakeholders converge - ends up documenting why the thing we're supposed to be doing can't be done. The design work is technically competent. The synthesis is accurate. The visualisations are clear. And that's the problem. Accurate, clear design work in this context means accurate, clear documentation of impossibility.

A to-be map of a genuine future state is aspirational - it shows where you want to get to and what you'd need to build. A to-be map of a technoimaginary is inadvertently forensic - it shows the gap between what's been promised and what's possible. I keep trying to produce the former and accidentally producing the latter.

What I Don't Know Yet

I'm genuinely uncertain about what I'm watching.

Maybe this is normal. Complex multi-stakeholder projects often start with participants meaning different things by the same words. The divergence I documented might be a starting point rather than a verdict. Many successful projects begin in confusion.

Maybe I'm the problem. As a newcomer asking probing questions, I may have constructed the "absence" through my method of inquiry. A designer with different instincts - someone who stayed at the concept-map level, who didn't write TypeScript interfaces, who didn't draw architecture diagrams with honest question marks - might have produced artefacts that enabled productive ambiguity rather than forcing uncomfortable specificity. Perhaps the responsible design move was to be less thorough.

Maybe the abstraction was appropriate. Perhaps the project's high-level language should have remained at that level. Perhaps "data-driven vocational rehabilitation" is a phrase that needs to stay vague to enable collaboration, and my error was treating it as something that could be specified.

Maybe the timing was wrong. The practitioners and academics may be operating on different timescales. What looks like confusion to me might be appropriate tolerance for ambiguity in a developmental phase.

These are genuine possibilities. What I found was shaped by how I looked. The question I can't answer yet is whether the specificity I brought was premature - or whether it exposed something that would have surfaced eventually anyway, just later, after more money had been spent and more commitments made.

But I keep returning to the uncomfortable thought: I was hired to do design work for this project. I did design work for this project. And the design work, done honestly and competently, seems to be revealing that the project as conceived can't work. Nobody asked me to reach this conclusion. I reached it by trying to help.

What happens next - what the project does with this information, what happens to my role, what it means for the PhD - I don't know. I'm going to keep paying attention.


References

Flowers, E. and Miller, M. (2023). Your Guide to Blueprinting The Practical Way. Practical by Design.

Jasanoff, S. and Kim, S-H. (2015). Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. University of Chicago Press.

Kalbach, J. (2016). Mapping Experiences: A Complete Guide to Creating Value through Journeys, Blueprints, and Diagrams. O'Reilly Media.

Schoppe, S. (2020). The Role of Design in Machine Learning. Salesforce UX Blog. Available at: https://medium.com/salesforce-ux/the-role-of-design-in-machine-learning-ae968ea90aac