My employment at the coordination association ends this month. The ESF funding wasn't renewed. Fifteen of us are being made redundant. The PhD continues, technically, but without the industrial partnership that was supposed to be its foundation.
This is my attempt to articulate what I learned - not what I was supposed to learn about federated learning, but what I actually learned about technology projects, design practice, and organisational life.
On Federated Learning
Federated learning is a real technology with real applications. Google uses it for keyboard prediction. Hospitals are piloting it for collaborative medical imaging research. The technical papers are sophisticated and the engineering is impressive.
But I think FL was invoked in this project for reasons that had little to do with its actual properties or requirements. As far as I can tell, it was chosen because:
- It sounded advanced and innovative
- It appeared to solve the privacy problem that blocks data collaboration
- It created a PhD-shaped research question
- It may have deflected attention from whether basic data science was possible
The gap between what FL is and what FL was imagined to do in this context is the gap between technology and techno-imaginary - between material capability and projected promise.
Beckert's work on "fictional expectations" helps explain this gap. He describes how "imagined futures help to explain actors' willingness to commit themselves to endeavors despite the incalculability of outcomes and environmental pressures to conform to established behaviors" (Beckert, 2016, p. 37). The fictional expectation of FL - that it would enable Swedish organisations to benefit from Icelandic algorithms without the hard work of building data infrastructure - mobilised resources and created positions. Whether the expectation was achievable was a question the fiction didn't require anyone to answer.
FL cannot conjure data infrastructure into existence. It cannot create governance frameworks. It cannot build technical capacity in organisations that lack it. It cannot make possible what the preconditions for possibility don't support.
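To make that material specificity concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical FL training loop, written in plain Python with NumPy on synthetic data. It is illustrative only - not the Pathway Generator, and not code from the project - but it shows what the imaginary glosses over: every line assumes that each participating organisation already holds structured, compatible, labelled data and the capacity to run local training.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's training pass, run on data the client already holds,
    # cleaned and labelled. FL does not create this data; it assumes it.
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(weights, clients):
    # The server averages locally trained models, weighted by data volume.
    # Only model parameters travel; the records stay with each organisation.
    updates = [local_update(weights, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Synthetic stand-ins for three organisations' datasets. In the project,
# this was exactly the part that did not exist.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = [(X, X @ true_w + rng.normal(scale=0.1, size=200))
           for X in (rng.normal(size=(200, 2)) for _ in range(3))]

weights = np.zeros(2)
for _ in range(20):
    weights = federated_round(weights, clients)
print(weights)  # approaches true_w only because usable local data exists
```

Nothing in the sketch touches governance, consent, or data quality; those sit entirely outside the algorithm, which is precisely why the algorithm cannot supply them.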
On Design Research in Impossible Contexts
I was hired as a designer. I used design methods: stakeholder synthesis, concept mapping, process visualisation, workshop facilitation. I produced design artefacts: maps, diagrams, frameworks, presentations.
By conventional measures, this work was competent. The artefacts were clear. The synthesis was accurate. The findings were valid.
But the work didn't enable what design is supposed to enable. It didn't help stakeholders converge on a solution. It didn't facilitate implementation. It didn't produce the collaborative alignment that design theory promises.
Instead, my design work exposed. It made visible the gap between what was promised and what was possible. It documented an absence - the absence of data, infrastructure, capacity, and governance that would be needed for "data science" in any form.
This exposure was useful, in a sense. The insight report that emerged contained true information. Someone who read it carefully would understand why the Pathway Generator couldn't be piloted. The organisation now has documentation of its own conditions.
But exposure is not the same as enablement. And I've come to think that design theory under-theorises what happens when design artefacts expose rather than enable - when they reveal inconvenient truths rather than productive possibilities.
On Organisational Responses to Exposure
When design artefacts expose problems, organisations can respond in several ways. The literature on whistleblowing, workplace ostracism, and what McDonald, Graham and Martin (2010) call "outrage management" suggests a broader taxonomy than I initially considered. Drawing on this research, I'd propose the following categories—roughly ordered from most benign to most adversarial.
Benign Responses
Recognition and correction: "Thank you for showing us this. Let's change course."
The response design theory imagines. Rare in my experience.
Dispute: "Your analysis is wrong. Here's why."
At least this is engagement. Dispute treats the exposure as a claim that can be evaluated. It respects the epistemic status of the work, even while rejecting its conclusions.
Co-optive Responses
Deflection: "Interesting findings. Anyway, about the next milestone..."
The exposure is acknowledged but not engaged. The conversation moves on. Ahmed (2019) describes how complaints can be stopped through conversations: "if those you speak to refuse to act on what you say, the path is blocked."
Collective silence: [no response]
Different from deflection, which at least acknowledges before moving on. This is the point raised in a workshop or meeting that falls into a void—met with "difficult or uncomfortable silences" (Open University, 2012), averted gazes, a collective non-acknowledgement. The exposure isn't disputed or deflected; it simply isn't picked up.
Smithson and Venette (2013) describe "stonewalling" as an image-defence strategy where "silence involves withholding a response and relinquishing control." But in group settings, it functions differently—the silence isn't strategic withdrawal but social signalling. Everyone present learns that this is not a point to engage with. Brown (2010), writing about design practice, warns: "Don't take silence for complacency, agreement, or buy-in." The silence might mean the opposite.
This response is insidious because it leaves no trace. There's nothing to dispute—no counter-argument was made. The point was simply... not received.
Incorporation: "Great work. This will go in the insight report."
The exposure becomes part of the official record without changing anything. The artefacts serve to authenticate a narrative rather than to enable change. This is what I've started calling design capture.
Adversarial Responses
Reinterpretation: "That's not what happened / what it means."
McDonald et al. (2010) identify reinterpretation as a key tactic: reframing events to minimise their severity or shift their meaning. The exposure is acknowledged but its significance is contested. "Yes, we lack data infrastructure, but that's actually an opportunity for innovation."
Procedural absorption: "We'll need to take this through the proper channels."
Official channels—grievance procedures, review boards, formal processes—can function to legitimise inaction rather than address concerns. As Ahmed (2019) observes: "If organizations can disqualify complaints because they take too long to make, they can also take too long to respond to complaints." The process becomes the point. McDonald et al. note that official channels "are more likely to work against low-level perpetrators who do not have the support of organizational allies"—suggesting that when exposure threatens those with power, procedural responses serve containment rather than correction.
Devaluation: "Consider the source."
Attacking the credibility, competence, or character of whoever produced the exposure. McDonald et al. (2010) found devaluation "common in cases of sexual harassment, openly and/or through rumors." Khan (2015) describes "smear campaigns" and "distortion campaigns" as tactical abuse strategies to "negatively affect a target's social reputation."
This doesn't require overt hostility. It can operate through implication: questioning methodology, suggesting inexperience, noting that the person "doesn't really understand the context."
Isolation: "We'll handle this internally."
Ostracism and exclusion—withdrawing support, excluding from meetings, limiting access to information. Williams and Nida (2016) note that ostracism "can clearly play a functional role" in organisations, enabling members to sanction those who threaten group cohesion.
Ellemers and de Gilder (2022) observe that "in the process of legally containing and averting responsibility for misbehavior in the workplace, those who express concern about behavioral violations can be perceived as disloyal troublemakers." The person who exposes becomes the problem.
Intimidation: "Think carefully about your next steps."
Threats—explicit or implied. Poor references, unwelcome assignments, damage to career prospects. McDonald et al. (2010) found that intimidation was "identified in 18 cases, including threats of reprisals (39%), physically intimidating behavior (3 cases)."
In academic-practitioner contexts, this might be subtler: implications about PhD progression, future collaboration opportunities, or professional reputation.
Cover-up: "This conversation didn't happen."
Concealment, secrecy, destroying evidence. The most adversarial response—the exposure is treated as something that must itself be suppressed.
What I Experienced
What I experienced was mostly deflection, collective silence, and incorporation. The findings weren't disputed; they were acknowledged and filed, or simply not acknowledged at all. In workshops and discussions, points about missing data infrastructure or governance gaps would land in the room and... stay there. Unacknowledged. The conversation would continue as if the point hadn't been made.
This is the insidious thing about collective silence: it leaves you questioning whether you spoke clearly, whether the point was valid, whether you understood the context. The absence of response becomes a kind of gaslighting—not through contradiction but through non-recognition.
The formal outputs were different. The insight report acknowledged the findings. They became part of the official record. They may have helped the organisation declare success on redefined terms. That's incorporation—design capture in action.
I also experienced something adjacent to isolation—a gradual exclusion from conversations where decisions were being made, a sense that the work that exposed problems was less welcome than work that confirmed progress.
Fotaki and Hyde's research on "escalation of commitment to failing strategies" illuminates these patterns. They observe that "organizational research offers many accounts of why organizations remain committed to failing strategies" (Fotaki & Hyde, 2014, p. 3). The mechanisms include sunk cost dynamics, identity investments, and what they call "organizational blind spots"—systematic ways of not-seeing that protect members from uncomfortable recognitions.
The taxonomy above helps me see that my experience was relatively benign. The literature documents far worse. But it also helps me understand the range of possible responses—and why recognition and correction, the response that design theory imagines, may be the exception rather than the rule.
I think design capture is possible because design artefacts are ambiguous in their function. A concept map can be a tool for strategic reorientation or evidence that strategic work happened. A process diagram can guide implementation or demonstrate that implementation was considered. The same artefact can serve different purposes depending on how it's used.
On Techno-Limerence
I've been trying to find language for the phenomenon I observed - the persistent attachment to a technology project despite accumulating evidence of its impossibility.
The term I keep returning to is techno-limerence: an infatuated attachment to an imagined technological future that persists regardless of material conditions. Like romantic limerence, it involves idealisation, obsessive focus, and resistance to disconfirming information.
Techno-limerence might explain why the project continued as long as it did. The vision of AI-assisted vocational rehabilitation was compelling. It promised efficiency, accuracy, better outcomes for vulnerable people. Who wouldn't want that?
The limerent attachment was not to the actual technology - the specific algorithms, data requirements, infrastructure needs - but to the imaginary of the technology. Beckert describes how expectations under uncertainty may be understood as "pretended representations of a future state of affairs" (Beckert, 2016, p. 37). The imaginary is immune to material objections because it operates in a different register. "Yes, but imagine if we could..."
Rip's work on technology expectations describes how the "possibility of inflated promises leading to disappointment" keeps recurring in innovation cycles, worrying those who advocate for new technologies (Rip, 2012, p. 8). The hype cycle is well-documented. But what interests me is not the cycle itself but the persistence of attachment even as the cycle turns. Organisations don't abandon fictional expectations just because evidence accumulates against them.
Techno-limerence is sustained by several mechanisms:
- Temporal displacement: The technology will work eventually, even if not now
- Abstraction: At high enough abstraction, any technology seems possible
- Sunk costs: Having invested this much, we can't admit it won't work
- Identity: We are the kind of organisation that does innovative things
Breaking techno-limerence might require forcing confrontation with material specificity - which is what my design artefacts attempted to do. Whether that's why they were absorbed rather than engaged, I'm not certain.
On Academic-Practitioner Collaborations
This project involved a Swedish coordination association, Swedish universities, and a UK research group. The collaboration was structured around knowledge transfer: academic expertise flowing into practitioner contexts.
What I observed was a different dynamic. The academics had developed a tool (the Pathway Generator) in one context (Iceland) and wanted to extend it to another (Sweden). The practitioners wanted innovation and the legitimacy that comes with academic partnerships. The funders wanted "data-driven" approaches to public services.
But no one seems to have done the due diligence to determine whether the transfer was feasible. The research-practice gap runs in both directions - academics may not understand practitioner contexts, and practitioners may not interrogate academic claims. This project seems to have suffered from both.
As far as I can tell, the UK academics had not assessed Swedish data conditions. The Swedish organisations hadn't interrogated what "federated learning" would require. The universities facilitating the PhD hadn't verified that the industrial partner could support the research.
It seems like everyone assumed someone else had done the groundwork.
This is not malice. It's the structure of academic-practitioner collaborations in funding environments that reward ambition over feasibility. The incentives favour proposing bold things and figuring out the details later. By the time the details reveal impossibility, commitments have been made and face must be saved.
On the PhD
My PhD exists in a strange state. The industrial partnership is over. The original research question - exploring federated learning for vocational rehabilitation - is moot. The funding that was supposed to support five years lasted one.
But I have material. A year of observations. Design artefacts. Meeting notes. This series of blog posts. A case study of a technology project that promised one thing and delivered another.
The PhD I'll write is not the PhD I was hired to write. It won't be about implementing federated learning. It will be about why federated learning was proposed in a context that couldn't support it, what happened when design work made this visible, and what this reveals about technology projects in public sector contexts.
This is legitimate research. Understanding failure matters. Greenhalgh's work on technology implementation finds that "the overarching reason why technology projects in health and social care fail is multiple kinds of complexity occurring across multiple domains" (Greenhalgh, 2018, p. 4). Documenting the gap between technological promise and organisational reality has theoretical and practical value. The case study I've lived through could inform future projects.
But I'd be lying if I said this was the plan. The plan was to explore a genuinely interesting intersection of design and machine learning. What I got was a different kind of education.
What Comes Next
I'm leaving a job. I'm continuing a PhD on different terms. I'm carrying forward questions that this experience has sharpened:
- What is design's role in contexts where the foundations for design work don't exist?
- How do organisations maintain commitment to impossible technology projects?
- What happens when design artefacts expose rather than enable?
- How should we theorise the limits of making visible?
These questions connect to larger concerns about public sector digital transformation, about academic-practitioner collaborations, about the gap between technological imaginaries and material conditions.
The Swedish case was small - one coordination association, one ESF project, one failed PhD premise. But I suspect the patterns I observed operate at larger scales. The same dynamics of techno-limerence, goal displacement, and design capture may be at work in bigger, more consequential technology programmes.
That's a hypothesis I'll be testing. But that's for future posts.
References
Ahmed, S. (2019). What's the Use? On the Uses of Use. Duke University Press.
Beckert, J. (2016). Imagined Futures: Fictional Expectations and Capitalist Dynamics. Harvard University Press.
Brown, D.M. (2010). Communicating Design: Developing Web Site Documentation for Design and Planning (2nd ed.). New Riders.
Ellemers, N. and de Gilder, D. (2022). The Moral Organization. Cambridge University Press.
Fotaki, M. and Hyde, P. (2014). Organizational blind spots: Splitting, blame and idealization in the National Health Service. Human Relations, 68(3), 441-462.
Greenhalgh, T. (2018). How to improve success of technology projects in health and social care. Public Health Research & Practice, 28(3), e2831815.
Khan, R. (2015). Avoidant Abuse: A Primer.
McDonald, P., Graham, T. and Martin, B. (2010). Outrage management in cases of sexual harassment as revealed in judicial decisions. Psychology of Women Quarterly, 34(2), 166-180.
Open University. (2012). Managing People and Organisations. The Open University.
Rip, A. (2012). The context of innovation journeys. Creativity and Innovation Management, 21(2), 158-170.
Smithson, J. and Venette, S. (2013). Stonewalling as an image-defense strategy: A critical examination of BP's response to the Deepwater Horizon explosion. Communication Studies, 64(4), 395-410.
Williams, K.D. and Nida, S.A. (2016). Ostracism, Exclusion, and Rejection. In D. Mashek and A. Aron (Eds.), Handbook of Closeness and Intimacy. Psychology Press.