Design theory rests on an assumption so foundational it's rarely examined: that making things visible enables change. Prototypes surface problems. Maps reveal terrain. Visualisations create shared understanding. From this visibility, alignment and action follow.
In my previous posts, I've described mapping what "data science" meant across stakeholders, and watching the project's milestones silently pivot as the original goals proved unachievable.
Now I want to reflect on what happened when visibility increased - and why it didn't produce what design theory would predict.
The Orthodox View
Design literature consistently emphasises visibility's productive power. Howlett and Mukherjee argue that "prototyping is central to design thinking as a practically focused and tangible mechanism for soliciting feedback from users" (Howlett & Mukherjee, 2018, p. 24). Visualisation and materialisation are understood as core mechanisms through which design works - making abstract problems concrete so stakeholders can respond.
The mechanism seems clear: you make something visible, stakeholders respond, understanding improves, better decisions follow. Design artefacts create affordances for dialogue through their physical form, enabling stakeholders to engage through pointing, placing, and moving. Visibility creates shared ground for conversation.
This assumption runs deep. Wastell, writing about managers as designers in public services, argues that design involves making problems and ideas visible, creating frameworks to make visual sense of complex information (Wastell, 2011). The International Organization for Standardization's guidance on human-centred design emphasises how scenarios, simulations, models and prototypes enable designers to communicate proposed designs to users and stakeholders (ISO, 2019). Visibility is design's core contribution.
What Actually Happened
My design work made things visible. The concept maps showed that stakeholders meant different things by "data science". The process diagrams showed the gap between idealised ML development and available infrastructure. The analysis documented why the Pathway Generator couldn't be piloted as planned.
What this work made visible was accurate. The insight report that emerged from it was, in its way, honest - it acknowledged that the conditions for AI implementation didn't exist. The silent pivot I described was enabled partly by my work providing an alternative narrative: we investigated, we learned, we documented conditions.
But the visibility didn't produce what I'd expected. It didn't produce:
- A strategic reassessment of the project's direction
- Honest conversation about what had been promised versus what was possible
- Changes to how future projects would be scoped
- Clarity about my role or the PhD's viability
Instead, what followed was harder to name.
The Texture of Response
I want to be careful here, because what I'm describing was largely unspoken - which is part of what made it difficult.
There was no meeting where someone said "your analysis is wrong" or "we disagree with your findings". The concept maps weren't contested. The process diagrams weren't critiqued. The silence around the findings was, in some ways, an acknowledgement of their accuracy.
But accuracy didn't translate into engagement. Meetings continued. Reports were filed. The PhD was discussed in terms of "pivoting" and "finding a new direction". Life went on as if the analysis existed in a parallel track - noted but not integrated.
What I experienced was something like avoidance. Not hostile, exactly. More like... the organisation developing an immune response to uncomfortable information. The information was present, but it wasn't metabolised.
Concurrent with this was an organisational restructuring driven by funding pressures - the ESF application for continued funding had been rejected. Fifteen people would be made redundant, including me. Whether the restructuring was related to the project's difficulties, or was genuinely coincidental financial pressure, was never clear. Perhaps both.
Alternative Explanations
Before attributing this to organisational pathology, I should consider other interpretations.
Bandwidth constraints: Public sector organisations face relentless operational demands. Staff were managing service delivery, reporting requirements, and now impending redundancies. Perhaps the findings weren't ignored - they simply couldn't be prioritised amid more urgent pressures. Absorbing uncomfortable strategic findings takes attention that an organisation under this kind of strain may not have to spare.
Appropriate scope limitation: Perhaps the project's leadership understood the findings but judged them outside their authority to address. The structural issues I documented - data governance across multiple agencies, technical capacity gaps, coordination challenges - may have been correctly seen as requiring interventions at levels the project couldn't influence. Not engaging might have been realistic rather than defensive.
Different reading of the findings: I experienced the concept maps as documenting impossibility. Others may have read them differently - as useful context, as work in progress, as one input among many. What seemed to me like avoidance might have been reasonable disagreement about significance.
The researcher's position: I was a new staff member, a PhD student, an outsider with limited organisational history. My analysis may have carried less weight than I assumed it should. This isn't necessarily wrong - earned trust matters in organisations, and a one-year employee's dramatic conclusions might reasonably be treated with some scepticism.
Defensive Routines: A Theoretical Frame
That said, organisational theory does offer frameworks for understanding what I observed.
Chris Argyris spent decades studying what he called "defensive routines" - patterns that protect organisational members from embarrassment or threat, but also prevent learning. One synthesis describes the core dynamic: "individuals keep their premises and inferences tacit, lest they lose control" (Argyris, cited in Ramage & Shipp, 2020, p. 39). Information that threatens existing commitments gets neutralised rather than engaged.
Defensiveness can arise when it seems that undiscussable information might be surfaced - the risk of embarrassment or loss of face is substantial (Zuber-Skerritt & Wood, 2019). The key insight is that the information doesn't have to be contested - it can simply be rendered undiscussable. Acknowledged in principle, but not engaged in practice.
Wastell, whose work on "technomagic" I've drawn on elsewhere, describes how social defence mechanisms become the antithesis of genuine organisational learning (Wastell, 1999). Defensive routines don't require conspiracy or ill intent. They emerge naturally when people face information that threatens investments they've made - reputational, financial, psychological.
Fotaki and Hyde develop the concept of "organisational blind spots" to explain how organisations remain committed to failing strategies. They argue that "individual psychic processes of idealization, splitting, and blame contribute to the creation of social defences operating at group and organizational levels" (Fotaki & Hyde, 2014, p. 7). The organisation doesn't refuse to see - it develops systematic ways of not-seeing.
What I observed fits this pattern. The project had accumulated commitments:
- Funding had been secured on the basis of AI/ML promises
- Academic reputations were linked to the collaboration
- The job description (my job description) specified federated learning
- Monthly reports had declared progress toward now-abandoned milestones
Acknowledging that the foundational premise was wrong would threaten all of these. Easier to let the findings exist - technically acknowledged - while continuing as if they hadn't fundamentally changed what was possible.
The Designer's Position
This puts the designer in a difficult position.
The design work worked - in the sense that it successfully made visible what needed to be seen. The concept maps were clear. The diagrams were accurate. The synthesis was sound. By the standards of design practice, I did my job.
But the design work failed - in the sense that visibility didn't produce the outcomes design theory predicts. There was no productive dialogue. No strategic pivot based on new understanding. No "aha" moment where stakeholders aligned around a better path forward.
The visibility I created seems to have been absorbed into something like a defensive routine. It became part of the insight report - a successful deliverable. My work helped the organisation declare victory on redefined terms. Whether I wanted this or not, my design artefacts may have ended up legitimising a retreat from the original goals.
Boundary Objects and Exposure Devices
Design theory has concepts that help here. Bergman, Lyytinen and Mark define a boundary object as "an artifact or a concept with enough structure to support activities within separate social worlds, and enough elasticity to cut across multiple social worlds" (Bergman, Lyytinen & Mark, 2007, p. 5). Boundary objects succeed partly because they're ambiguous - different stakeholders can project different meanings onto them.
Boland and Collopy describe boundary objects as artefacts that serve "as an intermediary in communication between two or more persons or groups who are collaborating in work" (Boland & Collopy, 2004, p. 46). The flexibility is a feature, not a bug. It enables collaboration without requiring agreement.
But what I produced wasn't flexible in this way. The concept maps didn't enable multiple interpretations - they documented specific gaps. The process diagrams didn't support ambiguity - they showed absent steps. These artefacts forced confrontation with material specificity.
I've started thinking of this as the difference between boundary objects and what might be called exposure devices. Boundary objects maintain productive ambiguity. Exposure devices force confrontation with material reality. Both are design artefacts. They operate differently.
When institutional investment in a particular imaginary is high, boundary objects may enable continued collaboration while exposure devices might trigger defensive routines. I think my artefacts exposed rather than enabled - and perhaps the organisation responded accordingly. But I'm still working out whether this framing is right.
What Design Theory Doesn't Prepare You For
Design education prepares you for resistance in the form of disagreement. Stakeholders might push back on your prototypes. Users might reject your proposals. You iterate, you refine, you try again.
Design education doesn't prepare you for resistance in the form of non-engagement. For findings that are acknowledged but not acted upon. For artefacts that are praised and filed. For work that is technically successful but organisationally inert.
The assumption beneath design's visibility paradigm is that organisations want to see clearly. That accurate information, well-presented, will be welcomed and used. That design's contribution is to provide clarity that stakeholders are seeking.
But what if organisations have investments in not seeing clearly? What if accurate information threatens commitments that matter more than accuracy? What if the design artefact reveals something the organisation has reasons to avoid?
A Different Kind of Design Work
I'm starting to think that what I've done here is design work of a different kind than I was trained for.
Not design-for-implementation: creating artefacts that enable building something.
Rather, something like design-for-understanding: creating artefacts that reveal conditions, even when those conditions are inhospitable.
Or perhaps design-as-diagnosis: using design methods to surface what's actually happening, even when what's happening is that the project can't succeed.
This might be valuable work. Understanding why technology projects fail in public sector contexts matters. Documenting the gap between technological promises and material conditions has theoretical and practical significance. The case study I'm living through could inform future projects, future policy, future design education.
But it's not what I was hired for. And it requires a frame I'm still developing.
What Happens Now
My funding runs out in May, and as I mentioned, my post is among the fifteen being cut in the restructuring. The PhD's future is uncertain - technically still connected to the university, but without the industrial partnership that was supposed to sustain it.
I don't know what comes next. Whether the work I've done will matter beyond my own learning. Whether I'll be able to continue the PhD in some form, or whether this is the end of it.
What I'm increasingly convinced of is that design theory needs to reckon with its visibility assumption. Making things visible is not automatically productive. In contexts where visibility threatens commitments, something happens that neutralises even accurate, well-crafted design work. I'm still working out what to call that something, and what it means for design practice.
The limits of making visible may be the limits of design's theory of change. But I'm not certain yet. I need to think about this more.
References
Argyris, C. (1990). Overcoming Organizational Defenses: Facilitating Organizational Learning. Allyn and Bacon.
Bergman, M., Lyytinen, K. and Mark, G. (2007). Boundary Objects in Design: An Ecological View of Design Artifacts. Journal of the Association for Information Systems, 8(11), 546-568.
Boland, R. and Collopy, F. (2004). Managing as Designing. Stanford University Press.
Fotaki, M. and Hyde, P. (2014). Organizational Blind Spots: Splitting, Blame and Idealization in the National Health Service. Human Relations, 68(3), 441-462.
Howlett, M.P. and Mukherjee, I. (2018). Routledge Handbook of Policy Design. Routledge.
ISO (2019). ISO 9241-210:2019 Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems.
Ramage, M. and Shipp, K. (2020). Systems Thinkers (2nd ed.). Springer.
Wastell, D. (1999). Learning Dysfunctions in Information Systems Development: Overcoming the Social Defenses with Transitional Objects. MIS Quarterly, 23(4), 581-600.
Wastell, D. (2011). Managers as Designers in the Public Services: Beyond Technomagic. Triarchy Press.
Zuber-Skerritt, O. and Wood, L. (2019). Action Learning and Action Research: Genres and Approaches. Emerald.