In my previous posts in this series, I've been exploring design's relationship to intangible materials - Krippendorff's trajectory from products to discourses, and the metaphors that shape how we imagine AI systems. Today I want to focus on something that sits at the heart of vocational rehabilitation work: counterfactual thinking - the capacity to imagine what might have been, and what could yet be.
This matters because when we talk about "AI in vocational rehabilitation", we're often asking these systems to contribute to work that is fundamentally about imagining alternatives. A caseworker helping someone navigate a career transition isn't just processing data about their current situation. They're helping them envision possible futures, consider paths not taken, and imagine who they might become. Can AI systems participate meaningfully in this kind of work?
The Ladder of Causation
Judea Pearl, in The Book of Why, offers a framework that helps clarify what's at stake. He describes a "Ladder of Causation" with three rungs: seeing, doing, and imagining.
The first rung, seeing or observing, involves detecting regularities in our environment. This is what owls do when tracking prey, and what machine learning systems do when they find patterns in data. At this level, we ask questions like "What is the probability of X given that I observe Y?"
The second rung, doing, involves predicting the effects of deliberate interventions. This is where we ask "What happens if I do X?" - a fundamentally different question from observing correlations. Pearl's famous "do-operator" captures this distinction: there's a world of difference between observing that people who take a drug recover faster (correlation) and knowing that taking the drug causes recovery (causation).
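To make the gap between the first two rungs concrete, here is a toy simulation of my own construction (not an example from the book): a hypothetical drug trial in which illness severity confounds who gets treated. Conditioning on the observed data (seeing) makes the drug look harmful; intervening in the model (doing) shows that it helps.

```python
# Toy illustration of seeing vs doing. All numbers are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def simulate(do_x=None):
    """One run of the toy model; do_x=None means 'just observe'."""
    z = rng.random(n) < 0.5                        # Z: severe illness (the confounder)
    if do_x is None:
        x = rng.random(n) < np.where(z, 0.9, 0.1)  # severe patients are far more likely to be treated
    else:
        x = np.full(n, do_x)                       # intervention: force treatment status for everyone
    p_recover = 0.5 + 0.2 * x - 0.4 * z            # the drug helps (+0.2); severity hurts (-0.4)
    y = rng.random(n) < p_recover
    return x, y

# Rung one, "seeing": conditioning mixes the drug's effect with severity.
x, y = simulate()
print("P(recover | treated)   =", round(y[x].mean(), 2))    # ~0.34 - looks harmful
print("P(recover | untreated) =", round(y[~x].mean(), 2))   # ~0.46

# Rung two, "doing": intervening breaks the severity -> treatment link.
print("P(recover | do(treat))    =", round(simulate(do_x=True)[1].mean(), 2))   # ~0.50
print("P(recover | do(no treat)) =", round(simulate(do_x=False)[1].mean(), 2))  # ~0.30
```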
But it's the third rung, imagining, that interests me most for vocational rehabilitation. This is the domain of counterfactual reasoning - asking "What would have happened if I had done things differently?" Pearl argues that this capacity for counterfactual thinking is what separates humans from other species:
"Counterfactuals are the building blocks of moral behavior as well as scientific thought. The ability to reflect on one's past actions and envision alternative scenarios is the basis of free will and social responsibility" (Pearl & Mackenzie, 2018, p. 7).
Why Counterfactuals Matter
Pearl connects counterfactual thinking to something profound about what makes us human. Around 50,000 years ago, something shifted - what some call the Cognitive Revolution. Humans acquired the ability to imagine things that don't exist, to consider alternatives, to ask "what if?"
"It is useless to ask for the causes of things unless you can imagine their consequences. Conversely, you cannot claim that Eve caused you to eat from the tree unless you can imagine a world in which, counter to facts, she did not hand you the apple" (Pearl & Mackenzie, 2018, p. 8).
This capacity for counterfactual imagination is what enables planning. Pearl uses the example of mammoth hunters: to succeed, they had to imagine different scenarios - what if we approach from this direction? What if we have more hunters? What if it rains? Planning requires holding a mental model of reality that can be manipulated, tested, altered.
And crucially, counterfactual thinking is what enables moral reasoning. Concepts like responsibility, blame, regret, and credit all depend on comparing what happened with what might have happened under different circumstances:
"Responsibility and blame, regret and credit: these concepts are the currency of a causal mind. To make any sense of them, we must be able to compare what did happen with what would have happened under some alternative hypothesis" (Pearl & Mackenzie, 2018, p. 22).
The Limits of Data
This is where AI systems face a significant limitation. Pearl is emphatic: data alone cannot support counterfactual reasoning.
"Data are profoundly dumb. Data can tell you that the people who took a medicine recovered faster than those who did not take it, but they can't tell you why" (Pearl & Mackenzie, 2018, p. 7).
This is a fundamental limitation, not a temporary one. No amount of data, no matter how "big", can answer counterfactual questions without a causal model. You cannot derive explanations from raw data. The system needs what Pearl calls a "push" - knowledge about causal structure that comes from outside the data itself.
"No machine can derive explanations from raw data. It needs a push" (Pearl & Mackenzie, 2018, p. 8).
This has implications for how we think about AI in contexts like vocational rehabilitation. When a caseworker helps someone consider alternative career paths, they're engaging in counterfactual reasoning: "If you had taken that training course, where might you be now? If you pursue this path, what might happen?" These are not questions that can be answered by pattern-matching on historical data, no matter how sophisticated the algorithm.
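One way to see why this is a limit in principle rather than a shortage of data: two causal models can agree on every observational (and even every interventional) probability and still give opposite answers to a counterfactual question. A small sketch, with both models invented here for illustration:

```python
# Two toy structural models that generate indistinguishable data yet disagree
# about a counterfactual. The models are my own illustration of Pearl's point.
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
x = rng.integers(0, 2, n)        # treatment, assigned by a fair coin
u = rng.integers(0, 2, n)        # unobserved background factor, also a fair coin

y_a = x ^ u                      # Model A: the outcome flips with treatment, given u
y_b = u                          # Model B: the outcome ignores treatment entirely

# Observationally (and under intervention) the two models look the same:
for name, y in [("A", y_a), ("B", y_b)]:
    print(name, "P(Y=1|X=0) =", round(y[x == 0].mean(), 2),
                "P(Y=1|X=1) =", round(y[x == 1].mean(), 2))

# Now ask a counterfactual: for a person with X=1, Y=1, would Y have been 1 had X been 0?
# Model A: X=1, Y=1 implies u=0, so setting X=0 gives Y = 0 ^ 0 = 0. Answer: no.
# Model B: Y=1 implies u=1, and Y ignores X, so Y would still be 1. Answer: yes.
# No further data from this world can tell us which answer is right.
```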
Speculative Imagination
The design theorists Dunne and Raby, in Speculative Everything, offer a complementary perspective. They distinguish between different kinds of futures: the probable (what's likely to happen), the plausible (what could happen), the possible (what might happen given different conditions), and the preferable (what we'd like to happen).
Most AI systems operate in the realm of the probable - predicting likely outcomes based on patterns in past data. But vocational rehabilitation often requires working in the realm of the possible and the preferable - imagining futures that don't yet exist and have no precedent in the training data.
Dunne and Raby argue that imagination is precisely what enables us to escape the constraints of current reality:
"Speculating is based on imagination, the ability to literally imagine other worlds and alternatives" (Dunne & Raby, 2013, p. 11).
They note that we live in an era where "it is now easier for us to imagine the end of the world than an alternative to capitalism" - a phrase borrowed from Fredric Jameson. The capacity to imagine genuine alternatives has atrophied. Their response is to use design as a tool for speculation, for opening up imaginative possibilities rather than predicting probable outcomes.
"We believe that even nonviable alternatives, as long as they are imaginative, are valuable and serve as inspiration to imagine one's own alternatives" (Dunne & Raby, 2013, p. 16).
This resonates with what I'm starting to observe at SCÖ. There's enthusiasm about using "data science" and "AI" to improve vocational rehabilitation outcomes. But I'm wondering whether the core work - helping someone imagine a different life - might require precisely the kind of counterfactual imagination that Pearl argues data-driven systems lack.
Human-AI Teaming
If Pearl is right that counterfactual reasoning requires causal models that can't be derived from data alone, and if Dunne and Raby are right that imagination is what enables us to conceive genuine alternatives, then what role can AI systems play in work that depends on these capacities?
One answer is that AI systems can support the first and second rungs of Pearl's ladder - observing patterns, predicting probable outcomes - while humans provide the third-rung capacities of imagination and counterfactual reasoning. This is a division of cognitive labour: machines handle the data processing, humans handle the meaning-making.
But this framing may be too clean. In practice, the three rungs aren't separable. When a caseworker helps someone imagine alternative futures, they're drawing on knowledge of what's probable (rung one) and what interventions might work (rung two) as inputs to the imaginative work (rung three). The counterfactual question "What would happen if you pursued this path?" requires both empirical knowledge and imaginative projection.
Perhaps the more useful framing is Murray-Rust's metaphor of models as "collections of examples". An AI system trained on vocational rehabilitation data contains traces of many people's trajectories - but it doesn't contain the capacity to imagine alternatives that weren't in the training data. It can tell you what happened to people with similar profiles, but it can't imagine what might happen in genuinely novel circumstances.
This suggests that "human-AI teaming" in vocational rehabilitation isn't about dividing tasks between humans and machines. It's about using machine-generated patterns as raw material for human imagination - and being clear about the fundamental difference between pattern-matching and counterfactual reasoning.
The JANUS Pathway Generator, which the SCÖ project aims to adapt, illustrates this distinction concretely. The Pathway Generator is a generative algorithm in the computational sense: it generates possible rehabilitation pathways by enumerating combinations of interventions based on patterns in historical data. Given a patient profile, it can produce candidate pathways that statistical patterns suggest may lead to good outcomes. But this programmatic generativity operates within a predetermined state space - the set of interventions, conditions, and outcome measures that were coded into the Icelandic system. It cannot generate pathways involving interventions that weren't in the historical data, or imagine outcomes that weren't measured. Its generative capacity is fundamentally different from the generative capacity Schön describes when a metaphor transforms perception, or Dunne and Raby describe when speculation opens up genuinely new possibilities. The algorithm generates within a space; human generative thinking constructs new spaces.
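To make "generating within a space" concrete, here is a deliberately naive sketch of what enumeration over a fixed intervention set might look like. The intervention names, scores, and ranking logic are invented for illustration and say nothing about how the actual Pathway Generator is implemented:

```python
# Hypothetical sketch only - NOT the JANUS Pathway Generator.
from itertools import combinations

# The space is fixed in advance: only interventions that were coded into the
# historical data can ever appear in a generated pathway.
CODED_INTERVENTIONS = ["physiotherapy", "cbt", "work_trial", "skills_course"]

# Stand-in for patterns mined from past cases: association between a profile,
# an intervention, and good outcomes. (Entirely made up.)
HISTORICAL_SCORES = {
    ("back_injury", "physiotherapy"): 0.7,
    ("back_injury", "work_trial"): 0.5,
    ("back_injury", "skills_course"): 0.4,
    ("back_injury", "cbt"): 0.3,
}

def generate_pathways(profile, max_length=2, top_k=3):
    """Enumerate combinations of coded interventions and rank them by their
    historical association with good outcomes for similar profiles."""
    candidates = []
    for length in range(1, max_length + 1):
        for combo in combinations(CODED_INTERVENTIONS, length):
            score = sum(HISTORICAL_SCORES.get((profile, i), 0.0) for i in combo) / length
            candidates.append((round(score, 2), combo))
    return sorted(candidates, reverse=True)[:top_k]

print(generate_pathways("back_injury"))
# Every candidate is a recombination of the four coded interventions. The generator
# cannot propose an intervention, an outcome measure, or a framing that was never coded.
```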
Questions for SCÖ
At SCÖ, as I noted above, the enthusiasm centres on "federated learning" and "data science". If Pearl is right, these technologies operate only on the first two rungs of the ladder. They can observe patterns in data. They might, with appropriate causal modelling, predict the effects of interventions. But can they imagine alternatives that weren't in the data?
This doesn't mean the technologies would be useless. But it might mean we need clarity about what they can and cannot contribute. If the core work of vocational rehabilitation involves helping people imagine who they might become - considering paths not taken, envisioning possible futures - is that work that technology can support? Or is it fundamentally human?
I'm starting to wonder whether my role here isn't primarily about designing an AI system, but about understanding what kind of work caseworkers actually do. What "AI assistance" could meaningfully mean depends on understanding the work itself. One question worth exploring: what counterfactual reasoning do caseworkers already do? What imaginative labour is already happening that we might not be seeing?
Pearl's distinction - between navigating within a given model and constructing the model itself - maps onto a broader question about what planning presupposes. If planning requires a state space (a model of what entities exist, what actions are possible, what transitions lead where), then constructing that state space is a different kind of work from navigating within it. The Pathway Generator navigates; the question of what should be in the state space - what matters, what to measure, what counts as a good outcome - requires the third-rung work of counterfactual imagination. This distinction between planning and design - between navigating known terrain and constructing the map - would become central to my later work.
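A planner, in the algorithmic sense, makes this distinction visible: give it a map of states and transitions and it will find a route, but the map has to come from somewhere else. A minimal sketch with an invented state space:

```python
# Minimal planner, to make the navigate/construct distinction concrete.
# The state space below is invented; the point is that the planner can only
# ever move through states someone already decided to represent.
from collections import deque

STATE_SPACE = {  # states and allowed transitions, fixed before planning starts
    "unemployed": ["assessment"],
    "assessment": ["training", "work_trial"],
    "training": ["work_trial"],
    "work_trial": ["employed"],
    "employed": [],
}

def plan(start, goal):
    """Breadth-first search: navigation within a given map."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in STATE_SPACE.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(plan("unemployed", "employed"))
# The harder questions - whether "employed" is the right goal, and which states
# deserve to exist at all - are decided outside the planner.
```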
References
Dunne, A., & Raby, F. (2013). Speculative Everything: Design, Fiction, and Social Dreaming. MIT Press.
Pearl, J., & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.