The Silent Pivot

In earlier posts, I described what federated learning is, what it would require, and what happened when I tried to map "data science" in this context.

Now I want to document something I've been noticing in the project's status reports.

The Milestones

ESF-funded projects report progress monthly. These reports include milestones - concrete deliverables with target dates and status indicators. Green means on track. Amber means concerns. Red means problems.

I've been reviewing the project's monthly status reports. They tell a story, though I'm still working out what story.

May 2022 (the month I joined):

Milestone: "Pathway generator pilot igång" [Pathway generator pilot underway]
Target date: September 2022
Status: GREEN

June 2022:

Milestone: "Pathway generator pilot igång"
Target date: September 2022
Status: GREEN

The Pathway Generator - the Icelandic algorithm that federated learning was supposed to bring to Sweden - was scheduled to be piloting by September. Everything on track.

November 2022:

Milestone: "Arbete kring förutsättningar för AI igång" [Work on conditions for AI underway]
Status: GREEN

Notice what changed. The milestone is no longer "Pathway Generator pilot". It's "work on conditions for AI". The deliverable has shifted from implementing something to investigating whether implementation is possible.

February 2023:

Milestone: "Insiktsrapport om AI förutsättningar klar" [Insight report on AI conditions complete]
Target date: April 2023
Status: GREEN

Now the milestone is a report. Not a pilot. Not a working system. A document describing the conditions that would need to exist for AI to be implemented.

Everything remains GREEN.

What Am I Watching?

I think what I'm observing might be what organisational theorists call goal displacement. Hansen describes this as occurring when "there is a difference between the goals of the organization and the indicators measuring goal achievement, so the possibility arises for goal displacement away from the original objectives, towards the measurable goals" (Hansen, 2017, p. 233).

The pattern I think I'm seeing:

  1. Ambitious goal is set (Pathway Generator pilot by September)
  2. Goal proves unachievable (no data infrastructure, no governance framework, no technical capacity)
  3. Rather than acknowledge this, the goal is redefined (investigate conditions for AI)
  4. The redefined goal is achieved (insight report completed)
  5. Success is declared

As far as I can tell, no one says "we failed to deliver what we promised". The goalposts move. The metrics adapt. The status stays GREEN.

But I want to be careful here. I'm describing a pattern I think I'm seeing, but I might be wrong about what it means.

Alternative Explanations

Before I attribute this entirely to goal displacement, I should consider other interpretations.

Legitimate learning: Perhaps the pivot represents appropriate adaptation. Projects should respond to new information. Discovering that the original goal was premature, and shifting to groundwork that makes future implementation possible, could be responsible project management. The insight report could be genuine preparatory work for something that happens later.

Resource constraints: I'm hearing that ESF funding may not be renewed for 2023 at the level expected. The organisation is already facing financial pressure. Perhaps the pivot reflects realistic assessment of what's achievable with available resources, not evasion of accountability.

Complexity genuinely discovered: Greenhalgh's research on health technology projects finds that "the overarching reason why technology projects in health and social care fail is multiple kinds of complexity occurring across multiple domains" (Greenhalgh, 2018, p. 4). It's possible the project team genuinely didn't understand the complexity initially, and the pivot represents honest recognition of conditions they couldn't have anticipated.

These explanations aren't mutually exclusive. The pivot can be both a legitimate response to discovered complexity and a mechanism that protects stakeholders from acknowledging that original promises were unrealistic.

What the Insight Report Contains

I should describe what the "insight report" actually is, since it has become the project's official deliverable.

As far as I can tell, it's a presentation. Some slides with screenshots, given by one of the academics who has been leading this work. A description of meetings that happened. A gesture toward "data maturity" concepts borrowed from elsewhere. An acknowledgment that more work would be needed before AI could be implemented.

Despite working on the project, I've never seen the report itself. It doesn't seem to exist as a formal document. The slides were shared in a meeting I wasn't invited to, but I watched a recording later.

The February 2023 monthly update describes a visit from the UK academic partners as "two very rewarding days". Some photos of this visit appear in the presentation for the board, and the visit apparently focused on "the insight report on data maturity and readiness". The report - whatever form it takes - seems to validate the collaboration. It demonstrates that something happened. Whether a formal document exists beyond the slide deck is unclear to me, and maybe doesn't matter.

The Accountability Structure

What interests me is the accountability structure that seems to enable - perhaps even encourage - this kind of pivot.

ESF reporting measures activity, not outcome. Did meetings happen? Were reports produced? Were staff employed on the project? These are the metrics. Whether a Pathway Generator pilot actually occurred - whether any rehabilitation client benefited from AI-assisted decisions - is not what the reporting captures.

Academic incentives similarly reward activity. Papers can be written about investigations that went nowhere. Conferences accept presentations about "lessons learned". A PhD can pivot from "implementing federated learning" to "studying why federated learning couldn't be implemented". The currency of academic careers - publications, presentations, grant applications - doesn't require that projects achieve their stated aims.

Organisational reputation is protected by the silent pivot. No one has to say "we promised something we couldn't deliver". The pivot happens in the background. The original promises fade from memory. The insight report becomes the thing we were always trying to produce.

I don't think this is necessarily cynical. Public sector organisations face genuine accountability pressures from multiple directions. The pivot may be a survival strategy in an environment that punishes acknowledged failure more harshly than quiet redefinition.

Who Notices?

I notice because I've been reading the monthly reports sequentially, watching the milestones transform. But would anyone else?

The board members who receive these reports have many projects to oversee. They see GREEN status and move on. They're not tracking whether "Pathway Generator pilot" has become "insight report on AI conditions".

Board members and politicians visit occasionally. They see presentations, have conversations, return home. Whether the gap between what was promised in the initial funding call and project strategy and what's actually being delivered - how far short of the original promises it falls - is visible to them, I can't tell.

The ESF administrators process reports against criteria. The criteria are about process compliance, not outcome achievement.

The clients of vocational rehabilitation services - the people who were supposed to benefit from AI-assisted pathways - are not party to any of this. They don't know a Pathway Generator was promised. They won't know it wasn't delivered.

My Position

I'm in an odd position. My job description says I was hired to work on federated learning and AI-assisted decision tools. The project I'm actually in has pivoted to producing an "insight report". The insight report itself doesn't exist, at least as far as I know, in any publicly accessible form.

On one hand, this pivot is intellectually honest. The original premise was wrong. The conditions don't exist. An insight report documenting this is more truthful than pretending to pilot something that can't work.

On the other hand, the pivot hasn't been acknowledged as a pivot. It's happened silently. The status reports still show GREEN. The narrative remains one of progress toward goals, not retreat from impossible ones.

I've contributed to the insight report. My concept mapping work fed into its analysis. In some sense, I'm complicit in the pivot - my work has helped make it possible to declare success on redefined terms.

But I also feel like I'm watching something that should be named go unnamed. The project promised one thing and delivered another. That gap deserves acknowledgment, not concealment through metric management.

Uncertain Conclusions

I'm genuinely uncertain what to do with these observations.

Maybe this is just how projects work - you propose ambitiously, discover constraints, adapt, and call it learning. Maybe I'm being naive about how public sector programmes operate.

Or maybe I'm watching a pattern that matters - a way that accountability structures allow impossible promises to be made and then quietly abandoned, with no one responsible and no lessons learned for next time.

I don't know yet. I'm going to keep paying attention to what happens next.


References

Greenhalgh, T. (2018). How to improve success of technology projects in health and social care. Public Health Research & Practice, 28(3), e2831815.

Hansen, M.B. (2017). Performance management and evaluation. In B. Greve (Ed.), Handbook of Social Policy Evaluation. Edward Elgar.