
By Gareth Morrell
Head of Research and Insights
Impatient Health
Time to read: 6 minutes
A few weeks ago, we took our kids from Southampton to the Isle of Wight, their first time on a ferry. They loved the adventure. But as we sailed out of the estuary, my youngest looked to the horizon and asked,
“How do they know where to go?”
“Good question. GPS”, I replied.
“What about before that existed?”
And to that, like so many of the questions my kids ask, I didn’t know the answer. So I looked it up, and of course, long before GPS existed, sailors relied on the stars. Not perfect, but better than drifting.
This got me thinking about areas of pharma where we’re scared to make progress because it’s too hard, or because we fear the imperfect. Take medical affairs. I know we believe we have impact. On HCPs. On patient outcomes. On the business. But belief isn’t enough, and in any other part of the life sciences world, that wouldn’t fly. “We believe our new immunotherapy reduces the risk of death in bladder cancer patients by 56%”, said nobody to the FDA, ever. Yet in medical affairs, that’s often where the conversation stops. So why is it so hard to measure impact?
The Challenges
Of course, compliance makes things harder (it always does); but it shouldn’t be an excuse. Maybe even fear of what the results might show is holding us back – we don’t want to be told that our medical education programme isn’t actually having any impact on clinical practice. But I’m assuming that we’re all grown-up enough to understand that we need to measure to improve. And that we’re willing to work within the compliance restrictions.
There are three other, more methodological reasons that also make this difficult:
- Short-termism: We want to see impact straight away. Ain’t gonna happen – real impact takes time.
- Patchy data: We’re not always able to ask the right questions, at the right time, in the right way. So we’re often left with scraps or nothing at all.
- Attribution: Even if something changes, how do we prove we were the reason? Healthcare is noisy and complex.
Clinical trials don’t face these challenges. But other fields do. Public policy, for example, routinely grapples with messy, indirect, long-term impact:
- Do free school meals improve attendance and attainment?
- Does food labelling change behaviour and outcomes?
- Do traffic calming measures reduce fatalities on the road?
To answer these questions, they work with imperfect data. They iterate. They test, learn and adapt.
That’s what we need in medical affairs: less waiting for the perfect dataset, more confidence and creativity in using what we have, shifting our mindset to prioritise continuous improvement over certainty. Because navigating impact is like navigating the sea. Even if the stars don’t give you perfect answers, they help you take your first steps forward.
So let’s stop drifting, and get in the boat.
Solutions – Defining what we want to change
First things first. We need a map, right? Knowledge of the stars helped early sailors draw maps of their surroundings and identify the milestones they needed to pass to reach their destination. In medical affairs, you already know enough to draw the map – you just need the right tools. Theories of Change let us do the same.
‘Theories of change? That sounds boring…’, I hear you sigh. They’re only boring if you don’t try them – you know, a bit like chess or knitting. ToCs have been used for decades in public policy evaluation to great effect – no UK government initiative can be approved these days without one. Simply put, they map the logical sequence of events from your activities to your intended impact, i.e. the change you really want to make happen (this bit is sometimes called a logic model). Along the way, you explain your assumptions and rationale, work out sensible metrics for each step, and are totally open and honest with yourself about all the factors outside your control that can also influence the final outcome.
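To make the idea concrete, here is a minimal sketch of what a logic model can look like when written down. The MedEd programme, the steps, the metrics and the assumptions are all hypothetical examples, not prescriptions – the point is simply that each link in the chain gets its own metric and its own stated assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One link in the logic model: what changes, and how we'd measure it."""
    description: str
    metric: str
    assumptions: list[str] = field(default_factory=list)

# Hypothetical MedEd programme, mapped activities -> outputs -> outcomes -> impact
theory_of_change = [
    Step("Deliver oncology MedEd course", "Number of sessions run"),
    Step("HCPs attend and engage", "Attendance and satisfaction scores",
         assumptions=["The right HCPs are invited"]),
    Step("Knowledge deepens", "Pre/post knowledge assessment",
         assumptions=["Content matches real knowledge gaps"]),
    Step("Clinical practice changes", "Guideline-concordant decisions at 6 months",
         assumptions=["No competing initiative drives the same change"]),
    Step("Patient outcomes improve", "Registry outcome data",
         assumptions=["Practice change is large enough to move outcomes"]),
]

# Print the chain so the logic (and its weak links) is visible at a glance
for i, step in enumerate(theory_of_change, start=1):
    print(f"{i}. {step.description} -> measured by: {step.metric}")
```

Writing it down like this forces the honesty the framework asks for: every step you can’t attach a metric or an assumption to is a gap in your theory, not just a gap in your data.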
So far, so good? Nobody jumped overboard yet? Good, let’s continue.
Second, sailors needed to understand the tides and conditions that could knock them off course. In medical affairs, this means understanding the currents that swirl around human behaviour (doctors are humans too…). Using a framework like COM-B can help us break up human behaviour, and the choices we make, into manageable chunks.
‘COM-B? Isn’t that a type of gas boiler?’, I hear people over 40 in the UK saying. No, it’s better. COM-B splits behavioural influences into Capability, Opportunity and Motivation. It’s basically saying that if we have the skills and knowledge (Capability), the right environment (Opportunity), and a good enough incentive (Motivation) to do something, we’re most likely gonna do it. If one of these is lacking, we probably won’t. Breaking down the behaviour change we’d like to see amongst HCPs using this framework can really help us target our medical activities in the most impactful way.
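Here’s a small sketch of how a COM-B breakdown might look in practice. The target behaviour and the individual influences are invented for illustration; the useful part is the pattern – list the influences under each component, mark which are present, and the gaps tell you where your activities should aim:

```python
# A hypothetical COM-B checklist for one target behaviour:
# adopting a new therapy in line with an updated guideline.
target_behaviour = "Prescribe therapy X per updated guideline"

com_b = {
    "Capability":  {"Knows the new guideline": True,
                    "Can interpret the companion biomarker test": False},
    "Opportunity": {"Test is available at their hospital": True,
                    "Time in clinic to discuss with patients": False},
    "Motivation":  {"Believes the evidence is strong": True},
}

# The gaps are where medical activities are most likely to have impact
gaps = [(component, influence)
        for component, influences in com_b.items()
        for influence, present in influences.items()
        if not present]

for component, influence in gaps:
    print(f"Gap in {component}: {influence}")
```

In this made-up example, more MedEd on the guideline itself would be wasted effort – the gaps sit in test interpretation (Capability) and clinic time (Opportunity), so that’s where the activities should go.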
Solutions – Measuring the right things
Finally, we’ll need data to add detail to the maps we’ve been drawing. That means working out exactly what – and how – to measure.
At the start of this article, we emphasised the importance of measuring change, i.e. outcomes, not just outputs. Think about MedEd, for example. Counting attendees on your course is easy, but they could all think the course was rubbish. Even if they liked it, that’s not the goal. We want to deepen knowledge and improve clinical practice. This is why real impact takes time. Knowledge gains after a course are great, but do they last? A follow-up in a month might show changes in practice, but will they stick?
This raises two key points we need to consider when selecting the right things to measure:
- Impact takes time, but we don’t always have it. While we want to try and measure our ultimate impact, we still need leading indicators, those shorter-term outcomes that are necessary steps towards our goals. For example, course satisfaction isn’t the goal of MedEd, but it’s often a precondition for learning.
- Metrics are context-specific. Clients often ask for inspiration on new metrics, new KPIs, but without a clear goal, that’s impossible. There’s no universal dashboard here; metrics need to link to specific changes we want to see.
And yes, some outcomes are too expensive or impractical to measure. We can’t survey every HCP forever. But data advances give us new tools and possibilities. For MedEd, if your goal is scientific collaboration, track co-authorship among alumni vs. non-alumni. For scientific comms, analyse whether your terminology shapes the wider discourse.
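The co-authorship idea above can be sketched very simply. The names and publication list here are entirely invented; the point is that once you hold a set of course alumni and the author lists of recent publications, comparing alumni-to-alumni collaboration against everything else is a few lines of counting:

```python
from itertools import combinations

# Hypothetical data: course alumni and author sets from recent publications
alumni = {"Dr Patel", "Dr Okafor", "Dr Liang"}
publications = [
    {"Dr Patel", "Dr Okafor"},   # two alumni co-authoring
    {"Dr Liang", "Dr Smith"},    # alumnus with non-alumnus
    {"Dr Smith", "Dr Jones"},    # no alumni involved
]

# Count co-author pairs, split by whether both authors are alumni
alumni_pairs = 0
other_pairs = 0
for authors in publications:
    for a, b in combinations(sorted(authors), 2):
        if a in alumni and b in alumni:
            alumni_pairs += 1
        else:
            other_pairs += 1

print(f"Alumni co-author pairs: {alumni_pairs}, other pairs: {other_pairs}")
```

A rising share of alumni pairs over time would be one leading indicator that the programme is fostering collaboration – still not proof of attribution, but a star to steer by.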
But these metrics only matter if they align with what you’re trying to achieve. You only know that if you build a theory of change. And you’ll only do that if you’re willing to embrace uncertainty, look up at the stars, and set sail.
If your sea legs need some practice, contact me and we can discuss further.