You give me fever with your policy


Tim Tenbensel 2022

Policy choices can’t easily be settled by systematically comparing two different directions [image: SIphotography on iStock]

There’s not much use in thinking of policy initiatives as analogous to clinical interventions, writes Tim Tenbensel

Moving away from one-size-fits-all approaches is essential if we are to make progress in reducing health inequities

I’m sure most readers could easily name pop songs of any era that use medical and health tropes to catch the attention of listeners.

Peggy Lee’s “Fever” or Robert Palmer’s “Bad Case of Loving You” spring to mind for me. Medicine has also provided a wellspring of metaphors for thinking about policy in general, and health policy in particular.

Journalists and academic researchers commonly write of “policy prescriptions” and “policy interventions”. And, as I have noted in a previous column, “evidence-based policy” is the most obvious modern example of medicine providing the basis of a powerful, but ultimately misleading, metaphor.

Underlying each of these phrases is the assumption that policy initiatives are analogous to clinical interventions. This idea has exerted a powerful hold on how we think about whether policy initiatives are successful or not. But, for many reasons, it is inappropriate to treat policy initiatives as analogous to clinical interventions.

The research base for clinical effectiveness and safety has been underpinned by the ideal that it is possible to demonstrate that something is effective, independent of context. In other words, the aim of gold-standard clinical research is to determine whether a particular clinical intervention or procedure is superior to an alternative, once all other factors are controlled for.

The aim is to produce context-free knowledge which should be applicable everywhere.

Take two much-debated health-policy measures – pay-for-performance in healthcare, and the introduction of a tax on sugar-sweetened beverages. Both types of policy measure have been tried many times internationally, and each is associated with a burgeoning research base.

It is very rare that policy initiatives can be tested under experimental conditions. Sometimes particular policies, such as pay-for-performance in healthcare (P4P), can be trialled or piloted – say, for a subset of primary care doctors.

The best that can be hoped for here is some sort of quasi-experiment. It is highly unlikely that policymakers can create comparable intervention sites and control sites. So, even if P4P showed some effect, there would always be doubt: perhaps this was because the intervention group was different from the control group.

More generally, the growing international research base on these topics throws up some examples in which P4P or sugar taxes are found to work, others where they have no effect, and others where there are adverse effects.

Too many differences

If you have had some postgraduate research training, at this point you may well think that a systematic review could lead to a more definitive answer. Following that track, you would take all the studies of P4P, assess them in terms of quality, take the best-quality studies, and calculate the aggregate effect of P4P across them all.

The big problem here is the assumption that all P4P initiatives are the same. However, specific P4P policies have arisen from many different contexts to deal with many different sorts of problems. When you start to look in detail, there is no such thing as a context-free P4P initiative. It makes no sense to treat England’s Quality and Outcomes Framework and New Zealand’s PHO Performance Programme as the same. They are quite different in terms of scale, design, health-system structure and history.

The other reason we can’t treat all instances of P4P or sugar tax as the same is that all policy initiatives are developed and implemented in a social context. This means they involve people and groups interacting, making sense of things, reflecting on experience, and anticipating how others might act. People and groups have histories of experience. All of these things have a significant effect on how policy initiatives are interpreted and implemented.

When University of Otago professor of general practice Tim Stokes and colleagues looked at how the HealthPathways approach that was developed in Canterbury was implemented in the southern district, they found that it played out very differently. Managers and clinicians in the south interpreted the programme quite differently from their Canterbury counterparts. And this was based on different local histories and structures.

The catch question

An alternative way of thinking about evaluating policy initiatives is proving to have more traction and usefulness for health services and policy researchers. The approach is known as “realist evaluation”, and it has its own catchphrase or “catch question”. Instead of simply asking, “Does this intervention work?”, realist evaluators ask: “What works, for whom, in what circumstances and why?”

The great advantage of asking these questions is that they take into account the inevitable variation in the way groups of human beings think and behave. So, a sugar tax might encourage food companies to reformulate their products, but this would depend on other features of the taxation system, and the history of relationships between food producers, retailers and government agencies.

Realist evaluation puts context squarely in the foreground. Simply asking, “What works, for whom, in what circumstances and why?” encourages policymakers to move away from one-size-fits-all approaches. And that is essential if we are to make progress in reducing health inequities and addressing the needs of different populations and different parts of the country. It is also potentially highly compatible with a Tiriti o Waitangi approach because it makes it possible to ask: “What works for Māori in what circumstances and why?”

The realist-evaluation question could also help to reframe the ongoing debate between hardcore adherents of evidence-based medicine and those health practitioners who dismiss it as cookbook medicine.

If the agenda of clinical research could be shifted more towards the question of “What works, for whom, in what circumstances and why?” it would validate many of the reasonable concerns that many clinicians have regarding some of the excesses of the evidence-based medicine movement, and open up some middle ground between the camps.

The main benefit of realist evaluation is that it allows for more realistic expectations about how research can inform and contribute to better-quality health policy, instead of expecting researchers to provide answers to unanswerable questions.

By the way, if you are looking for a biomedical metaphor for policy that is more appropriate, my choice would be “mutations” because they are things that develop, survive and prosper in specific contexts and environments. But I’m still looking for a good song that draws on mutations as a metaphor.
