
POLICY PUZZLER
Tim Tenbensel gives a shout-out to evaluators of new policies and initiatives, and the rough road they tread
Health sector workplaces in the 21st century are subject to a constant stream of initiatives, interventions and policies originating from central government, DHBs, PHOs, professional colleges and beyond.
Innovation is valorised, and organisations and their staff are constantly exhorted to devise or sign up to new ways of doing things. In this cacophony, whose job is it to sort out “what works” from the dustbin of failed initiatives?
“Evaluators” are the people in your neighbourhood whose job it is to answer these questions, and they can be found in habitats ranging from large and small consultancies to government agencies and universities. The challenges health services and policy evaluators face are formidable, however, and not for the faint-hearted.
It is tempting to think of health policy and programme initiatives as analogous to clinical interventions in that it should be feasible to test the efficacy of an initiative.
Rigorous clinical research, in which the gold standard of double-blind randomised controlled trials applies, may be highly suited to determining the efficacy and safety of pharmaceutical products and devices, but these methods are next to useless in evaluating health services and policy.
Health services and policy evaluators rarely, if ever, have such a luxury of designing experimental conditions. They get the conditions they are given, full of confounding influences such as epidemiological trends, predictable and unpredictable obstacles, and potentially conflicting policies.
Perhaps the next best thing is quasi-experimental conditions or natural experiments – a new initiative introduced in some settings and not others, with “non-intervention” settings found to match the “intervention” settings.
For this to really work, though, we would need to stop people in the experimental setting communicating with anyone else, lest the non-experimental settings start adopting elements of the experimental approach.
In any case, the experimental logic – controlled or otherwise – is based on a very shaky assumption that initiatives are the “same thing” wherever they are implemented. However, most health services and policy initiatives involve human beings – individually and collectively – making sense of things, communicating with each other, and deciding where (and where not) to put their limited energy.
Context matters, most notably history, organisational cultures, and interpersonal and inter-organisational relationships. Once reflective and reflexive humans are involved, the very same “initiative” gets interpreted, picked apart and put back together very differently across diverse settings, usually with varying effects. In response, an important tradition has emerged in evaluation research that is known as “realist evaluation”, in which practitioners do not ask whether or not something works, but instead ask “what works, for whom, in what circumstances?”
Good evaluation involves keeping the focus on both outcomes and processes, and stitching together different types of information. It requires quantitative information about health-service utilisation and outcomes over extended time periods, and this information is unlikely to be complete or of optimal quality (the most common issue is lack of baseline data).
It also requires qualitative information about the attitudes, values and behaviours of organisations and the clinicians and managers working in them. Synthesising these sources across time frames can be challenging: changes in outcomes typically take years to detect, whereas the most important processes to track are those that happen very early in the implementation period.
Evaluation also entails an element of political risk. Many initiatives are implemented in environments in which there are critics and doubters ready to pounce on negative or equivocal findings. Organisations that fund evaluations are putting their neck on the line when it may be easier to do nothing.
One of the trickiest aspects of evaluation is setting the terms of the relationship between the client and the evaluators. Last year, at an information event for potential bidders for an evaluation contract, I was perturbed to hear public officials say “we don’t want you to come up with your recommendations, we just want you to answer our questions”. But what if the funders are not asking important questions, such as “what are the unanticipated consequences?” Should evaluators push back? How much responsibility do they have to speak truth to power?

All initiatives are based on theories about how things work, although often these are not articulated. One of the key contributions of evaluators is to make policymakers’ implicit theories explicit, and to stress-test these assumptions.
These challenges mean that policymakers are often frustrated with evaluation. The time it takes to carefully judge success and failure usually exceeds the time frames and tenures of those who need to know. To make matters worse, the findings of many evaluations are often highly equivocal (and why wouldn’t we expect this), and few decision-makers have an appetite for the message that “more research is necessary”.
An alternative way of answering whether or not an initiative works is through inbuilt performance measurement and management – where success is clearly defined, and real-time feedback is developed. The six national health targets of 2009–2017 are examples of this approach. But performance management is prone to gaming and a host of “hitting the target but missing the point” side effects.
Ultimately, performance management is no substitute for robust evaluation. What is most needed in our health sector organisations is the widespread development of an evaluation mindset that supports a disciplined approach to figuring out what works (and when and how), avoiding both the Charybdis of frothy enthusiasm for anything novel, and the Scylla of ingrained cynicism.
Tim Tenbensel is associate professor, health policy, in the School of Population Health at the University of Auckland