Imagine a systematic review of antibiotic effectiveness in treating patients with fever. The review aggregates findings from studies where the drugs were administered to people with fevers stemming from malaria, viral infections, and bacterial infections. I suspect the review would find the antibiotics ineffective, even if they cured every person with a bacterial infection. It sounds silly, I know.
However, when we review effectiveness evidence for interventions to improve health service delivery in developing countries, we often aggregate in equally nonsensical ways. That is, we ask “does P4P work?” or “does demand-side financing work?” Surely we must scrutinize these studies in meaningful categories; surely we must ask what was ailing the patient (the service delivery sub-system and group of would-be service users) being treated.
Reviews would tell us much more if they broke down health service intervention studies into meaningful “illness” categories. We might discover that, on average, vouchers don’t work, but they work every time low demand for services is the main problem. Similarly, we might find that CCTs with health-service use conditionality are highly effective in communities with excess health services capacity, but don’t work at all in communities where the biggest barrier to service use is… that there are no services!

Efforts in Nicaragua to increase poor households’ use of health and education services are illustrative. A CCT program was implemented to increase households’ use of services. Regalia and co-authors’ evaluation found that the CCT intervention failed to increase use. Fortunately, their study revealed why: services simply weren’t available. The program added a component to contract private providers to go out into communities and provide services, and then the CCT program “worked.” The story continues, however. Ten months later, the CCT funds ended. The diligent researchers went back and assessed what was going on with households yet again. What did they find? Utilization stayed the same after the CCTs were gone. In reality, the CCT didn’t “work” because the Nicaraguan program incorrectly diagnosed the patient with low-demand illness when it was really suffering from “no-supply syndrome.” Contracting with private providers turned out to be the right “medicine” for this patient.
To develop interventions that improve health services and service use, whether applying RBF or other approaches, we must accurately diagnose the patient (i.e., understand what is truly driving the observed service problems). We have no panaceas – that is, we have no medicines that effectively treat all service provision illnesses.
Assuming patients (i.e., service delivery sub-systems) are being accurately diagnosed, we urgently need evaluations and evidence reviews that shed light on the best treatment options for specific illnesses. Reviews that synthesize evidence for all health service interventions applied to treat a particular service delivery illness, like low patient demand for cost reasons, would be much more useful than systematic reviews of specific interventions. Our medical colleagues have these easily available, and can turn to them once they know they are looking at a patient with, say, MDR-TB.
Reviews that lump together evaluations where an intervention was applied to treat a variety of illnesses underlying service problems obscure more than they enlighten. Reviews that synthesize evidence across interventions in terms of how well they work for specific syndromes or causes of service delivery problems would be extremely valuable.