The Institut de Formation et de Recherche Démographiques (IFORD) [Institute for Demographic Training and Research] was selected, following an international call for tenders, to conduct the baseline survey for the impact evaluation of the performance-based financing (PBF) program in Cameroon, implemented by the Ministry of Health in 14 health districts and covering a total population of close to 2.6 million. The survey preparation period (October 2011 to February 2012) was followed by data collection (March to June 2012). Data entry, which ran in parallel with collection, took place between April and August 2012.
We, the members of the IFORD survey firm, were well into the baseline survey data analysis when we were invited to attend the Fourth Annual Results and Impact Evaluation Workshop. This workshop is organized annually by the World Bank to provide technical support to teams whose results-based financing (RBF) programs and impact evaluations are funded by the Health Results Innovation Trust Fund (HRITF) and implemented through collaboration between the World Bank and the governments of the countries involved. Although the workshop was primarily geared toward RBF teams working in health, in our view it had the merit of also welcoming, more broadly, the survey firms involved in the impact evaluation process.
My initial objectives prior to the workshop. As the representative of a survey firm attached to an African training and research institution in population sciences for development, my participation in this important workshop initially aimed at several objectives. First, I wanted to share IFORD's experience in conducting the baseline survey, both what worked and what did not. I also wanted to learn from others and build IFORD's capacity to carry out impact evaluations of RBF projects, particularly on the technical and methodological aspects of evaluation. I also hoped to draw enough lessons to better position the institution for the next phases of the evaluation. Lastly, I intended to use this opportunity, this forum, to help raise IFORD's visibility. At the end of these five days of work, exchange, and learning alongside thirteen other teams supported by the HRITF, what had I taken away from the workshop, and had I achieved my objectives? More importantly, what were the workshop's consequences for the rest of the PBF impact evaluation in Cameroon? What was the workshop's impact on IFORD's capacity to move forward with impact evaluations: had we strengthened our methodological capacities, and had we strengthened our strategic positioning as a firm qualified for these complex studies?
Poor financing, uncoordinated donor interventions, and dependence on user fees, in a nation where 71.3% of the population lives below the national poverty line, pose a major challenge for health financing in the Democratic Republic of Congo (DRC).
Results-based approaches, like Pay-for-Performance (P4P), offer one way to potentially increase access to health services for the poor, by giving health workers an incentive to improve their performance and the level of service delivery.
Can P4P work under extreme conditions, such as the DRC context?
Mayo-Ine Health Center lies in Fufore district in Adamawa State in North-East Nigeria. One year ago it was a typical health center in rural Nigeria. Years of neglect had left their mark. The fence was damaged, the roof was caving in in places, windows were broken, and equipment was gone. Medical waste was scattered in the backyard, some of it half burnt. Goats were picking through the waste, nibbling on edible bits of carton. The center had no running water. Its latrines were defunct. Essential drugs were out of stock and vaccines were rarely available. There had been no supervision from the district for a long time, and staff were demoralized and on strike.
The population had gotten used to the situation and rarely used the facility. In December 2011, just four women delivered at Mayo-Ine, and on average it saw 4 patients per day. The few patients who came were prescribed expensive treatments, with drugs that the health workers had bought themselves and resold at a hefty mark-up, making any treatment very expensive. People preferred the local drug vendor, who would sell drugs cheaply by the tablet, which fitted their budgets better, and consulted with traditional healers.
There’s been considerable discussion about how costly PBF is and whether it provides value for money. How much money per capita is required for it to be successful? The common wisdom is that PBF needs roughly an additional $3 per capita per year to succeed in a low-income country, but the justification for this figure is not particularly strong. Very few studies compare different levels of financing, so it’s difficult to know what is actually required. Given all the other factors that can influence the success of PBF besides the pure incentive effect or the amount of money available at the health facility level, it’s not possible to be definitive about the amount needed. It may be that $1.00 or $1.50 per capita per year has a similar effect to $3. It may be that the smaller amounts are 80% as effective as investing twice as much.
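The value-for-money logic above can be made concrete with a back-of-the-envelope calculation. All numbers here are the hypothetical ones from the text ($3 benchmark, a half-cost program 80% as effective), not estimates from any actual study:

```python
# Illustrative cost-effectiveness comparison of two PBF financing levels.
# Effects are expressed relative to the $3-per-capita benchmark program.

def effect_per_dollar(cost_per_capita, relative_effect):
    """Effect achieved per dollar spent, relative to the benchmark's total effect."""
    return relative_effect / cost_per_capita

benchmark = effect_per_dollar(3.00, 1.00)   # $3/capita, full effect
half_cost = effect_per_dollar(1.50, 0.80)   # $1.50/capita, 80% as effective

print(f"benchmark: {benchmark:.3f} effect units per dollar")
print(f"half-cost: {half_cost:.3f} effect units per dollar")
print(f"ratio: {half_cost / benchmark:.1f}")  # 1.6x the effect per dollar
```

Under these assumed numbers the cheaper program would deliver 1.6 times the effect per dollar, which is exactly why the question of how much money PBF needs is worth studying empirically.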
Navigating traffic is rarely an exciting experience. The city bustles with over 13 million inhabitants and an overwhelming number of gawking tourists. Thankfully, our driver—despite a few hair-raising near collisions—had it under control.
Ours was largely an RBF for health crowd. Composed of RBF project coordinators, survey representatives, Ministry of Health counterparts, and task team leaders from fourteen World Bank country teams across the globe, we were in Istanbul to participate in the week-long Fourth Annual Results and Impact Evaluation Workshop. This was my first time at the yearly workshop where country teams share their experiences and learn about different program designs, with a focus on what does and does not work in a particular country context.
As we all know, RBF is being used in many settings to improve health care utilization and, subsequently, health outcomes. A range of evaluations have been conducted or are ongoing to assess the programs and learn about their impact. Apart from the clear lack of strong quantitative evidence that includes credible estimates of the counterfactual (i.e., what would have happened without RBF), an additional theme has emerged: it’s not enough to know whether RBF works and which outcomes it affects in a particular context; we also need more information about how to interpret the results. Why did some indicators move and others not? What was the impact of the context and of non-RBF parts of the health system? And conversely, how did RBF change the health system?
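Why a credible counterfactual matters can be shown with a toy difference-in-differences sketch. The numbers below are invented for illustration: utilization rises everywhere over time, so a naive before-after comparison in the RBF area overstates the program's effect.

```python
# Hypothetical facility deliveries per month, before and after RBF rollout.
rbf_before, rbf_after = 100, 160    # district that received RBF
ctrl_before, ctrl_after = 100, 130  # comparison district, no RBF

# Naive estimate: attributes the whole change to RBF.
naive_estimate = rbf_after - rbf_before            # 60

# Secular trend observed in the comparison district approximates
# the counterfactual: what would have happened without RBF.
trend = ctrl_after - ctrl_before                   # 30

# Difference-in-differences: change net of the trend.
did_estimate = naive_estimate - trend              # 30

print(f"naive: +{naive_estimate}, diff-in-diff: +{did_estimate}")
```

In this made-up scenario, half of the apparent improvement would have happened anyway, which is precisely the bias that evaluation designs with a counterfactual are built to remove.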
Performance-Based Financing (PBF) is an approach in which health facilities are paid for the quantity and quality of services they provide (please see the RBF glossary). There is evidence from a randomized controlled trial in Rwanda, as well as some routine operational data, suggesting that PBF is working. Hence, there is much discussion about what explains the success (so far) of PBF in low- and middle-income countries. It probably comes down to about eight basic ideas. It’s tough to remember all of these, but maybe we can use an acronym to remember them – SMASHING.
On October 3rd, I sent out a survey asking people what was the biggest, most embarrassing, dramatic, funny, or other oops mistake they made in an impact evaluation. Within a few hours, a former manager came into my office to warn me: “Christel, I tried this 10 years ago, and I got exactly two responses.”
I’m happy to report I got 49 responses to my survey. My initial idea was to assemble a “top 10” of mistakes, so I promised the 10 winners they would get a small prize. Turns out, assembling a top 10 was a bit tricky, but here’s my attempt at classifying the information I got.
#1 - A first batch of comments were stories of random, funny things that happened in impact evaluations – here’s one that got me cracking up in my office on a Friday afternoon:
Imagine a systematic review of antibiotic effectiveness in treating patients with fever. The review aggregates findings of studies where the drugs were administered to people with fevers stemming from malaria, viral infections as well as bacterial infections. I suspect the review would find the drug ineffective, even if it cured every person with a bacterial infection. It sounds silly, I know.
However, when we review effectiveness evidence for interventions to improve health service delivery in developing countries, we often aggregate in equally nonsensical ways. That is, we ask “does P4P work?” or “does demand-side financing work?” Surely we must scrutinize these studies in meaningful categories; surely we must ask what ailed the patient (the service-delivery subsystem and the group of would-be service users) being treated.
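The arithmetic behind the antibiotic analogy is simple enough to sketch. The patient counts below are invented, and the drug is assumed to cure every bacterial case and no malaria or viral case:

```python
# Toy version of the pooled systematic review from the analogy.
patients = {"bacterial": 30, "malaria": 35, "viral": 35}

# Assumption: the antibiotic cures all bacterial fevers and nothing else.
cured = {"bacterial": 30, "malaria": 0, "viral": 0}

pooled_rate = sum(cured.values()) / sum(patients.values())    # 0.30
bacterial_rate = cured["bacterial"] / patients["bacterial"]   # 1.00

print(f"pooled cure rate:    {pooled_rate:.0%}")   # looks ineffective
print(f"bacterial subgroup:  {bacterial_rate:.0%}")  # perfectly effective
```

A pooled cure rate of 30% makes a perfectly effective drug look weak, which is exactly what happens when P4P or demand-side financing studies are aggregated without asking what was ailing the system each program treated.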