Optimized charitable giving, evidence-based medicine, and the risk of thinking we can measure everything

by Alan Cohen

The GiveWell logo, taken from their website.

I read an interesting blog post this morning on Wonkblog about how some people are getting jobs on Wall Street in order to save the world: the idea is to make as much money as quickly as possible, live on next to nothing, and then use the saved money to save the world more efficiently than one could by joining the Peace Corps or becoming a doctor.

The post discussed a website/organization called GiveWell that takes a very hard-nosed, analytical approach to how we should most efficiently use our charitable dollars to do good in the world. The ballet or the symphony is nice, but by buying bed nets to prevent malaria you could be saving children’s lives for very little money, so guess which one GiveWell recommends you donate to? They choose a small number of top charities from the large number they review, and they are careful not to claim that the non-top charities are useless, only that there is very good evidence that the top charities are useful. I am truly impressed with the thoughtfulness of their approach and the quality of the research they seem to have done.

But – and there’s always a but – it struck me that there is a limit to this approach to charitable giving, and it is strikingly similar to a limitation of evidence-based medicine that I’ve been bumping into recently. Evidence-based medicine (EBM) is a movement that started in the early 1990s to systematically gather data on what works and what doesn’t in medical practice, and to publish clinical practice guidelines summarizing these results based on rigorous research. Have a three-year-old patient presenting with symptoms of pneumonia? Consult the guidelines to find out what to prescribe, at what dose, in what form, and to see the level of clinical trial evidence supporting this recommendation. There are now very sophisticated systems in place that tell researchers exactly how to conduct randomized controlled trials, how to summarize the evidence, and how that evidence should be evaluated for incorporation into the guidelines.

All this sounds great, and anyway who could be against using “evidence” to support decision making? But there are a number of major limits to the EBM approach and to the randomized controlled trials (RCTs) on which it is largely based:

  1. Each patient has a unique set of circumstances, but EBM generally uses data based on population averages, so it is hard to know if the general recommendation applies to your patient. For example, maybe the benefits of statins for lowering cholesterol are not the same in 90-year-olds as they are in younger people.
  2. RCTs are good for answering one clear yes-or-no question, not for understanding complex systems – and human physiology and psychology are highly complex systems. For example, we can do an RCT to see if drug A at dose X is better than drug B at dose Y, but we cannot do an RCT to find the best treatment across all possible doses and combinations of the two drugs, let alone account for how they may interact with all the other medications a patient may be taking.
  3. RCTs usually have a sample size and time frame calculated to measure the main effect (drug A versus drug B) and a few common side effects in the short term. Five years is considered extremely “long-term” for an RCT, even though many patients will remain on drugs such as statins and beta-blockers for the rest of their lives. We really have no idea what the effects of these drugs are in the true long-term, and for many treatments we do not even have good data on rare but serious side effects in the short term.
  4. EBM and RCTs give us a false sense of confidence that we know everything we need to. If I have guidelines that tell me what to do for a given patient, I may not think through the problem more carefully. However, if I were to think through it more carefully, I might realize that my patient is quite different from the sorts of patients normally included in studies, and that the guidelines suggest exactly the wrong thing in this case.

There are other problems with EBM, but this list will do for now. As a result of these concerns, there are some doctors who are beginning to rebel against the EBM system and feel it is a bit tyrannical. (If a doctor does not follow the guidelines and something goes wrong, she can be sued.) What is needed, many feel, is a more flexible system that uses the available evidence in better combination with biological knowledge and theory, clinical experience, and considered judgment – in other words, a more individualized approach to medicine.

Perhaps by now you have made the parallel with charitable giving: if we define effective giving as saving lives in the short term, we may be able to identify some top-performing charities with confidence. However, many charities do valuable work whose longer-term effects are harder to quantify. Arguably, saving children’s lives in the short run is less important than helping countries develop stable, sustainable societies in the long run, but charities playing the long game cannot be evaluated with anything like the same rigor.

All of this brings us back to the parable of the policeman who comes across a drunk man on his hands and knees under a street lamp. “What are you doing?” asks the policeman. “Looking for my keys,” responds the man. “Did you lose them here?” asks the policeman. “No,” responds the man, “but this is where the light is.”

Measurement is like light. As soon as we have a tool to quantify something – like medical effectiveness or charitable giving – we tend to think that everything we need to know can be understood through the measurement. We would do well to remember that there may also be important things that we cannot measure so well, in the dark.

None of this is meant as a critique of GiveWell, which seems to do an excellent job of avoiding making this mistake and acknowledging its limits. However, there will inevitably be users of the site or copycat organizations that do not take this to heart, and I worry that in a few years we may see a drop in support for important charities that do hard-to-quantify work. Two charities I give to – Partners in Health and the International Conservation Fund of Canada – would likely fall into this category.

Do you agree? Leave some thoughts in the comments…