maketheworldworkbetter

Statistically informed ideas on how to make the world work better.

Category: Science – general

The costs of too much choice: How the science of evolutionary development justifies Obamacare

One of the more difficult and technical fields one could choose to study is Evo-Devo, or the evolution of development. Briefly, it is the field that studies how genetic programs determine the developmental process, how these programs evolve, and how the types of programs available constrain the directions evolution can take. For example, if humans were to evolve wings (an essential impossibility for many reasons), Evo-Devo lets us make the clear inference that we would not evolve them as sprouting from our shoulders like angels, but rather as modifications of our arms. Why? Because in all tetrapods (i.e., reptiles, amphibians, birds, and mammals) there is a developmental program to produce four limbs. Limbs can be lost (snakes, whales) and modified for flight (bats, birds), but they cannot be added.

One of the key insights to emerge from Evo-Devo is that developmental programs are highly organized. They have evolved features that facilitate future evolution, a property called evolvability. They achieve this through mechanisms known as gene regulatory networks, compartmentalization, and canalization. While the details of these mechanisms are beyond the scope of this post, what they have in common is that they facilitate long-term evolution at the cost of flexibility. That is, they standardize the developmental process to give consistent results, but limit the forms that can be arrived at. Again, tetrapod limbs are a good example: if tetrapod limbs were not the result of a fairly standardized genetic module, we would be able to evolve them anywhere, any time – the nose could become a hand, we could evolve rows of wings up and down our backs, etc. However, the result would be chaos. It would be too easy for a minor mutation to mess up development, too easy for the final form to depend too heavily on which gene combinations one has (imagine if parents regularly “accidentally” gave birth to children with 6 or 10 limbs, just because of how their genes got combined…), and too hard to control the evolution of limbs as the environment changed and a specific sort of form became necessary. In other words, we gave up flexibility for stability and predictability.

How does all this relate to Obamacare? Read the rest of this entry »

Optimized charitable giving, evidence-based medicine, and the risk of thinking we can measure everything

The GiveWell logo, taken from their website.

I read an interesting blog post this morning on Wonkblog about how some people are getting jobs on Wall Street in order to save the world: the idea is to make as much money as quickly as possible, live on next to nothing, and then use the saved money to save the world more efficiently than one could by joining the Peace Corps or becoming a doctor.

The post discussed a website/organization called GiveWell that takes a very hard-nosed, analytical approach to how we can most efficiently use our charitable dollars to do good in the world. The ballet or the symphony is nice, but by buying bed nets to prevent malaria you could be saving children’s lives for very little money, so guess which one GiveWell recommends you donate to? They choose a small number of top charities from the large number they review, and they are very careful not to claim that the non-top charities are not useful, only that there is very good evidence that the top charities are useful. I am truly impressed with the thoughtfulness of their approach and the quality of the research they seem to have done.

But – and there’s always a but – it struck me that there is a limit to this approach to charitable giving, and it is strikingly similar to a limitation of evidence-based medicine that I’ve been bumping into recently. Read the rest of this entry »

What crime statistics, standardized tests, and scientific researchers have in common

I had thought about calling this post “Cohen’s law for predicting distortions in incentivized systems.” Tongue-in-cheek of course – it’s approximate, and thus not really a law. And I don’t like the self-aggrandizing habit of naming a law after oneself. And it would have been a dry title, and you probably wouldn’t be reading this. Nonetheless, this post is about the single most important thing that everyone designing public policy should understand. It is about the principle that makes most public policy fail (or work less well than intended).

Most public policy is designed to achieve certain goals – lowering crime, improving education, advancing scientific knowledge, improving health care etc. And most of the time, these goals are achieved by trying to get the right people to do the right things: police to arrest criminals, teachers to teach well, researchers to perform well, doctors to treat patients well, etc. In order to encourage this, most policy incorporates some form of incentives: tax structure, salary scales, rewards for good performance, and so forth. Police departments are judged by their crime statistics, and in turn find ways to pressure their officers to deliver these stats. In US education policy, No Child Left Behind was supposed to implement standards to encourage schools and teachers to perform better. Researchers who are productive are more likely to get funded for their next research grant. And so forth.

Read the rest of this entry »

Prediction and truth: two ways to measure scientific progress

Samuel Arbesman has a really interesting piece in Slate today about scientific progress and computers that may make discoveries humans can’t understand. One of the issues he raises is that science sometimes overhauls what is considered “truth,” yet we continue to make progress nonetheless, despite the fact that everything we believe today may be considered false tomorrow.

Perhaps the classic example of this is Newton’s laws of motion, which were overturned by Einstein. Einstein showed that Newton’s laws are good approximations at low speeds but break down as objects approach the speed of light; Einstein’s special relativity proposed equations that are valid at all speeds. Einstein not only proposed better equations; those equations implied a different understanding of the universe, one that allowed scientists to pursue new avenues of inquiry.

This example is famous, and shows two important principles. First, as science progresses, prediction gets better and better. In terms of predictive power, Einstein’s contribution was an incremental one, not a revolutionary one. Newton’s laws still apply nearly perfectly at most speeds experienced in daily life, though many cosmological and sub-atomic phenomena can only be predicted with Einstein’s equations.
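
For the numerically inclined, here is a quick toy calculation of this point (my own illustration, not from Arbesman’s piece). It compares the Newtonian and relativistic predictions for the kinetic energy of a 1 kg object at several speeds:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def newtonian_ke(m, v):
    """Classical kinetic energy: (1/2) m v^2."""
    return 0.5 * m * v ** 2

def relativistic_ke(m, v):
    """Relativistic kinetic energy: (gamma - 1) m c^2, with gamma - 1
    computed in a numerically stable form for very low speeds."""
    b2 = (v / C) ** 2
    s = math.sqrt(1.0 - b2)
    return (b2 / (s * (1.0 + s))) * m * C ** 2

# At everyday speeds the two predictions agree almost perfectly;
# near the speed of light they diverge wildly.
for label, v in [("highway car, 30 m/s", 30.0),
                 ("airliner, 250 m/s", 250.0),
                 ("10% of light speed", 0.1 * C),
                 ("90% of light speed", 0.9 * C)]:
    ratio = newtonian_ke(1.0, v) / relativistic_ke(1.0, v)
    print(f"{label}: Newton/Einstein ratio = {ratio:.6f}")
```

The ratio comes out as 1.000000 for the car and the airliner, about 0.99 at a tenth of light speed, and about 0.31 at nine tenths – exactly the sense in which Einstein’s contribution was incremental for everyday prediction but essential at the extremes.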

Second, in terms of our fundamental understanding of what the universe is like, Einstein’s theory was revolutionary. But there is always the possibility that it will be completely overturned by the next revolutionary theory. In this sense, the paradigm shift of Einstein is perhaps most useful in that it opened up new avenues for research, not in that it is a better approximation of the truth itself.

A relativist or a critic of science can always claim, correctly, that the current scientific consensus risks being overturned by the next big discovery, and that therefore scientific claims to understand the truth of the universe are weak. What is harder for a relativist to critique is the improvement in predictive capacity that has been achieved over the course of science history. That should count for something, even if the “truth” of science is at risk of being overturned.

However, the discussion above is predicated on a model of science that originates largely in physics. It is striking to me how many lay discussions of the philosophy of science assume that all science is like physics, that the best scientific design is always a controlled experiment, and that Karl Popper’s idea of falsification of hypotheses is supposed to be the one true scientific method (as mentioned in the Arbesman article, to my chagrin).

In fact, there has been no major paradigm shift in biology since Darwin, despite many in physics. And unlike in physics, where it seems the next great theory might overturn everything we think, in biology it is clearer and clearer that the work that remains is to iron out the details, not find the next grand theory. Yes, evolutionary theory has been gradually refined, and we have largely rejected some spurious ideas such as widespread group selection. Yes, biology is still informed by theory, and sometimes a creative new theory can change a field. But the fields that get changed are narrower and narrower, as the high level theories become more strongly confirmed.

Likewise, controlled experiments are rare in fields such as ecology and evolution, where it is generally impossible to rewind history: most of our knowledge of these fields is based on observational data and on a gradual accumulation of evidence rather than clear yes-no, up-or-down tests of hypotheses.

We never know what tomorrow will bring, and in a formal sense no aspect of scientific theory can be considered to be 100% proven beyond all doubt. (Perhaps God is faking our data for reasons unknown to us, after all…). But it seems less and less likely that there are any impending scientific revolutions outside physics. So, while we can certainly measure scientific progress as an increase in our predictive power (regardless of the underlying truth), it is also probable that in many domains we are not too far off from an accurate description of “truth,” even if we will never know for certain.

Thoughts on science, truth, and prediction? Leave them in the comments and I’ll respond…

Public access to scientific findings: a mixed blessing

The Washington Post is reporting that the Obama administration is ordering greater public access to publicly funded research. This sounds good, but what exactly does it mean, and why wasn’t it done already? In fact, most research funded by the US government through grants to independent researchers (such as me, at universities) is already required to be made available to the public within 12 months of publication. The same is true for health research here in Canada, via the Canadian Institutes of Health Research (CIHR).

Here’s how the system works now: As a researcher, I want to publish my results. This helps other researchers (and the public) learn what I’ve done, and it helps my career. I do so by submitting an article to a peer-reviewed journal, of which there are many. I make the choice based on how important I think my findings are (and thus how ambitious I can be in shooting for a top journal, which will be more widely read and look better on my CV). The journal may or may not accept my article, and will probably require extensive revisions. The process generally takes about 6 months from initial submission, assuming the first journal accepts, and it requires a lot of time for revising, formatting, submitting, proofreading, and so forth. That excludes the time needed to prepare the original article.

Once my article is published, it is available online at the publisher’s website or in print at any library that subscribes to the journal. Most people now read online. Anyone can access the article’s abstract and some other data about it, but for most journals, only individuals affiliated with a subscribing university can access the full article. Anyone else can buy access for around $30 per article, an exorbitant fee. Nearly all of the costs associated with publication are sustained by university library subscriptions. Publishers make enormous profits because they have a monopoly on each journal, and because those who pay for the journals are not those who publish in them, removing normal market forces from the system.

Some journals, however, are open-access, which means anyone anywhere can read the article online for free, but the researcher must generally pay a publication fee of $1000-2000. Most journals that are not open-access offer an option to publish articles as open-access, for a higher fee.

Since the public pays for research through government grants, the argument goes that the public should have access to the results. As a consequence, many funding agencies have started requiring that their grantees make their results public within a year of journal publication. This sounds great, and it is definitely an admirable goal, but the transition is difficult, as illustrated by my recent experience.

I had just had an article accepted in Mechanisms of Ageing and Development (MAD), a good journal in my field and the best one for this particular article. It is a rather technical article unlikely to be of interest to people outside academia, but important as a base for my future research and as a description of a method many others are likely to want to use. I had just become aware of CIHR’s new policy that results must be publicly available within 12 months, and I thought this was a good thing. So I started looking into what this meant for my upcoming publication.

MAD is owned by Elsevier, the largest academic publishing house. I started poking around their website, and I found that I could pay $3000 (from my grant money, not my pocket) to publish open access. It was not clear whether I had the right to make my article public in some way without paying this. (It’s a lot of money, enough to support a summer research student who could complete a project and learn a lot – a much better use in this case than granting theoretical access to a public unlikely to read the article.) There was some information about different levels of access: “green,” “gold,” “white,” etc. It seemed that MAD was green, which meant that I was allowed to submit a non-formatted version of the article to a public archive after 12 months. However, another webpage said that Elsevier had no agreement with CIHR, which meant that I could not submit to an archive. It wasn’t clear.

So I wrote to both CIHR and Elsevier asking for clarification. They both sent me links to various websites that were not very helpful or that I’d already seen. I finally talked by phone to someone at CIHR, who was very friendly and helpful but did not seem to know what to do. After repeated conversations with her, she suggested that I try to negotiate the publication contract each time I publish an article – an obvious non-starter, since I have no leverage in such a negotiation, and since the journal functionaries involved will just want to get rid of my questions as quickly as possible.

Elsevier in fact responded in just this way. I was never able to get more out of them than an effective “just pay the $3000 and stop bothering us.” Eventually, I gave up and paid, because I had already put a lot of time into the question and had little hope that more time would turn up a better answer. But things won’t be any clearer next time around…

I strongly believe that all research findings should be publicly available, and that there should not be private publishing houses such as Elsevier. Financing for publication should come from an international consortium of governments, each agreeing to pay in proportion to the amount of research its country produces. It could even be that every researcher is required to pay open-access fees for every article from grant money, and that these fees are slightly elevated in order to allow individuals without grant money to publish for free.

But we are a long way from this system. In the meantime, MAD was the journal I needed to publish my article in. Elsevier had a monopoly on MAD, and extorted $3000 from me (much more than the real publication cost, I’m sure). CIHR’s rules, implemented with good intentions but without the necessary agreements with publishers, meant that $3000 of their money (and mine) got wasted. And we are not, in practice, much closer to a true open-access world. In the meantime, I have one more expense and more administrative headaches that will slow down my scientific productivity…

One other quirk reported by the Washington Post: the Obama administration will require government researchers to publish their findings. This also sounds good, but is not workable in practice. Publishing takes an enormous amount of time and effort, and given time constraints I publish perhaps one tenth of what I could or would like to. I have lots of old results sitting in folders that I don’t have time to write up. Forcing me to publish all my results (even the ones I deem less important) would slow down my productivity in generating the truly important ones.

So, while well-intentioned, the new rules seem as likely to do harm as to do good…

Idea: A better way to allocate grants for research

There is a paradox that most granting agencies face when they fund research: Researchers have a strong incentive to ask for as much money as possible even if they don’t really need it. One reason is that researchers are evaluated by both their institutions and their peers based on the number of grant dollars obtained (rather than on the quality of their research or their productivity). Another reason is that it’s always better to have a little too much money in the budget than not enough – we might as well estimate high.

These incentives are at work all the time in subtle ways. They mean that researchers ask for more money than they need. There is thus a slow inflation in the accepted “cost of doing business” and in the amounts viewed as normal. There is no reason at all for researchers to try to get by on a shoestring. Perhaps most insidiously, there is an incentive for young researchers to choose research topics that require more money for the same amount of productivity. Several years ago at the University of Michigan, I heard that one of the deans wanted to hire fewer faculty in ecology and evolution and more in lab-based biology because the latter brought in larger grants.* I often see colleagues in developing countries finding cheap and inventive ways to do the same things we do here, just on a fraction of our budgets.

In addition, most granting agencies demand detailed budgets for submitted projects. These budgets require lots of time to prepare, lots of time for peers to review, and are mostly made up anyway – people say in the budget what they need to say to get funded, knowing they can use the money as they wish later. I have even been told by CIHR representatives that I should fudge my budget because it is the best way to get funded!

One reason these problems persist is that in most cases successful grants are chosen with regard to quality but without regard to value for money. In other words, if there is $1 million left in the budget and a $1 million grant is ranked just above 10 grants of $100,000 each, the $1 million grant will be funded and none of the next 10 will be, even if the total value of those ten projects is 9.5 times that of the $1 million project.

I’ve just devised a system that I think elegantly solves this problem and substantially reduces paperwork. Here is a summary, with a toy simulation of the selection logic after the list:

1) For each competition, the funding agency decides a priori on a number of funding tiers (say 7, ranging from $80,000/year to $1 million/year). Each tier has a number of slots available based on the total budget, with many slots available at the lowest tiers and very few at the highest.

2) Each researcher does not fill out a budget, but simply selects the tier he or she wishes to be considered for. If the grant is awarded, this is the amount that will be received – no more, no less.

3) All applications are ranked by quality, without regard to the budget tier chosen.

4) Funded applications are chosen starting at the lowest tier, selecting the best candidates who applied for that tier until the number of slots allotted to that tier is filled.

5) At each higher tier, any applications not chosen at the lower tiers are retained in the competition, and will still be funded at their requested (lower) tier if they rank sufficiently high among the higher-tier applications. In other words, at each tier the highest-ranked applications are funded, whether they applied at that tier or were passed over at a lower one.

6) Any remaining funds (due, for example, to lower-tier candidates winning higher-tier slots) can be given to the remaining candidates in order of their rank, irrespective of tier.
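
To make the mechanics concrete, here is a minimal sketch of the selection procedure in Python. The tiers, slot counts, scores, and applicant names are all invented for illustration, and step 6 (redistributing leftover funds) is omitted for brevity:

```python
def allocate(applications, tiers):
    """applications: list of (name, quality_score, requested_tier_index).
    tiers: list of (amount, n_slots), ordered from cheapest to most expensive.
    Returns {name: amount_awarded}."""
    awarded = {}
    # Rank everyone by quality once; the ranking ignores the tier chosen (step 3).
    ranked = sorted(applications, key=lambda a: -a[1])
    for tier_idx, (amount, n_slots) in enumerate(tiers):
        # Candidates who chose this tier, plus anyone passed over at a
        # lower tier (steps 4 and 5); all are paid at the tier they chose.
        eligible = [a for a in ranked
                    if a[0] not in awarded and a[2] <= tier_idx]
        for name, score, requested in eligible[:n_slots]:
            awarded[name] = tiers[requested][0]
    return awarded

# Three tiers: four $80k slots, two $250k slots, one $1M slot.
tiers = [(80_000, 4), (250_000, 2), (1_000_000, 1)]
apps = [("A", 9.1, 2), ("B", 8.7, 0), ("C", 8.5, 0), ("D", 8.2, 1),
        ("E", 7.9, 0), ("F", 7.5, 0), ("G", 7.2, 1), ("H", 6.8, 0)]
print(allocate(apps, tiers))
# A wins the $1M slot, D and G the $250k slots, B, C, E, and F the
# $80k slots; H goes unfunded.
```

Note how a low-tier applicant is never penalized for frugality: an unfunded $80,000 applicant keeps competing for higher-tier slots but is only ever paid the $80,000 they asked for.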


This system has a number of clear advantages:

1) It removes a huge amount of work from both writing and reviewing grants – everything related to budgets, which are usually fudged anyway.

2) It gives researchers a strong incentive to conduct their research on the smallest budget possible, since the chances of obtaining a grant are much higher for those with smaller budgets.

3) It still provides a way to fund larger-budget projects or programs when justified, while letting the competition itself push researchers to determine whether a large budget is truly necessary.

4) It gives young researchers an incentive to choose relatively cost-effective fields of study, improving the cost efficiency of research for decades to come.

5) It rewards overall candidate quality and frugality simultaneously, so that budget never becomes a perverse incentive to do anything other than the best research.


What do you think? Would it work? What are the challenges?


*Caveat: I heard this story second-hand years ago, and may have misremembered something in the details. But there was unquestionably a perception that expensive research was prioritized by someone higher up.

What Todd Akin gets right about rape and evolution

OK, I’m about to piss a lot of people off. Here goes…

US Senate candidate Todd Akin said, “It seems to me, from what I understand from doctors, [pregnancy from rape is] really rare. If it’s a legitimate rape, the female body has ways to try to shut that whole thing down.”

Todd Akin is an idiot. Todd Akin is ignorant. Todd Akin is insensitive to women. Todd Akin is a religious lunatic. All true. But Todd Akin is also (a little bit) right about rape, and about evolution (which I presume he doesn’t believe in).

Hear me out. I’m not saying no one gets pregnant from rape. I’m not saying we can distinguish between “legitimate” and “illegitimate” rape legally or morally. But I do think that women’s own perceptions of rape can be more or less severe, and that there are biological mechanisms to reduce the probability of pregnancy following a rape.

There is only one fundamental law of biology, and it is this: whenever you think you understand something, it’s more complex than that. Almost every major principle of biology has later been shown to have exceptions or caveats. One example of a simplistic principle is that once sperm are released, nothing the woman does affects the probability of fertilization. In fact, it is well-established that muscular contractions during a woman’s orgasm help draw in sperm and increase the probability of fertilization. During rape, no orgasm. No orgasm, no contractions. No contractions, lower probability of fertilization.

I do not know if other potential mechanisms for avoiding pregnancy from rape have been discovered, but I would bet they exist. It’s not hard to imagine that acute psychological trauma could release hormones that would reduce the probability of implantation, for example. Natural selection should fairly strongly favor avoidance of pregnancy via rape – the psychological trauma caused by rape is itself presumably the result of strong evolutionary pressure on women to make them avoid rape as much as possible.

And if psychological trauma does reduce the chances of pregnancy (I’m not saying it does, I’m saying it might), it is likely that the degree of trauma affects the probability of getting pregnant: more trauma, less probability of getting pregnant. So then we have the question: is all rape equally traumatic? I have no idea. I’m not a woman and I can’t say. But my guess (please, women, correct me if I’m wrong) is that it would be possible to imagine various rape scenarios, some more traumatic and some less. I’m not saying the different scenarios are morally different or legally different; I’m saying that the woman would perceive them as more or less traumatic. Todd Akin was 100% wrong to try to distinguish legitimate and illegitimate rape, but many commentators may have also been wrong to assert that, biologically speaking, all rape (or all insemination) is equivalent.

So let’s criticize Todd Akin for the many things he is guilty of, not for being wrong about biology when the biology is not necessarily well-known, and when his statement contains at least a grain of truth. And let’s take pleasure in the fact that even this religious nutcase has inadvertently invoked evolution in his understanding of rape!

What we really know about nutrition, part 2

In the last installment of this series on nutrition and science, I argued that epidemiological studies on associations between diet and health are extremely difficult to conduct well. But aren’t there other ways we can get information on how to eat well? Yes. We know the biochemistry of molecules such as vitamin E relatively well, and we know a lot of what they do and why they are essential in the body. We have lots of animal studies on nutrition, both in mice as model organisms, and for domesticated animals such as dogs, chickens, cows, and so forth. These animal studies do what human studies can’t: they perform controlled experiments to identify causal relationships.

In addition to these basic science studies, there is also a lot of circumstantial evidence about nutrition. For example, there are studies that measure the nutritional content of hunter-gatherer diets. There are studies comparing the intake of different nutrients across species. And there are some lessons from basic evolutionary principles that can be applied to nutrition.

What can we learn from all this? The take-home message is that the details are very complicated and poorly understood, but there are some broad-brush recommendations we can make. In my opinion, most of the basic science does little more than generate confusion about the details, and the most useful lessons come from the circumstantial evidence.

Read the rest of this entry »

Individuals vs. Systems: An underlying philosophy for this blog


The posts on this blog have been, and will continue to be, on a variety of topics, but there are a few underlying principles that infuse my outlook, and will thus infuse many of the posts. Perhaps the most important of these principles is that systems are more important than individuals, and this post is meant to explore that principle and some of its ramifications.

Modern Americans have built their national identity around the philosophy of individualism, and it seeps into American thinking in many ways. The core American values of Democracy, Charity, Capitalism, and Liberty/Freedom are all centered around individualism in one way or another. The most obvious is Freedom, which is interpreted in a modern context as the freedom of individuals to do what they want, regardless of governmental or societal norms. (It’s worth noting that this was not always the case. David Hackett Fischer, in his excellent book Paul Revere’s Ride, makes a strong argument that liberty and freedom for communities was a much stronger value in Revolutionary times. There’s also a growing literature on how too much freedom and choice can paralyze us – see, for example, this great TED talk.)

Read the rest of this entry »

A pharmaceutical insider’s take on a National Institute of Pharmaceutics

I have a friend who works as a scientist in the pharmaceutical industry, and he recently emailed me with some thoughts on my post regarding a National Institute of Pharmaceutics. It’s clear that he knows a lot more about what he’s saying than I do, and a really interesting idea emerges at the end. The rest of this post is his email (slightly edited for context), and I respond in a comment:

“I’ve been following your blog and I read your NIP idea with a lot of interest, as someone making a living as a storm trooper for the evil big pharma empire, and as someone who has also been wondering if there is a better way to develop and distribute new drugs.

“My biggest concern is how we’ll decide who will get funding. Companies move a drug candidate from one development stage to the next very carefully, especially since the next stage will cost a lot more money than all the money spent so far combined. Those decisions are made by each company. If we now move this responsibility to the NIP, are we sure that they will do at least as good a job as current pharma CEOs do?

Read the rest of this entry »