maketheworldworkbetter

Statistically informed ideas on how to make the world work better.

Month: February, 2013

The benefits of a Mediterranean diet: thoughts on the new study

[Image: olive oil]

A new study in the New England Journal of Medicine claims that a Mediterranean diet (lots of nuts, fish, olive oil, and fruits and vegetables; not too much dairy or red meat) can dramatically lower cardiovascular disease events and mortality. My opinions on this study are conflicted – it confirms what I’ve been saying for a while, but I don’t trust the methods. In the end, I think the study is largely correct, but somewhat by luck.


Prediction and truth: two ways to measure scientific progress

Samuel Arbesman has a really interesting piece in Slate today about scientific progress and computers that may make discoveries humans can’t understand. One of the issues he raises is that science sometimes overhauls what is considered “truth,” and yet we continue to make progress, even though everything we believe today may be considered false tomorrow.

Perhaps the classic example of this is Newton’s laws of motion, which were overturned by Einstein. Einstein showed that Newton’s laws are good approximations at low speeds, but that they break down as objects approach the speed of light; his theory of special relativity proposed equations that are valid at all speeds. Einstein not only proposed better equations; those equations implied a different understanding of the universe, one that allowed scientists to pursue new avenues of inquiry.

This example is famous, and shows two important principles. First, as science progresses, prediction gets better and better. In terms of predictive power, Einstein’s contribution was an incremental one, not a revolutionary one. Newton’s laws still apply nearly perfectly at most speeds experienced in daily life, though many cosmological and sub-atomic phenomena can only be predicted with Einstein’s equations.
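To put “nearly perfectly” in numbers – a standard back-of-the-envelope check of my own, not from Arbesman’s piece – the relativistic correction enters through the Lorentz factor:

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
p = \gamma m v \;\approx\; m v \quad \text{for } v \ll c .
```

For a car at v = 30 m/s, v²/c² = 10⁻¹⁴, so γ ≈ 1 + 5×10⁻¹⁵: Newton’s momentum is off by roughly five parts in a quadrillion. For particles near the speed of light, γ grows without bound and the Newtonian formula fails completely.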

Second, in terms of our fundamental understanding of what the universe is like, Einstein’s theory was revolutionary. But there is always the possibility that it will be completely overturned by the next revolutionary theory. In this sense, the paradigm shift of Einstein is perhaps most useful in that it opened up new avenues for research, not in that it is a better approximation of the truth itself.

A relativist or a critic of science can always claim, correctly, that the current scientific consensus risks being overturned by the next big discovery, and that scientific claims to understand the truth of the universe are therefore weak. What is harder for a relativist to critique is the improvement in predictive capacity achieved over the history of science. That should count for something, even if the “truth” of science is at risk of being overturned.

However, the discussion above is predicated on a model of science that originates largely in physics. It is striking to me how many lay discussions of the philosophy of science assume that all science is like physics, that the best scientific design is always a controlled experiment, and that Karl Popper’s idea of falsification of hypotheses is supposed to be the one true scientific method (as mentioned in the Arbesman article, to my chagrin).

In fact, there has been no major paradigm shift in biology since Darwin, despite many in physics. And unlike in physics, where it seems the next great theory might overturn everything we think, in biology it is clearer and clearer that the work that remains is to iron out the details, not find the next grand theory. Yes, evolutionary theory has been gradually refined, and we have largely rejected some spurious ideas such as widespread group selection. Yes, biology is still informed by theory, and sometimes a creative new theory can change a field. But the fields that get changed are narrower and narrower, as the high level theories become more strongly confirmed.

Likewise, controlled experiments are rare in fields such as ecology and evolution, where it is generally impossible to rewind history: most of our knowledge of these fields is based on observational data and on a gradual accumulation of evidence rather than clear yes-no, up-or-down tests of hypotheses.

We never know what tomorrow will bring, and in a formal sense no aspect of scientific theory can be considered to be 100% proven beyond all doubt. (Perhaps God is faking our data for reasons unknown to us, after all…). But it seems less and less likely that there are any impending scientific revolutions outside physics. So, while we can certainly measure scientific progress as an increase in our predictive power (regardless of the underlying truth), it is also probable that in many domains we are not too far off from an accurate description of “truth,” even if we will never know for certain.

Thoughts on science, truth, and prediction? Leave them in the comments and I’ll respond…

Public access to scientific findings: a mixed blessing

[Image: CIHR vs. Elsevier]

The Washington Post is reporting that the Obama administration is ordering greater public access to publicly funded research. This sounds good, but what exactly does it mean, and why wasn’t it done already? In fact, most research funded by the US government through grants to independent researchers (such as me, at universities) is already required to be made available to the public within 12 months of publication. The same is true for health research here in Canada, via the Canadian Institutes of Health Research (CIHR).

Here’s how the system works now: As a researcher, I want to publish my results. This helps other researchers (and the public) learn what I’ve done, and it helps my career. I do so by submitting an article to a peer-reviewed journal, of which there are many. I make the choice based on how important I think my findings are (and thus how ambitious I can be in shooting for a top journal, which will be more widely read and look better on my CV). The journal may or may not accept my article, and will probably require extensive revisions. The process generally takes about six months from initial submission, assuming the first journal accepts, and it requires a lot of time for revising, formatting, submitting, proofreading, and so forth. And that excludes the time needed to prepare the original article.

Once my article is published, it is available online at the publisher’s website or in print at any library that subscribes to the journal. Most people now read online. Anyone can access the article’s abstract and some other data about it, but for most journals, only individuals affiliated with a subscribing university can access the full article. Anyone not affiliated can buy access for around $30, an exorbitant fee. Nearly all of the costs of publication are covered by university library subscriptions. Publishers make enormous profits because they have a monopoly on each journal, and those who pay for the journals are not those who publish in them, removing the normal market forces from the system.

Some journals, however, are open-access, which means anyone anywhere can read the article online for free, but the researcher must generally pay a publication fee of $1,000–$2,000. Most journals that are not open-access offer an option to publish individual articles as open-access, for a higher fee.

Since the public pays for research through government grants, the argument goes that the public should have access to the results. As a consequence, many funding agencies have started requiring that their grantees make their results public within a year of journal publication. This sounds great, and it is definitely an admirable goal, but the transition is difficult, as illustrated by my recent experience.

I had just had an article accepted in Mechanisms of Ageing and Development (MAD), a good journal in my field and the best one for this particular article. It is a rather technical article unlikely to be of interest to people outside academia, but important as a base for my future research and as a description of a method many others are likely to want to use. I had just become aware of CIHR’s new policy that results must be publicly available within 12 months, and I thought this was a good thing. So I started looking into what it meant for my upcoming publication.

MAD is owned by Elsevier, the largest academic publishing house. I started poking around their website, and I found that I could pay $3000 (from my grant money, not my pocket) to publish open access. It was not clear whether I had the right to make my article public in some way without paying this. (It’s a lot of money – enough to support a summer research student who could complete a project and learn a lot, a much better use in this case than granting theoretical access to a public unlikely to read the article.) There was some information about different levels of access – “green,” “gold,” “white,” etc. It seemed that MAD was green, which meant that I was allowed to submit a non-formatted version of the article to a public archive after 12 months. However, another webpage said that Elsevier had no agreement with CIHR, which meant that I could not submit to an archive. It wasn’t clear.

So I wrote to both CIHR and Elsevier asking for clarification. Both sent me links to various websites that were unhelpful or that I’d already seen. I finally talked by phone to someone at CIHR, who was very friendly and helpful but did not seem to know what to do. After repeated conversations, she suggested that I try to negotiate the publication contract each time I publish an article – an obvious non-starter, since I have no leverage in such a negotiation, and since the journal functionaries on the other side will just want to get rid of my questions as quickly as possible.

Elsevier in fact responded in just this way. I was never able to get more out of them than an effective “just pay the $3000 and stop bothering us.” Eventually I gave up and paid, because I had little hope of learning anything more with additional time, and I’d already put a lot of time into the question. But things won’t be any clearer next time around…

I strongly believe that all research findings should be publicly available, and that there should not be private publishing houses such as Elsevier. Financing for publication should come from an international consortium of governments that agree to pay in proportion to the amount of research their countries produce. It could even be that every researcher is required to pay open-access fees for every article from grant money, with these fees slightly elevated to allow individuals without grant money to publish for free.

But we are a long way from this system. In the meantime, MAD was the journal I needed to publish my article in. Elsevier had a monopoly on MAD, and extorted $3000 from me (much more than the real publication cost, I’m sure). CIHR’s rules, implemented with good intentions but without the necessary agreements with publishers, meant that $3000 of their money (and mine) got wasted. And we are not, in practice, much closer to a true open-access world. In the meantime, I have one more expense and more administrative headaches that will slow down my scientific productivity…

One other quirk reported by the Washington Post: the Obama administration will require government researchers to publish their findings. This also sounds good, but it is not workable in practice. Publishing takes an enormous amount of time and effort, and given time constraints I publish perhaps one tenth of what I could or would like to. I have lots of old results sitting in folders that I don’t have time to write up. Forcing me to publish all my results (even the ones I deem less important) would slow down my productivity in generating the truly important ones.

So, while well-intentioned, the new rules seem as likely to do harm as to do good….

Idea: A better way to allocate grants for research

There is a paradox that most granting agencies face when they fund research: Researchers have a strong incentive to ask for as much money as possible even if they don’t really need it. One reason is that researchers are evaluated by both their institutions and their peers based on the number of grant dollars obtained (rather than on the quality of their research or their productivity). Another reason is that it’s always better to have a little too much money in the budget than not enough – we might as well estimate high.

These incentives are at work all the time in subtle ways. They mean that researchers ask for more money than they need. There is thus a slow inflation in the accepted “cost of doing business” and in the amounts viewed as normal. There is no reason at all for researchers to try to get by on a shoestring. Perhaps most insidiously, there is an incentive for young researchers to choose research topics that require more money for the same amount of productivity. Several years ago at the University of Michigan, I heard that one of the deans wanted to hire fewer faculty in ecology and evolution and more in lab-based biology because the latter brought in larger grants.* I often see colleagues in developing countries finding cheap and inventive ways to do the same things we do here, just on a fraction of our budgets.

In addition, most granting agencies demand detailed budgets for submitted projects. These budgets require lots of time to prepare, lots of time for peers to review, and are mostly made up anyway – people say in the budget what they need to say to get funded, knowing they can use the money as they wish later. I have even been told by CIHR representatives that I should fudge my budget because it is the best way to get funded!

One reason these problems persist is that in most cases successful grants are chosen with regard to quality but without regard to value for money. In other words, if there is $1 million left in the budget and a $1 million grant is ranked just above ten grants of $100,000 each, the $1 million grant will be funded and none of the next ten will be – even if the ten projects, being nearly as highly ranked, together offer something like 9.5 times the scientific value of the $1 million project for the same total cost.
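A toy calculation makes the problem concrete. This is a minimal sketch with invented numbers and a deliberately naive rank-then-budget rule; no agency’s actual procedure is implied:

```python
# Naive allocation: walk down the ranking, fund anything that still fits
# in the remaining budget. Numbers are invented for illustration.
budget = 1_000_000
# (rank, cost) pairs; rank 1 is the best-reviewed proposal.
proposals = [(1, 1_000_000)] + [(rank, 100_000) for rank in range(2, 12)]

funded = []
for rank, cost in sorted(proposals):
    if cost <= budget:
        funded.append((rank, cost))
        budget -= cost

print(funded)  # [(1, 1000000)] -- the ten $100k proposals go unfunded
```

The single top-ranked proposal absorbs the entire budget, even though the ten runners-up would have delivered far more total science for the same money.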

I’ve just devised a system that I think elegantly solves this problem and substantially reduces paperwork. Here is a summary (a code sketch of the allocation rule follows the list):

1) For competitions, the funding agency decides a priori on a number of funding tiers (say 7, ranging from $80,000/year to $1 million/year). Each tier has a number of slots available based on the total budget, with many slots at the lowest tiers and very few at the highest.

2) Each researcher does not fill out a budget, but simply selects the tier he or she wishes to be considered for. If the grant is awarded, this is the amount that will be received – no more, no less.

3) All applications are ranked on quality, without regard to the budget tier chosen.

4) Funded applications are chosen starting at the lowest tier, selecting the best candidates who applied for that tier until its allotted slots are filled.

5) At each higher tier, applications not chosen at lower tiers are retained in the competition, and will still be funded at their requested (lower) amount if they rank sufficiently high among the higher-tier applications. In other words, at each tier the highest-ranked applications are funded, whether they applied at that tier or were passed over at a lower one.

6) Any remaining funds (due, for example, to lower-tier candidates winning higher-tier slots) are given to the remaining candidates in order of rank, irrespective of tier.
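To make steps 4–6 concrete, here is a minimal sketch in Python. The function name, data layout, tier amounts, and example applicants are all my own illustrative assumptions, not a worked-out policy; step 6 (redistributing leftover funds) is noted but omitted for brevity.

```python
def allocate(applications, tiers):
    """Fund applications by rank within ascending budget tiers.

    applications: dicts with 'name', 'rank' (1 = best), and 'tier'
                  (the dollar amount the applicant selected).
    tiers: dict mapping tier amount -> number of slots available.
    """
    funded = []
    pool = sorted(applications, key=lambda a: a["rank"])  # best first
    for amount in sorted(tiers):  # lowest tier first (step 4)
        # Step 5: applicants at this tier compete alongside anyone from
        # a lower tier who has not yet been funded; winners are always
        # paid at their own requested tier.
        eligible = [a for a in pool if a["tier"] <= amount]
        for winner in eligible[: tiers[amount]]:
            funded.append(winner)
            pool.remove(winner)
    return funded, pool  # pool = unfunded; step 6 would draw on leftovers

apps = [
    {"name": "A", "rank": 1, "tier": 80_000},
    {"name": "B", "rank": 2, "tier": 1_000_000},
    {"name": "C", "rank": 3, "tier": 80_000},
    {"name": "D", "rank": 4, "tier": 250_000},
    {"name": "E", "rank": 5, "tier": 80_000},
]
funded, unfunded = allocate(apps, {80_000: 2, 250_000: 1, 1_000_000: 1})
print([a["name"] for a in funded])  # ['A', 'C', 'D', 'B']; E is unfunded
```

Note how applicant E, who asked for the cheapest tier, stays in the running at every higher tier but ultimately loses on rank – frugality improves your odds without guaranteeing funding.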


This system has a number of clear advantages:

1) It removes a huge amount of work from both writing and reviewing grants – everything related to budgets, which are usually fudged anyway.

2) It gives researchers a strong incentive to conduct their research on the smallest budget possible, since the chances of obtaining a grant are much better for those with smaller budgets.

3) It provides a way to fund larger-budget projects or programs when justified, while letting competitive pressure push researchers to decide for themselves whether a large budget is truly necessary.

4) It gives young researchers an incentive to choose relatively cost-effective fields of study, improving the cost efficiency of research for decades to come.

5) It rewards overall candidate quality and frugality simultaneously, so that budget size never becomes a perverse incentive to do anything other than the best research.


What do you think? Would it work? What are the challenges?


*Caveat: I heard this story second-hand years ago, and may have misremembered something in the details. But there was unquestionably a perception that expensive research was prioritized by someone higher up.