An Open Letter to Canada’s Health Minister

by Alan Cohen

July 8, 2016


Hon. Jane Philpott, 

Minister of Health

House of Commons

Ottawa, Ontario, Canada K1A 0A6

Dear Dr. Philpott,

I was very pleased to see that concerns of the medical research community are being expressed, and that you are convening a panel to explore next steps. There are many serious issues to be addressed. However, while I thank the writers of the original open letter for getting this process going, and while agreeing with the general sentiment of frustration expressed in that letter, I (and, I suspect, many other members of the medical research community) do not agree with the specifics, and in particular with the suggestion to return to a system of face-to-face peer review. While I cannot claim to speak for others, I suspect that many signatories of the letter agreed with the frustration expressed more than with the solution proposed.

Broadly, I would raise four points in response to the open letter.

First, most of the current frustration and problems are not a result of the online peer-review system, as suggested in the open letter. I have participated in this new system three times as a reviewer and three times as an applicant across various competitions. The system is far from perfect, but so was the old system. Even before the reforms, all researchers I know complained stridently about the arbitrariness of the review and funding process. The uproar following the current Project competition is mostly due to the backlog of unfunded grants created by cancelled competitions and money redirected to the Foundation Scheme. That backlog produced a huge volume of applications for this competition, which in turn created severe problems in (a) managing the competition and (b) recruiting enough reviewers. The old review system would probably have made things worse, because all of those reviewers would have had to commit to travel to Ottawa. The backlog is a fait accompli, so the task now is to find solutions going forward, not simply to complain that things have worked out poorly.

Second, the old face-to-face system was not much better than online peer review in terms of fairness and quality, but was much more burdensome in terms of cost, time, and effort. The sad fact of the matter is that it is impossible to have any truly fair system, because reviewing is subjective. For example, I have run simulations comparing the disagreements in reviewer assessments for some panels I have served on with random assignment of grant ranks. Actual reviewers were in as much disagreement about quality as if ranks had been randomly assigned to applications. Participants in face-to-face committees have the impression of consensus and fairness, but given the discrepancy in scores from one competition to the next, this is likely due largely to the social dynamics of the committee giving a false sense of accomplishment rather than a true indication of review quality. Given the impossibility of fair or objective rankings, we should favor a reviewing system that minimizes the time, effort, and costs of reviewing and grant-writing. This was exactly the objective of the online reviewing system. I think this system is still superior to the old one, particularly when costs are considered, despite the need for some fine-tuning. Another advantage of the new system is that it does not force grants to fit into pre-defined categories, a major problem with the old system for those of us breaking boundaries and pushing in new directions.
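The kind of simulation described above can be sketched simply. The script below is a hypothetical illustration, not the analysis I ran on actual panel data: it compares the rank agreement between two reviewers who each see true grant quality through heavy noise with the agreement between two purely random rankings. All parameters (number of grants, noise level) are assumptions chosen for the sketch.

```python
import random

def spearman(rank_a, rank_b):
    """Spearman rank correlation between two rankings of n items."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def rank(scores):
    """Convert scores to ranks (0 = best score)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

random.seed(42)
n_grants = 100
# Assumed latent "true quality" of each application.
quality = [random.gauss(0, 1) for _ in range(n_grants)]

def reviewer_ranks(noise_sd):
    # Each reviewer sees true quality plus independent noise.
    return rank([q + random.gauss(0, noise_sd) for q in quality])

# With heavy noise, two reviewers' rankings correlate only weakly...
noisy = spearman(reviewer_ranks(3.0), reviewer_ranks(3.0))
# ...approaching the baseline agreement of two random rankings.
randomised = spearman(rank([random.random() for _ in range(n_grants)]),
                      rank([random.random() for _ in range(n_grants)]))
print(f"noisy reviewers agreement: {noisy:.2f}")
print(f"random rankings agreement: {randomised:.2f}")
```

When the noise dominates the quality signal, the two correlations become hard to distinguish, which is the point of the comparison: observed reviewer disagreement can look statistically like random assignment.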

[Figure: Grant evaluation burden versus quality trade-off. The problem of diminishing marginal returns on time and effort spent reviewing grants: no system functions very well and there is always discrepancy of opinion, so at a certain point we are better off admitting that funding is largely arbitrary than investing more time and money pretending it is not.]

Third, the core problem is the burden on researchers of looking for funding, not the fairness of the reviewing system, which can never be perfect. Too much pressure to obtain funding creates bad science: science designed to attain metrics of success (metrics that are far from perfect) rather than good science. Most researchers succeed in getting funded eventually, and the lower the success rates, the more work (and politicking) is needed to get funded. CIHR's new Foundation Scheme represents a step in the right direction, but the reform was not bold enough and did not finance enough researchers. Roughly 80% of researchers should get funding through such a scheme. The flip side is that a mechanism is needed to control the number of researchers that can apply; otherwise universities will hire more and more until success rates are low again. Such a mechanism could include decreased funding to universities that have low success rates, forcing the universities to maintain minimum candidate quality during recruitment.

Fourth, stability of the funding system is paramount, as correctly pointed out in the letter. Alain Beaudet's metaphor of changing an airplane's engine in mid-flight is the right one: if many top researchers lose their funding and thus their labs as a result of changes in funding, they will not necessarily come back in 2-3 years when funding is available again. But much of that damage has already been done. The challenge is thus to mitigate this damage where possible, and to ensure stability going forward. In order to achieve this, I recommend a simple but dramatic solution: distribute the money allocated for the next several Project competitions equally among the losers of the current competition (expected to be ~90% of applicants) and other current holders of CIHR funds with expiring grants, with no further application required. The principle is that everyone gets a little bit, and no one gets a lot. This would allow all or most labs to survive a few years while the wrinkles in the system get ironed out and a definitive system is put in place. While this goes against the instinct of many researchers to always want to evaluate quality before funding, the fact is that most of these researchers would eventually get funded anyway.
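The arithmetic of the "everyone gets a little" proposal can be sketched as follows. Every figure below is a hypothetical placeholder, not an actual CIHR number; only the ~90% unfunded share comes from the letter itself.

```python
# Back-of-envelope sketch of equal distribution among unfunded applicants
# and holders of expiring grants. All inputs are assumed values.
budget = 350_000_000          # assumed total for the next several competitions
unfunded_share = 0.90         # letter's estimate: ~90% of applicants unfunded
applicants = 3000             # assumed size of the applicant pool
expiring_holders = 500        # assumed holders of expiring CIHR grants

recipients = int(applicants * unfunded_share) + expiring_holders
per_lab = budget / recipients
print(f"{recipients} labs receive ~${per_lab:,.0f} each")
```

Under these assumptions each lab would receive a modest bridge amount, enough to keep a lab running for a few years, which is the point of the proposal: breadth of survival rather than concentration of funds.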

As you can see, while frustration is widespread, there is no consensus among researchers about the solutions. There were also very good reasons for CIHR to implement the reforms, since the old system was broken as well, and it is not clear that we are worse off now, or that CIHR has performed too badly in this, despite a number of specific criticisms I could share. For these reasons, I strongly encourage you to ensure that a diversity of perspectives is represented in the panel you convene, not just the loudest critical voices. This should include young researchers, senior researchers, basic scientists, health services researchers, researchers whose subjects fall between fields, researchers who signed the open letter, researchers who did not, and experts from within CIHR itself, who will have a critical perspective that the researchers themselves do not.


Alan A. Cohen

Professeur agrégé/Associate Professor

Département de médecine de famille/Department of Family Medicine

Université de Sherbrooke/University of Sherbrooke

Centre de recherche sur le vieillissement

Centre de recherche du CHUS

3001 12e Ave N

Sherbrooke, QC J1H 5N4

819-821-8000 x72590