COST and MSCA: part of a study to improve the quality of the proposal evaluation system


As two of the European Union's most important research funding programmes, COST and the Marie Skłodowska-Curie Actions (MSCA) have been the subject of a scientific study entitled “Does reviewing experience reduce disagreement in proposals evaluation? Insights from Marie Skłodowska-Curie and COST Actions”. Published in April 2021, the article analysed more than 50 000 proposals submitted to the two prestigious programmes and explored the reasons for reviewers’ disagreements when evaluating a proposal. The study, covering the period between 2014 and 2018, was conducted by five researchers and collaborators from different European universities and research-funding organisations:

  • Marco Seeber from the Department of Political Science and Management of the University of Agder (Kristiansand, Norway)
  • Jef Vlegels from the Department of Sociology of Ghent University (Belgium)
  • Elwin Reimink from COST (Brussels, Belgium)
  • Ana Marusic from the Department of Research in Biomedicine and Health of the University of Split, School of Medicine (Split, Croatia)
  • David G. Pina from the Research Executive Agency of the European Commission (Brussels, Belgium)

Zooming in on the reliability of evaluations at proposal stage, the study is the result of an academic collaboration with COST and the Research Executive Agency (REA). Here, we present the study’s final conclusions.

Ensuring reliability and minimising biases in evaluations: how experience improves quality

The study explores why reviewers sometimes strongly disagree when scoring the same research proposal, and aims to address this gap by examining which reviewer characteristics most affect disagreement. “The evaluation of scientific proposals is a complex and fascinating process, with significant margins of improvement”, comments Prof. Marco Seeber, co-author of the article. “The collaboration between funding agencies and scholars is essential to improve our understanding and its functioning, to realize the principles of transparency and strive for continuous improvement that underpin European institutions”.

To ensure the quality of evaluations and minimise biases, evaluation processes typically involve multiple evaluators for the same proposal. However, these evaluators (or reviewers) can disagree, sometimes even strongly, in their assessment of a proposal. While some disagreement is natural, severe disagreement can be harmful because it threatens the ability to identify the best proposals, thus reducing the quality of the evaluation process. The article explores the hypothesis that experience in evaluating proposals in a given programme – namely having evaluated many proposals in the past and evaluating many proposals in the same call – improves the accuracy of evaluation and in turn reduces disagreement.

The study assumes that through repeated evaluations of a programme’s proposals, reviewers improve their knowledge of three points of reference when judging and scoring a proposal:

  • the objectives and evaluation criteria of a specific programme
  • the quality of proposals in that specific context
  • how other reviewers judge and score the same proposals.

Indeed, the study finds that reliability increases with the number of proposals a reviewer has evaluated in the past, while for the number of proposals evaluated within a single call, the positive effect becomes notable above 10–15 proposals.

The COST evaluation system: a tailor-designed system to ensure quality

For COST, the study serves as a critical analysis of its open call process, called SESA (Submission, Evaluation, Selection and Approval). The SESA system, introduced in 2015, has been tailor-designed for the unique bottom-up and interdisciplinary nature of COST Actions. In a typical SESA call, more than 400 proposals for COST Actions are submitted, of which around 40 are selected (in Horizon Europe, the number of selected proposals will be raised substantially). A COST Action must typically contribute to the advancement and development of scientific, technological, economic, cultural or societal knowledge in Europe. The next collection date for Open Call Action proposals is 29 October 2021 at 12:00 noon (CET).

The quality of the evaluation is essential to the excellence and ultimate impact of the resulting Actions – that is why COST continuously monitors the evaluation process. This article contributes insights and perspectives on the SESA evaluation process, helping COST move forward with research evaluation under the Horizon Europe Framework Programme. “Quality of service, including evaluation processes, is of primary importance to COST. This study allows us to reflect on our work and to improve where necessary and possible”, concludes Dr Elwin Reimink, Data and Impact Analysis Officer at COST. “We would also like to extend our recognition to the REA for their cooperation. COST believes that collaboration always leads to better results, and cooperation between funding agencies can be a fruitful method for achieving better results for the research community.”