Responsible Use of Metrics
Caution
Research metrics are commonly used for purposes such as:
- Locating the most important research being done in a specific field
- Identifying the top journals in a field for publication
- Identifying an author’s research impact in their field(s), frequently for promotion and tenure purposes
- Evaluating and benchmarking research outputs to support decision-making by the University administration
We should be cautious about treating citation as a fundamental indicator of research impact, because citation is not synonymous with quality. A single metric, or a small set of measures, reflects only one aspect of research performance and should not be taken to represent the overall impact of a researcher or research output.
We should also recognize that citation counts and researcher-level metrics such as the h-index are inherently biased. For example:
- Age bias – the h-index provides senior researchers with a clear advantage over junior researchers, even after they stop being active in research (Aubert Bonn & Bouter, 2023).
- Discipline bias – papers in Medicine typically receive far more citations than papers in the Social Sciences, Mathematics, or the Humanities (Mingers & Leydesdorff, 2015).
- Gender bias – women in research teams are significantly less likely than men to be credited with authorship (Ross et al., 2022), and men show a sharper increase in the rate at which they publish in a journal soon after becoming its editor (self-publication behaviour) (Liu et al., 2023).
- Geographic bias – abstracts attributed to high-income country (HIC) sources are judged more relevant, and are more likely to be recommended to a colleague, than identical abstracts attributed to low- or middle-income country (LMIC) sources (Skopec et al., 2020).
- Status bias – the same paper is less likely to be rejected in peer review when presented as the work of a prominent researcher than when presented as the work of a little-known author (Huber et al., 2022).
Best practice
In The Leiden Manifesto for Research Metrics (Hicks et al., 2015), ten principles are advocated to guide research evaluation:
- Quantitative evaluation should support qualitative, expert assessment.
- Measure performance against the research missions of the institution, group or researcher.
- Protect excellence in locally relevant research.
- Keep data collection and analytical processes open, transparent and simple.
- Allow those evaluated to verify data and analysis.
- Account for variation by field in publication and citation practices.
- Base assessment of individual researchers on a qualitative judgement of their portfolio.
- Avoid misplaced concreteness and false precision.
- Recognize the systemic effects of assessment and indicators.
- Scrutinize indicators regularly and update them.
Conclusion
As stated in The Metric Tide: Independent Review of the Role of Metrics in Research Assessment and Management (Wilsdon, 2015, p. 139):
Quantitative evaluation should support – but not supplant – qualitative, expert assessment.
Further readings
References
- Aubert Bonn, N., & Bouter, L. (2023). Research Assessments Should Recognize Responsible Research Practices. Narrative Review of a Lively Debate and Promising Developments. In E. Valdés & J. A. Lecaros (Eds.), Handbook of Bioethical Decisions. Volume II: Scientific Integrity and Institutional Ethics (pp. 441-472). Springer International Publishing. https://doi.org/10.1007/978-3-031-29455-6_27
- Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429-431. https://doi.org/10.1038/520429a
- Huber, J., Inoua, S., Kerschbamer, R., König-Kersting, C., Palan, S., & Smith, V. L. (2022). Nobel and novice: Author prominence affects peer review. Proceedings of the National Academy of Sciences, 119(41), e2205779119. https://doi.org/10.1073/pnas.2205779119
- Liu, F., Holme, P., Chiesa, M., AlShebli, B., & Rahwan, T. (2023). Gender inequality and self-publication are common among academic editors. Nature Human Behaviour, 7(3), 353-364. https://doi.org/10.1038/s41562-022-01498-1
- Mingers, J., & Leydesdorff, L. (2015). A review of theory and practice in scientometrics. European Journal of Operational Research, 246(1), 1-19. https://doi.org/10.1016/j.ejor.2015.04.002
- Ross, M. B., Glennon, B. M., Murciano-Goroff, R., Berkes, E. G., Weinberg, B. A., & Lane, J. I. (2022). Women are credited less in science than men. Nature, 608(7921), 135-145. https://doi.org/10.1038/s41586-022-04966-w
- San Francisco Declaration on Research Assessment. (2012). https://sfdora.org/about-dora/
- Skopec, M., Issa, H., Reed, J., & Harris, M. (2020). The role of geographic bias in knowledge diffusion: a systematic review and narrative synthesis. Research Integrity and Peer Review, 5(1), 2. https://doi.org/10.1186/s41073-019-0088-0
- Wilsdon, J. (2015). The Metric Tide: Independent Review of the Role of Metrics in Research Assessment and Management. SAGE Publications. https://doi.org/10.4135/9781473978782