
Research Metrics

Overview

Research impact measurement is a controversial topic, as there are no agreed standards. The most common measurement tools are research metrics: statistical analyses of published academic research that aim to provide quantitative indicators for measuring and monitoring the impact of research outputs. Research metrics are widely applied in academia, but often inappropriately. Used appropriately, however, they can be very useful for research assessment.

 

This guide introduces a range of key metrics and the tools widely used to generate them. It is important to note that in these tools, the publications included in the analysis are mainly journal articles.

Research Metrics

There are two main types of research metrics:

 

Citation Metrics (Bibliometrics)

Bibliometrics measure attention by analyzing the number of times a publication has been cited by other researchers, reflecting its impact within the academic community. The analysis can be conducted at the article, author, journal, or institutional level.
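As a toy illustration of these levels of analysis, the Python sketch below tallies citations at the article and author levels. The records, names, and counts are invented for illustration only; real analyses draw on citation databases such as Scopus or Web of Science.

    from collections import defaultdict

    # Invented toy records: (article_id, author, journal, citations).
    records = [
        ("A1", "Chan", "Journal of Medicine", 120),
        ("A2", "Chan", "Journal of Medicine", 45),
        ("A3", "Lee", "Social Science Quarterly", 8),
    ]

    # Article level: citations per article.
    article_citations = {article: cites for article, _, _, cites in records}

    # Author level: total citations per author.
    author_citations = defaultdict(int)
    for _, author, _, cites in records:
        author_citations[author] += cites

    print(article_citations)       # {'A1': 120, 'A2': 45, 'A3': 8}
    print(dict(author_citations))  # {'Chan': 165, 'Lee': 8}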

 

Alternative Metrics (Altmetrics)

Altmetrics aggregate the attention a publication receives outside the scholarly publishing community, for example on social media platforms such as X (formerly Twitter) and Facebook, in blog posts, and on other platforms. They count the number of times a publication has been viewed, downloaded, exported to a citation manager, or mentioned and shared on social media or in news media. Altmetrics are often used to indicate the immediate impact of a piece of work and can serve as an early signal of possible future citations.

 

Responsible Use of Metrics

Caution

Research metrics are commonly used for purposes such as:

  • Locating the most important research being done in a specific field
  • Identifying the top journals in a field for publications
  • Identifying an author’s research impact in their field(s), often to support promotion and tenure decisions
  • Evaluating and benchmarking research outputs to support decision-making by the University administration

We have to be very cautious about treating citations as a fundamental indicator of research impact, as citation is not synonymous with “quality”. A single metric, or a small set of measures, indicates only certain aspects of research performance and should not be taken to represent the overall impact of a researcher or research output.

 

We should understand that citation counts and researcher-level metrics such as the h-index are inherently biased (a sketch of how the h-index is calculated follows this list). For example:

 

  • Age bias –
    • The h-index gives senior researchers a clear advantage over junior researchers, and the advantage persists even after senior researchers stop being active in research (Aubert Bonn & Bouter, 2023).
  • Discipline bias –
    • Papers in Medicine receive high numbers of citations, while papers in the Social Sciences, Mathematics, or the Humanities typically receive far fewer (Mingers & Leydesdorff, 2015).
  • Gender bias –
    • Women in research teams are significantly less likely than men to be credited with authorship (Ross et al., 2022), and men show a sharper increase in the rate at which they publish in a journal (self-publishing behaviour) soon after becoming its editor (Liu et al., 2023).
  • Geographic bias –
    • Abstracts attributed to sources in high-income countries (HICs) are considered more relevant and are more likely to be recommended to a colleague than the same abstracts attributed to sources in low- and middle-income countries (LMICs) (Skopec et al., 2020).
  • Status bias –
    • The same paper is less likely to be rejected in peer review when presented as the work of a prominent researcher than when presented as the work of a little-known author (Huber et al., 2022).
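To make the h-index concrete: a researcher has an h-index of h if h of their publications have each been cited at least h times. The following is a minimal Python sketch of this standard calculation, for illustration only; services such as Scopus and Web of Science compute it from their own citation data, so values for the same researcher differ across databases.

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        h = 0
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            if cites >= rank:
                h = rank  # the top `rank` papers all have >= rank citations
            else:
                break
        return h

    # Five papers cited 10, 8, 3, 2 and 1 times: three papers have
    # at least 3 citations each, so the h-index is 3.
    print(h_index([10, 8, 3, 2, 1]))  # -> 3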

 

 

Best practice

 

The Leiden Manifesto for research metrics (Hicks et al., 2015) advocates ten principles to guide research evaluation:

  1. Quantitative evaluation should support qualitative, expert assessment.
  2. Measure performance against the research missions of the institution, group or researcher.
  3. Protect excellence in locally relevant research.
  4. Keep data collection and analytical processes open, transparent and simple.
  5. Allow those evaluated to verify data and analysis.
  6. Account for variation by field in publication and citation practices.
  7. Base assessment of individual researchers on a qualitative judgement of their portfolio.
  8. Avoid misplaced concreteness and false precision.
  9. Recognize the systemic effects of assessment and indicators.
  10. Scrutinize indicators regularly and update them.

 

Conclusion

As stated in The Metric Tide: Independent Review of the Role of Metrics in Research Assessment and Management (Wilsdon, 2015, p. 139):

Quantitative evaluation should support – but not supplant – qualitative, expert assessment.

 

 


References

  • Aubert Bonn, N., & Bouter, L. (2023). Research Assessments Should Recognize Responsible Research Practices. Narrative Review of a Lively Debate and Promising Developments. In E. Valdés & J. A. Lecaros (Eds.), Handbook of Bioethical Decisions. Volume II: Scientific Integrity and Institutional Ethics (pp. 441-472). Springer International Publishing. https://doi.org/10.1007/978-3-031-29455-6_27
  • Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429-431. https://doi.org/10.1038/520429a
  • Huber, J., Inoua, S., Kerschbamer, R., König-Kersting, C., Palan, S., & Smith, V. L. (2022). Nobel and novice: Author prominence affects peer review. Proceedings of the National Academy of Sciences, 119(41), e2205779119. https://doi.org/10.1073/pnas.2205779119
  • Liu, F., Holme, P., Chiesa, M., AlShebli, B., & Rahwan, T. (2023). Gender inequality and self-publication are common among academic editors. Nature Human Behaviour, 7(3), 353-364. https://doi.org/10.1038/s41562-022-01498-1
  • Mingers, J., & Leydesdorff, L. (2015). A review of theory and practice in scientometrics. European Journal of Operational Research, 246(1), 1-19. https://doi.org/10.1016/j.ejor.2015.04.002
  • Ross, M. B., Glennon, B. M., Murciano-Goroff, R., Berkes, E. G., Weinberg, B. A., & Lane, J. I. (2022). Women are credited less in science than men. Nature, 608(7921), 135-145. https://doi.org/10.1038/s41586-022-04966-w
  • San Francisco Declaration on Research Assessment. (2012). https://sfdora.org/about-dora/
  • Skopec, M., Issa, H., Reed, J., & Harris, M. (2020). The role of geographic bias in knowledge diffusion: a systematic review and narrative synthesis. Research Integrity and Peer Review, 5(1), 2. https://doi.org/10.1186/s41073-019-0088-0
  • Wilsdon, J. (2015). The Metric Tide: Independent Review of the Role of Metrics in Research Assessment and Management. SAGE Publications. https://doi.org/10.4135/9781473978782