Overview
Research impact measurement is a controversial topic, as there are no agreed standards. The most common measurement tools are research metrics: statistical analyses of published academic research that aim to provide quantitative indicators for measuring and monitoring the impact of research output. They are widely applied in academia, but often inappropriately; used with care, however, they can be very useful for research assessment.
Quality is a multi-faceted phenomenon that encompasses elements such as originality, robustness and informativeness, and consideration of outcomes and benefits may involve many different stakeholders. When new knowledge could bring about harm, risk, cost or other negative effects, the concept of research performance is not only multi-dimensional and ambiguous, but also charged with conflict (OECD, 2010).

This guide introduces a range of key metrics and the tools widely used to calculate them. In these tools, the publications included in the analysis are mainly journal articles.
Reference
OECD. (2010). Performance-based Funding for Public Research in Tertiary Education Institutions: Workshop Proceedings. OECD Publishing. https://doi.org/10.1787/9789264094611-en
Research metrics are commonly used for purposes such as:
- benchmarking the research performance of individuals, groups and institutions
- informing funding, hiring and promotion decisions
- helping authors identify suitable journals in which to publish
We have to be very cautious about treating citations as a fundamental indicator of research impact, as citation counts are not synonymous with “quality”. A single metric, or a small set of measures, indicates only a certain aspect of research performance and should not be taken to represent the overall impact of a researcher or research output.
We should also understand that citation counts and researcher-level metrics such as the h-index are inherently biased. For example:
- citation practices and typical citation rates vary widely between disciplines
- the h-index favors researchers with longer careers, since it can only grow over time
- counts depend on the coverage of the citation database used and can be inflated by self-citation
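To make the h-index concrete: it is the largest number h such that a researcher has h publications cited at least h times each. The following minimal Python sketch (an illustration, not part of any of the tools covered in this guide) computes it from a hypothetical list of citation counts.

```python
def h_index(citation_counts):
    """Return the largest h such that at least h papers
    have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank  # this paper still meets the threshold
        else:
            break  # counts are sorted, so no later paper can qualify
    return h

# Hypothetical record: five papers cited 10, 8, 5, 4 and 3 times
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note how the result says nothing about which papers are cited or why: two very different careers can share the same h-index, which is one reason a single number should never stand in for a portfolio.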
In The Leiden Manifesto for Research Metrics (Hicks et al., 2015), ten principles are advocated to guide research evaluation:
- Quantitative evaluation should support qualitative, expert assessment.
- Measure performance against the research missions of the institution, group or researcher.
- Protect excellence in locally relevant research.
- Keep data collection and analytical processes open, transparent and simple.
- Allow those evaluated to verify data and analysis.
- Account for variation by field in publication and citation practices (a normalization sketch follows this list).
- Base assessment of individual researchers on a qualitative judgement of their portfolio.
- Avoid misplaced concreteness and false precision.
- Recognize the systemic effects of assessment and indicators.
- Scrutinize indicators regularly and update them.
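The sixth principle is what field-normalized indicators, such as the Field-Weighted Citation Impact reported in Scopus, try to operationalize: a paper's citation count is divided by the expected (average) citations of papers of the same field, publication year and document type. Below is a minimal sketch of that ratio using hypothetical figures; in practice the baseline comes from a citation database.

```python
def field_normalized_impact(citations, field_baseline):
    """Ratio of a paper's citations to the average citations of
    comparable papers (same field, year and document type).
    1.0 means cited at the world average; 2.0 means twice it."""
    if field_baseline <= 0:
        raise ValueError("field baseline must be positive")
    return citations / field_baseline

# Hypothetical figures: 12 citations against a field average of 4.8
print(field_normalized_impact(12, 4.8))  # prints 2.5
```

Even a normalized score inherits the biases of the underlying database, so it complements, rather than replaces, expert judgment.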
Conclusion
As stated in The Metric Tide: Independent Review of the Role of Metrics in Research Assessment and Management (Wilsdon et al., 2015, p. 139):
“Quantitative evaluation should support – but not supplant – qualitative, expert assessment.”