Overview
Research impact measurement is a controversial topic, as there are no agreed standards. The most common measurement tools are research metrics: statistical analyses of published academic research that aim to provide quantitative indicators for measuring and monitoring the impact of research output. Research metrics are widely applied in academia, but often inappropriately; used with care, however, they can be very useful for research assessment.
This guide introduces a range of key metrics and the tools widely used to calculate them. It is important to note that the publications these tools include in their analyses are mainly journal articles.
Research Metrics
There are two main types of research metrics:
Citation Metrics (Bibliometrics)
Bibliometrics measure attention within the academic community by counting the number of times a publication has been cited by other researchers. The analysis can be conducted at the article, author, journal, or institutional level.
Alternative Metrics (Altmetrics)
Altmetrics aggregate the attention a publication receives outside the scholarly publishing community, for example on social media platforms such as X (formerly Twitter) and Facebook, in blog posts, and on other platforms. They count the number of times a publication has been viewed, downloaded, exported to a citation manager, mentioned, or shared on social media or in news outlets. Altmetrics are often used to indicate the immediate impact of a piece of work and can serve as an early signal of possible future citations.
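To illustrate what aggregating attention can look like in practice, here is a minimal sketch in Python. The source names and weights are hypothetical, chosen only for illustration; real providers such as Altmetric use their own proprietary source lists and weightings.

```python
# Minimal illustrative sketch -- NOT the scoring used by any real
# altmetrics provider. It combines attention counts from several
# hypothetical sources into a single weighted indicator.

HYPOTHETICAL_WEIGHTS = {
    "news": 8.0,
    "blog": 5.0,
    "x_post": 1.0,      # X (formerly Twitter)
    "facebook": 0.25,
    "download": 0.05,
}

def attention_score(mentions: dict[str, int]) -> float:
    """Weighted sum of attention counts across sources."""
    return sum(HYPOTHETICAL_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

# 2 news stories, 40 posts on X, 300 downloads:
# 2*8.0 + 40*1.0 + 300*0.05 = 71.0
print(attention_score({"news": 2, "x_post": 40, "download": 300}))
```

The choice of weights determines what "attention" means, which is one reason altmetric scores from different providers are not directly comparable.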
Research metrics are commonly used for purposes such as:
- Benchmarking the performance of researchers, departments, or institutions.
- Supporting applications for funding, promotion, and tenure.
- Identifying influential journals and potential collaborators.
- Informing decisions about where to publish.
However, we must be cautious about treating citation as a fundamental indicator of research impact, as it is not synonymous with “quality”. A single metric, or a small set of measures, reflects only one aspect of research performance and should not be taken to represent the overall impact of a researcher or a research output.
We should also understand that citation counts and researcher-level metrics such as the h-index are inherently biased (a minimal sketch of how the h-index is computed follows this list). For example:
- Citation practices vary widely between fields, so raw counts are not comparable across disciplines.
- The h-index favours researchers with longer careers and larger bodies of published work.
- Citations may be critical or negative, yet still increase the count.
- Self-citation and uneven database coverage can inflate or deflate the figures.
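To make the h-index concrete, here is a minimal sketch of how it is computed from per-paper citation counts; the function name and example data are illustrative only. A researcher has index h if h of their papers each have at least h citations.

```python
# Minimal sketch of the h-index calculation (illustrative, not any
# database vendor's implementation).

def h_index(citation_counts: list[int]) -> int:
    """Return the h-index for a list of per-paper citation counts."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # the rank-th paper still has >= rank citations
            h = rank
        else:
            break
    return h

# Five modestly cited papers yield h = 4 ...
print(h_index([10, 8, 5, 4, 3]))   # 4
# ... while two very highly cited papers yield only h = 2,
# illustrating how the metric caps short careers regardless of quality.
print(h_index([100, 100, 2]))      # 2
```

Note how the value depends on the number of papers as much as on how well any one of them is cited, which is one source of the career-length bias noted above.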
The Leiden Manifesto for Research Metrics (Hicks et al., 2015) advocates ten principles to guide research evaluation:
- Quantitative evaluation should support qualitative, expert assessment.
- Measure performance against the research missions of the institution, group or researcher.
- Protect excellence in locally relevant research.
- Keep data collection and analytical processes open, transparent and simple.
- Allow those evaluated to verify data and analysis.
- Account for variation by field in publication and citation practices.
- Base assessment of individual researchers on a qualitative judgement of their portfolio.
- Avoid misplaced concreteness and false precision.
- Recognize the systemic effects of assessment and indicators.
- Scrutinize indicators regularly and update them.
Conclusion
As stated in The Metric Tide: Independent Review of the Role of Metrics in Research Assessment and Management (Wilsdon, 2015, p. 139):
“Quantitative evaluation should support – but not supplant – qualitative, expert assessment.”