Metrics provide a valuable method to quantify the attention that publications attract. However, it is important to acknowledge the limitations of metrics and use them responsibly.
There has been increased focus in recent years on the responsible use of metrics. The consensus of these discussions is that metrics should support, not supplant, expert judgement, and that a fairer, more balanced approach should be taken, based on a combination of qualitative and quantitative data.
The best decisions are taken by combining robust statistics with sensitivity to the aim and nature of the research that is evaluated. Both quantitative and qualitative evidence are needed; each is objective in its own way. Decision-making about science must be based on high-quality processes that are informed by the highest quality data. (Leiden Manifesto)
The following international frameworks are the most widely recognised advocates for the responsible use of metrics for assessment.
Signed in 2012, the San Francisco Declaration on Research Assessment (DORA) recognizes the need to improve the ways in which the outputs of scholarly research are evaluated.
Several themes run through the declaration, including:
Published in Nature in 2015, the Leiden Manifesto includes the following 10 principles:
Published in 2015, The Metric Tide argues that responsible metric use can be understood in terms of five dimensions:
CoARA was established in 2022, and aims to build on the work of DORA and the Leiden Manifesto to establish a common direction for research assessment reform.
The agreement is based on several shared principles, 10 commitments, and 1- and 5-year timeframes for reform.
|Use more than one metric
No single metric should be used to indicate the impact of a publication. It is important to use a range of metrics to provide a complete picture of the impact of research.
|Citation count does not indicate quality
For example, a low-quality journal article may attract more citations because of flaws in its research, whilst a high-quality article goes uncited because it is not published in a journal indexed by a citation database. Publications may also be cited because they include a well-known author rather than for the quality of the research reported.
|Multiple databases and tools
Metrics can be sourced from a range of databases and tools, and there will be overlap between sources. Variations in metrics between databases can occur because of differences in the number and variety of publication sources and the time periods indexed in each database. It is important that researchers check across a range of databases and use those that best reflect their needs.
Citation-based metrics are time-dependent and should only be used to compare publications of a similar age.
Self-citations can heavily influence the number of citations a publication attracts.
Any metric calculated as an average is susceptible to being skewed by outliers (either very highly cited or rarely cited publications).
Some types of publication attract more citations than others. For example, review articles typically attract more citations than articles reporting original research.
Research in some disciplines tends to be cited more slowly, or does not attract high citation counts, making it difficult to measure and compare impact across disciplines.
Most of the publications indexed in citation-based databases are journal articles. While articles are the primary publication type for science-based disciplines, the lack of indexing of other publication types, such as books, book chapters, and conference papers, discriminates against the humanities and some social science disciplines.
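The sensitivity of averages to outliers noted above can be illustrated with a short, hypothetical calculation (the citation counts below are invented for illustration):

```python
import statistics

# Hypothetical citation counts for ten articles in one venue:
# nine modestly cited papers and one very highly cited outlier.
citations = [2, 3, 1, 4, 0, 2, 5, 3, 1, 180]

mean = statistics.mean(citations)      # pulled sharply upwards by the outlier
median = statistics.median(citations)  # largely unaffected by the outlier

print(f"mean = {mean}, median = {median}")
# → mean = 20.1, median = 2.5
```

A single outlier raises the average nearly tenfold above the typical value, which is why average-based metrics should be read alongside distribution-aware measures such as the median or percentile-based indicators.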
|Journal level metrics
Journal-level metrics, such as the Journal Impact Factor or SJR, are calculated from the citation performance of the journal as a whole and should not be used to measure the quality of an individual article.
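As an illustration of why such metrics describe the journal rather than any one paper, the Journal Impact Factor is, in essence, a two-year average across every article in the journal (shown here in simplified form):

```latex
\[
\text{JIF}_{2024} =
\frac{\text{citations received in 2024 to items published in 2022--2023}}
     {\text{citable items published in 2022--2023}}
\]
```

Because this is an average over the whole journal, a handful of highly cited articles can raise the figure for every paper published there, regardless of the quality of any individual article.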
|Record your data collection details
When presenting analysis based on data from a metrics source, include the date the data was collected and any filters applied.
Over 21,000 individuals and organisations from 156 countries have signed DORA. Several Australian institutions (including the NHMRC) are signatories of DORA and are committed to ensuring that a responsible approach to metrics is used in grant applications and evaluations.
DORA includes specific recommendations for funding agencies:
Be explicit about the criteria used in evaluating the scientific productivity of grant applicants and clearly highlight, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.
For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.
The National Health and Medical Research Council (NHMRC) is a signatory of the DORA principles. The 2018 Guide to NHMRC Peer Review states:
Peer reviewers should take into account their expert knowledge of their field of research, as well as the citation and publication practices of that field, when assessing the publication component of an applicant’s track record. Track record assessment should take into account the overall impact, quality, and contribution to the field of all of the published journal articles from the grant applicant, not just the standing of the journal in which those articles are published. It is not appropriate to use publication metrics such as Journal Impact Factors or the previous Excellence in Research for Australia (ERA) Ranked Journal List when assessing applications.
The NHMRC recommends that reviewers: