Statement on the Responsible Use of Research Metrics

Lancaster University is a research-intensive institution, supporting multi- and interdisciplinary collaboration and producing real-world impact through high-quality research. At Lancaster we see the value of quantitative metrics in supporting decision-making, contextualised with expert qualitative assessment. Quantitative research metrics provide valuable insight into the research landscape and are of key importance to national exercises, including their provision as supplementary evidence for the Research Excellence Framework (REF) for select Units of Assessment. Metrics are also used in recruitment, promotion, and world rankings.

The policy context around preventing the misuse of research metrics includes the Leiden Manifesto [1], the San Francisco Declaration on Research Assessment (DORA) [2] and the Agreement on Reforming Research Assessment [3], developed as part of the Coalition for Advancing Research Assessment (CoARA). The University has been a signatory of DORA since 2019, signalling our commitment to the responsible use of research metrics. DORA does not prohibit the use of metrics; rather, it asks that metrics be used appropriately, balancing a range of qualitative and quantitative indicators to assess research on its own merits. Many research funders, publishers, institutions and individuals are DORA signatories.

Scope

This statement applies to all researchers and research support staff, at all career levels. The principles focus on traditional metrics and the quantitative elements of alternative research metrics, ensuring these are implemented responsibly. This is not intended as complete guidance on individual quantitative metric use cases. The principles should be used as institutional guidance, with Departments leading on specialist guidance that complements the overall principles set by the institution.

Principles of Responsible Metrics

Our nine principles are underpinned by elements of the Leiden Manifesto, DORA and institutional priorities:

1. Recognising disciplinary differences

There is no one-size-fits-all approach to research evaluation. We recognise variation in authorship norms, publication and citation practices across disciplines. Research metrics are not used in isolation; field-weighted or normalised measures are encouraged, as they allow cross-disciplinary comparisons and help contextualise other metrics. Raw counts (such as citation count) are not comparable across disciplines.
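To illustrate why normalisation matters: a field-normalised citation score divides an output's raw citation count by the average citations of comparable outputs (same field, publication year and document type). A minimal sketch, using hypothetical baseline figures (real baselines come from the bibliometric database in use):

```python
def field_normalised_citations(citations, field_baseline_mean):
    """Raw citation count divided by the mean citations of comparable
    outputs (same field, year and document type). A value above 1.0
    means the output is cited more than the field average."""
    return citations / field_baseline_mean

# Hypothetical figures: the same raw count of 20 citations is well
# above average in a low-citation field, but below average in a
# high-citation field.
print(field_normalised_citations(20, 5))   # → 4.0
print(field_normalised_citations(20, 40))  # → 0.5
```

The same raw count yields very different normalised scores, which is why raw citation counts should never be compared across disciplines.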

2. Recognising diversity

As an inclusive research community, we recognise that diversity of people and career pathways can impact research metrics. Factors may include gender bias, a career break, parental leave and/or language bias (locally relevant research may not be in the English language). The high cost of publishing in some journals presents another barrier, which compounds structural inequalities. We believe in citation justice and encourage researchers to give credit where credit is due.

3. Recognising the importance of all research outputs, including software and datasets

Lancaster University is committed to Open Research, championing the idea that all types of scholarly outputs should be shared as freely as possible and as early as possible in the research process across all disciplines, both within and beyond academia. This includes data, software, exhibitions, policy reports and other outputs. (See: Principles of Open Research).

4. Basket-of-metrics approach

Metrics support, not replace, expert judgement. No single metric should be used in isolation for research assessment purposes. A basket-of-metrics approach is taken to mitigate the limitations of individual metrics. We require qualitative contextualisation (discipline norms, career stage) alongside multiple complementary quantitative metrics that add clear value to research assessment. Adding an excessive number of quantitative metrics is not good practice, as it increases the risk of misplaced concreteness and false precision.

5. Appropriate understanding of the uses and limitations of metrics

Metrics do not measure research quality; they measure reach and impact, and should not be used as a proxy for quality. A core understanding of the limitations of data sources, including gaps in coverage, language bias and name ambiguity, is essential.

The h-index is a particularly prominent author-level metric that combines the productivity and citation impact of an individual scholar. However, the metric depends on the underlying database and fails to account for career stage and individual circumstances. It should be used cautiously.
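The definition itself is simple: the h-index is the largest h such that h of an author's outputs have each received at least h citations. A minimal sketch (illustrative only; real values depend on the coverage of the underlying database):

```python
def h_index(citations):
    """Largest h such that h outputs each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Two very different citation profiles can share the same h-index,
# which is one reason the metric should never stand alone.
print(h_index([10, 8, 5, 4, 3]))    # → 4
print(h_index([100, 90, 4, 4, 0]))  # → 4
```

Note how the second profile's two highly cited outputs are invisible to the h-index, illustrating why a basket-of-metrics approach with qualitative context is required.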

6. Journal-based metrics not to be used for appointment and promotion considerations

Research should be assessed on its own merits. Journal Impact Factor (JIF) should not be used as a proxy for quality, as it is a measure of the journal and does not involve any review of individual articles. Journal prestige is not a substitute for research quality, and appointment and promotion decisions should not be based on publication venue. A basket of journal metrics may be noted, but researchers who have chosen an appropriate journal will not be penalised on the basis of journal metrics.

We acknowledge that international standards and expectations for academic appointment and promotion differ across countries and disciplines, as implementation of research assessment reform is happening at different rates. Appropriate guidance may be sought directly from Departments to manage this implementation gap and ensure awareness of contextual differences.

7. Alternative metrics (Altmetrics)

Complementing traditional research metrics, altmetrics capture online attention such as social media, news, policy and patent mentions. These are measures of attention, not quality. We support the use of altmetrics to evidence impact and engagement. We subscribe to the Altmetric Explorer Platform to enable our researchers to explore non-traditional metrics. This is a particularly good option for Early Career Researchers as they build their research portfolio.

8. Transparency about appointment and promotion criteria

Criteria should be published in advance, with clarity on what is expected of researchers and how metrics may be used. Qualitative and quantitative indicators should be combined to provide the fullest view for researcher assessment. Panels provide balanced, expert judgement, with metrics supplied as a supplement, not a substitute.

9. Data and analysis should be verified by the individual(s) being analysed

Individuals should have the opportunity to review and verify any data used in their assessment, prior to any decision-making. This ensures data accuracy (outputs correctly attributed), promotes transparency and fairness, and guards against data manipulation. The data source and extraction date should be recorded to account for changes in databases over time. Maintaining researcher profiles and ORCID iDs, and integrating these platforms, is a key action in ensuring data accuracy, increasing confidence in metrics and reducing the administrative burden of collating promotion cases.

Last reviewed: February 2026

Next review: February 2027