Research Intelligence Services

We provide intelligence that can inform you of the academic impact and influence of your research, using publication data, citation analysis and altmetrics. This can help with choosing where to publish and finding collaborators, and it plays a supporting role in funding bids, benchmarking current practice and evidencing your research strengths. We use Scopus and SciVal data to provide this information and can also deliver training sessions on these tools.

Our services can help with:

  • Deciding on a Journal to Publish In

    Research intelligence can help you decide where to publish research papers by identifying relevant journals that you can then shortlist for consideration; it is recommended that you complete this process yourself rather than rely on whitelists. ThinkCheckSubmit has useful checklists that can help you decide: http://thinkchecksubmit.org/check/ In summary, you can begin searching for journals that cover your research area in a number of ways:

    Publications You Cite: the journals you cite regularly in your research.

    Researchers You Know: where the researchers you collaborate with or are prominent in your field publish.

    Topic Search: searching databases such as Scopus or Google Scholar using keywords and filters, or using Topics and Topic Clusters in SciVal.

    From the shortlist of journals found in this way, you can then begin to evaluate them. Any evaluation of a journal should first consider:

    Audience and Reputation: whether the journal is widely read and well regarded in your field.

    Access and Indexing: whether the journal publishes under the open access model you require and is indexed in the databases where you want your work to appear.

    https://service.elsevier.com/app/answers/detail/a_id/11274/c/10547/supporthub/scopus/

    Peer Review: whether the form of peer review and criteria used meets your needs.

    Editorial Board: whether the editorial board contains leading researchers you are interested in submitting work to.

    Journal Metrics can help with final decisions on where to publish once all of the above has been considered. These include:

    Journal Impact Factor (JIF): This takes the citations received in one year to documents published in that journal over the previous two years, divided by the number of documents published in those two years. Documents published earlier in the yearly cycle have more time to accrue citations, and the metric favours review articles, which tend to be highly cited (a worked sketch of this and the CiteScore calculation follows the CiteScore entry below).

    http://help.incites.clarivate.com/incitesLiveJCR/glossaryAZgroup/g8/4346-TRS.html

    https://www.metrics-toolkit.org/journal-impact-factor/

    CiteScore: This is very similar to the JIF, but the window is extended from two years to three; it therefore retains the JIF's limitations.

    https://service.elsevier.com/app/answers/detail/a_id/14880/c/10547/supporthub/scopus/
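
    Both the JIF and CiteScore are simple ratios of citations received to documents published over a window, so a short sketch can make the arithmetic concrete. This is a minimal illustration in Python; all counts below are invented, not real journal data:

        # Hypothetical illustration of the JIF and CiteScore arithmetic.
        # Real values come from Journal Citation Reports (JIF) and
        # Scopus (CiteScore); the counts here are invented.

        def citation_ratio(citations_received, documents_published):
            """Citations in the census year to items published in the
            window, divided by the number of items in that window."""
            return citations_received / documents_published

        # JIF-style two-year window: citations received in one year to
        # papers published in the previous two years.
        jif = citation_ratio(citations_received=450, documents_published=180)

        # CiteScore-style window: the same ratio over three years.
        citescore = citation_ratio(citations_received=700, documents_published=300)

        print(f"JIF-like value: {jif:.2f}")              # 2.50
        print(f"CiteScore-like value: {citescore:.2f}")  # 2.33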

    Source Normalized Impact per Paper (SNIP): This takes the journal’s citation count per paper and divides it by the expected number of citations for a paper in that field. In subjects that generally receive fewer citations, a single citation therefore carries more weight under this metric.

    https://service.elsevier.com/app/answers/detail/a_id/14884/supporthub/scopus/
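
    The normalisation idea behind SNIP can be sketched in the same way. The real calculation (maintained by CWTS) defines the field and the expected citation rate far more carefully; the figures here are invented:

        # Simplified illustration of the SNIP idea with invented figures.

        def snip_like(citations_per_paper, expected_citations_in_field):
            """Journal's citations per paper divided by the citation rate
            expected for a paper in its field."""
            return citations_per_paper / expected_citations_in_field

        # The same raw rate reads differently once the field is considered:
        print(snip_like(2.0, expected_citations_in_field=1.0))  # 2.0 in a low-citation field
        print(snip_like(2.0, expected_citations_in_field=4.0))  # 0.5 in a high-citation field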

    SCImago Journal Rank (SJR): Here citations are ‘weighted’ according to the prestige of the citing journal, and the average number of these weighted citations received in a year is divided by the number of documents published in the previous three years. Journal rankings say very little about the quality of the individual papers published in them.

    https://service.elsevier.com/app/answers/detail/a_id/14883/c/10547/supporthub/scopus/

    https://www.scimagojr.com/
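
    The SJR weighting can also be sketched, with the caveat that the real calculation assigns prestige weights iteratively, in a PageRank-like fashion; here the weights of the citing journals are simply assumed:

        # Heavily simplified illustration of the SJR idea; the prestige
        # weights and counts below are invented.

        # (citing journal's prestige weight, citations it gave)
        weighted_citations = [
            (1.8, 40),   # citations from a high-prestige journal count for more
            (0.6, 100),  # citations from a low-prestige journal count for less
        ]
        docs_previous_3_years = 120  # documents published in the window

        total_weighted = sum(w * n for w, n in weighted_citations)
        sjr_like = total_weighted / docs_previous_3_years
        print(f"SJR-like value: {sjr_like:.2f}")  # (1.8*40 + 0.6*100) / 120 = 1.10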

  • Evidencing your Research Strengths

    Citation data can be used as evidence in career development and funding bids, and it plays a partial role in supporting the REF and institutional KPIs. This usage of bibliometrics has a supporting role only and should always be applied responsibly, as set out in the responsible metrics section below. You could use some of the metrics detailed here, or other methods, to demonstrate your research strengths.

    Author Metrics can help showcase the strengths of an individual or a group of individuals.

    Document Count: Depending on the context, a simple count of the number of items published might be appropriate.

    h-index: This can be applied to the full scholarly output of an author; an h-index of h means that h of the collection’s articles have been cited at least h times each. It prevents a single highly cited document, or a large number of poorly cited documents, from skewing the result.

    https://service.elsevier.com/app/answers/detail/a_id/11214/supporthub/

    https://www.metrics-toolkit.org/h-index/
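
    The definition translates directly into a short calculation. A minimal sketch, using invented citation counts:

        def h_index(citation_counts):
            """Largest h such that h papers have at least h citations each."""
            h = 0
            for rank, citations in enumerate(sorted(citation_counts, reverse=True), start=1):
                if citations >= rank:
                    h = rank
                else:
                    break
            return h

        # One runaway paper cannot inflate the index, and a long tail of
        # barely cited papers cannot either (all counts invented):
        print(h_index([500, 4, 3, 1]))         # 3, despite the 500-citation paper
        print(h_index([2, 2, 2, 1, 1, 1, 1]))  # 2, despite the long tail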

    Document Metrics can help showcase the strengths of a paper or a group of papers.

    Citation Count: Depending on the context, a simple count of the number of citations since publication might be appropriate.

    https://www.metrics-toolkit.org/citations-articles/

    Field-Weighted Citation Impact (FWCI): This takes the number of citations received by a document and divides it by the expected number of citations for a document of the same type and age in the same discipline. An FWCI of 1 is the world average; the value can be affected by a document being published early in the yearly cycle and so having longer to accrue citations.

    https://service.elsevier.com/app/answers/detail/a_id/14894/supporthub/

    https://www.metrics-toolkit.org/field-normalized-citation-impact/
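
    The ratio itself is straightforward; a minimal sketch with invented numbers (Scopus derives the expected value from documents of the same type, age and discipline):

        def fwci_like(citations_received, expected_citations):
            """Citations received divided by the citations expected for a
            comparable document; 1.0 represents the world average."""
            return citations_received / expected_citations

        # The same raw count reads very differently once the field's
        # expected citation rate is taken into account (invented figures):
        print(fwci_like(12, expected_citations=6))   # 2.0: twice the average
        print(fwci_like(12, expected_citations=24))  # 0.5: half the average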

    Percentiles: This is useful for a group of publications and shows the number of documents that were in the top x% of the most cited publications worldwide; the most commonly used measure is the top 10%.

    https://www.metrics-toolkit.org/highly-cited-papers-and-highly-cited-labels/
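
    A minimal sketch of a top-10% style indicator, using invented citation distributions for the worldwide benchmark and for the group being assessed:

        def top_percent_threshold(world_counts, percent=10):
            """Citation count needed to sit in the top `percent`% worldwide."""
            ranked = sorted(world_counts, reverse=True)
            cutoff_rank = max(1, round(len(ranked) * percent / 100))
            return ranked[cutoff_rank - 1]

        world = [0, 0, 1, 1, 2, 3, 4, 5, 8, 40]  # invented worldwide distribution
        group = [1, 6, 9, 45]                    # invented group of publications

        threshold = top_percent_threshold(world, percent=10)
        in_top_10 = [c for c in group if c >= threshold]
        print(f"Top-10% threshold: {threshold} citations")
        print(f"{len(in_top_10)} of {len(group)} outputs in the top 10%")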

    Altmetrics: This group of metrics looks at alternative sources of attention for your research papers, including news articles, tweets, views and downloads, and may help support you in certain contexts.

    https://plumanalytics.com/

    https://www.metrics-toolkit.org/altmetric-attention-score/

  • Finding Researchers to Collaborate With and Finding New and Emerging Areas of Research

    A key use of citations and publication data is in identifying researchers within a particular field with a view to collaboration, for example, to fill a gap in expertise in a research group or to find an appropriate expert for a peer review process.

    Explore current collaboration: Intelligence on current collaboration at the university is available and can be filtered in SciVal by country, sector and number of authors; top collaborating institutions, along with lists of co-authors, can then be seen.

    SciVal subjects: SciVal allows filtering of data by subject classification; you can choose either the All Science Journal Classification (ASJC) or the Times Higher Education (THE) classification. The ASJC has 27 top-level and 334 lower-level subject areas. Lists of authors in each subject area can then be seen.

    https://service.elsevier.com/app/answers/detail/a_id/15181/supporthub/

    https://www.timeshighereducation.com/sites/default/files/the_2019_world_university_rankings_methodology_pwc.pdf

    Topics and Topic Clusters: These are very different from the subject classifications and use collections of documents with common themes to produce around 96,000 ‘topics of prominence’. The aim is to show the growing or declining momentum of a Topic via an analysis of citation networks; the Topics themselves are further grouped into 1,500 Topic Clusters.

    https://www.elsevier.com/solutions/scival/releases/topic-prominence-in-science

    https://www.brighttalk.com/webcast/13819/281741

    Lists of authors in each Topic area can be seen; however, the main aim of Topics is to help researchers find new and emerging areas of research. Although citation data always works in retrospect, a method of calculation that looks at ‘emergence potential’ was developed: it identifies citation clusters that not only have high growth rates but are also new in co-citation, to provide a prediction of the future potential of a Topic.

  • Increasing Research Visibility and Attracting Citations

    There are actions you can take to maximise the visibility of your research and so increase your chances of being cited, although none of them is a substitute for quality.

    Use ORCID and other researcher IDs: ORCID is a universal researcher identifier that is integrated into many prominent research databases, publishers’ metadata and institutional systems. Registering for an ORCID iD will ensure you are distinguished from other authors who share your name, and that all versions of your name (initials and so on) are associated with you. Other researcher IDs include the Scopus Author ID and Web of Science’s ResearcherID.

    Publish open access: Publishing your research under a green or gold open access model will remove barriers to accessing your papers and increase their citability. You can get more advice on this process from the open access team.

    Share research data: Similarly, sharing the data from your research can increase awareness of your work. You can get more advice on this process from the research data management team.

    Keywords: Using keywords and phrases effectively, by choosing them well and placing them in the title and repeatedly in the text, will help your papers rank higher in search engine results. This is part of a wider process called search engine optimisation.

    Social media: Using academic social media platforms, such as ResearchGate or Academia.edu, can raise the profile of your work. Similarly, general social media platforms such as Twitter and LinkedIn can also help in this area.

  • Responsible Metrics

    Using research intelligence well means accepting that metrics tell only part of the story; several caveats are widely accepted.

    Disciplinary variation: Citation behaviour and trends differ widely between disciplines, and using ‘field weighted’ metrics is often not enough to allow for this. For example, humanities researchers tend to publish books, so many of their research articles remain uncited. Full consideration of the disciplinary context is advised.

    Database differences: The same metric can take different values in different databases; for example, your h-index in Scopus may differ from your h-index in Google Scholar. This is because each database differs in the publications it indexes, so the source of the data is an important consideration.

    Metric limitations: Every metric has limitations because no calculation methodology can be perfect. For example, under the SNIP metric a citation carries more weight in disciplines that generally receive fewer citations. Considering what a particular metric actually measures is therefore also important.

    Bias: Bias in citation practices can take many forms; for example, in medicine, positive rather than neutral or negative correlations are more likely to be cited. An awareness of the behaviours that lead to citing or not citing a paper can provide many insights.

    DORA: the San Francisco Declaration on Research Assessment, which recommends against using journal-based metrics such as the JIF to assess individual researchers or papers.

    Leiden Manifesto: ten principles for the responsible use of research metrics.