
Predatory Journals and Publishers

This guide provides information on issues in predatory publishing.

Using journal metrics to evaluate journals


Journal metrics are used to measure, compare, and often rank research and scholarly publications; they are also referred to as journal rankings, journal importance, or a journal's impact. Scholars and researchers use them to compare scholarly periodicals.

The Journal Impact Factor (JIF), conceived in the 1950s and available through Thomson Reuters' Journal Citation Reports (now published by Clarivate), is the original citation impact metric. Other, freely available journal metrics created more recently include CiteScore, Eigenfactor, Google Scholar Metrics, SCImago Journal & Country Rank (SJR), and Source Normalized Impact per Paper (SNIP).

Various Journal Metrics

In 1972, Eugene Garfield designed the JIF in order to rank journals according to the extent to which their articles are cited. The JIF measures the total number of citations in a given year to all content the journal published in the two previous years; this number is then divided by the total number of citable items the journal published in the same time span. Nowadays the JIF is calculated by Thomson Reuters for over 10,000 journals and is published in its yearly Journal Citation Reports.
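For illustration, the calculation runs as follows (the citation and item counts below are hypothetical):

```latex
\mathrm{JIF}_{2015}
  = \frac{\text{citations in 2015 to items published in 2013--2014}}
         {\text{citable items published in 2013--2014}}
  = \frac{600}{200}
  = 3.0
```

A journal whose last two years of articles drew 600 citations across 200 citable items would thus have a 2015 impact factor of 3.0.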

Journal Impact Factor refers to the frequency with which an average article in a journal is cited in a given year. A higher impact factor indicates that a journal is more influential in its field of study.

Predatory journals often publish fake impact factors, or do not list one at all. You can check a journal title's impact factor via Journal Citation Reports, available through Web of Science. 


The Eigenfactor

The Eigenfactor, developed by Carl and Ted Bergstrom, measures citations over five years. It takes the quality of citations into account by giving more weight to citations from highly cited journals, and it counters excessive self-citation by not counting self-citations at all.
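As a rough sketch of how such a weighting can be computed (a simplified illustration with hypothetical numbers, not the official Eigenfactor algorithm, which adds further refinements such as a PageRank-style teleportation term):

```python
import numpy as np

# Hypothetical citation matrix for three journals:
# C[i, j] = citations from journal j to journal i over the 5-year window.
C = np.array([
    [10.0, 4.0, 1.0],
    [ 3.0, 8.0, 2.0],
    [ 1.0, 2.0, 6.0],
])

np.fill_diagonal(C, 0.0)   # self-citations are not counted at all
C = C / C.sum(axis=0)      # normalize each journal's outgoing citations

# Power iteration: the weights converge to the leading eigenvector of C,
# so a citation from a highly weighted journal counts for more.
w = np.full(3, 1.0 / 3.0)
for _ in range(100):
    w = C @ w
print(w / w.sum())         # relative influence of each journal
```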


Source: Exposing the predators. Methods to stop predatory journals

The Source Normalized Impact per Paper (SNIP) - uses Scopus
In 2010, Henk Moed introduced the Source Normalized Impact per Paper (SNIP). This metric takes into account that some fields attract more authors, and thus more citations, than others: it compares the journal's citation impact to the citation potential of its field, which makes it possible to directly compare any journal to another. SNIP uses a time frame of three years and only counts citations from and to peer-reviewed articles. Like the SJR, SNIP can be calculated more than once a year, which makes it less vulnerable to editorial manipulation.
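Schematically, SNIP divides a journal's raw impact per paper (RIP) by the citation potential of its field, so the same raw impact scores higher in a field where citations are scarce. The numbers below are hypothetical, and Moed's actual definition contains further detail:

```latex
\mathrm{SNIP} = \frac{\mathrm{RIP}}{\text{field citation potential}}:
\qquad \frac{2.0}{1.25} = 1.6 \;\text{(citation-rich field)},
\qquad \frac{2.0}{0.8} = 2.5 \;\text{(citation-poor field)}
```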
 

SCImago Journal Rank (SJR) - uses Scopus

Another well-known metric is the SCImago Journal Rank (SJR). It uses the database of Scopus, which indexes about 20,000 journals from all academic fields. Like SNIP, the SJR can be calculated more than once a year, which makes it less vulnerable to editorial manipulation.
Like many other metrics, the SJR is based on the Google PageRank algorithm. It measures citations over three years and gives more weight to citations from highly cited journals. It also takes into account the thematic closeness of the citing and the cited journals. The SJR ignores self-citations above 33% and is especially advantageous to new journals.
The SCImago Journal & Country Rank is a publicly available portal that includes the journal and country scientific indicators developed from the information contained in the Scopus® database (Elsevier B.V.). These indicators can be used to assess and analyze scientific domains; journals and country rankings can be compared or analyzed separately.
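The self-citation cap can be pictured with a small sketch (hypothetical numbers; the real SJR computation is considerably more involved):

```python
import numpy as np

def cap_self_citations(C, cap=0.33):
    """Limit each journal's self-citations to `cap` of its outgoing references.
    C[i, j] = citations from journal j to journal i."""
    C = np.asarray(C, dtype=float).copy()
    for j in range(C.shape[1]):
        limit = cap * C[:, j].sum()   # cap relative to the journal's total references
        C[j, j] = min(C[j, j], limit)
    return C

# Journal 0 cites itself 50 times out of its 58 outgoing references; the cap
# trims this to 33% before any PageRank-style weighting is applied.
print(cap_self_citations([[50, 2], [8, 12]]))
```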
 

The Journal h-index - uses Scopus

The Journal h-index is usually calculated from Scopus, but other databases can be used as well. It is the largest number h of publications in a journal that have each been cited at least h times; it can be calculated over one or more years and carries information on the number of highly cited articles. The h-index depends on a journal's age, its visibility, and the degree to which its articles can be cited.
The Journal h-index can only be used for comparisons within academic fields. It can be gamed by authors through self-citations and by editors through publishing more review articles. It is disadvantageous to new journals, does not account for the fact that the citation patterns of highly valued journals can change, and provides no information on the number of exceedingly cited items, which makes it hard to compare the prestige of journals with similar values.
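A minimal sketch of the calculation (the citation counts are hypothetical):

```python
def journal_h_index(citation_counts):
    """Largest h such that the journal has h papers each cited at least h times."""
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers cited 10, 6, 5, 3 and 1 times: three papers have at least
# three citations each, but there are no four papers with four citations,
# so h = 3.
print(journal_h_index([10, 6, 5, 3, 1]))   # 3
```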
 

Google Scholar Metrics - uses the Journal h-index, together with some variants

The Journal h-index, together with some variants, is also used by Google Scholar Metrics, the metric system Google introduced in 2012.
Google's database consists of about 40,000 journals, conference proceedings, collections, and series, drawn from repositories like arXiv, in several languages and from many places and disciplines.
Google metrics are accessible to everyone and thus more transparent than those of Thomson Reuters and Scopus.
However, Google Scholar Metrics only index journals that have published at least 100 papers over the last five years, which could be disadvantageous for new journals. The metrics are also prone to data manipulation, as shown by the example of computer scientist Ike Antkare. Although he does not exist, Antkare is listed by citation metrics that use Google Scholar's database as one of the most influential scientists, thanks to the self-citations within his computer-generated articles.
Lastly, Google Scholar Metrics do not exclude self-citations and give preference to publications with a high number of articles.
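Google's best-known variant, the h5-index, is the journal h-index restricted to articles published in the last five complete calendar years. Reusing the sketch from the Journal h-index section above (counts again hypothetical):

```python
# Citations to the articles a journal published in the last five years:
recent_counts = [12, 9, 7, 4, 2, 2, 1]
print(journal_h_index(recent_counts))   # h5 = 4
```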
 

Misleading and fake metrics

Check the following websites for lists of misleading and fake metrics:

1. https://beallslist.net/misleading-metrics/

2. https://predatoryjournals.com/metrics/