
The integrated impact indicator revisited 

We propose the I3* indicator as a non-parametric alternative to the journal impact factor (JIF) and h-index. We apply I3* to more than 10,000 journals. The results can be compared with other journal metrics. I3* is a promising variant within the general scheme of non-parametric I3 indicators introduced previously: I3* provides a single metric which correlates with both impact in terms of citations (c) and output in terms of publications (p). We argue for weighting using four percentile classes: the top-1% and top-10% as excellence indicators; the top-50% and bottom-50% as shock indicators. Like the h-index, which also incorporates both c and p, I3*-values are size-dependent; however, division of I3* by the number of publications (I3*/N) provides a size-independent indicator which correlates strongly with the 2- and 5-year journal impact factors (JIF2 and JIF5). Unlike the h-index, I3* correlates significantly with both the total number of citations and publications. The values of I3* and I3*/N can be statistically tested against the expectation or against one another using chi-squared tests or effect sizes. A template (in Excel) is provided online for relevant tests.
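As an aside on the testing mentioned at the end of the abstract: the published template is in Excel, so the following Python sketch is only our illustration of how a journal's counts in four mutually exclusive percentile classes could be tested against an expectation with a chi-squared test. All numbers and class proportions below are hypothetical.

```python
from scipy.stats import chisquare

# Four mutually exclusive percentile classes:
# top-1%, next 9% (top-10% minus top-1%), next 40%, bottom 50%.
# Expected counts for a journal with N = 200 papers under the null
# hypothesis that its papers follow the reference distribution.
N = 200
expected = [0.01 * N, 0.09 * N, 0.40 * N, 0.50 * N]  # [2, 18, 80, 100]
observed = [5, 25, 75, 95]                           # hypothetical counts

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-squared = {stat:.2f}, p = {p:.4f}")
```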

Citations create links between publications; but to relate citations to publications as two different things, one needs a model (for example, an equation). The journal impact factor (JIF) indexes only one aspect of this relationship: citation impact. The h-index counts the papers with at least h citations; one can also count papers with h² or h/2 citations (Egghe 2008). This paper is based on a different and, in our opinion, more informative model: the Integrated Impact Indicator I3.
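For readers unfamiliar with the counting rule, here is a minimal sketch of the h-index in Python (the function name and example values are ours, not from the paper):

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers
    have at least h citations each (Hirsch 2005)."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank   # the rank-th most-cited paper has >= rank citations
        else:
            break
    return h

# Example: five papers cited [10, 8, 5, 4, 3] times -> h = 4
print(h_index([10, 8, 5, 4, 3]))
```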

The 2-year JIF was outlined by Garfield and Sher (1963; cf. Garfield 1955; Sher and Garfield 1965) at the time of establishing the Institute for Scientific Information (ISI). JIF2 is defined as the number of citations in the current year (t) to any of a journal’s publications of the two previous years (t − 1 and t − 2), divided by the number of citable items (substantive articles, reviews, and proceedings) in the same journal in these two previous years. Although not strictly a mathematical average, JIF2 provides a functional approximation of the mean early citation rate per citable item. A JIF2 of 2.5 implies that, on average, the citable items published 1 or 2 years ago were cited two and a half times. Other JIF variants are also available; for example, JIF5 covers a 5-year window.
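In symbols (our notation; the paper gives only the verbal definition above), writing c_t(y) for the citations received in year t by the journal's publications of year y, and n(y) for the number of citable items the journal published in year y:

```latex
\mathrm{JIF2}(t) = \frac{c_t(t-1) + c_t(t-2)}{n(t-1) + n(t-2)}
```

JIF5 is defined analogously over the five preceding publication years.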

The central problem that led Garfield (1972, 1979) to use the JIF when developing the Science Citation Index was the selection of journals for inclusion in this database. He argued that citation analysis provides an excellent source of information for evaluating journals. The choice of a 2-year time window was based on experiments with the Genetics Citation Index and the early Science Citation Index (Garfield 2003, at p. 364; Martyn and Gilchrist 1968). However, one possible disadvantage of the short term (2 years) could be that “the journal impact factors enter the picture when an individual’s most recent papers have not yet had time to be cited” (Garfield 2003, p. 365; cf. Archambault and Larivière 2009). Bio-medical fields have a fast-moving research front with a short citation cycle; JIF2 may be an appropriate measure for such fields but less so for others (Price 1970). In the 2007 edition of the Journal Citation Reports (reissued for this reason in 2009), a 5-year JIF (JIF5, considering five instead of only two publication years) was added to balance the focus on short-term citations in JIF2 (Jacsó 2009; cf. Frandsen and Rousseau 2005).

The skew of citation distributions poses another challenge for evaluation (Seglen 1992, 1997): the mean of a skewed distribution is less informative as a measure of central tendency than the median. To address this problem, McAllister et al. (1983, at p. 207) proposed the use of percentiles or percentile classes as non-parametric indicators (Narin 1987; see also Bornmann and Mutz 2011; Tijssen et al. 2002). Using this non-parametric approach, and on the basis of a list of criteria provided by Leydesdorff et al. (2011), two of us first developed the Integrated Impact Indicator (I3), based on the integration of the quantile values attributed to each element in a distribution (Leydesdorff and Bornmann 2011).
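To make the construction concrete, here is a minimal sketch of a percentile-class indicator of this kind in Python. The four classes follow the abstract, but the specific weights (1 for the bottom-50% up to 4 for the top-1%) and the tie-handling are our illustrative assumptions, not necessarily the weighting scheme used in the paper:

```python
import numpy as np

def i3(citations, weights=(1, 2, 3, 4)):
    """Illustrative percentile-class I3: each paper is weighted by the
    highest percentile class it reaches, and the weights are summed.
    Classes: bottom-50%, top-50%, top-10%, top-1%.
    The weights (1..4) are an assumption for illustration only."""
    c = np.asarray(citations, dtype=float)
    # percentile rank of each paper within the reference set
    ranks = np.array([(c <= x).mean() * 100 for x in c])
    total = 0.0
    for r in ranks:
        if r > 99:        # top-1%
            total += weights[3]
        elif r > 90:      # top-10%
            total += weights[2]
        elif r > 50:      # top-50%
            total += weights[1]
        else:             # bottom-50%
            total += weights[0]
    return total

papers = [0, 1, 1, 2, 3, 5, 8, 13, 40, 120]
print(i3(papers))                 # size-dependent value
print(i3(papers) / len(papers))   # size-independent value (I3/N)
```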

Because I3 is based on integration, it presents citation analysts with a construct fundamentally different from a methodology based on averages. An analogy from basic mechanics illustrates the difference between integrating and averaging: the impact of two colliding bodies is determined by their combined mass and velocity, not by the average of their velocities. Analogously, the gross impact of a journal as an entity can be understood as the combination of the volume and the citations of its contents (articles and other items), not as an average. Journals differ both in size (the number of published items) and in the skew and kurtosis of the distribution of citations across items. A useful and informative indicator for comparing journal influences should respond to both of these differences. A citation average cannot reflect the variation in both publications and citations, but an indicator based on integration can.

One route to indexing both performance and impact via a single number has been provided by the h-index (Hirsch 2005) and its variants (e.g., Bornmann et al. 2011a, b; Egghe 2008). However, the h-index has many drawbacks, not least mathematical inconsistency (Marchant 2009; Waltman and Van Eck 2012). Furthermore, Bornmann et al. (2008) showed that the h-index is mainly determined by the number of papers rather than by citation impact; in other words, the impact dimension of a publication set may not be properly measured by the h-index. One aspect that I3 shares with the h-index is that the focus is no longer on impact as an attribute but on the information production process (Egghe and Rousseau 1990; Ye et al. 2017). This approach could be applied not only to journals but also to other sets of documents with citations, such as the research portfolios of departments or universities. In this study, however, we focus on journal indicators.

