- © 2013 Mineralogical Society of America
We examine the nature and temporal trends of science journal publishing, and seek to explain why some journals have higher Journal Impact Factors (JIF) than others. The investigation has implications for how we assess the importance of scientific contributions. National Laboratories run by the U.S. Department of Energy, for example, compare JIF across disciplines, while some academic institutions look at JIF when evaluating publication records. Several results, long known in the medical and biological sciences and shown here to apply to the Earth sciences as well, are problematic for these policies. In particular, citations are distributed almost logarithmically in any given issue of a journal, so JIFs say nothing about the actual number of citations acquired by any given paper. In the area of mineralogy and petrology, for example, 25% of articles in a typical issue will capture >50% of all citations that accrue to that issue. For some issues the asymmetry is greater; we use such citation asymmetry to develop a classification of journals as “super elite,” “elite,” “influential,” and “minor.” We also find that JIFs are inherently larger for large disciplines, in part because as the size of a discipline increases (as measured by total papers published), the top journals benefit to a greater extent than other journals. For this and other reasons, JIF cannot be compared across disciplines. A heretofore unknown and disconcerting result is the striking growth in JIFs for commercially published journals compared to their society-published counterparts, a growth that coincides with the advent of electronic distribution models (e.g., bundling) instituted by commercial publishers at the beginning of the 21st century. Journals that only a decade ago had similar JIFs, and were viewed as scientifically equivalent, now have very different JIFs.
These contrasts may nucleate feedback loops (as authors look to higher-JIF journals in which to publish) that threaten the health of society-published journals. Our analysis, however, shows that in spite of growing contrasts in JIF, many society-published journals still provide greater value (JIF/cost) than their commercially published counterparts. While we acknowledge that citations and citation rates can be useful tools to compare scientific influence and importance, the results of this and other bibliometric studies lead us to conclude that in the evaluation of science and scientists, it is a grave error to substitute numerical values for human judgment. And if professional societies are to continue to play a significant role in science publication, it is incumbent upon scientists, now more than ever, to send their best works to society-published journals.
The Journal Impact Factor (JIF) is perhaps the most familiar of the indices by which journals are quantitatively evaluated. Nearly every journal reports its JIF on its home page, while mostly ignoring various other journal rankings (e.g., SCImago, Journal-Ranking.com), even when such rankings may support a more positive view of the journal. Eugene Garfield, the founder of the Institute for Scientific Information (ISI), helped develop the JIF to better determine which journals should be included in the citation database that ISI was then assembling in the early 1960s (Garfield 2006; Archambault and Larivière 2009). Since its development, however, the JIF has been used to evaluate journal performance and the publication records of scientists. Problems with the JIF have been noted for some time (Seglen 1997a, 1997b). But the use of JIF as an evaluation tool has continued undeterred, to the point where the editors of Nature, noting great disparities in citation rates between disciplines, and among articles within a given discipline in their own journal, have characterized JIF-based evaluations as “unhealthy” (Nature Publishing Group 2005).
This article attempts to address several issues, including why JIFs vary from journal to journal, both within and across disciplines (for convenience, we accept disciplinary divisions of SCImago), how citations are distributed within specific journal issues, how electronic distribution strategies currently affect JIF, and how such strategies impact society-published scientific journals in particular. In this context, we highlight American Mineralogist: An International Journal of Earth and Planetary Materials (Am Min), and other journals in the areas of mineralogy, petrology, and geochemistry, as well as some journals in closely allied and more distal fields.
Data and methods
To examine citation rates and Journal Impact Factors (JIFs) we do not use proprietary data, but only data freely available on the web. JIFs are taken from both Thomson-Reuters (http://thomsonreuters.com) and SCImago (http://www.scimagojr.com). The Thomson-Reuters JIFs for select journals have been compiled by Alex Speer at the Mineralogical Society of America office from information made public by their publishers since 1990, and his database extends to 2011; the SCImago database covers only 1999–2011, and as of July 2012, their 2011 JIFs are very low compared to historical values for all journals, and much lower than reported by Thomson-Reuters. Unlike Thomson-Reuters, though, SCImago reports, for each journal in each year, the total numbers of papers, references, citations, and self-citations, among other data. Thus we make use of SCImago data, but exclude their citation reports for the year 2011.
Distributions of citations within journal issues are also evaluated; citations are determined using ISI Web of Science, in June 2012. Am Min is compared to several journals in the Earth sciences (Appendix Table 1), and also to a few journals at the margins of this discipline. Specific Earth science journals are selected because they are very similar in intended scope to Am Min (e.g., Contributions to Mineralogy and Petrology, or Geochimica et Cosmochimica Acta), because they illustrate a range of JIF values (e.g., Earth and Planetary Science Letters, Petrology), or because they allow a comparison of society- and commercially published Earth science journals. Where citations for specific articles are tabulated, only the year of issue is identified, so as to maintain author anonymity.
The Journal Impact Factor and its calculation
The JIF of a journal in any given year is the total number of citations received in that year by all articles published in the prior two years, divided by the number of citable items published in those two years. So, for example, if a journal published a total of 625 citable documents in the years 2010 and 2011, and during the year 2012 those 625 papers received a total of 1530 citations, then the JIF for this journal in 2012 is 1530/625 = 2.448.
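The arithmetic above can be sketched in a few lines; the figures are the hypothetical ones from the example, not real journal data:

```python
def journal_impact_factor(citations_in_year, citable_docs_prior_two_years):
    """JIF: citations received this year by items published in the prior
    two years, divided by the number of citable items from those years."""
    return citations_in_year / citable_docs_prior_two_years

# Hypothetical journal: 625 citable documents in 2010-2011,
# 1530 citations to them during 2012.
print(round(journal_impact_factor(1530, 625), 3))  # → 2.448
```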
Why look at citations of papers published over a two-year period, and not one year or three years or eight? Garfield (2006) argues that a one-year time span would emphasize “rapidly developing” fields, whereas a two-year span de-emphasizes such fields but is still representative of recent journal performance. Garfield (2006) also argues that rankings based on longer-term citation rates are effectively the same as those based on 2 yr rankings. But this is not true for all journals. Thomson-Reuters reports JIFs for the top 10 journals in Mineralogy for the years 2009, 2005–2009, and 1981–2009 (see Appendix Table 2 for the top 10 journals in this field). Consistent with Garfield’s (2006) view, Contributions to Mineralogy and Petrology (CMP) ranked third in 2009 (JIF = 3.50), second in the years 2005–2009 (JIF = 7.22), and second again for 1981–2009 (JIF = 38.39). In contrast, however, Am Min ranked sixth in 2009 (JIF = 1.86) and sixth again in 2005–2009 (JIF = 4.63), but third in 1981–2009 (JIF = 22.15). Does this represent a long incubation time for Am Min papers, or a decline in the journal’s influence?
The disparity of journal influence
There can be no doubt that, at some level, citations matter, and that widely cited papers and journals, at least in a general sense, can be fairly characterized as being widely influential. But how great is the disparity between journals that are in the upper echelons of a discipline compared to those journals nearer the bottom? As a test, we rank journals by total citations over three years (using 2010 figures from SCImago) for the disciplines of Chemistry, Earth and Planetary Sciences, Medicine, Physics and Astronomy, and the sub-disciplines of Environmental Chemistry and Geochemistry and Petrology. We then compare cumulative sums of citations against cumulative sums of numbers of journals in each field.
Remarkably, the broadly defined disciplines yield cumulative summation curves that can hardly be distinguished (Fig. 1); we use these curves to arbitrarily divide journals into four classes: (1) the “super-elite,” the top 4% (3.6–4.4% range across disciplines) that garner 50% of all citations; (2) the “elite,” or top 12% (11.7–13.4% across disciplines) that garner 75% of all citations; (3) the “influential,” or top 26% (25.3–27.2% across disciplines) that accrue 90% of all citations; and (4) the “minor” journals that form the bottom 74% and accrue 10% of all citations. As noted, these divisions are nearly independent of discipline size. To illustrate, SCImago lists just 70 000 documents published in Earth and Planetary Sciences but 550 000 documents in Medicine (153 000 in Physics and Astronomy and 137 000 in Chemistry), and yet the curves are nearly identical at the scale of Figure 1, defining an apparent “law of constant attention span.” However, a subtle size effect is discernible (Fig. 1, inset panel). At the elite end of the spectrum, the Earth and Planetary Sciences curve is distorted because SCImago treats the family of seven Journal of Geophysical Research (JGR) journals as one, yielding a concentration of citations in “one” journal. But the Earth Sciences curve is at least as asymmetric as the others above the sixth percentile, whereas Medicine is generally the most asymmetric in this range, with Chemistry and Physics and Astronomy being intermediate, and close to one another. Substantiating this size effect, asymmetry, although still substantial, is much lower for the two sub-disciplines that we examined (Fig. 1): in both Environmental Chemistry and Geochemistry and Petrology, 90% of citations are garnered by 40% of all journals, and in Geochemistry and Petrology, 10% of all journals account for half of all citations.
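The four-class division reduces to a cumulative-share calculation. A minimal sketch follows; the citation counts are invented, and the convention for journals that straddle a threshold is our own choice:

```python
def classify_journals(citations):
    """Rank journals by total citations, then assign each a class by
    cumulative citation share: <=50% super-elite, <=75% elite,
    <=90% influential, remainder minor. A journal straddling a
    threshold is assigned to the lower class (our convention)."""
    ranked = sorted(citations, reverse=True)
    total = sum(ranked)
    classes, running = [], 0
    for c in ranked:
        running += c
        share = running / total
        if share <= 0.50:
            classes.append("super-elite")
        elif share <= 0.75:
            classes.append("elite")
        elif share <= 0.90:
            classes.append("influential")
        else:
            classes.append("minor")
    return classes

# Six invented journals; the first alone captures half of all citations.
print(classify_journals([50, 25, 15, 5, 3, 2]))
# → ['super-elite', 'elite', 'influential', 'minor', 'minor', 'minor']
```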
Apparently, as a field grows, the number of elite and super-elite journals does not grow at the same rate, and so, with respect to citations, the top journals benefit more in larger disciplines than in smaller ones. Am Min is in the elite class, falling within the top 8% of all journals in the Earth and Planetary Sciences; within Geochemistry and Petrology, Am Min falls in the top 18% of all journals, which together garner 2/3 of all citations in that sub-discipline. As we show later, though, these size effects and underlying asymmetries limit one’s ability to sensibly compare JIFs across disciplines, especially for the top journals in a given field.
Is JIF affected by citation habits or journal size?
One problem with the JIF relates to “citation density” (Garfield 2006), or the number of citations per paper. In the field of Mathematics, for example, the habit is to cite fewer references per paper than in Biology (Moed 2010). With fewer citations per paper on average, the total citations that any paper accumulates over a given time are correspondingly few; JIFs for journals that publish such papers will thus be lower (Moed 2010). Table 1 illustrates citation rates for two hypothetical communities, C1 and C2. Each publishes its papers in one of three journals: a small journal that publishes 5 papers/year, a medium-sized journal at 10 papers/year, and a large journal with 20 papers/year. In C1, authors habitually include exactly 3 citations per article, whereas in C2, the habit is 10 citations per article; citations are random and there is no cross-citation between C1 and C2. Two aspects (Table 1) are important:
Within any given community, the size of the journal makes no difference to the citation rate. The largest journal (publishing the most papers) garners the most citations, and perhaps might be considered more influential on that basis. But citation rates (e.g., JIFs) are not intrinsically higher for larger journals.
JIFs are higher for communities with high citation densities; as might be expected for random citations, citation rates approximate the habitual citations per article in each community. This issue has motivated various journal-ranking schemes to correct for citation density (e.g., Moed 2010; SCImago, Journal-Ranking.com).
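Both points can be made concrete with a small Monte Carlo sketch. The setup below is our reading of the Table 1 thought experiment, not the authors’ actual calculation: every new paper draws its references at random from papers published in the prior two years, and we count the citations a one-year cohort accrues over its first two years.

```python
import random

def two_year_citation_rate(refs_per_paper, journal_sizes, seed=0):
    """Monte Carlo sketch (our assumptions, not the authors' exact setup):
    each new paper draws `refs_per_paper` references at random from papers
    published in the prior two years. Returns, per journal, the mean number
    of citations a one-year cohort of its papers accrues over two years."""
    rng = random.Random(seed)
    n_per_year = sum(journal_sizes)
    # one year's cohort, each paper tagged with its journal's index
    cohort = [j for j, n in enumerate(journal_sizes) for _ in range(n)]
    cites = [0] * len(journal_sizes)
    for _ in range(2):  # two citing years
        # the two-year pool: our cohort plus another year's papers
        pool = cohort + [None] * n_per_year  # None = paper from another year
        for _ in range(n_per_year):          # this year's new papers
            for _ in range(refs_per_paper):
                target = rng.choice(pool)
                if target is not None:
                    cites[target] += 1
    return [cites[j] / n for j, n in enumerate(journal_sizes)]

# C1 (3 refs/article) vs. C2 (10 refs/article), with journals publishing
# 5, 10, and 20 papers/year.
print([round(r, 1) for r in two_year_citation_rate(3, [5, 10, 20])])
print([round(r, 1) for r in two_year_citation_rate(10, [5, 10, 20])])
```

Under these assumptions the per-journal rates hover near 3 for C1 and near 10 for C2 regardless of journal size, though with only 5–20 papers/year the statistical noise is considerable; averaging across a community recovers the citation density.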
The Conversation Curve—The influence of journal size on JIF
Although journal size has no intrinsic effect on JIF (Table 1), there is, in actuality, a journal size effect (Figs. 2a and 2b), at least for some top journals. Figure 2a compares JIFs to the number of “citable documents” (see SCImago) published over a 3 yr period (CD3), which excludes, for example, book reviews or letters to the editor. For each journal, the 12 data points represent JIF vs. CD3 for each of the years 1999 to 2010 (so for Am Min in 2005, Figure 2 shows the JIF calculated as citations in 2005 to papers published in Am Min in 2004 and 2003, against CD3, the total number of papers published in Am Min in the years 2004, 2003, and 2002). No doubt, a better horizontal axis would be the number of articles over a 2 yr period, but only the 3 yr figure is provided by SCImago. Eighty of 81 journals listed by SCImago in the area of “Earth and Planetary Sciences–Geochemistry and Petrology” for the years 1999–2010 are shown (Figs. 2a and 2b) [the Journal of Geophysical Research (JGR) plots off scale in Figs. 2a and 2b, publishing 6100–7700 papers per year in the years 1999–2010, with a mean JIF of 2.54; see Fig. 3].
These relationships (Fig. 2) suggest at least two classes of journals:
Class 1: Panel discussions
These journals publish a roughly constant number of manuscripts per year, mostly <400/year, with JIF vs. CD3 slopes that are nearly vertical (as JIFs move up or down). Examples are review volumes, such as Elements or Reviews in Mineralogy and Geochemistry, and journals that may be selective, such as CMP, the Geological Society of America Bulletin (GSAB), and the Journal of Metamorphic Geology (JMG). Also represented are journals such as Physics and Chemistry of Minerals and Petrology, which also each publish <400 papers/year, but garner fewer citations/paper compared to CMP, GSAB, or JMG. This class might further be subdivided into (a) review journals and (b) non-review journals (that emphasize publication of new results).
Class 2: Open conversations
These journals publish a variable number of papers (mostly >400/year) with higher JIF at higher CD3. Together with some Class 1 journals, these form what is here termed the Conversation Curve. Many journals that fall on this trend not only follow the inter-journal pattern, but also show a parallel internal trend, most notably Earth and Planetary Science Letters (EPSL), Geochimica et Cosmochimica Acta (GCA), Acta Geophysica Sinica, and the Journal of Volcanology and Geothermal Research (JVGR). Except for Acta Geophysica Sinica, most have y-axis intercepts at or above zero.
By their nature, of course, the intercepts of internal correlations of JIF vs. CD3 (e.g., EPSL in Fig. 2a) should be zero, since with no articles, no citations can be acquired. That a journal such as GCA or EPSL attains a positive y-axis intercept is perhaps another measure of its influence. A negative y-axis intercept (e.g., Acta Geophysica Sinica) is, on the other hand, highly undesirable. Am Min falls on this Conversation Curve, but without a clear internal trend.
Figure 2b shows that many other journals in the Earth and Planetary Sciences fall on the Conversation Curve. But the effect is not isolated to this discipline. Figure 3 shows two other journals to which Am Min authors also sometimes submit papers: Environmental Science & Technology (ES&T) and the Journal of the American Chemical Society (JACS). These two journals appear to fall on a parallel Conversation Curve, slightly displaced to lower JIF at a given CD3 compared to Earth Science journals, but ranging to much higher JIF and CD3. Also of note is JGR (Fig. 3), which SCImago treats as a single journal but which is in fact a collection of seven journals. If total documents were equally distributed among JGR’s journals, and each had the mean JIF, they would all plot between the two Conversation Curves in Figure 3. In any case, by this measure, Earth Science journals such as GCA, EPSL, and Am Min are arguably stronger than either ES&T or JACS, achieving higher JIF and a greater y-intercept while monopolizing a much smaller fraction of the conversation among their field’s top journals.
A possible meaning of the Conversation Curve
The Conversation Curve is not a mathematical requirement (Table 1; Garfield 2006); rather, because JIFs represent citations normalized by numbers of citable documents, there should be no relationship between JIF and CD3 at all. The correlations of Figures 2 and 3 thus reflect a non-random, sociological phenomenon.
One hypothesis to explain the Conversation Curve is that certain journals, by virtue of their reputation, perceived importance, or simple visibility, are able to grow in influence as they grow in size, capturing a larger fraction of the most interesting conversations taking place within a discipline. Think of each journal as a room in a convention center. Scientists are free to move from room to room, but the conversations taking place in each room are not necessarily equally interesting, nor do they have equal participation. Journals in Class 1 are like panel discussions: a few people talk, many listen. If the panelists are interesting, that room’s “JIF” grows; but some panel discussions attract little attention. In Class 2, discussions involve all attendees who desire or are able to participate. Those rooms with the most interesting discussions attract more people, who add to the discussion; those rooms with perhaps interesting, but highly focused discussions, may attract fewer attendees. Positive or negative feedback loops may then ensue, as the discussion in a particular room waxes or wanes in terms of perceived interest and/or focus.
But why does one journal become the nucleation point for increased discussion and impact in the first place? There are at least two possibilities: (1) increased conversation might be spurred by one or a few particularly good papers (or speakers, to continue the convention analogy); in such a case, it would be natural for authors/speakers to choose the same journal/room to present new thoughts or counter-arguments. Or (2) some journals might simply be more visible (or the talks better advertised). Below, these ideas are tested.
Why do different journals have different JIFs?
[For example, why is JACS’s JIF(2011) = 9.9 higher than EPSL’s JIF(2011) = 4.2?] Figure 1 shows that a small subset of journals in a field can dominate what are perceived to be the most important conversations taking place within a discipline. This is not an unfamiliar concept; we submit our best papers to journals that we know are well circulated and believe to be frequently scanned by our colleagues. And this judgment, even today, can be made quite independent of JIF. For example, the Journal of the American Chemical Society [JACS; JIF(2011) = 9.907] has a much higher JIF compared to the Journal of Volcanology and Geothermal Research [JVGR; JIF(2011) = 1.971]. But by habit, one might still prefer to publish a geochemical study of Mt. St. Helens ash in JVGR, so as to reach an intended audience. But will this habit be maintained in the future? Search engines, such as ISI’s Web of Science, which catalogs both JVGR and JACS titles, may obviate this approach, as we perform database searches in lieu of browsing journal Tables of Contents.
But why does JACS have a higher JIF (9.9) compared to JVGR (1.97) in the first place, let alone EPSL (4.18), one of the top journals in the Earth Sciences? Two possible factors may work in tandem: (a) the number of people involved in a conversation, and (b) the amount of conversation that is monopolized by a given journal. We start with (b) and discuss (a) in the next section. As for concentration of conversation, JIFs in 2010 for journals in three SCImago categories are examined: (1) “Earth Sciences—Geochemistry and Petrology” (Geophysics journals from Appendix Table 1 are excluded), (2) “Environmental Sciences—Environmental Chemistry,” and (3) “Chemistry” (all subfields). As a test, journals in each discipline are ranked by JIF. Those that fall in the top 25 are selected (all of these journals qualify as elite or super-elite), and the papers published by just those journals are summed. How important are these top 25 journals in their respective fields? In Chemistry, the top 25 journals represent just 4.7% of all Chemistry journals, but garner 23.8% of all citations over a 3 yr period. In “Environmental Chemistry,” the top 25 journals represent 26% of all journals in the field, and acquire 71% of all citations. In “Geochemistry and Petrology,” the top 25 journals represent 30% of all journals in this sub-discipline, and garner 80% of all citations. We now examine how this monopolization of the conversation affects JIF.
Figure 4a compares the 2010 JIF to the number of papers published by a given journal, as a fraction of all papers published among the top 25 journals in a given field. Using EPSL as an example, the journal in 2010 published 8.5% of all papers published among the top 25 journals in the field of Geochemistry and Petrology, and had a JIF that year of 4.06. The correlation within the Earth Sciences—Geochemistry and Petrology is weak, but positive, indicating that if a journal is in the top 25, it may increase its JIF by capturing more of the conversations taking place on the pages of the top journals. The inter-disciplinary trend is stronger. ES&T publishes nearly 20% of all papers published by the top 25 journals in Environmental Chemistry, and JACS publishes 35% of all papers in the top 25 journals in Chemistry. It thus appears that when one or a few journals come to dominate a field, as do ES&T and JACS, such journals can maintain very high JIFs. So ES&T and JACS achieve higher JIFs than EPSL because environmental scientists and chemists recognize a smaller fraction of their very best journals as being especially important (see Fig. 1).
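The “share of the conversation” on the horizontal axis of Figure 4a is a simple ratio; the paper counts below are invented for illustration (SCImago’s actual 2010 figures are not reproduced here):

```python
def conversation_share(journal_papers, top25_total_papers):
    """Fraction of all papers published among a field's top 25 journals
    that one journal contributes (the Fig. 4a horizontal axis)."""
    return journal_papers / top25_total_papers

# If a journal published, say, 510 of the 6000 papers appearing in a
# field's top 25 journals, it holds 8.5% of that conversation.
print(f"{conversation_share(510, 6000):.1%}")  # → 8.5%
```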
How high can JIF be increased? The community size effect
But the size of a community can also affect JIF, as has long been known (Seglen 1997a). To illustrate, the total number of documents published within a given discipline is compared to the maximum JIF achieved by any journal in that same discipline (Fig. 4b; Table 2). The two quantities are highly correlated, an unsurprising result given the asymmetry of citation distributions in the disciplines; more articles mean more citations/journal for the top journals in a given field. What is especially interesting in Figure 4b is that several disciplines (Cancer, Psychology, and Physics and Astronomy) plot above an otherwise remarkably coherent linear trend defined by all other disciplines. This is almost certainly due to the opportunities for Physics articles to be cited in the Earth Sciences, Materials Sciences, Chemistry, etc., and for Cancer and Psychology articles to be cited by Medicine [i.e., what Seglen (1997a) refers to as citations by “adjacent fields,” which can be highly asymmetric].
Other effects on JIF?
Could monopolizing conversation lead to more “self-citations” (at the journal level, meaning that an article cites other articles published in the same journal in which the citing article appears)? There is some concern that editors may indeed manipulate a journal’s JIF by requesting that authors cite articles from the journal they edit (e.g., Falagas and Alexiou 2008). JIFs, however, are mostly uncorrelated with the percentage of self-citations, or even negatively correlated for some journals, such as JGR, JACS, and ES&T (Fig. 5a). And Acta Geophysica Sinica has benefited little from rates of self-citation that are high for the discipline.
Other potential influences on JIF are the percentage of international collaborations (Fig. 5b), the publication of fewer low-citation papers (Fig. 5c), and the references/document ratio (Fig. 5d). The fraction of international collaborations has no effect on JIF, except for Acta Geophysica Sinica. On the other hand, high-JIF journals clearly publish fewer papers that receive no citations in the year that the JIF is calculated (such papers may well be cited, perhaps even heavily, in later years). Finally, as might be anticipated (Table 1), a greater number of references per document does indeed lead to higher JIF. Interestingly, journals in the mineralogical sciences (e.g., the European Journal of Mineralogy, the Canadian Mineralogist, and American Mineralogist) appear to have fewer references/document, perhaps reflecting their publication of short papers on new minerals, etc.
Temporal trends in JIF, visibility, and electronic publishing
Temporal trends in JIF provide an interesting insight that should be of concern to society-published journals. Figure 6 compares JIFs for Am Min and CMP from 1990 to 2011, as well as mean JIFs for the years 1983–1985 (the latter from Ribbe 1988). CMP is used as a comparison to Am Min as it is effectively equivalent in terms of intended content and quality. Figure 6 shows that although CMP’s JIFs are consistently higher from 1984 to 1999, the contrasts are small (about 20% on average), and those contrasts were effectively erased between 1995 and 1999. Beginning in 2000, however, CMP begins a tremendous decade-long increase in JIF compared to Am Min, reaching a mean difference in JIF of 71% for 2009–2011, an astonishingly high contrast for two journals that were judged to be equal or nearly so for over two decades.
Why the surge in CMP’s JIF? One possibility relates to the fact that since 2000, commercial publishers (Springer-Verlag, Elsevier) have bundled electronic versions of the journals they sell to libraries (Frazier 2001), offering faster, easier, and well-integrated access to more of their publications. As a test, Figure 6b compares percent increases in JIF for commercially published journals to those for society-published journals, which mostly do not take part in such electronic bundling; the height of each bar is the percent increase from the 1999 JIF to the mean JIF for the years 2009–2011. Commercially published journals have vastly outpaced their society-published counterparts, with mean and median percent increases in JIF of 169% and 91%, respectively, compared to mean and median increases of 25.6% and 12.7% for society-published journals (Table 3). Clearly, this contrast does not bode well for the health of non-profit publishers if authors choose to publish their best works in the journals that have the highest JIFs.
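As we read Figure 6b, each bar height reduces to the following calculation; the journal values here are hypothetical:

```python
def percent_increase(jif_1999, jifs_2009_2011):
    """Percent change from the 1999 JIF to the mean JIF over 2009-2011
    (our reading of the Fig. 6b bar heights)."""
    mean_recent = sum(jifs_2009_2011) / len(jifs_2009_2011)
    return 100.0 * (mean_recent - jif_1999) / jif_1999

# Hypothetical journal: JIF of 2.0 in 1999, mean JIF of 3.5 for 2009-2011.
print(round(percent_increase(2.0, [3.4, 3.5, 3.6]), 1))  # → 75.0
```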
What does JIF mean for individual articles?
Within any given issue of a journal, the number of citations that accrue to individual articles can vary greatly, nearly logarithmically for most journal issues studied here, regardless of JIF (e.g., Seglen 1997a). To illustrate, we select arbitrary issues of both CMP and Am Min published in the years 1990 to 2011. Since Am Min is published 8 times annually (about every 6 weeks) while CMP is monthly, one issue of Am Min is compared to two successive issues of CMP.
Figure 7 illustrates the difficulty of using JIF to assess both the quality of a journal and the quality of individual papers contained therein. To create Figure 7, papers from a given issue are ranked in order of decreasing citations, so that Rank = 1 represents the paper that received the most citations in a given issue, and the maximum rank is identical to the total number of papers appearing in that same issue. Thus in Figure 7a, the 2003 issue of Am Min published 28 papers, and the most cited paper in that issue received 94 citations; CMP, in two successive issues (for the same months as covered by Am Min), published 16 papers, the most cited of which received 107 citations (Fig. 7a). Which journal had the better issue? CMP acquired a higher citation rate: 39.3 cites/paper compared to just 19.6 cites/paper for Am Min, mostly on account of CMP’s publishing several very highly cited papers (i.e., more papers with >40 citations), and publishing fewer papers (although still several) that received <10 citations. But the area under the curve also matters. The integral of the citation distribution curves of Figures 7a and 7b is simply the total number of citations acquired for the issue. By that measure, Am Min was more influential, garnering 548 citations, compared to 436 for CMP, by virtue of its publishing more papers of moderate influence, in the 10–25 citations/paper range.
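The two measures discussed above, mean citations per paper and total citations for the issue, can be compared directly. The two issues below are invented to mimic the Figure 7a pattern (a few stars vs. many moderately cited papers); they are not the actual Am Min or CMP data:

```python
def issue_scores(citations):
    """Two summary measures for one journal issue: mean citations per paper
    (what a JIF-style rate rewards) and total citations (the area under
    the rank-citation curve)."""
    return sum(citations) / len(citations), sum(citations)

# Invented issues: "stars" has a few heavily cited papers; "moderate" has
# many papers of middling influence.
stars = [107, 60, 45, 40, 8, 6, 5, 4, 3, 2]
moderate = [94, 25, 22, 20, 18, 16, 15, 14, 12, 11, 10, 10, 9, 8]
for name, cites in [("stars", stars), ("moderate", moderate)]:
    mean, total = issue_scores(cites)
    print(f"{name}: {mean:.1f} cites/paper, {total} total citations")
# → stars: 28.0 cites/paper, 280 total citations
# → moderate: 20.3 cites/paper, 284 total citations
```

The “stars” issue wins on the mean, yet the “moderate” issue accumulates more citations overall, exactly the tension between the two measures in the text.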
As noted by Mutz and Daniel (2012), the JIF is thus problematic as it is an arithmetic mean that is used to represent a highly skewed, non-Gaussian distribution. How should editors, authors, and society members evaluate journal quality? The JIF represents a probability that a paper will be well cited, but a low JIF can mask a very good paper, and vice versa (and how well it measures such probability depends upon citation distributions, which are not normally distributed). Figure 7b illustrates an extreme example, where one paper in Am Min garners 138 citations, compared to <20 citations for the next most cited paper; this issue of Am Min acquires a higher citation rate than CMP because of one “super paper.”
How are citations generally distributed? For Figure 7c, each paper from a given issue of CMP and Am Min is sorted on the basis of total citations, as in Figures 7a and 7b. The vertical axis represents the cumulative sum of citations acquired by papers in that issue, calculated as a percent of all citations for the issue; the horizontal axis is the cumulative sum of the numbers of papers appearing in that same issue, also as a percent (see Seglen 1997a). In most issues examined, ~25% of all papers in an issue garner >50% of all citations. A 2010 issue of CMP is typical, with 26.3% of the papers published in that issue receiving 52.4% of all citations to that issue. A slightly older issue of Am Min (2002) shows similar proportions. The 2010 issue of Am Min and the 1991 issue of CMP represent more extreme, but not entirely uncommon, cases (Fig. 7c); in the 1991 CMP issue, 19% of the papers received 57% of all citations, while in the 2010 issue of Am Min, a single paper received >63% of all citations. Clearly, the JIF says nothing about the citation rate of individual papers.
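The Figure 7c calculation, i.e., the share of an issue’s citations captured by its most-cited quarter, can be sketched as follows; the citation counts are invented:

```python
def citation_concentration(citations, paper_share=0.25):
    """Share of an issue's citations captured by the top `paper_share`
    fraction of its papers, ranked by citation count."""
    ranked = sorted(citations, reverse=True)
    k = max(1, round(paper_share * len(ranked)))
    return sum(ranked[:k]) / sum(ranked)

# An invented 12-paper issue: the top quarter (3 papers) captures well
# over half of all citations.
issue = [60, 30, 25, 15, 12, 10, 8, 6, 5, 4, 3, 2]
print(f"{citation_concentration(issue):.0%}")  # → 64%
```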
How many citations might we expect for a given paper in the fields of Mineralogy and Petrology? Over a 12 yr random sample of CMP and Am Min (12 issues, 467 papers), 1.7% of all papers had ≥100 citations (i.e., the “super papers”), 5.1% had ≥50 citations, and 17% had ≥25 citations. The median number of citations is 10. This analysis suggests a pitfall not only of the JIF for judging journals, but of the H-index for evaluating individual scientists, as a moderate to high H-index can be attained without publishing papers that are at the very top of a given field (the H-index is the largest number h such that h of one’s papers have each received at least h citations). Which researcher is more successful? One who publishes 20 papers over 10 yr, each with 25 citations (H-index = 20; total cites = 500); one who publishes 10 papers over 10 yr, each with 50 citations (H-index = 10; total cites = 500); or one who publishes 5 papers over 10 yr, each with 100 citations (H-index = 5; total cites = 500)?
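The three hypothetical publication records above can be checked with a short H-index routine:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

# The three hypothetical researchers, each with 500 total citations:
print(h_index([25] * 20), h_index([50] * 10), h_index([100] * 5))
# → 20 10 5
```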
Journal ranking systems
This article is not the first to note the problems of interdisciplinary JIF comparisons (e.g., Seglen 1997a; Archambault and Larivière 2009; Moed 2010). Two notable ranking systems that attempt to deal with issues raised by Seglen (1997a) are flawed. The SCImago Journal Rank (SJR), for example, gives greater weight to citations in more highly ranked journals (see Habibzadeh and Yadollahie 2008), with the result that scientific value is intrinsically reduced as the size of the conversation decreases (are studies of wakabayashilite intrinsically inferior to studies of clinopyroxene?). Another scheme (Moed 2010) considers the “citation potential” of a particular field: more citations per individual article in a given field mean a greater “potential” for any paper in that field to be cited. To correct for this, Moed (2010) normalizes a form of the journal impact factor by this citation potential, yielding the Source Normalized Impact per Paper (SNIP). While seemingly an improvement, this model does not account for the very strong asymmetry of citations among journals within various fields (Fig. 1). The system also fails at the intra-disciplinary level. For example, in 2011 Am Min earned a SNIP of 1.271 while Acta Geophysica Sinica (AGS) in that same year scored a SNIP of 1.47 (journalmetrics.com), not because AGS received more citations, but because in 2011 its citation potential was calculated to be significantly lower than that of Am Min. This result might be considered dubious given the negative y-intercept for AGS in Figure 2a, and that journal’s high rate of self-citation (Fig. 5a).
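Under a simplified reading of Moed’s (2010) normalization, in which SNIP is treated as raw impact per paper divided by the field’s relative citation potential (the actual computation involves additional database-specific details), an inversion of the Am Min/AGS kind is easy to reproduce. The values below are hypothetical:

```python
def snip(raw_impact_per_paper, relative_citation_potential):
    # Simplified SNIP: citations per paper, normalized by the field's
    # citation potential (a field-wide citations-per-reference measure).
    return raw_impact_per_paper / relative_citation_potential

# Hypothetical values: journal B receives *fewer* citations per paper
# than journal A, yet scores a higher SNIP because its field's
# citation potential is calculated to be lower.
snip_a = snip(1.30, 1.00)   # journal A: more citations, average field
snip_b = snip(1.10, 0.70)   # journal B: fewer citations, "low-potential" field
print(snip_a < snip_b)      # prints True
```

The ranking thus hinges entirely on how the denominator is estimated, which is the source of the dubious intra-disciplinary results noted above.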
A third ranking system preserves the undemocratic habit of SCImago, but also accounts for indirect citations (Journal-Ranking.com). The idea is that an original work may influence subsequent works, but with time only derivative works are cited, a problem they apparently address by accounting for citations within citations. In Mineralogy (Appendix Table 3¹), Journal-Ranking.com ranks Am Min 1st in its Journal Influence Index (and 8th in Geochemistry and Geophysics; Appendix Table 4¹), and 3rd in its Paper Influence Index (and 14th in Geochemistry and Geophysics).
Finally, Ribbe (1988), concerned about the economy of the mid-1980s and the hard choices libraries had to make, considered the cost of a journal relative to its JIF. Table 4 ranks journals according to their JIF/cost ratios, and society-published journals top the list, in spite of flagging JIFs.
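The JIF/cost ranking of Table 4 amounts to a single division followed by a sort. The journal names and figures below are hypothetical placeholders, illustrating only how a lower-JIF society journal can still top the list:

```python
# Rank journals by JIF per subscription dollar. All figures are
# hypothetical, for illustration only; see Table 4 for actual values.
journals = {
    "Society Journal A":    {"jif": 2.0, "cost": 150.0},    # modest JIF, low cost
    "Commercial Journal B": {"jif": 4.0, "cost": 4500.0},   # high JIF, high cost
}

ranked = sorted(journals.items(),
                key=lambda kv: kv[1]["jif"] / kv[1]["cost"],
                reverse=True)

for name, v in ranked:
    print(f"{name}: JIF/cost = {v['jif'] / v['cost']:.4f} per dollar")
```

Even with half the JIF, the society journal’s value ratio here is an order of magnitude higher, mirroring the pattern Ribbe (1988) observed.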
Perhaps the single most important finding of the analysis of JIF and journal rankings presented here is that in the 21st century (the dawn of commercial electronic publishing and distribution models), commercially published journals have been speeding past society-published journals in terms of JIF growth. Journals that were considered equivalent in content and quality only a decade ago now have vastly different JIFs. Moreover, these contrasts develop for reasons that appear to be completely independent of scientific quality. Another key result is that the JIF says nothing at all about the citation rate of any given paper. For these and other reasons, it has been suggested that the name “impact factor” be replaced by “citation rate index,” since this is what the JIF truly measures (Hecht et al. 1998).
Journal ranking systems have evolved to allow better comparisons of journals, but they are often ignored, even when such rankings would benefit a given journal. Even these systems can be quite flawed, especially those that assume that scientific value or quality is somehow less if the scope of a discussion is small. A more appropriate approach is perhaps to say that the best journals are those that can claim a high rank in one or more categories or ranking systems, and that overall journal quality and usefulness cannot be reduced to a single number. In any case, electronic publishing and distribution models may spell trouble for society-published journals, especially if newer scientists instinctively think less in terms of the historical prestige that society journals have often held, and more in terms of JIF.
Beyond issues related to electronic publishing and citation asymmetry among journals within a given field, many key results shown here for the Earth and Planetary Sciences have been observed elsewhere. More than a decade ago, Seglen (1997a) warned against comparing JIFs across disciplines. He noted that the size of a given field can affect JIF, as can citation density and citations in “adjacent fields” (in the latter case, fundamental sciences are more likely to be cited by “adjacent” applied fields than vice versa). The current findings regarding the “Conversation Curve” (Fig. 2), and the apparent relationship between JIF and the degree to which a journal dominates a conversation within a discipline, extend these earlier findings.
So how should we view the JIFs of journals, or the citation rates, H-indices, or total citations of individual scientists? If the electronic-publication effect could be equalized, JIFs might provide one useful means to compare citation rates within a discipline. Even within a discipline, though, journals that publish only review articles must be separated from those that publish new contributions, since review articles are more highly cited (Seglen 1997a, 1997b; Moed 2010). With such separation, JIF and total citations (the area under a citation distribution curve; Figs. 7a and 7b) might then allow a more useful perspective on a journal’s actual influence. Even so, the citation potential of plagioclase feldspar is far greater than that of benitoite (the state mineral of California, found only in San Benito County). Is a scientific work on benitoite intrinsically of lesser quality for that reason? Might individual scientists suffer (not just in prestige, but in tenure, promotion, or funding) for being fascinated by rare minerals? Might a journal be at a disadvantage for publishing such works, even if they are of exceptionally high quality?
Setting aside the citation potential of a given topic, even the meaning of total citations is not entirely clear. Seglen (1997b) has noted that citations are often selected for “utility” rather than scientific quality. These and other results led Seglen (1997b) to conclude that “citation rates are determined by so many technical factors that pure scientific quality may be a very minor influence.” Our work leads to similar conclusions, and should give pause to anyone who, in judging a paper, author, journal, or discipline, substitutes a numerical score for human judgment.
We thank the Past-President of the Mineralogical Society of America, Mike Hochella, for his support and for highly thoughtful comments on this paper; the senior author is especially appreciative of many stimulating discussions with M. Hochella regarding journal impact factors and related topics that inspired a large fraction of this work. We also thank Alex Speer, who first clearly identified electronic distribution methods as a concern for society-based publications, especially with regard to JIF. We thank Editor Simon Redfern for handling our manuscript, and Mark Welch and an anonymous reviewer for their very thoughtful comments and suggestions.
‡ Present address: Sylvia Fedoruk Canadian Centre for Nuclear Innovation, 54 Innovation Blvd (Peterson Building), Saskatoon, SK S7N 2V3, Canada.
1 Deposit item AM-13-051, Appendix Tables. Deposit items are available two ways: For a paper copy, contact the Business Office of the Mineralogical Society of America (see inside front cover of a recent issue) for price information. For an electronic copy, visit the MSA web site at http://www.minsocam.org, go to the American Mineralogist Contents, find the table of contents for the specific volume/issue wanted, and then click on the deposit link there.
- Manuscript Received September 21, 2012.
- Manuscript Accepted December 19, 2012.