

What's Wrong With Impact Factors (IFs)?

Summary

This project lists articles that assess, review, or criticise the use of impact factors in science. I have also added a few of my own comments and analyses.

1. What is an 'Impact Factor'?

The Impact Factor (IF), devised by Eugene Garfield around 1955 for the Institute for Scientific Information (ISI), is a ratio:

"based on 2 elements: a) the numerator, which is the number of citations in the current year to any items published in a journal in the previous 2 years, and b) the denominator, which is the number of substantive articles (source items) published in the same 2 years" (Garfield E, 2001, Cortex, 37(4):575).
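In code, Garfield's definition can be sketched as follows. The journal numbers here are invented purely for illustration:

```python
# A minimal sketch of the Impact Factor calculation described above.

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """IF for year Y: citations in Y to items published in Y-1 and Y-2,
    divided by the substantive ('citable') items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 150 citable articles over two years, 600 citations
# to those articles in the current year.
print(impact_factor(600, 150))  # → 4.0
```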

It is now widely used to assess the 'impact' (a proxy for quality) of scientific research by numerous institutes, organisations, and individual scientists. People will sit down and have serious discussions about whose impact is bigger than whose, and how high their paper can get on the impact list.

2. Who cares?

Well, far too many people, apparently. Judging from the number of negative articles on the topic, and from the apparent influence of impact factors on the recruitment, funding, and assessment of researchers, everyone seriously interested in scientific research has to care about IFs.

3. Don't critics of Impact Factors just have a poor publication record? Impact Envy, you could say...

Define 'poor' publications...

4. But Impact Factors correlate very well with the perceived quality and prestige of journals

Great, so our intuitions are good enough that we don't need the IF!

Eleven things that are wrong with the Impact Factor:

  1. Reflects past, not current or future, performance
  2. Reflects performance of journals as a whole, not of individual articles
  3. Small (negligible) differences in IF may guide important career or funding decisions
  4. Research fields and journals with more authors per paper tend to have higher IFs (because of 'self-citation')
  5. Slow research, reviewing, or publication is disadvantaged: because the IF counts only 2 years' worth of citations, fields with longer citation 'half-lives' are penalised unless that half-life is accounted for
  6. Typical number of citations per article ('citation density') varies by field and sub-field, affecting IF
  7. Disadvantages non-English language journals
  8. Citations to editorials, letters, commentaries, and news reports are (often) included in the total number of citations to a journal (the numerator), but these items are not counted among the journal's citable items (the denominator). Consequently, journals publishing more frequently-cited news, editorials, or letters will have inflated IFs
  9. Large research fields are likely to have more super-cited methodological and theoretical papers, inflating their IFs
  10. It assesses popularity and ease-of-citation, not quality (this is my main problem with the damn things)
  11. The ISI database is not exhaustive. Tracking my own citations in the literature that I scan suggests that ISI finds only about 60% of citations to my articles, and, for example, often misses 'in press' citations even in those journals that it does cover. You can happily double your ISI-counted citations as a reasonable estimate of the true number of citations.
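Point 8 above is worth seeing with numbers. Here is a small sketch of the numerator/denominator asymmetry, with all figures invented for illustration:

```python
# Citations to front matter (editorials, letters, news) count in the
# numerator, but the front-matter items themselves are excluded from
# the denominator of the IF.

citable_articles = 100      # research articles in the 2-year window
cites_to_articles = 300
cites_to_editorials = 60    # editorials are cited, but not 'citable'

if_articles_only = cites_to_articles / citable_articles
if_with_front_matter = (cites_to_articles + cites_to_editorials) / citable_articles

print(if_articles_only)      # → 3.0
print(if_with_front_matter)  # → 3.6
```

The 60 citations to editorials raise the journal's IF by 20% without a single extra citable article being published.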

Two things that are right with the Impact Factor:

  1. It makes a lot of people's jobs easier
  2. It prevents us from having to read, and/or to think about, the too-numerous articles that are published

Case study: 'Mirror Neurons'

'Mirror neurons' are arguably one of the most influential neuroscientific ideas of the last decade, yet the original paper reporting the first mirror-neuron results in the macaque monkey (di Pellegrino et al. 1992) was cited just 4 times between 1992 and 1994, according to Thomson ISI. In IF terms, that's reasonable for the journal it was published in (Experimental Brain Research). But, 15 years on, the article is being cited more than ever (more than once a week on average in 2007-2008), and has been cited a total of nearly 500 times since publication (see Figure 1, below). Two of the follow-up studies on mirror neurons were cited much more frequently in their first two years after publication, yet neither was in a very 'high-profile' journal (Gallese et al. 1996, Brain; Rizzolatti et al. 1996, Cognitive Brain Research).

There will always be such outliers of course, and we should be very wary of counting citations as a proxy for quality. The point here is: Could anyone have predicted the 'impact' of these papers, given the journals they were published in or the number of citations they received in their first two years?

Figure 1: Citations to the extremely-influential single unit neurophysiology papers on 'mirror neurons'. While IFs are calculated for just 2 years' worth of citations, the true 'impact' of a paper may take many years or decades to become evident.
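To make the contrast concrete, here is a sketch with invented yearly citation counts, loosely shaped like the trajectories described above: a 'slow burn' paper versus one that peaks immediately. The IF window sees only the first two years:

```python
# Per-year citation counts (invented) for two hypothetical papers.
slow_burn = [1, 3, 5, 8, 12, 20, 35, 50, 70, 90]
flash     = [30, 25, 10, 5, 3, 2, 1, 1, 0, 0]

for name, per_year in (("slow burn", slow_burn), ("flash in the pan", flash)):
    # The IF window counts only the first two years of citations.
    print(f"{name}: IF window = {sum(per_year[:2])}, lifetime = {sum(per_year)}")
```

The slow-burn paper scores 4 in the IF window but 294 over its lifetime; the flash-in-the-pan paper scores 55 in the window but only 77 overall. A two-year window ranks them in exactly the wrong order.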

Alternative & Supplementary Impact Measurements

See also: Jeroen Smeets on IFs

References: