
Friday, 22 April 2011

What's best: Web of Science, Scopus or Publish or Perish?

The answer to the question posed in the title depends on what you want to know, but it must be understood that these three methods of tracking citations and measuring your h-index are not the same.

Web of Science


Web of Science (WoS), available through Web of Knowledge via your institution's library pages - provided they subscribe - is run by Thomson Reuters, the same company that awards impact factors to journals. Therefore, only journals with a Thomson Reuters impact factor appear in this database. This puts some subjects - mainly those in the humanities and some social sciences, including nursing - at a disadvantage. One feature of WoS is the facility to create your own ResearcherID webpage: you are given a unique ID number and can thereafter easily track your own papers and citations and check your h-index. In my experience, WoS gives the most conservative estimate of your citations and h-index and, despite the imperfections of the impact factor system, you are clear on the standards expected of the journals that are included. My view is that WoS is the 'gold standard'.


Scopus

Scopus is run by Elsevier, a major publishing house. It includes most of the journals in WoS - not only Elsevier journals - and will include all Elsevier journals plus some more that do not have an impact factor. Thus, it tends to include more journals than Web of Science and, as Elsevier is a major publisher of health and nursing titles, this is advantageous to some subjects, especially nursing. Your library will have to subscribe to Scopus. Scopus has one feature missing in WoS: it can generate an h-index both with and without self-citations. WoS allows you to remove self-citations but does not then generate an h-index. In my experience, Scopus awards an h-index about 2 points higher than WoS when self-citations are included, and reduces this by 1 point when self-citations are excluded.
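What "with and without self-citations" means in practice can be sketched in a few lines of code. This is purely an illustration of the calculation, not Scopus's actual implementation; the data structure (each paper represented by the author sets of the papers citing it) is an assumption made for the example:

```python
def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    # Counts are sorted in descending order, so the condition
    # 'count >= rank' holds for exactly the first h ranks.
    counts = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

def h_with_and_without_self(papers, author):
    # papers: one entry per paper; each entry is a list of citations,
    # and each citation is the set of authors of the citing paper.
    # (Hypothetical structure, for illustration only.)
    all_counts = [len(cites) for cites in papers]
    no_self = [sum(1 for c in cites if author not in c) for cites in papers]
    return h_index(all_counts), h_index(no_self)

# Three papers by 'Watson', two of them self-cited once:
papers = [
    [{"Watson"}, {"Smith"}, {"Jones"}],  # 3 citations, 1 self-citation
    [{"Watson"}, {"Smith"}],             # 2 citations, 1 self-citation
    [{"Smith"}],                         # 1 citation
]
print(h_with_and_without_self(papers, "Watson"))  # (2, 1)
```

Even a single self-citation per paper can move the h-index, which is why reporting both figures, as Scopus does, is informative.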



Harzing's Publish or Perish

Harzing's Publish or Perish (PoP) is free, downloadable software that trawls Google for your publications. It is very inclusive, covering anything that is available on the Internet and found by Google: journals, books, chapters, conference proceedings and reports. At one level it is very easy to use: type in your name and press 'Enter'. However, if you have a very common name it will find everything by everyone with that name, which can mean checking through hundreds, perhaps thousands, of hits and also checking for duplicates. Clearly, with such inclusivity, PoP tends to be very generous in assessing your citations and h-index. It also provides other measures of citation, such as the g-index. In my experience it is unstable, providing vastly different estimates of the h-index on different occasions.

Thursday, 21 April 2011

Is the h-index a good measure of publication performance?

The h-index of publication performance was first proposed by Hirsch in 2005 as an improvement on total citations, on the grounds that total citations do not measure the impact of specific papers and are easily inflated by large numbers of citations to individual papers. In our paper in the Journal of Clinical Nursing, David Thompson and I argued that the h-index, while not perfect, does indicate a sustained body of work. This editorial has generated a commentary and a further editorial, by Hunt & Cleary, on the use of the h-index for journals as an alternative to the impact factor.


While the impact factor is unlikely ever to be superseded by the h-index as a measure of journal performance, the h-index is, for the same reasons that make it a superior measure of individual performance, also a superior measure of journal performance. The impact factor - which has the virtue of simplicity, but also of arbitrariness (ie the two-year window for citations, excluding those in the year of publication) - is relatively easily manipulated in much the same way as total citations (eg by self-citation within the journal and by maximising citation from other journals); the h-index, which is highly correlated with the impact factor, is less easy to manipulate. And just as more complicated and purportedly more meaningful measures of citation have been developed for journals, such as the Eigenfactor and the SCImago journal and country rank (which are less easy to understand than the impact factor and offer little by way of discrimination between journals), more complicated and concomitantly less simple measures related to the h-index have been developed, such as the g-index.


The h-index, then, is a measure of how often a particular set of n papers has been cited n times: someone with an h-index of 10 has 10 papers that have each been cited at least 10 times. Some of these papers may have been cited more than 10 times, but this does not influence the h-index score; to increase the h-index, an 11th paper must be cited 11 times and all of the other papers contributing to the h-index of 10 must also have been cited at least 11 times. Thus, the h-index is intractable compared with total citations, being less easily inflated by a high number of citations to any single paper; it is therefore harder for individuals and journals to manipulate than total citations or the impact factor.
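The calculation described above is simple enough to sketch in code. The following is an illustrative implementation only; the g-index shown uses one common formulation (the largest g such that the top g papers together have at least g-squared citations, capped here at the number of papers), which is an assumption made for the example:

```python
def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
    return h

def g_index(citations):
    # Largest g such that the top g papers together have
    # at least g*g citations (capped at the number of papers).
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(counts, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

# Ten papers cited 10 times each give an h-index of 10 ...
print(h_index([10] * 10))            # 10
# ... and adding an 11th paper with 200 citations does NOT raise it,
# because an h of 11 needs 11 papers with at least 11 citations each.
print(h_index([200] + [10] * 10))    # 10
# The g-index, by contrast, does reward the highly cited paper:
print(g_index([200] + [10] * 10))    # 11
```

The middle example shows exactly the property discussed above: a single heavily cited paper cannot inflate the h-index, which is what makes it harder to manipulate than total citations.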