Everyone is familiar with the cliché "publish or perish." But how many know what the H-index is? In recent years it has become a common metric for assessing scientists.
I had an interesting conversation the other day with an administrator about assessing the performance of scientists. We ultimately began talking about the citation index and the H-index as useful metrics of science impact (for those unfamiliar with these, the citation index is a measure of how many times a scientist’s work is cited by other scientists; the H-index is the largest number h such that the scientist has published h papers that have each been cited at least h times, so it reflects both the number of publications and the citations they attract).
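For readers who prefer to see the definition concretely, here is a minimal sketch of how the H-index could be computed from a list of per-paper citation counts (the function name and example numbers are my own illustration, not from any particular citation database):

```python
def h_index(citations):
    """Return the H-index: the largest h such that at least h papers
    have at least h citations each."""
    # Sort citation counts from highest to lowest, then find the last
    # position where the count is still at least its 1-based rank.
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 3, 0, 6, 1, and 5 times: three papers have at
# least 3 citations, but not four papers with at least 4, so h = 3.
print(h_index([3, 0, 6, 1, 5]))  # → 3
```

Note that a scientist with one wildly cited paper and nothing else still has an H-index of only 1, which is exactly the "both quantity and citations" property described above.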
Mr. Admin dismissed this metric as invalid because it does not distinguish between negative and positive citations. The implication is that a scientist who publishes poor (flawed) papers might still rack up citations, because citations that are critical of the work are mixed in with positive ones. So he was arguing that the citation rate is not a good measure of the quality of a person’s work.
Of course, the answer is that evaluators should also look at the work itself and where it’s published, in addition to the times it is cited.
But then I thought about this some more. Are papers that get criticized necessarily wrong, and do they necessarily fail to impact science? I think that many controversial papers, or ones that report novel findings, are immediately attacked (especially by competitors or those whose previous work is threatened). The consequent citations reflect criticisms of the work. Does this mean they are not relevant, or should not be counted in quantifying a scientist’s impact? On the contrary: I think they are very relevant and reflect the very essence of science impact.
Even papers that are based on flawed data or incorrect interpretations, but that stimulate a new direction of research, have a valuable impact. Early work using the standard techniques available at the time may later be reinterpreted when newer, better methods become available. But these early pioneers have performed an essential service to their field of science. We honor them by citing their work, even if the later body of work shows that the original theory needed some tweaking. Even if the initial ideas are completely overturned, the subsequent work often reveals new insights that might never have been discovered if those later researchers had not been stimulated into looking into the topic further.
Scientists who consistently publish thought-provoking work and challenge the current dogma tend to have high citation rates. Scientists who routinely do pedestrian work that just repeats what others have done tend not to be cited much, if at all. People who do really poor, flawed work tend not to get published.
There are other, possibly more valid criticisms of citation indices, such as the fact that every author on a cited publication receives a citation regardless of their contribution. Here is a compilation of some pros and cons of the H-index.
What do you think? Is the Science Citation Index/H-index a valid measure of a scientist’s impact? Do you know how many times your work has been cited? Are you aware that academic search committees often use this metric (along with others) to decide whether you are a worthy candidate? Do you regularly analyze your citations to see which papers (and topics) are being cited most/least? Do you use this information to modify your research questions/approaches/writing style/journals? Do you know how you stack up compared to your peers?