Everyone is familiar with the cliché "publish or perish." But how many know what the H-index is? In recent years it has become a common metric used to assess scientists.
I had an interesting conversation the other day with an administrator about assessing the performance of scientists. We ultimately began talking about the citation index and the H-index as useful metrics of science impact (for those unfamiliar with these, the citation index is a measure of how many times a scientist’s work is cited by other scientists; the H-index reflects both the number of publications and the number of citations per publication).
Mr. Admin dismissed this metric as invalid because it does not distinguish between negative and positive citations. The implication is that a scientist who publishes poor (flawed) papers might still get citations, but the citations critical of the work are mixed up with the positive ones. So he argued that the citation rate is not a good measure of the quality of a person's work.
Of course, the answer is that evaluators should also look at the work itself and where it’s published, in addition to the times it is cited.
But then I thought about this some more. Are papers that get criticized necessarily wrong, and do they necessarily fail to impact science? Many controversial papers, or ones that report novel findings, are immediately attacked (especially by competitors or those whose previous work is threatened). The consequent citations reflect criticisms of the work. Does this mean they are not relevant or should not be counted in quantifying a scientist's impact? On the contrary: I think they are very relevant and reflect the very essence of science impact.
Even papers that are based on flawed data or incorrect interpretations, but that stimulate a new direction of research have a valuable impact. Early work using standard techniques available at the time may later be reinterpreted when newer, better methods become available. But these early pioneers have performed an essential service to their field of science. We honor them by citing their work, even if the later body of work shows that the original theory needed some tweaking. Even if initial ideas are completely overturned, the subsequent work often reveals new insights that might never have been discovered if those later researchers had not been stimulated into looking into the topic further.
Scientists who consistently publish thought-provoking work and challenge the current dogma tend to have high citation rates. Scientists who routinely do pedestrian work that just repeats what others have done tend not to be cited much, if at all. People who do really poor, flawed work tend not to get published.
There are other, possibly more valid criticisms of citation indices, such as the fact that all the authors on a cited publication receive a citation regardless of their contribution. Here is a compilation of some pros and cons of the h-index.
What do you think? Is the Science Citation Index/H-Index a valid measure of a scientist’s impact? Do you know how many times your work has been cited? Are you aware that academic search committees often use this metric (along with others) to decide whether you are a worthy candidate? Do you regularly analyze your citations to see which papers (and topics) are being cited most/least? Do you use this information to modify your research questions/approaches/writing style/journals? Do you know how you stack up compared to your peers?
I am aware that people who assess the impact of our scientific work often use such indices. They say that such measures are a first attempt to quantify the quality of our scientific work, a try at quantifying what is hard to quantify; not perfect, but something. I doubt that the H-index fulfills this expectation; it mirrors only our rank in the "Vanity Fair". Two examples: (1) I recently reviewed an early-stage researcher's application for a scholarship. The applicant submitted seven papers, not a bad number for his career stage, all published in peer-reviewed journals. But a closer look shows that they contain basically the same figures and similar text. Seven publications representing only one research idea! I wonder how many points (sorry, citations) he will get for them. (2) A colleague showed me a review of one of his manuscripts today. The anonymous reviewer gives numerous comments. None of them really focuses on improving the content; instead they demand the incorporation of new citations and references. I could hardly say that all of them are unnecessary, but surprisingly (or not?) all seven new citations refer to publications by one and the same author and/or co-author. You are a fool if you think twice: citations, citations!
DrA:
Someone publishing the same data in seven different publications will bulk up their publication list, but this will not impact their citation rate.
The citation index is how many times an author's work has been cited by others. You cannot become highly ranked unless many other scientists cite your work frequently. You can look at citations of individual papers to see whether they've made an impact on the field. A paper that has been cited hundreds of times is clearly more influential than one that has only been cited 10 times (or not at all).
The h-index takes into account the number of publications relative to citations. Someone with an index of h has published h papers, each of which has been cited at least h times. An index of 25 is a respectable number in ecology, for example.
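That definition maps directly onto a short computation. Here is a minimal sketch in Python (the function name and the sample citation counts are illustrative, not from this post): sort a researcher's per-paper citation counts in descending order and find the largest rank h at which the count is still at least h.

```python
def h_index(citations):
    """Largest h such that the author has h papers each cited at least h times."""
    counts = sorted(citations, reverse=True)
    h = 0
    # At 1-based rank r, the r-th most-cited paper must have >= r citations
    # for h to reach r; once a count falls below its rank, we can stop.
    for rank, cited in enumerate(counts, start=1):
        if cited >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4: four papers with at least 4 citations each
print(h_index([25, 8, 5, 3, 3]))   # 3: one heavily cited paper doesn't raise h by itself
print(h_index([0, 0, 0]))          # 0
```

Note the second example: a single blockbuster paper cannot inflate the index, which is exactly why the h-index is often preferred over raw citation totals.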
In terms of citations or the H-Index, your applicant with seven papers will not be rated highly unless a lot of people cite one or more of those papers. If no one cites those papers, then this person will get no "points", and this says that other scientists don't think their work is worth citing.
So if someone is bragging about a paper they published in "Vanity Science," but no one ever cites it, one can conclude that it had little impact. Thus, even if someone pads their resume with duplicative papers, the citation rate or h-index would reveal how important their work really is.
With regard to self-citations, citation databases such as Web of Science can exclude them from the counts.
No index is perfect...
No, no index is perfect. But if your work is going to be assessed with a specific index (such as the citation index), then it would behoove you to keep track of yours and figure out how to improve it.
If you are not assessed with any kind of index (how many pubs, how many citations), then you may not need to worry. However, I would think any scientist would want to know if their work is making an impact on their field--and numbers of citations give a reasonable idea.
As I said in my post, the citation index is a good way to see which of your papers are being cited--and therefore of interest to other researchers--helpful information for directing your work.
The Successful Researcher has a lot of good links that are useful to students and young scientists in improving their skills.