Thursday, February 3, 2011
The Catch-22 Culture
Let me be clear about what I mean. I'm not suggesting that there should be no oversight at all or that government scientists should not be held accountable for their work. What I'm suggesting is that an "accountability regime", i.e., one in which accountability takes precedence over the science mission, will eventually backfire and lead to a workforce that avoids the very activities needed to produce quality science. One of the reasons is that scientific productivity expectations are pretty low in comparison to those at academic institutions. For example, in my science unit, PIs are expected to submit only one paper per year (submit, not publish) to be rated fully successful in job performance. Theoretically, one could resubmit the same paper each year and meet the basic requirement. My point is that when you combine low performance requirements with excessive rules of accountability, there is a great danger that people will decide the easiest (and safest) route is to do as little as possible.
Over the past ten years, government scientists have been increasingly audited and subjected to more paperwork affirming that we have not made errors. To give you an idea of what a government scientist must go through prior to submitting a paper to a journal, here is the path a manuscript now takes (the process has evolved over the past few years). First, the author submits the manuscript to her supervisor, who checks it for technical quality and policy issues and then sends it out for peer review (two reviews are required). Depending on how diligent the supervisor is, these reviews may be accomplished in two or three weeks or languish for months in reviewer purgatory. Eventually, the reviews come in and are forwarded to the author for reconciliation. The author must address all comments (no matter how bone-headed they may be) and prepare a reconciliation document detailing how changes were made (or not). Then the package containing the original and revised versions of the manuscript, the reviews, the reconciliation, and all dated email correspondence is forwarded to the science unit head, who goes over everything and approves or disapproves it. If approved, it then goes to the bureau approving official, who again goes through the entire package. If your manuscript topic is deemed especially "sensitive", it undergoes even more intense scrutiny.
At any of these higher levels, there may be comments on the technical aspects of the manuscript; in some cases, these comments are helpful, in other cases, not. In most cases, the officials are not experts in the science topic and may raise inappropriate technical questions out of ignorance. Others suggest editing changes that are themselves grammatically incorrect. Even though the author may be able to answer those questions and explain why a requested change is scientifically or grammatically wrong, the time involved in addressing them adds up. If the author tries to ignore these comments, the next official in line will kick back the manuscript and demand that all of them be addressed. So the author sometimes spends a lot of time responding to questions and suggested changes that do nothing to improve the manuscript.
Once the bureau official finally signs off, then you are free to submit to a journal where your manuscript will go through the usual gauntlet of editorial and reviewer raking-over-the-coals. Getting collegial reviews prior to journal submission can be helpful, but all the time and paperwork involved in getting approvals at multiple levels is not.
You may be wondering at this point: why should a manuscript that will undergo a thorough review by peers and journal editors (i.e., a real review) upon submission also need to be reviewed and approved beforehand by people who are usually unfamiliar with the field of study? We ask that question all the time.
The novel "Catch-22" often comes to mind. This classic novel by Joseph Heller is a critique of bureaucratic logic and operation. It follows the protagonist, Yossarian, a B-25 bombardier in World War II. Yossarian is desperate to get out of the war and tries to figure out how to avoid flying missions. However, the military has a rule, Catch-22, which prevents soldiers from avoiding combat:
"There was only one catch and that was Catch-22, which specified that a concern for one's own safety in the face of dangers that were real and immediate was the process of a rational mind. Orr was crazy and could be grounded. All he had to do was ask; and as soon as he did, he would no longer be crazy and would have to fly more missions. Orr would be crazy to fly more missions and sane if he didn't, but if he were sane he had to fly them. If he flew them he was crazy and didn't have to; but if he didn't want to he was sane and had to."
As Yossarian flies more missions, his commanders keep raising the number of missions required before an airman can be sent home. This and other manifestations of the military bureaucracy are variations on the Catch-22 theme. Enforcers of the crazy rules never have to prove that their actions against rule violators are actually supported by any provision of Catch-22; they can punish violators with impunity. The ultimate irony (in the novel) is that Yossarian finally realizes that the Catch-22 rule doesn't exist--but precisely because it isn't real, it can't be refuted or overturned.
In the end, what you get is an army (or workforce) that spends all its time following insane rules (or trying to get around them) instead of doing what they were hired to do--fly their missions (or do science).
Catch-22 describes a paradoxical situation in which an individual needs something that can only be had by not being in that situation. For government scientists, the Catch-22 is that we are asked to prove our work is unassailable or essential, but the very act of proving it demonstrates that there is reason to question it. The example I gave in a previous post was that scientists are required to provide justification for travel to an international conference. Giving a paper is not sufficient. We must demonstrate how our attendance benefits the agency and/or how our failure to attend will have negative consequences (for the agency).
Accountability rules are driven by bureaucrats' fear that a scientific report will turn out to contain a flaw that becomes a public embarrassment, or that a member of Congress will question travel expenditures. However, the more we try to "prove" the absence of error, waste, bias, or fraud, the less convincing we are. It's impossible to prove a negative. That's why our justice system is based on the presumption of innocence: the burden of proof is on the accuser, not the accused.
If there are people who question the validity or integrity of a scientific report, why not require them to prove that there has been error, bias, or fraud? Why put the burden on scientists to prove they are innocent of the charge, even before the charge has been made? As I suggested above, this burden will eventually chill scientific endeavor, especially for high-profile topics or new research directions. We've already seen several instances of climate scientists who have been challenged, grilled, and even threatened--and their institutions and agencies have not always leaped forward to support them and, in some cases, have even fired them.
Instead of promoting openness, accountability regimes create a climate of fear, paranoia, and confusion. They are antithetical to the mission of science.
Image/video credits: Catch-22 by Joseph Heller (Simon and Schuster); movie clip from Catch-22 (Paramount Pictures)