So why would so much effort be spent ensuring that government scientists do not commit fraud (or assuring the public of this)? It may partly be due to the efforts of special interest groups who have attacked scientists and scientific findings (e.g., climate change), charging that the science is poorly done or that scientists are biased. That's a whole other topic, but it's what I suspect is behind this renewed emphasis on integrity in science. Bureaucrats fear embarrassing incidents that lead to Congressional inquiries and funding cuts (the government department my agency belongs to was recently embarrassed, although a high-level bureaucrat was responsible--not scientists). The current administration has also made integrity in science an important focus, which has been addressed with new rules, greater scrutiny (of scientists), and special offices (to oversee the scrutiny).
Unfortunately, the establishment of an office to oversee scientific integrity, new rules and regulations, and increased scrutiny of science products sends the opposite message. If scientists are mostly trustworthy and doing their jobs properly, why would there need to be a special office to ensure that our work is free of bias or fraudulent actions? Why the need for new rules and regulations now? Is the public really assured by the creation of another bureaucracy?
Reading the materials on scientific fraud, one gets the impression that government scientists are ignorant of basic scientific guidelines and need to be given a code of conduct--not what I imagine the rule-makers want to convey. There is an official "Code of Scientific Conduct" for employees in my Department, which has recently been updated and expanded. It's pretty long, but here are some excerpts (only the points that relate specifically to scientists):
(1) I will place quality and objectivity of scientific and scholarly activities and reporting of results ahead of personal gain or allegiance to individuals or organizations.
(2) I will maintain scientific and scholarly integrity and will not engage in fabrication, falsification, or plagiarism in proposing, performing, reviewing, or reporting scientific and scholarly activities and their products.
(3) I will fully disclose methodologies used, all relevant data, and the procedures for identifying and excluding faulty data.
(4) I will adhere to appropriate professional standards for authoring and responsibly publishing the results of scientific and scholarly activities and will respect the intellectual property rights of others.
(5) I will welcome constructive criticism of my scientific and scholarly activities and will be responsive to their peer review.
(6) I will provide constructive, objective, and professionally valid peer review of the work of others, free of any personal or professional jealousy, competition, non-scientific disagreement, or conflict of interest. I will substantiate comments that I make with the same care with which I report my own work.
These are certainly important guidelines, but is there any scientist who's not aware of these basic rules of conduct? If I were a non-scientist, I would wonder why government scientists must be reminded of these points and, moreover, why the government would employ a scientist who must be reminded that it's wrong to break any of these guidelines.
Government scientists struggle to keep up with the changing rules. It's something of a joke among government scientists that the rules we must follow are moving targets. Even if you followed the previous rule about something and are caught in the middle of a rule change, you can get your knuckles rapped and be required to redo things under the new rule. The change usually involves a new rule or new step that increases the effort required to get something approved...rarely the opposite. The specific rules guiding the product review process, for example, change constantly, so if your manuscript is caught mid-stream in a change-over, you may be sent back to square one to start all over again. On one rare occasion, a rule change actually removed a step: abstracts submitted to conferences originally had to go through the same process as manuscripts (two peer reviews, approvals at multiple levels), but this was eventually modified to require only supervisor and science unit head approval. As you might imagine, under the old rule it was a nightmare trying to get an abstract reviewed and approved in time to meet a conference deadline.
Another concern is the misinterpretation by non-scientists of the ever-evolving body of science. The mission of science involves exploration, discovery, and risk-taking. What we report today in journals will likely be modified in the future (or even rejected) as more information becomes available. Scientists also often disagree about the interpretation of results. Eventually, however, one hypothesis prevails; it stands the test of time while competing hypotheses fall by the wayside, one by one. Even then, details continue to emerge from research, which leads to continual modification.
This process is often misunderstood by non-scientists who expect results that are final and written in stone; they interpret any modification of a theory as evidence of wrongdoing by prior researchers. It's easy to imagine a government study conducted today with current methods and instrumentation later being shown to have been incomplete or even wrong by a future study using a new methodology. This situation is not only common, but expected in science. However, scientists don't fault early workers--we usually view them as pioneers, even if their original idea is eventually shown to be incorrect. Their hypothesis and initial efforts may have opened an entirely new line of research that ultimately led to important discoveries. Non-scientists (including the media) seem not to understand this. Special-interest groups have exploited this ignorance and used it to criticize scientists working on controversial topics. The hockey stick controversy surrounding Michael Mann is just one example.
A change in a scientific concept as more data are collected could be naively interpreted as scientific fraud on the part of the original researcher. For example, a new study reports conflicting data, which leads to the assumption that the original findings must have been the result of either mistakes or fraud on the part of the original scientist. Or at least that's what some critics charge...particularly the ones who want to cast doubt on the integrity of scientists and the validity of their work. That's essentially what happened in Mann's case.
The creation of doubt in the minds of the public (about a scientific issue) is a powerful strategy that special-interest groups have learned to use. The book Merchants of Doubt does a great job of explaining this technique. It was used by the tobacco industry (smoking doesn't cause cancer), by critics of the ozone hole and acid rain (they don't exist), by proponents of DDT (it doesn't damage the environment), and by climate deniers (it's not happening). If you haven't read this book, I highly recommend it.