The other day, while working in my office, I had an experience that we all have from time to time.
I usually have my laptop open to a manuscript I'm writing and also have my desktop computer turned on to check email or to use the internet to search for information. As I was writing a sentence on my laptop, I realized that I needed to do a quick fact check. I turned and scooted my chair over to my desktop computer to search for the information I needed. As I moved to do this task, I glanced at my watch to gauge how much time I had until lunch...and had a few other random thoughts. This all happened in a span of 1 or 2 seconds. By the time I touched the mouse connected to my desktop computer, I had forgotten why I was switching from laptop to desktop. Annoyed, I turned back to my laptop and looked at the last sentence I had written...and this, of course, jogged my memory, and I quickly performed the necessary search.
When these things happen, I put them down to a "senior moment". People often experience this when they are doing some activity in one room and decide to take a moment to do something else in another room. By the time you reach the other room, you've forgotten why you went there in the first place. How often have you opened the refrigerator to get something and ended up standing there staring, trying to remember what it was you wanted?
Such lapses happened to me when I was younger, but not frequently. They still don't happen frequently, but when they do, they are scarier now. From reading about the brain and cognition, however, I know that what is happening has to do with short-term memory and that it is not uncommon to fail to incorporate certain information into short-term memory---hence these memory lapses. We are told not to worry if we forget where we put our keys; only if we forget what our keys are for is there likely a serious problem.
Some think that these little lapses increase as our memory system becomes overloaded with information. With the vast amounts of information coming into our brains via digital communication systems and electronic toys, we are perhaps suffering from cognitive overload. At least I hope that's the explanation. It certainly makes sense. I often feel that to learn some new fact...really learn it and retain it, get it into long-term memory...I need to make room for it by eliminating some other bit of information. I don't remember feeling this way when I was a student, despite the fact that I was learning and memorizing lots of things. There seemed to be plenty of capacity in my brain. But during my student years, the information overload we are now experiencing was still years away.
I've also noticed that older people sometimes have difficulty performing a task (e.g., driving) while talking about a complex topic. Their attention is split between two fairly complex operations, and they have trouble performing one or both well. The term multi-tasking is often invoked as a positive process, i.e., the ability to multi-task is seen as advantageous. I doubt that it is. In fact, I would suggest that your short-term memory is being overloaded while multi-tasking, so that whatever you are doing, especially if it depends on learning something new, is being negatively affected by the other distractions. It may not feel this way to you because the effect may be subtle, e.g., taking you ten percent longer to complete the primary task than if you had no distractions.
Anyway, I've decided that I need to minimize the cognitive overload by focusing on one task at a time, paying attention to what I'm doing at that moment, and not mentally running in multiple directions.
Saturday, February 26, 2011
Tuesday, February 15, 2011
The Disadvantages of Irregular Writing Schedules
How many of you were either explicitly taught or learned by imitation (of your adviser or some other mentor) that you should begin writing only after you have all your data in hand?
That was the procedure I was taught. I was further advised in my current job (government) that the accepted procedure for PIs was to conduct research projects in five-year cycles, with the first three to four years devoted to the design and conduct of the research, followed by a year (or two!) of analysis and writing. This particular schedule was designed by bureaucrats who don't write or publish themselves. They were quite adamant that this was the way to do research and publish it. I ignored this advice.
I give the above example to illustrate an extreme case of putting off writing for years. This extreme schedule would sound like professional suicide to most scientists. If you think about it, though, this pattern is not unlike the schedule students follow in their dissertation work: several years of data gathering and analysis, followed by an intense period of writing--often compressed into a few months at the end of their program. Writing something every three or four years clearly affects overall productivity. Less obvious is that irregular writing can have a serious effect on one's ability to write well. Imagine if you never wrote anything substantive (other than email or minor documents not meant for publication) for three years and then were faced with a large writing project (the product of years of research). This is a recipe for disaster...or at least a bad case of writer's block. No wonder that many Ph.D. students find themselves paralyzed at the point they begin writing their dissertations.
The Ph.D. student then goes on to a post-doc position in which they are expected to conduct and publish research on a shorter timeline, e.g., two years. Some succeed, but many fail to publish according to expectations (in my experience). After this, they (may) find themselves in an academic or research position in which it's expected that they publish three or more papers or reports per year. Many attempt to continue the writing schedule they learned in graduate school: spending most of their time conducting the research, then madly writing for a short time at the end of the project, otherwise known as "binge writing". Those who continue the binge writing approach, which worked (sort of) during graduate school, find it increasingly difficult to meet more ambitious writing goals. Some work never gets published; as these unpublished works pile up, their guilt and frustration mount over time.
If you don't write on a regular schedule, you may find yourself struggling to produce anything other than a pedestrian manuscript. Since most of the top-tier science journals reject 80% or more of submitted manuscripts, weakly written manuscripts don't have a chance (and may even be rejected by mediocre journals trying to improve their impact factor rating). Those scientists who regularly hone their writing skills and put as much (or more) effort into crafting compelling papers as they do into designing and conducting their research are going to take up that limited journal space.
Just as a musician must practice constantly to sustain and improve their skills, so must a writer. As I'll explore in later posts, good technical writing (or any writing, for that matter) requires constant practice, improvement, and exploration.
Unfortunately, many scientists fail to recognize how an irregular writing schedule affects their overall productivity, their writing skills, and their self-confidence. How do we break bad writing habits, especially if such habits are common among our peers and encouraged by our mentors? Well, we've already started by first recognizing that binge writing is counter-productive.
In the following posts, I'll look more closely at this and other barriers to productive writing and some skills a technical writer must develop to ensure a long career of productive, enjoyable writing.
In the meantime, see this list of books for guidance on writing. The list is designed for students preparing their thesis or dissertation, but contains suggested reading that is useful for anyone at any stage of their career. Interestingly, the list contains two books that I've recommended in this blog: If You Want to Write by Brenda Ueland and How to Write a Lot by Paul Silvia.
Monday, February 14, 2011
Three Easy Steps
We're talking about good writing habits. In the last post, I emphasized the importance of developing and sticking to a regular writing schedule. In the next few posts, we'll take a closer look at how scientists learn to write...in particular, how they develop a writing schedule.
As I look back on my science career, my satisfaction over the body of work that I've published is somewhat dampened by the knowledge of all those papers (and books) that were never written. I've got the data; they're filed away in lab and field notebooks, spreadsheets, and half-finished manuscripts. In fact, I would estimate that for every paper I've published, there are five more that were never written. Most of my colleagues of the same age would admit to the same. A lot of the unpublished data were collected during and just after I finished my Ph.D., when I was bursting with ideas, questions, and energy. In some cases, these were side-projects that I carried out alongside a primary research goal. In others, they were stand-alone projects that were separately funded. All of these studies were completed, but the work was never written up--for various reasons. Often, it was lack of time--more specifically, lack of a period in the project schedule set aside for writing. Back then, I thought the writing should take place after all data were collected. A typical project might have three months at the end designated for writing things up. However, what usually happened was that I had to spend those three months completing some aspect of the research, redoing some analysis, writing the next grant proposal, and/or initiating the next research project. There never seemed to be time for writing manuscripts.
I now think that I could have taken most of this research to its logical conclusion--publication--if I had only developed better writing skills and habits early on. In the last post, I made the point that having a regular writing schedule (e.g., 2 hours per day, every weekday) was essential for sustained productivity. Part of the problem I had during my early research years was that I believed what I had been taught about how to write up research. My graduate advisers taught me the following procedure: design the study, conduct the study, write up the study. In that order. Only when I had all data in hand should I begin thinking about writing. Three easy steps, taken in sequential order. Sounds logical. It was never suggested to me that I could (or should) begin writing the moment I had an idea for a study.
Friday, February 11, 2011
Binge Writing
Now that I've gotten my thoughts about bureaucratic insanity, government accountability, and related topics off my chest (for now), I'd like to take a break and return to a more useful topic--writing.
I've just finished reading the book "How to Write a Lot: A Practical Guide to Productive Academic Writing" by Paul Silvia, a psychologist who does research and publishes it, and who also writes books about psychology...and writing. This is a great book for beginning writers, especially those who have a bit of fear or some misconceptions about the writing process. Silvia provides practical advice, encouragement, and specific tips for being productive and getting your stuff published. Although he writes specifically from the viewpoint of a psychological researcher, his points are applicable to any technical science writing.
Some of his advice I've covered in previous posts (check out the Useful Posts list on Writing Strategies in the nav panel). His best piece of advice, with which I wholeheartedly agree, is to establish a specific time each day to write (e.g., 8 to 10 am every Mon., Wed., Fri.). This approach means that you decide on this time and stick to it no matter what. You don't schedule meetings, allow interruptions, take phone calls, work in the lab, or read email during these hours. If a student wants to schedule their general exam at 9 am on Wednesday, for example, you tell them you cannot meet during the hours you've set aside for your writing... and stick to it. People may accuse you of being rigid, selfish, or weird. So what? Simply say that you are available between the hours of 10 am and 5 pm and all day Tues. and Thurs. (or whenever). That's plenty of opportunity to schedule non-writing activities.
Wannabe authors usually express disbelief that the secret to productive writing is setting and sticking to a regular schedule. But that's the secret. This approach does more than just ensure that you spend sufficient time writing each week; it ensures that you practice writing on a regular schedule. As I've talked about in previous posts, deliberate practice is the secret to becoming an expert at something--whether it's sports, music, or writing. It reportedly takes on the order of 10,000 hours of deliberate practice to become an expert at a skill. People who become published authors of fiction often kept journals and/or wrote stories as young children and developed the habit of writing daily at an early age. That kind of discipline not only leads to becoming an accomplished writer earlier than most, it establishes a behavior pattern that ensures productivity in later years.
If you are serious about increasing your writing productivity, you cannot ignore this advice.
Unfortunately, many of us (scientists) engage in something called "binge writing". We believe that we need large blocks of time in order to write. Consequently, we put off a writing project until such a large block of time appears on the horizon. Some people target holidays, weekends, or sabbaticals as the time to tackle a writing project. This is a mistake, according to Silvia. Part of the reason is that binge writing is exhausting. Something that exhausts us becomes a chore. We tend to avoid chores, to procrastinate, or to doubt our ability to finish on time. Another reason is that we are rarely able to complete a large writing project (e.g., a technical paper) during one of those fabled "large blocks of time". Then we are dependent upon another "large block of time" (or several) to complete the job. The result is a file drawer full of half-finished papers, book chapters, and books. Sound familiar?
I know what some of you are thinking at this point: "I manage to write without a set writing schedule and do get some papers out."
My response would be, "But are you happy with your productivity? Do you find yourself proclaiming to colleagues and co-workers that you have gotten caught up with all your writing tasks and met all your goals for the year (getting your papers finished and submitted)? Or do you more often say (wistfully) that you got some writing done last week (last month, last year), but not as much as you'd hoped?"
Another aspect of binge writing is that writers do it during times they could (or should) be doing something else...like being with family, enjoying their vacation, or relaxing. The regular writer writes only during their scheduled writing hours, e.g., 8 to 10 am weekdays. Then they are done. Their evenings and weekends are free. They do not look forward to holidays as a time to tackle a writing chore, but as a time to relax and enjoy themselves. They don't feel guilty about not writing on the weekend, because they finished their writing goals for the week at 10 am on Friday.
My recent experience with blogging has convinced me of the value of a regular writing schedule. I have regularly written posts for this blog--an average of 2.2 posts per week for the past two years. When I started, I had no idea what I was doing or what I wanted to say, and was pretty inexperienced with respect to the blogosphere. All I knew was that for a blog to be successful, it had to keep moving forward--like a shark that has to keep swimming to stay alive. I had to post something at least a few times per week, every week, to keep the blog alive. A couple of posts per week doesn't sound like much, but the volume of material accrued over two years is easily equivalent to a book (maybe 2 books). Even though I did not write a post a day, I did write something almost every day. Each evening I would open up a draft post and add a bit more to it, or revise, or look up information or links, or move some material to a new draft post, or create images to illustrate the post. This regular activity has generated an amazing amount of material.
In the process of blogging, I naturally developed a writing pattern that is a departure from my usual binge writing of scientific articles.
Before I began blogging, I thought my writing productivity was pretty good (considering my field, ecology, and its long-term studies). I've published around 75 journal articles and book chapters. That sounds like a lot of writing, but is it? If we use an average word count per article of 6,000 (excluding literature cited), then I've written about 450,000 words over the past 30 years. Most of these words were written in the "binge writing" mode described above. That's 15,000 words per year, on average. Now consider my blog posts: 213 posts at an average of, say, 1,000 words per post = 213,000 words in 2 years, or about 100,000 words per year!
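For anyone who wants to check the arithmetic, the comparison above can be sketched in a few lines of Python. The figures are the rough estimates given in the text (75 articles at ~6,000 words over 30 years; 213 posts at ~1,000 words over 2 years), not exact counts:

```python
# Rough estimates from the text, not exact counts
papers = 75
words_per_paper = 6_000
paper_years = 30

posts = 213
words_per_post = 1_000
blog_years = 2

# Total words and average yearly rate for each writing mode
paper_words = papers * words_per_paper      # 450,000 words
paper_rate = paper_words / paper_years      # 15,000 words/year

blog_words = posts * words_per_post         # 213,000 words
blog_rate = blog_words / blog_years         # 106,500 words/year

print(f"Binge-written papers: {paper_rate:,.0f} words/year")
print(f"Steady blogging:      {blog_rate:,.0f} words/year")
print(f"Ratio: {blog_rate / paper_rate:.1f}x")
```

The steady, small-dose schedule comes out roughly seven times more productive per year, even before considering the difference in effort and exhaustion.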
I was amazed when I did these calculations. Granted, there's a big difference between writing a blog and a science article. But in terms of getting one's thoughts organized and writing the narrative in a logical and compelling manner, the two are pretty similar. The most important difference is that I wrote my science papers in bursts of activity. I often sacrificed weekends and holidays to work on papers because I thought I could write only when I had a large block of time. Not surprisingly, I would have to set aside an unfinished writing project until the next big block of time and then waste time getting reacquainted with the project--figuring out where I had left off. This pattern would be repeated several times until I finally finished. I would be sick of the paper by then.
In contrast, I blogged at a fairly steady pace of 2,000 to 3,000 words per week, on average. I had a regular writing schedule and looked forward to it. I might spend only 15 to 30 minutes per day writing a paragraph or two or jotting down some notes. But it added up. And it did not wear me out.
I think my blogging experience has put the final nail in the coffin of any residual inclination to binge write.
Tuesday, February 8, 2011
Guilty Until Proven Innocent
My agency has just created an "Office of Science Quality and Integrity". One of its functions is to oversee how science is carried out and how science products are developed and released to the public. Many of the protocols revolve around scientific fraud, what constitutes fraud, and the penalties for its commission. The entire tone of the science integrity rules suggests that fraud is rampant in the agency. We are guilty until proven innocent. This approach strikes me as counter-productive in any profession, but particularly in science. More than in most professions, scientists are trained to be unbiased, honest, and meticulous--characteristics that are integral to the practice of science. It is drilled into us at every stage of training that fraud or bias of any kind will not be tolerated. Yes, there are dishonest scientists, but I think they are rare. People who are inclined to cheat, cut corners, or commit outright fraud are simply not attracted to the field of science.
So why would so much effort be spent ensuring that government scientists do not commit fraud (or assuring the public of this)? It may partly be due to the efforts of special interest groups who have attacked scientists and scientific findings (e.g., on climate change), charging that the science is poorly done or that scientists are biased. That is a whole other topic, but it is what I suspect lies behind this renewed emphasis on integrity in science. Bureaucrats fear embarrassing incidents that lead to Congressional inquiries and funding cuts (the government department my agency belongs to was recently embarrassed, although a high-level bureaucrat was responsible--not scientists). The current administration has also made integrity in science an important focus, which has been addressed with new rules, greater scrutiny (of scientists), and special offices (to oversee the scrutiny).
Unfortunately, the establishment of an office to oversee scientific integrity, new rules and regulations, and increased scrutiny of science products sends the opposite message. If scientists are mostly trustworthy and doing their jobs properly, why would there need to be a special office to ensure that our work is free of bias or fraudulent actions? Why the need for new rules and regulations now? Is the public really assured by the creation of another bureaucracy?
Reading the materials on scientific fraud, one gets the impression that government scientists are ignorant of basic scientific guidelines and need to be given a code of conduct--presumably not what the rule-makers want to convey. There is an official "Code of Scientific Conduct" for employees in my Department, which has recently been updated and expanded. It's pretty long, but here are some excerpts (only the points that relate directly to scientists):
(1) I will place quality and objectivity of scientific and scholarly activities and reporting of results ahead of personal gain or allegiance to individuals or organizations.
(2) I will maintain scientific and scholarly integrity and will not engage in fabrication, falsification, or plagiarism in proposing, performing, reviewing, or reporting scientific and scholarly activities and their products.
(3) I will fully disclose methodologies used, all relevant data, and the procedures for identifying and excluding faulty data.
(4) I will adhere to appropriate professional standards for authoring and responsibly publishing the results of scientific and scholarly activities and will respect the intellectual property rights of others.
(5) I will welcome constructive criticism of my scientific and scholarly activities and will be responsive to their peer review.
(6) I will provide constructive, objective, and professionally valid peer review of the work of others, free of any personal or professional jealousy, competition, non-scientific disagreement, or conflict of interest. I will substantiate comments that I make with the same care with which I report my own work.
These are certainly important guidelines, but is there any scientist who is not aware of these basic rules of conduct? If I were a non-scientist, I would wonder why government scientists must be reminded of these points--and, moreover, why the government would employ a scientist who needs such a reminder in the first place.
Government scientists struggle to keep up with the changing rules. It's something of a joke among us that the rules we must follow are moving targets. Even if you followed the previous rule about something and are caught in the middle of a rule change, you can get your fingers rapped and be required to redo things under the new rule. The change usually involves a new rule or new step that increases the effort needed to get something approved...rarely the opposite. The specific rules guiding the product review process, for example, change constantly, so if your manuscript is caught mid-stream in a change-over, you may be sent back to square one to start all over again. On one rare occasion, a rule change actually removed a step: abstracts submitted to conferences originally had to go through the same process as manuscripts (two peer reviews, approvals at multiple levels), which made it a nightmare to get an abstract reviewed and approved in time to meet a conference deadline; the rule was eventually modified to require only supervisor and science unit head approval.
Another concern is the misinterpretation by non-scientists of the ever-evolving body of science. The mission of science involves exploration, discovery, and risk-taking. What we report today in journals will likely be modified (or even rejected) in the future as more information becomes available. Scientists also often disagree about the interpretation of results. Eventually, however, one hypothesis prevails; it stands the test of time while competing hypotheses fall by the wayside, one by one. Even then, details continue to emerge from research, leading to continual modification.
This process is often misunderstood by non-scientists who expect results that are final and written in stone; they interpret any modification of a theory as evidence of wrongdoing by prior researchers. It's easy to imagine that a government study conducted today with current methods and instrumentation could later be shown to have been incomplete or even wrong by a future study using a new methodology. This situation is not only common, but expected in science. However, scientists don't fault early workers--we usually view them as pioneers, even if their original idea is eventually shown to be incorrect. Their hypothesis and initial efforts may have opened an entirely new line of research that ultimately led to important discoveries. Non-scientists (including the media) seem not to understand this. Special-interest groups have exploited this ignorance and used it to criticize scientists working on controversial topics. Michael Mann and the "hockey stick" controversy is just one example.
A change in a scientific concept as more data are collected could naively be interpreted as scientific fraud on the part of the original researcher. For example, a new study produces conflicting data, which leads to the assumption that the original findings must have been the result of either mistakes or fraud. Or at least that's what some critics charge...particularly those who want to cast doubt on the integrity of scientists and the validity of their work. That's essentially what happened in Mann's case.
The creation of doubt in the minds of the public (about a scientific issue) is a powerful strategy that special-interest groups have learned to use. The book Merchants of Doubt does a great job of explaining this technique. It was used by the tobacco industry (smoking doesn't cause cancer), by critics of the ozone hole and acid rain (they don't exist), by proponents of DDT (it doesn't damage the environment), and by climate deniers (it's not happening). If you haven't read this book, I highly recommend it.
Sunday, February 6, 2011
Audit vs. Accountability
This series of posts is about accountability regimes and their impact on science. I thought it might be worthwhile at this point to distinguish between audit and accountability.
According to Gaye Tuchman, author of Wannabe U: Inside the Corporate University, not all audits involve accountability and not all audits are coercive. She gives the example of a university classroom and the imposition of a pop quiz by the professor who just wants to know how well the class is understanding the coursework. If the test scores are not included in the final grade, the test would be an audit. If the professor incorporates the scores in the final grade, she would be imposing accountability in connection with the audit. Tuchman goes on to give other examples of audits that are clearly cases of accountability: an audit by the Internal Revenue Service could have serious consequences for those who have cheated on their tax return, as well as those who have not kept proper records or whose jobs are eminently auditable (a consultant, for example). All would be held accountable for any errors, oversights, or deliberate omissions and made to pay a penalty.
The comparable audit vs. accountability situation for scientists might entail a cataloging of the number and quality of publications. If the information is used by the scientist's department simply to promote itself, the publication data are just an audit. If, on the other hand, the data are used to gauge whether the scientist should be promoted (or retained), then the information is being used in an accountability sense. One amusing example Tuchman describes is the British system of auditing the individual scholarship of professors. The scholars are ranked (by a specific system) on a three-point scale and given one, two, or three stars reflecting the importance of their work. This star-ranking system prompted sarcasm from some academic observers, e.g., that scholarship is ranked "in terms of national, international, and inter-galactic importance". Another (presumably tongue-in-cheek) proposal ranks scientists using a celebrity-type approach: soap opera stars, other TV stars, Hollywood stars, and Oscar-Emmy winners.
But accountability regimes are not always amusing. Consider instances in which scientists are called to accountability for findings that certain special interest groups dislike. More on this later......
Thursday, February 3, 2011
The Catch-22 Culture
The accountability fervor of recent years is having an increasingly chilling effect on government science. In previous posts, I explained how scientists spend growing amounts of time justifying their work, filling out paperwork, attending training courses, and generally jumping through hoops. In addition to taking time away from the actual conduct of science, these accountability activities lead to frustration and eventual burnout. When it becomes so overwhelmingly difficult (and expensive) to get a science product published or to get permission to travel to an international conference, some government scientists will cut way back on their efforts or just give up.
Let me be clear about what I mean. I'm not suggesting that there should be no oversight at all or that government scientists should not be held accountable for their work. What I'm suggesting is that an "accountability regime", i.e., one in which accountability takes precedence over the science mission, will eventually backfire and lead to a workforce that avoids the very activities needed to produce quality science. One reason is that scientific productivity expectations are pretty low in comparison to those at academic institutions. For example, in my science unit, PIs are expected to submit only one paper per year (submit, not publish) to be rated fully successful in job performance. Theoretically, one could resubmit the same paper each year and meet the basic requirement. My point is that when you combine low performance requirements with excessive rules of accountability, there is a great danger that people will decide the easiest (and safest) route is to do as little as possible.
Over the past ten years, government scientists have been increasingly audited and subjected to more paperwork affirming that we have not made errors. To give you an idea of what a government scientist must go through prior to submitting a paper to a journal, here is the path a manuscript now takes (the process has evolved over the past few years). First, the author submits the manuscript to her supervisor who checks it for technical quality and policy issues and then sends it out for peer review (2 reviews are required). Depending on how diligent your supervisor is, these reviews may be accomplished in 2 or 3 weeks or languish for months in reviewer purgatory. Eventually, the reviews come in and they are forwarded to the author for reconciliation. The author must address all comments (no matter how bone-headed they may be) and prepare a reconciliation document detailing how changes were made (or not). Then the package containing original and revised versions of the ms, the reviews, the reconciliation, and all dated email correspondence are forwarded to the science unit head who goes over everything and approves or disapproves it. If approved, it then goes to the bureau approving official who again goes through the entire package. If your manuscript topic is deemed especially "sensitive", then it undergoes more intense scrutiny.
At any of these higher levels, there may be comments on the technical aspects of the manuscript; in some cases, these comments are helpful, in other cases, not. In most cases, the officials are not experts in the science topic and may raise inappropriate questions about technical aspects out of ignorance. Others suggest editing changes that are grammatically incorrect. Even though the author may be able to answer those questions and explain why a requested change is not correct scientifically or grammatically, the time involved in addressing these various questions adds up. If the author tries to ignore these, the next official in line will kick back the manuscript and demand all comments be addressed. So the author sometimes spends a lot of time addressing questions and suggested changes that do not improve the manuscript.
Once the bureau official finally signs off, then you are free to submit to a journal where your manuscript will go through the usual gauntlet of editorial and reviewer raking-over-the-coals. Getting collegial reviews prior to journal submission can be helpful, but all the time and paperwork involved in getting approvals at multiple levels is not.
You may be wondering at this point why a manuscript that will undergo a thorough review by peers and journal editors (i.e., a real review) when it is submitted for publication needs to be reviewed beforehand and approved by people who are usually unfamiliar with the field of study. We ask that question all the time.
The novel "Catch-22" often comes to mind. This classic novel by Joseph Heller is a critique of bureaucratic logic and operation. It follows the protagonist, Yossarian, a B-25 bombardier in World War II. Yossarian is desperate to get out of the war and tries to figure out how to avoid flying missions. However, the military has a rule, Catch-22, which prevents soldiers from avoiding combat:
"There was only one catch and that was Catch-22, which specified that a concern for one's safety in the face of dangers that were real and immediate was the process of a rational mind. Orr was crazy and could be grounded. All he had to do was ask; and as soon as he did, he would no longer be crazy and would have to fly more missions. Orr would be crazy to fly more missions and sane if he didn't, but if he were sane he had to fly them. If he flew them he was crazy and didn't have to; but if he didn't want to he was sane and had to."
As Yossarian flies more missions, his commanders keep raising the number of missions required to be discharged from the military. This and other manifestations of the military bureaucracy are variations on the Catch-22 theme. Enforcers of the crazy rules don't have to prove that their actions against rule violators are actually supported by a provision of the Catch-22. They can punish violators with impunity. The ultimate irony (in the novel) is that Yossarian finally realizes that the Catch-22 rule doesn't exist, but because it isn't real, it can't be refuted or overturned.
In the end, what you get is an army (or workforce) that spends all its time following insane rules (or trying to get around them) instead of doing what it was hired to do--fly missions (or do science).
Catch-22 describes a paradoxical situation in which an individual needs something that can only be had by not being in that situation. For government scientists, the Catch-22 is that we are asked to prove our work is unassailable or essential, but in doing so we demonstrate that there is reason to question it. The example I gave in a previous post was that scientists are required to provide justification for travel to an international conference. To give a paper is not sufficient. We must demonstrate how our attendance benefits the agency and/or how our failure to attend will have negative consequences (for the agency).
Accountability rules are driven by the fear (of bureaucrats) that a scientific report will turn out to contain a flaw that becomes a public embarrassment or a member of Congress will question travel expenditures. However, the more we try to "prove" the lack of error, waste, bias, or fraud, the less convincing we are. It's impossible to prove a negative. It's the reason our justice system is based on the presumption of innocence (the burden of proof is on the accuser, not the accused).
If there are people who question the validity or integrity of a scientific report, why not require them to prove that there has been error, bias, or fraud? Why put the burden on the scientist to prove they are innocent of the charge, even before the charge has been made? As I suggested above, this burden will eventually chill scientific endeavor, especially for high-profile topics or new research directions. We've already seen several instances of climate scientists who have been challenged, grilled, and even threatened--and their institutions and agencies have not always leaped forward to support them, and in some cases even fired them.
Instead of promoting openness, accountability regimes create a climate of fear, paranoia, and confusion. They are antithetical to the mission of science.
Image/video credits: Catch-22 by Joseph Heller (Simon and Schuster); movie clip from Catch-22 (Paramount Pictures)
Let me be clear about what I mean. I'm not suggesting that there should not be any oversight at all or that government scientists should not be held accountable for their work. What I'm suggesting is that an "accountability regime", i.e., one in which accountability takes precedence over science mission, will eventually backfire and lead to a workforce that avoids the very activities needed to produce quality science. One of the reasons is that scientific productivity expectations are pretty low in comparison to academic institutions. For example, in my science unit, PIs are expected to submit only one paper per year (submit, not publish) to be rated fully successful in job performance. Theoretically, one could resubmit the same paper each year and meet the basic requirement. My point is that when you have low requirements for performance in combination with excessive rules of accountability, there is a great danger that people will decide that the easiest (and safest) route is to do as little as possible.
Over the past ten years, government scientists have been increasingly audited and subjected to more paperwork affirming that we have not made errors. To give you an idea of what a government scientist must go through prior to submitting a paper to a journal, here is the path a manuscript now takes (the process has evolved over the past few years). First, the author submits the manuscript to her supervisor who checks it for technical quality and policy issues and then sends it out for peer review (2 reviews are required). Depending on how diligent your supervisor is, these reviews may be accomplished in 2 or 3 weeks or languish for months in reviewer purgatory. Eventually, the reviews come in and they are forwarded to the author for reconciliation. The author must address all comments (no matter how bone-headed they may be) and prepare a reconciliation document detailing how changes were made (or not). Then the package containing original and revised versions of the ms, the reviews, the reconciliation, and all dated email correspondence are forwarded to the science unit head who goes over everything and approves or disapproves it. If approved, it then goes to the bureau approving official who again goes through the entire package. If your manuscript topic is deemed especially "sensitive", then it undergoes more intense scrutiny.
At any of these higher levels, there may be comments on the technical aspects of the manuscript; in some cases, these comments are helpful, in other cases, not. In most cases, the officials are not experts in the science topic and may raise inappropriate questions about technical aspects out of ignorance. Others suggest editing changes that are grammatically incorrect. Even though the author may be able to answer those questions and explain why a requested change is not correct scientifically or grammatically, the time involved in addressing these various questions adds up. If the author tries to ignore these, the next official in line will kick back the manuscript and demand all comments be addressed. So the author sometimes spends a lot of time addressing questions and suggested changes that do not improve the manuscript.
Once the bureau official finally signs off, then you are free to submit to a journal where your manuscript will go through the usual gauntlet of editorial and reviewer raking-over-the-coals. Getting collegial reviews prior to journal submission can be helpful, but all the time and paperwork involved in getting approvals at multiple levels is not.
You may be wondering at this point why should a manuscript that will go through a thorough review by peers and journal editors (i.e., a real review) when it is submitted for publication need to be reviewed beforehand and approved by people who are usually unfamiliar with the field of study? We ask that question all the time.
The novel "Catch-22" often comes to mind. This classic novel by Joseph Heller is a critique of bureaucratic logic and operation. It follows the protagonist, Yossarian, a B-25 bombardier in World War II. Yossarian is desperate to get out of the war and tries to figure out how to avoid flying missions. However, the military has a rule, Catch-22, which prevents soldiers from avoiding combat:
"There was only one catch and that was Catch-22, which specified that a concern for one's safety in the face of dangers that were real and immediate was the process of a rational mind. Orr was crazy and could be grounded. All he had to do was ask; and as soon as he did, he would no longer be crazy and would have to fly more missions. Orr would be crazy to fly more missions and sane if he didn't, but if he were sane he had to fly them. If he flew them he was crazy and didn't have to; but if he didn't want to he was sane and had to."
As Yossarian flies more missions, his commanders keep raising the number of missions required to be discharged from the military. This and other manifestations of the military bureaucracy are variations on the Catch-22 theme. Enforcers of the crazy rules don't have to prove that their actions against rule violators are actually supported by a provision of the Catch-22. They can punish violators with impunity. The ultimate irony (in the novel) is that Yossarian finally realizes that the Catch-22 rule doesn't exist, but because it isn't real, it can't be refuted or overturned.
In the end, what you get is an army (or workforce) that spends all its time following insane rules (or trying to get around them) instead of doing what its members were hired to do--fly their missions (or do science).
Catch-22 describes a paradoxical situation in which an individual needs something that can only be had by not being in that situation. For government scientists, the Catch-22 is that we are asked to prove our work is unassailable or essential, but in doing so we demonstrate that there is reason to question it. The example I gave in a previous post was that scientists are required to provide justification for travel to an international conference. To give a paper is not sufficient. We must demonstrate how our attendance benefits the agency and/or how our failure to attend will have negative consequences (for the agency).
Accountability rules are driven by the fear (of bureaucrats) that a scientific report will turn out to contain a flaw that becomes a public embarrassment, or that a member of Congress will question travel expenditures. However, the more we try to "prove" the absence of error, waste, bias, or fraud, the less convincing we are. It's impossible to prove a negative. That's the reason our justice system is based on the presumption of innocence (the burden of proof is on the accuser, not the accused).
If there are people who question the validity or integrity of a scientific report, why not require them to prove that there has been error, bias, or fraud? Why put the burden on the scientist to prove they are innocent of the charge, even before the charge has been made? As I suggested above, this burden will eventually chill scientific endeavor, especially for high-profile topics or new research directions. We've already seen several instances of climate scientists who have been challenged, grilled, and even threatened--and their institutions and agencies have not always leaped forward to support them, and in some cases have even fired them.
Instead of promoting openness, accountability regimes create a climate of fear, paranoia, and confusion. They are antithetical to the mission of science.
Image/video credits: Catch-22 by Joseph Heller (Simon and Schuster); movie clip from Catch-22 (Paramount Pictures)