
The CRU Scandals: A Reflection on Academia

I am sure that you have been tracking the story of the hacked emails between top climatologists and the ensuing argument about whether those atop the discipline have stifled skeptics in the global warming debate. If you have not, here is a quick review:
http://www.washingtonpost.com/wp-dyn/content/article/2009/11/21/AR2009112102186.html?nav=hcmodule
I do not intend to wade into that debate, but the entire controversy has held up a mirror to academic research in general, and I don't think the reflected image is flattering.

Let us start with the ideal. Seekers of truth (scientists, professors, PhD students... the academic research community) come up with interesting and provocative questions to answer, look at these questions objectively, with no financial interests at stake and no preconceptions, develop theories and test them rigorously, and then report the results without skewing them. Their research is reviewed by their peers, who bring the same objectivity and fairness to their assessments and decide whether it should be published.

As with most ideals, this one is utopian. Here is my more cynical view of how the process works.
1. Research what will be published, not what is interesting: When you first start climbing the academic ladder, the name of the game is to get published. Would you rather publish a groundbreaking paper than an incremental one? Of course. But would you rather publish an incremental paper than have a groundbreaking paper that does not get published? The answer again is affirmative. It is far easier to publish a paper that nibbles at the edges of big questions than one that asks and tries to answer big questions. If you pick up any academic journal and browse through the contents, you will see the evidence of this marginalization.

2. Bias in, bias out: Researchers are human and come in with biases and preconceptions, some of which are formed early in life, some during their academic experiences and some of which they acquire from their mentors and peers. Those biases then drive not only the topics that they choose to research but also how they set up the research agenda and in some cases how they look at the data.

3. Who you are matters: Where you went to school to get your doctorate, who your mentor is and what school you teach at right now all affect your chances of getting published. If you went to an elite school (and the elite can vary from discipline to discipline), worked with the right mentor (preferably a journal editor) and teach at another elite school, your chances of publication increase significantly.

4. Every discipline has an "establishment" view: There is an establishment view in every discipline. Papers that hew to this view have a much easier path to publication than papers that challenge the view. In finance, the establishment view for decades was that markets were efficient and that any evidence of inefficiency was more a problem with the models we had than with the underlying efficient market hypothesis. It has taken almost two decades for behavioral economists to breach this wall. Now, I sense that they are becoming part of the establishment and don't quite know what to do.

5. Peer review is wildly variable and sometimes biased: When you write a paper in a particular area, it will be sent out to other "experts" in the area for review. Some of them are scrupulously fair, read your paper in detail and provide you with extraordinary feedback that improves your paper. Others are defensive, especially if the paper challenges one of their pet theories, and find reasons to reject the paper. Still others are extremely casual about feedback and make suggestions that border on the absurd. While peer review, on average, improves papers, it does so at considerable cost.

6. Data abuse happens: As the volume of data and the ease of access to it grow, it has become far easier to abuse data by (a) selecting the slices of data that best fit your story, (b) expanding sample sizes to the point that the sheer amount of data overwhelms the opposition, and (c) reporting only the subset of results that supports your case. The sketch after this list shows how the first of these works in practice.
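To make the slicing point concrete, here is a minimal sketch in Python (my illustration, with made-up numbers, not anyone's actual study): generate pure random noise, test enough slices of it, and one slice will look "statistically significant" by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
returns = rng.normal(loc=0.0, scale=1.0, size=1000)  # pure noise; the true mean is zero

# Test ten non-overlapping 100-observation slices and keep the "best" one.
best_p, best_slice = 1.0, None
for start in range(0, 1000, 100):
    window = returns[start:start + 100]
    t_stat, p_value = stats.ttest_1samp(window, popmean=0.0)
    if p_value < best_p:
        best_p, best_slice = p_value, (start, start + 100)

print(f"Most 'significant' slice {best_slice}: p = {best_p:.3f}")
# With ten independent looks at the data, a p-value below 0.05 turns up
# by chance about 40% of the time (1 - 0.95**10), despite there being no
# effect anywhere. Report only that slice and you have a publishable "result".
```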

I think peer review is useful and empirical testing is crucial. However, my advice to laymen looking at academic research is the following.
1. Don't assume that academics don't have an agenda and don't play politics. They do.
2. Don't let "research findings" sway you too much - for every conclusive result in one direction, there is almost always just as conclusive a result in the opposite one.
3. Just because something has been published does not make it the truth. Conversely, the failure to publish does not mean that a paper is unworthy.
4. Develop your own vision of the world before you start reading papers in an area. Take what you find to be interesting and provocative and abandon the fluff (and there is plenty in the typical published paper).
5. Learn statistics. It is amazing how much of what you see reported as the truth fails the "standard error" test; a crude version of that test is sketched below.
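As a rough illustration of that test (the numbers below are hypothetical, chosen only to show the arithmetic): an estimate that is less than about two standard errors from zero cannot be distinguished from pure chance at conventional significance levels.

```python
# A crude version of the "standard error" test: an estimate needs to be
# roughly two standard errors away from zero before it is distinguishable
# from chance at the conventional 95% level.
def passes_standard_error_test(estimate: float, std_error: float) -> bool:
    return abs(estimate / std_error) >= 2.0

# A headline 3% "alpha" with a 2% standard error fails (t = 1.5)...
print(passes_standard_error_test(0.03, 0.02))   # False
# ...while the same 3% with a 1% standard error passes (t = 3.0).
print(passes_standard_error_test(0.03, 0.01))   # True
```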

One final note on the CRU email story. For the most part, the faults of academic research create no significant damage because so much of the research is inconsequential. The scandal of the data manipulation and stonewalling of critics in this case is that it is so consequential, no matter what you think about global warming. If there is no global warming and the data has been manipulated to show that there is warming, the academics at the heart of this affair should be forced to answer to the coal miners, SUV assembly workers and others who lost their jobs because of warming-related environmental legislation. If there is global warming and the numbers were being cooked to make the case stronger, there is the real possibility that people will turn skeptical about warming and revert to old habits. In either case, it behooves those involved in this mess to step down.
