
It’s taken me a little while to be able to write a response to the article “Who’s Afraid of Peer Review?” Every time I sit down to write something, I begin coherently and slowly devolve into angry writing. The article asks, “Who’s Afraid of Peer Review?” and then proceeds to talk only about peer review, or the lack thereof, in open access journals. After all, the open access that the article portrays is shoddy and corrupt and should seemingly present no major threat to the vaunted, subscriber-based journals. Somebody should let PLoS ONE know.

The major, fundamental difference is who can access the articles and for how much. The decision about what makes it into which journal is not, however much scientists want to believe it is, based solely on the quality of the science. Nature and Science (where this particular article appeared) have the highest retraction rates of any journal. Knowing that, it’s hard to argue that they publish The Best Science. No, they try to publish good science, but they are also concerned with the sex appeal of an article; they only want high-impact science. Which raises the question: what happens to scientifically solid work that the editors don’t think is sexy enough? Maybe the hypothesis was wrong, or the results were interesting but not game-changing, or the field is less well known. In that case, it goes to lower-impact journals, maybe alongside some work that wasn’t as well done. What if there were an alternative? A journal that accepts papers based solely on scientific merit, publishes them, and then leaves it up to readers to decide how much attention they get. Well, it exists: all the PLoS journals.

When I hear people talk about “open access,” they’re usually talking about one of three phenomena: alternatives to subscriber-based journals that seek to remove some of the editorializing in paper selection; articles in subscriber journals that automatically become open access because they were funded by the US or UK governments; and, what this article deals with, a lax, grey market designed to get people publications.

So back to the article itself. It commits some of the same sins that we monitor scientists for:

1) Conflict of interest. Did anyone have a problem with the fact that this was published by a subscriber-based journal with skin in the game (so to speak)?

2) Lack of controls. The article targets potentially notorious open access journals without including a sample set of subscriber-based journals. (No controls? That’s just bad science.)

Stop with the outrage about author fees–it’s not like publishing in PNAS is free. Servers cost money, and current article fees pay to keep all the older articles accessible.

My biggest issue, and perhaps the hardest to explain, is with some of the assumptions the author made. If we were talking about a different field, the term we’d use would be “microaggression.” A lot of detail was given to the description of the author’s “experimental” setup–how he engineered the names of the authors to look authentically African, how he purposely wrote in poor or grammatically incorrect English, how he included blatantly incorrect data and interpretation, to the point of approaching misconduct. I fully understand wanting to test the limits of whatever review these journals were purporting to offer, and how including some factual and grammatical errors would check whether anybody was reading at all. But why did he, an Oxford-educated white dude, feel the need to play up the ‘otherness’ of his fake scientists? Did he think that some of these open access journals would blatantly target scientists from developing nations? What would have happened if he had submitted under a different name? I just think it’s really problematic to implicitly tie poor work to scientists from developing nations, even if he wasn’t consciously doing that.

There are plenty of problems in science, but I don’t think open access is one of them. I think the emphasis on “publishable results” (read: positive) and on peer review is a much bigger part of the problem. A recent release by Elsevier editors estimated that around 10% of the papers they receive show some evidence of misconduct. That’s a staggering number. So yes, there are obviously problems in science publication, but I don’t think these open access journals are the cause, although they are perhaps a symptom.

EDIT: A lot of people have been talking about this. One of my favorite responses can be found here, although I hesitate to pile on over the arsenic DNA thing: http://www.michaeleisen.org/blog/?p=1439

I mentioned briefly in a different post an idea that’s been floating around in my head for a while: that pure capitalism can’t drive science. Or rather, that it can’t be the only driver. In making the case for basic science, I argued that government funding is necessary because, while the benefits of basic science are tangible, they’re often long-term and thus not attractive for profit-based investment.

When I wrote that, I thought to myself, “I should probably cite that.” I know I’ve read in several places that basic science has tangible benefits. At the time, I was on a roll thinking about open access and didn’t track down the source. But now, serendipitously, an article in PLoS ONE popped up on my radar. It was published just last week, and it has some interesting conclusions about science research and economic development.

As an aside, I’m a filthy idealist, and I think that basic science is worth pursuing just to increase our level of knowledge about the world we live in. I’m not religious, but what better way to celebrate our wonder at the amazing world we live in than to try to understand it? Anyways, I also acknowledge that idealism doesn’t make the best argument, especially when many people don’t share it. Also, research costs a lot of money, so some justification is needed for how we spend that money–we can’t just fund everything!

But this article came out in PLoS ONE just in time for me to think about how to better justify my statement that basic research has tangible benefits. The article links scientific research to economic growth and examines the utility of using one to track the other. Now, the authors don’t claim that investing in scientific output will trigger economic growth; rather, they suggest that scientific research enables sustained, long-term economic development. One surprising conclusion is that applied research (such as agriculture, medicine, and pharmacy) is not the best indicator of economic development; physics, chemistry, and materials science research is. Specifically, countries that had higher relative productivity* in the basic sciences had higher economic growth in the following five years. The authors suggest that middle-income countries would do best by investing in basic sciences because, as they note, “technology without science is unlikely to be sustainable.” Another tidbit I found quite interesting was the idea that “individual specialization begets diversity at the national and global level.” It totally makes sense, but it also provides a good incentive for national or federal science programs to encourage training people in a variety of fields.
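
To make the footnoted calculation concrete, here’s a minimal sketch of how I read “relative productivity”: each field’s share of a country’s total scientific output, with the basic-science share being the quantity the paper relates to growth a few years later. The field names and publication counts below are placeholders I made up for illustration, not data or code from the paper.

```python
# Minimal sketch, not the authors' actual methodology: compute "relative
# productivity" as each field's share of a country's total scientific output.
# All publication counts are made-up placeholders, not data from Jaffe et al.

publications = {
    "physics": 1200,
    "chemistry": 900,
    "materials science": 400,
    "agriculture": 2500,
    "medicine": 5000,
}

total_output = sum(publications.values())

# relative productivity = a field's percentage of the country's total output
relative_productivity = {
    field: count / total_output for field, count in publications.items()
}

# the paper's headline finding concerns the basic-science fields
basic_fields = {"physics", "chemistry", "materials science"}
basic_share = sum(relative_productivity[f] for f in basic_fields)

for field, share in sorted(relative_productivity.items()):
    print(f"{field}: {share:.1%}")
print(f"basic-science share: {basic_share:.1%}")
# Per the paper, a higher basic-science share is associated with higher
# economic growth over the following five years.
```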

I’ll leave the authors themselves to summarize their conclusions:

  1. For historical periods with no global financial catastrophes, the economic growth of middle income countries can be predicted with high accuracy by looking at their relative academic productivity in physical sciences and engineering.
  2. Academic productivity is a much better predictor of future economic growth than economic complexity as measured in [16]. Scientific productivity is more accurate in predicting economic growth and wealth than economic complexity. If we accept that “science is the mother of technology”, i.e. supports technological development, then science affects other aspects of life such as services, governability, rational thinking, attitudes, etc. and of the economy besides technological development [12][23]. This result is congruent with other statistical analyses comparing the information content of statistical models using ECI with those using scientific productivity to predict economic growth [24].
  3. No country with exclusive preferential investment in technology, without investment in basic science, achieved relatively high economic development. Thus, technology without science is unlikely to be sustainable.
  4. The effect on the economy of scientific development is long term. It can be observed in 5 years’ time. This time period is very short in terms of the process by which science creates new technology. Thus, we might be measuring the effect of science in preparing new technology leaders and in instilling rational thinking in the leaders of a country rather than the production of novel technology in middle income countries.
  5. No direct correlation between development in basic science and economic growth, or vice versa, exists. We suggest that the effect mentioned in point 1 is possibly the outcome of the fact that relative investment in basic science is a reliable indicator of a rational decision-making atmosphere, and if other factors allow, promotes economic growth.

Number 5 is really, really important. Blind investment in science isn’t what we want; we want to foster an environment where investment in science is supported and encouraged. Getting more scientifically literate people involved in government and decision-making processes is one way to help; another is improving our educational system in the STEM fields.

So the next time I get asked, “what is the application of your research?” I can just answer: “economic growth.”

*calculated as a percentage of the country’s total scientific output

Jaffe K, Caicedo M, Manzanares M, Gil M, Rios A, et al. (2013) Productivity in Physical and Chemical Science Predicts the Future Economic Growth of Developing Countries Better than Other Popular Indices. PLoS ONE 8(6): e66239. doi:10.1371/journal.pone.0066239