
It’s taken me a little while to be able to write a response to the article “Who’s Afraid of Peer Review?” Every time I sit down to write something, I begin coherently and slowly devolve into angry writing. The article asks, “Who’s Afraid of Peer Review?” and then proceeds to talk only about the peer review, or lack thereof, in open access journals. The open access world the article portrays is shoddy and corrupt, and presumably no major threat to the vaunted, subscriber-based journals. Somebody should let PLoS ONE know.

The major, fundamental difference is who can access the articles and for how much. The decision about what makes it into which journal is not–however much scientists want to believe it is–based solely on the quality of the science. Nature and Science (where this particular article appeared) have among the highest retraction rates of any journals. Knowing that, it’s hard to argue that they publish The Best Science. No, they try to publish good science, but they are also concerned with the sex appeal of an article; they only want high-impact science. So that raises the question: what happens to scientifically solid work that the editors don’t think is sexy enough? Maybe the hypothesis was wrong, or the results were interesting but not game-changing, or it’s a less well-known field. Well, in that case, it goes to lower-impact journals, maybe along with some work that wasn’t as well done. What if there were an alternative? A journal that accepts papers based only on scientific merit, publishes them, and then leaves it up to readers to decide how much attention they get. Well, it exists: all the PLoS journals.

When I hear people talk about “open access,” they’re usually talking about one of three phenomena: alternatives to subscriber-based journals that seek to remove some of the editorializing in paper selection; subscriber-journal articles that automatically go open access because they were funded by the US or the UK; and, what this article deals with, a lax, grey market designed to get people publications.

So back to the article itself. It commits some of the same sins that we monitor scientists for:

1) Conflict of interest. Did anyone have a problem with the fact that this was published by a subscriber-based journal with skin in the game (so to speak)?

2) Or, the way the article targets potentially notorious open access journals without including a comparison sample of subscriber-based journals? (No controls? That’s just bad science.)

Stop with the outrage about author fees–it’s not like publishing in PNAS is free. Servers cost money and current article fees pay to keep all the older articles accessible.

My biggest issue, and perhaps the hardest to explain, is with some of the assumptions that the author made. If we were talking about a different field, the term we’d use would be “microaggression.” A lot of detail was given to the description of the author’s “experimental” setup–how he engineered the names of the authors to look authentically African, how he purposely wrote in poor or grammatically incorrect English, how he included blatantly incorrect data and interpretation, to the point of approaching misconduct. I fully understand wanting to test the limits of whatever review these journals were purporting to offer, and how including some factual and grammatical errors would check whether anybody was reading at all. But why did he, an Oxford-educated white dude, feel the need to play up the ‘otherness’ of his fake scientists? Did he think that some of these open access journals would blatantly target scientists from developing nations? What would have happened if he had submitted under a different name? I just think it’s really problematic to implicitly tie poor work to scientists from developing nations, even if he wasn’t consciously doing that.

There are plenty of problems in science, but I don’t think open access is one of them. I think the emphasis on “publishable results” (read: positive) and peer review is a much bigger part of the problem. A recent release by Elsevier editors estimated that around 10% of the papers they receive show some evidence of misconduct. That’s a staggering number. So yes, there are obviously problems in science publication, but I don’t think these open access journals are the cause, although they are perhaps a symptom.

EDIT: A lot of people have been talking about this. One of my favorite responses can be found here, although I hesitate to pile on over the arsenic DNA thing: http://www.michaeleisen.org/blog/?p=1439

I recently had the opportunity to attend a conference that is completely outside of my field of study. I study bioinorganic chemistry and do most of my work on small metalloproteins, so I spend a lot of time thinking about fundamentals like inorganic spectroscopy, biophysics, and biochemistry. I managed to have some interdisciplinary experiences early on, and these likely influenced my choice of a Chemical Biology program over the more traditional programs I was also admitted to.

This past week I saw some talks at the Goldschmidt, which is one of the largest geochemistry conferences. While there, I spent most of my time at the bio-geochemistry talks, since that’s what’s most interesting to me, and stuff like vulcanism (it’s a thing, I swear) and mantle chemistry is totally out of my reach. I have long had an interest in applying some of my skills to environmental problems and questions, but outside of reading some papers, I haven’t had the opportunity to get a good idea of what’s been going on in the field. This experience got me thinking a lot about specialization in science and how, without having finished my Ph.D., I feel that I am already very specialized.

While it’s true that what our group does is pretty interdisciplinary and what my project has entailed has been particularly broad, I worry about being able to broaden my horizons even more. Most people will tell you that the most important thing you learn in a Ph.D. is what you don’t know and how to get that information, but it’s still hard to imagine starting over in a completely different field, whether that be policy, publishing, or just a different scientific field, without knowing what’s there and what isn’t. Understanding the state of a field, and the perspectives from which a lot of people in that field work, is important for being able to work productively and push limits.

Modern society selects for specialization. We all know this–from the advent of agriculture and tradesmen, people have been specializing. Science is no different. Lots of progress has been made by people specializing and developing more and more powerful techniques; just think of protein NMR. So obviously expertise is rewarded and sought after, but the important questions and research that will be done will (in part) require synthetic skills: being able to bring together multiple fields of study, a breadth of background and conversation, and unique skill sets (in a person or a team).

How well are most scientists trained to work in such a situation? My gut answer would be that the oldest scientists are great at this. Peruse the notebooks of Linus Pauling if you don’t believe me (they’re online here). He was a polymath with a voracious appetite for knowledge. Richard Feynman is another person who comes to mind. But there’s definitely a cadre of people in younger generations who have thrived off of specialization. Perhaps in response to this, interdisciplinary fields, journals, and programs are springing up. The NIH and NSF are looking into funding groups of scientists for a single project, rather than single labs. From where I am within the system, it’s hard to tell whether this will favor broader interests or whether we’ve just reached another level of specialization.

For myself, I am interested in having a broad perspective, so how I decide to approach my career after graduate school seems very significant. Finding an appropriate post-doc that allows me to learn a different field or technique and poises me for progress in a new area is difficult and daunting. I’m looking for a way to break outside my (self-)imposed boundaries and do something that I care about, that pushes me creatively, and that matters, and I have a sneaking suspicion that for me, the answer lies in between disciplines.

I mentioned briefly in a different post an idea that’s been floating around in my head for a while: that pure capitalism can’t drive science. Or rather, that it can’t be the only driver. In making the case for basic science, I argued that government funding is necessary, because while the benefits of basic science are tangible, they’re often long-term and thus not attractive for profit-based investment.

When I wrote that, I thought to myself, “I should probably cite that.” I know I’ve read in several places that basic science has tangible benefits. At the time, I was on a roll thinking about open access and didn’t find the source. But now, serendipitously, an article in PLoS ONE popped up on my radar. It was just published last week, and it has some interesting conclusions about science research and economic development.

As an aside, I’m a filthy idealist, and I think that basic science is worth pursuing just to increase our level of knowledge about the world we live in. I’m not religious, but what better way to celebrate our wonder at the amazing world we live in than to try to understand it? Anyways, I also acknowledge that idealism doesn’t make the best argument, especially when many people don’t share your idealism. Also, research costs a lot of money, so some justification is needed for how we spend that money–we can’t just fund everything!

But this article came out in PLoS ONE just in time for me to think about how I should better justify my statement that basic research has tangible benefits. The article links scientific research to economic growth and examines the utility of using one to track the other. Now, the authors don’t claim that investing in scientific output will trigger economic growth; rather, they suggest that scientific research supports sustained, long-term economic development. One surprising conclusion is that applied research (such as agriculture, medicine, and pharmacy) is not the best indicator of economic development, but rather physics, chemistry, and materials science research. Specifically, countries that had higher relative productivity* in the basic sciences had higher economic growth in the following five years. The authors suggest that middle-income countries would do best by investing in basic sciences because, as they note, “technology without science is unlikely to be sustainable.” Another tidbit that I found to be quite interesting was the idea that “individual specialization begets diversity at the national and global level.” It totally makes sense, but it also provides a good incentive for national or federal science programs to encourage training people in a variety of fields.
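To make the asterisked metric concrete, here is a minimal sketch (my own illustration with made-up numbers, not the authors’ code or data) of how a relative productivity figure like that is computed: a field’s publications as a percentage of the country’s total scientific output.

    # Hypothetical publication counts for one country in one year
    # (made-up numbers, purely for illustration).
    publications = {
        "physics": 1200,
        "chemistry": 950,
        "materials science": 400,
        "medicine": 3100,
        "agriculture": 800,
    }

    def relative_productivity(pubs, field):
        """A field's share of the country's total scientific output, in percent."""
        total = sum(pubs.values())
        return 100.0 * pubs[field] / total

    # Aggregate the basic-science fields the paper singles out.
    basic_fields = ("physics", "chemistry", "materials science")
    basic_share = sum(relative_productivity(publications, f) for f in basic_fields)
    print(f"Basic-science share of output: {basic_share:.1f}%")
    # The paper's analysis then relates shares like this to GDP growth
    # roughly five years down the line.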

I’ll leave the authors themselves to summarize their conclusions:

  1. For historical periods with no global financial catastrophes, the economic growth of middle income countries can be predicted with high accuracy by looking at their relative academic productivity in physical sciences and engineering.
  2. Academic productivity is a much better predictor of future economic growth than economic complexity as measured in [16]. Scientific productivity is more accurate in predicting economic growth and wealth, than economic complexity. If we accept that “science is the mother of technology”, i.e. supports technological development, then science affects other aspects of live such as services, governability, rational thinking, attitudes, etc. and of the economy besides technological development[12][23]. This result is congruent with other statistical analyses comparing the information content of statistical models using ECI with those using scientific productivity to predict economic growth [24].
  3. No country with exclusive preferential investment in technology, without investment in basic science, achieved relatively high economic development. Thus, technology without science is unlikely to be sustainable.
  4. The effect on the economy of scientific development is long term. It can be observed in 5 years’ time. This time period is very short in terms of the process by which science creates new technology. Thus, we might be measuring the effect of science in preparing new technology leaders and in instilling rational thinking in the leaders of a country rather than the production of novel technology in middle income countries.
  5. No direct correlation between development in basic science and economic growth, or vice versa, exists. We suggest that the effect mentioned in point 1 is possible the outcome of the fact that relative investment in basic science is a reliable indicator of a rational decision making atmosphere, and if other factors allow, promotes economic growth.

Number 5 is really, really important. Blind investment in science isn’t what we want; we want to foster an environment where investment in science is supported and encouraged. Getting more scientifically literate people involved in government and decision-making processes is one way to help this; another is improving our educational system in the STEM fields.

So the next time I get asked, “what is the application of your research?” I can just answer: “economic growth.”

*calculated as a percentage of the country’s total scientific output

Jaffe K, Caicedo M, Manzanares M, Gil M, Rios A, et al. (2013) Productivity in Physical and Chemical Science Predicts the Future Economic Growth of Developing Countries Better than Other Popular Indices. PLoS ONE 8(6): e66239. doi:10.1371/journal.pone.0066239

There are several interesting paradigms in how science is practiced today. I can really only speak to the US, but since much of international science is based on the American model, we can say that some of these particularities are generally true. There are three main ways that scientific discoveries are reported, and all are subject to some level of peer review: articles in journals, conferences, and patents. The first two are far and away the more important in terms of discoveries and knowledge acquired.

Now, most research is federally funded, especially non-applied research. You just can’t rely on capitalism to fund basic research; the payoff isn’t consistent enough or on a short enough timescale, although the rewards are demonstrable down the road.

So, that being the case, the most prestigious journals are a pay-to-play sort of thing. A subscription to Science, one of the most prestigious journals (and one with a famously high retraction rate–d’oh!), costs an individual $146 per year. And that’s if you’re a scientist. If, God forbid, you are an interested civilian, a yearly subscription to Science is $310. To stay abreast of all the federally funded research that gets published, you would have to pay quite a lot of money per year, and yet it’s your tax money that is going into this enterprise. Seem fair? No? You’re right, it’s not. It sucks, actually. I’ve even had collaborators at other universities ask me to send them PDFs of articles because their institution can’t afford the subscription.

Now, there have been various efforts to rectify this. One of the more revolutionary ideas is open access journals, like PLoS and the Frontiers series of journals. They put the burden of publishing on the authors (to be fair, PNAS does this too). That doesn’t mean there’s no peer review; rather, the cost of publishing is borne by the authors, not the readers. NPG (Nature Publishing Group) recently invested in the Frontiers journal series, which perhaps lends some more legitimacy to the whole enterprise, or is at least an indication that open access is here to stay. An even greater indication of this is that the Obama administration released a memo via the OSTP, which calls for all agencies that fund more than $100 million in research to make the publications resulting from their unclassified, federally funded research freely available within a year of publication. This is pretty huge, and reflective of greater cultural changes, I think. We have been increasingly insistent on greater transparency in government and science. Obama was practically elected on a platform of greater transparency, and then WikiLeaks happened (along with Manning and Snowden), not to mention the plagiarism suspicions/instances that are now rampant in the scientific community (there’s a whole website dedicated to retractions!).

Policy changes like this are part of a greater shift towards transparency that was, in part, brought about by the internet and all the data that anybody can access. There is a group of scientists dedicated to opening up science in general–making lab notebooks available, encouraging citizen scientists, and overall increasing transparency. I actually think this is a great idea. I know that in highly competitive fields, this level of transparency is anathema to being the first one to publish something, but nobody says you can’t delay publicizing your lab notebooks. Maybe if more of the day-to-day grind of science were made available, we wouldn’t have so many drugs getting pulled off the market, and people might appreciate how difficult doing science actually is. Or maybe I’m a total idealist and opening up data will just result in a kerfuffle like the publication of some of the climate science data did. If you’re interested in this idea of opening up lab notebooks, the folks over at scifundchallenge.org are hosting a free online course/discussion group to get this going.

So is there a happy ending with the OSTP memo? Actually, yes. The Association of American Publishers has announced CHORUS, which stands for ClearingHouse for Open Research of the United States. It basically takes advantage of existing infrastructure via CrossRef and puts all of those federally funded publications under a one-year embargo, after which the articles go open access. This is the publishers’ solution to the memo, but it doesn’t necessarily reflect the policy’s final form, which will require some changes in the federal agencies themselves (especially after some recent debate in Congress). Overall, I think it’s a revolutionary step forward. What remains to be seen is what happens with publishers (like NPG) that have not signed on to CHORUS and are not US-based.