Blog Post 4: October 1
The article on periodicals from Oxford Music Online was very informative. While much of the information about the specific periodicals it mentioned was new to me, I already knew most of the basics about periodicals and how they work, since I had a fair amount of exposure to them while researching papers during my undergrad in Colorado. I did chuckle a little when the article mentioned that Wagner briefly provided music copy for journals during a "miserable" time.
I found both of the articles on peer review to be rather interesting. I have always grown up hearing that peer review should be conducted on any new article, so I had never really considered that the process might be flawed. To be honest, after reading these articles I may now be convinced that it is a rather biased process that can inhibit productivity. I really liked that in "The promise of peer review" the author suggested taking the recommendation for publication out of the review process and instead having reviewers highlight what is good and bad about a particular article. I think that would be an excellent way to improve peer review. I would agree with both authors that the solution to this biased process is not to eliminate peer review altogether, but to find ways to improve it for the future. The second article, from JSRM, raised the same issue: a recommendation on whether an article should be published can at times not even be relevant to the article itself. I loved that they also mentioned that reviewers who have recently had an article rejected will sometimes feel that nothing deserves to be published. That seems very immature, but it is one of the many flaws of peer review as practiced. Overall, I think peer review is an excellent idea. We do the same type of thing in studio class every week: one student performs, and then the rest of the studio gives comments and "peer reviews" the performance. This can be a process that benefits everyone involved; it just needs some tweaking to become less flawed.
Journal retractions have always fascinated me. While I think it is good for an article to be retracted if its data is unfounded, the information has already been published, and some of the public may have fully subscribed to the article's claims. This is much like the journal article on the potential danger of vaccinations: even though the original research has been shown to be biased, many people still believe these vaccines are unsafe, really for no reason at all. I think this partially goes back to what was mentioned in the peer review articles about approval of publications resting on faith that the researcher is actually presenting the truth. While I would like to believe that everyone is always honest in their intentions, this is not always true. In this case the author stood to gain financially if he could prove his point. At the end of "The MMR vaccine and autism: Sensation, Refutation, Retraction, and Fraud," the author points out that scientists have an ethical responsibility to represent only the facts in their findings and publications. Because Wakefield did not do this, he put many people at risk of disease when parents declined to have their children vaccinated. Part of the root of the peer review problem is examples like this, where data was taken on trust but turned out to be completely unfounded and false. I'm really glad that we have a system in place to pull unfounded and fraudulent articles, but I think this also shows why peer review needs to be taken so seriously. It may seem like a painstaking process that prolongs the time required to publish an article, but it can help prevent things like this from happening in the future.
I found the article from the New York Times to be very shocking. That is a lot of effort to go through just to get articles published, and it makes me wonder how often this happens undetected. It is further proof that the peer review system is flawed, but I still think it can be worthwhile if done correctly. The most surprising thing to me was at the end, where they mentioned that academic institutions in some other countries use a formula for promotions based entirely on the number of articles a professor has published, not necessarily on how good they are. That seems totally backwards to me. I could write 30 articles by the end of the month; they might all be terrible, but I could still write 30 articles, and I don't think for a moment that what I produced would be worthy of any kind of recognition.
It is really cool to see a blog devoted entirely to keeping track of retracted articles. One of the posts I found there was titled "Author of alcohol paper retracted for plagiarism defends copy-and-paste strategy." It was pretty fascinating because it included arguments from both the authors and the journal defending their stances. And spoiler alert: there are some pretty good reasons why the article was retracted. I'm really glad to know that sites like this exist. I have always wondered how people hear that a particular article or essay has been retracted, along with all of the news surrounding the issue.
I love the Wagner quote in the periodical article as well -- I asked the author once whether he meant that Wagner was writing because it was a miserable time, or that it was miserable because Wagner was writing. He deflected my question very charmingly . . .