Tuesday, October 28, 2014

Blog Post 6: October 28

The article from Oxford Music Online about sources brought up some very good points. The part that interested me the most was the discussion of copies that were sent to the engraver but may not have survived. I can see how this would lead to a major gap in the information we have about a particular piece of music. If all that is available is the autograph manuscript and the modern printed parts, then it can be very difficult to determine what is actually correct. In any case, I think that the original manuscript should be consulted whenever possible. If nothing else, it can be very helpful in determining what the composer was thinking when writing a particular piece.

I find sketches very interesting, and very exciting to look at when learning a piece of music. For me, as a performer, they can provide more information than even a full score. They allow the performer to see exactly what the composer intended before the full piece was written. This can help with determining style, voicing, etc. I think it is interesting that Oxford Music Online mentioned that Beethoven's sketches vary from simple melodies to almost fully completed scores. It would be fascinating to see his works in the various states they passed through as he was working on them.

It is no surprise to me that historical editions have become more desired fairly recently. The process of producing accurate and complete historical editions, though, seems like it would be very difficult. I think it is interesting how the approach to publishing these editions has evolved over time, with complete editions being expected more recently. I also found it interesting that when historical editions first became popular, one of the goals was to publish consistent editions of popular works. I had never really considered that this would be a problem, but it makes sense given that people could have gotten their hands on an almost complete sketch and published that. I also found it interesting that after 1950 publishers revisited and republished the works of many major composers. It would be interesting to see how different these publications are from each other.

Urtext seems like a very interesting idea to me. It seems like this would allow someone to see the evolution of a piece of music, since it shows what was edited and when. This is not a term, or technique, that I had heard of before. I do think it is interesting that they pointed out that some people argue the composer didn't care about how the piece was performed because there were multiple versions. I would argue that this is likely not the case. I have never met a composer who was not extremely particular about how their music was played. I think people make this argument simply because composers like Beethoven and Mozart are not here to yell at them when they get things wrong.

The passage about urtext in the Slonimsky made me laugh. While it is mostly making fun of how much people value urtext, it brought up some very good points. Every composer is different, so how are we to know exactly what they intended without being able to ask them ourselves? I think the contrast between Ives and Rimsky-Korsakov summed it up perfectly.

The article on the shelf life of urtext was very cool. For me it emphasized the need to be able to compare editions of a work side by side. This can help a lot with deciphering what a composer actually wrote if their handwriting is illegible. Even if only a couple of notes change between urtext editions, it can be very important. One of the other things I found interesting was the mention of editions or manuscripts that are in private collections and are inaccessible. It seems to me that if you own an autograph manuscript, you should let at least a couple of scholars look at it so that it can be studied and analyzed. It is probably more complicated than owners simply refusing to let people look at the manuscript, but it still seems kind of dumb.

The article on Brahms's "Hungarian Dances" is an excellent example of how one minor detail can completely change how we understand or perform a piece. The discovery of his notes changed how a very important passage was being interpreted and played. It is incredible that this information was published over 100 years ago, although incorrectly, and the error was not discovered until recently. It is just another reason why our understanding of music is always changing and why there are always new urtext editions.

It was cool to see Struck's instructions for this passage after the discovery of what Brahms wanted. He did an excellent job of presenting what Brahms said about the passage and analyzing how it should be played based on that information, versus how it is normally played. I liked his note at the end about how Brahms should have notated the passage differently if that is actually how he wanted it played.

Tuesday, October 7, 2014

Blog Post 5: October 8

The study performed by Antelman gave a lot of great insight into how open access articles impact research. It seems like common sense that articles that are available to the general public would be accessed more often than those that are not, but the more important question might be which articles the academic world is actually using. The only thing I did not like about this study is that the articles seemed to be selected at random. I really like that they looked into multiple fields of study, but it seems to me that they should have followed a formula based on publication type and/or author in the selection process. The conclusion that she came to seems spot on. Given the amount of access we have to articles online, it would be very easy for someone to cite only sources that are easily available to them, also called citation bias. I know that I am generally guilty of this whenever I need to do quick research on a topic. I would be very interested to see this study performed strictly on how open access articles affect student research. I have a feeling that the results would be staggeringly in favor of open access.

One of the problems I had been pondering while reading the first article about open access journals was how to tell whether the articles, or the publishers themselves, are any good or trustworthy. The article "As Open Access Explodes, How to Tell the Good from the Bad and the Ugly" did a very good job of pointing this issue out. Just because an article is open access and online does not mean it is any good, so how can we tell if it is a credible source? The point that stuck out to me the most was about the transparency of the publishing process. I am not surprised at all that the journals that clearly state their publication and acceptance procedures are the ones that are most trustworthy. After all, the only reason not to state this information clearly is if there is something to hide. I also found it curious that they stated that open access journals are essentially finding new ways to lower the bar on acceptance policies and article standards, since they make more money for each new article they accept. This seems like a conflict of interest with academic integrity more than anything else.

I think Jeffrey Beall hits the nail on the head with his article on predatory publishers. It's sad that these publishers exist, but it is a problem that needs to be addressed. Beall makes a really good point when he states that the people being hurt the most by these publishers are the scientists themselves. Even if an article is legitimate and of good quality, it can endanger the author's reputation if it is published alongside plagiarized articles. It seems to me that this problem has come about partly as a way to circumvent the peer review system. These predatory journals obviously do not use legitimate peer review if they are publishing plagiarized articles. That may make this issue more about open access versus peer review. Both have great advantages, but both have faults. Really, neither system is perfect, but maybe there is a middle ground that can be found to benefit everyone involved.

Upon reading the article from the New York Times, it seems that this issue is much more intense than I would have expected. This is truly the academic world's version of spam emails and phishing. I think it is great that Beall keeps a "black list" of predatory publishers, but I think it might actually be a better idea for there to be a "white list" of approved publishers. Given the nature of this issue, a "black list" could never be complete, since new predatory publishers appear every single week. A "white list," on the other hand, could be much more comprehensive and easier to check before submitting articles. This could help prevent new authors from living the horror stories that were told about well-respected scientists being unable to dissociate themselves from these predatory publishers.

I thoroughly enjoyed the TED talk on "Battling Bad Science." It was extremely informative, as TED talks always are, and the speaker was excellent. It was refreshing to hear someone explain all of the ways that studies can be rigged in order to generate a certain result. I think the same logic can be applied to predatory publishers. They aim to obscure the true process, as Goldacre explained the drug companies do, and never really reveal their true intentions. My take-away from this lecture was that we need more insight into the full process. Essentially, if a publisher is not fully transparent about its publishing process and standards, then it should not be trusted.

The DOAJ is a very cool website that can be very helpful when looking into open access publications. It aims to provide the best open access articles in one place while still maintaining academic integrity. In order for a publication to be included in the DOAJ, it must go through a strict application process, which helps determine the legitimacy of the source. One of the main requirements for inclusion is that the articles in the journal must be subject to peer review before publication. The website is extremely easy to navigate and has an excellent search feature. Users can search by platform, subject, publication type, date of publication, country of publication, and other identifiers. The DOAJ is a good place to browse open access articles without having to worry about whether or not they are trustworthy.