I was perusing the Pitchfork People’s List of the top 200 albums of the past fifteen years, and was interested to see that they’d done a breakdown of some of the data they collected along with readers’ votes for their top albums. The results of the breakdown engaged me to varying degrees, but the one element I was most intrigued by was the year of release.
If you scroll all the way down to ‘Best Years For Music’ you’ll find the data I’m thinking about. Clearly the title of the infographic misses the mark; these aren’t ‘best years’ necessarily, but ‘years from which the albums that voters supported came’. As I would expect, the graph tends to drift upward; the recency effect predicts that more people would vote for albums released closest to the time of the poll (and the effect really kicks in with polls about ‘best guitarist’ and so on, from which one might conclude that hardly anyone ever played the guitar nearly as well as the leader of a band that scored a platinum record last year but will be forgotten next year). I was therefore impressed that the year 2000 did as well as it did, outpolling every year apart from 2007 and 2010, and running a close race with 2003.
But what would interest me even more would be a comparison of the albums normalised for recency, so that older albums were allotted an incremental boost and newer albums were weighted less (to offset their cognitive advantage over older ones). One could run similar analyses for music by women (Björk, at #51, is the first female performer on the list, though collectives such as Broken Social Scene, which feature prominent female performers, appear as high as #25; does Arcade Fire count as featuring women prominently?), for the nationality of voter versus the nationality of performer, and so on.
If I weren’t already sidetracked from optimal productivity, such a data set could preoccupy me for days on end, and produce results that would positively fascinate me (if no one else).