New Algorithm Maps Beatles’ Musical Progression

Fans and critics of The Beatles know that the Fab Four's music underwent drastic changes within short periods of time, their sound continually evolving over the span of their career together (and on into their solo careers as well). A new algorithm developed by computer scientists at Lawrence Technological University can analyze and compare musical styles, giving the artificial intelligence the capacity to investigate The Beatles' musical progression.

First, the algorithm converts each song into a spectrogram (a visual representation of the audio content), turning an audio analysis task into an image analysis problem: aural to visual. Further algorithms then reduce each spectrogram to a set of nearly 3,000 numeric descriptors that capture visual aspects such as shapes, textures, and the distribution of pixels. The artificial intelligence then uses statistical methods and pattern recognition to reveal and quantify similarities between various pieces of music.
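The pipeline described above can be sketched in miniature. The snippet below is a toy illustration, not the researchers' actual method: it computes a spectrogram with SciPy, summarizes it with a handful of simple image-style descriptors (a pixel-intensity histogram, basic statistics, and a per-frequency energy profile, standing in for the study's nearly 3,000 descriptors), and quantifies similarity as an inverse distance between descriptor vectors.

```python
import numpy as np
from scipy import signal


def spectrogram_descriptors(audio, sample_rate):
    """Turn audio into a spectrogram, then summarize it with numeric
    image-style descriptors. A toy stand-in for the ~3,000 descriptors
    used in the actual study."""
    _, _, spec = signal.spectrogram(audio, fs=sample_rate, nperseg=256)
    spec = np.log1p(spec)  # compress dynamic range, as with image intensities
    hist, _ = np.histogram(spec, bins=32, density=True)  # pixel distribution
    moments = np.array([spec.mean(), spec.std(), spec.max()])  # global stats
    row_profile = spec.mean(axis=1)  # average energy per frequency band
    return np.concatenate([hist, moments, row_profile])


def similarity(desc_a, desc_b):
    """Quantify similarity as inverse Euclidean distance between
    descriptor vectors (larger value = more similar)."""
    return 1.0 / (1.0 + np.linalg.norm(desc_a - desc_b))


# Synthetic "songs": two nearby pure tones and one noise burst.
rate = 8000
t = np.linspace(0, 1, rate, endpoint=False)
song_a = np.sin(2 * np.pi * 440 * t)
song_b = np.sin(2 * np.pi * 466 * t)
song_c = np.random.default_rng(0).normal(size=rate)

d_a = spectrogram_descriptors(song_a, rate)
d_b = spectrogram_descriptors(song_b, rate)
d_c = spectrogram_descriptors(song_c, rate)

# The two tones should score as more similar than a tone and noise.
print(similarity(d_a, d_b) > similarity(d_a, d_c))
```

The key idea the sketch preserves is that once audio becomes an image, any technique for comparing images (histograms, textures, statistical distances) can be repurposed to compare music.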

Assistant Professor Lior Shamir and graduate student Joe George, who had previously developed audio analysis technology to study the vocal communication of whales, expanded their algorithm to analyze the albums of The Beatles. Their study also included other famous bands such as Queen, U2, ABBA, and Tears for Fears. The LTU team analyzed 11 songs from each of the 13 Beatles studio albums released in Great Britain and quantified the similarities between songs. Results from the study of individual songs were then used to compare the similarities between complete albums. The results, published in the journal Pattern Recognition Letters, show that the structure of The Beatles' music does indeed change progressively from one album to the next. A detailed description of the study's results can be viewed on LTU's news webpage.
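One plausible way to lift song-level similarities to album-level comparisons, as the study does, is to average the pairwise similarities between the songs of two albums. The sketch below illustrates that aggregation step with random descriptor vectors; the function names, the 50-dimensional descriptors, and the averaging rule itself are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np


def song_similarity(desc_a, desc_b):
    """Inverse-distance similarity between two song descriptor vectors."""
    return 1.0 / (1.0 + np.linalg.norm(desc_a - desc_b))


def album_similarity(album_a, album_b):
    """Compare two albums (each a list of song descriptor vectors) by
    averaging all pairwise song similarities between them.
    A plausible aggregation; the paper's exact method may differ."""
    return float(np.mean([[song_similarity(a, b) for b in album_b]
                          for a in album_a]))


# Simulated descriptors: 11 songs per album, 50 numbers per song,
# with the "late" album's descriptors shifted to mimic a style change.
rng = np.random.default_rng(1)
early = [rng.normal(0.0, 1.0, size=50) for _ in range(11)]
late = [rng.normal(0.5, 1.0, size=50) for _ in range(11)]

# An album should be at least as similar to itself as to a shifted one.
print(album_similarity(early, early) > album_similarity(early, late))
```

With such an album-by-album similarity matrix in hand, ordering albums by mutual similarity is what lets the researchers observe a progressive drift in style from one release to the next.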

The significance of this study reaches beyond music history. In the future, the approach might help determine why some music lives on while other music evolves into something unrecognizable, or simply disappears almost entirely from music culture. It could also soon aid in searching, browsing, and organizing large music databases, as well as identifying music that matches an individual listener's preferences. Anyone interested in what makes music distinctive (and appealing to the masses) should find this new algorithm's potential intriguing.