Imani Danielle Mosley

music historian | digital & public humanist | bassoonist


machines care if you listen

A lot has happened since my last entry. The questions, the conundrums…they grow. On Twitter we laugh, we talk, we use the emoji of a pensive face, raised brow, and a hand up to its chin (you know the one: 🤔), and we catch the eye of Amazon’s taxonomist (#nometadatanofuture). But something that lets me know that I’m on the right track with all of this is the articles that keep coming out about machine learning, metadata, and music.

Pitchfork’s article from last week, “How Smart Speakers Are Changing the Way We Listen to Music,” asks what happens when we remove the friction from listening:

Though any streaming platform user is deeply familiar with mood and activity-geared playlists, the frictionless domestic landscape of voice commanded speakers has led to a surge in such requests. “When people say, ‘Alexa, play me happy music,’ that’s something we never saw typed into our app, but we start to see happening a lot through the voice environment,” Redington explains.

The article goes on to talk about how music streaming services determine what “happy” is (metadata) and to discuss briefly how that happens (machine learning! human curation!), but what it doesn’t do is discuss what it means to have machines (and humans) decide for the rest of us what happy is, or if and how it manifests itself in a song. I find the whole idea incredibly invasive, even more so now after doing algorithm testing.
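
To make that concrete, here is a purely hypothetical sketch (in Python) of what a mood-tagged track record might look like once a classifier and a human curator have had their say; every field name and number below is invented for illustration and comes from no real service:

```python
# Entirely hypothetical metadata record for one track after automated mood
# tagging plus human curation; no real service's schema is being quoted.
track_record = {
    "title": "Example Track",
    "artist": "Example Artist",
    "bpm": 128,                    # measurable, relatively uncontroversial
    "key": "D major",              # measurable, with caveats
    "moods": {                     # a model's confidence that the track "is" each mood
        "happy": 0.87,
        "sad": 0.04,
        "aggressive": 0.02,
    },
    "mood_source": "classifier + editorial review",
}

# When someone says "Alexa, play me happy music," something like this line
# quietly decides what "happy" means for them.
plays_as_happy = track_record["moods"]["happy"] >= 0.80
print(plays_as_happy)
```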

The first algorithm I tested was “tonal/atonal,” an algorithm supposedly designed to tell us if a song is…I don’t know. Whether the song has a key center? Perhaps. But this seems beyond unhelpful, as the majority of music would be classified in some way as “tonal.” In explaining this to my co-workers, I invoked the most typically musicological definition of atonality that I could, accompanied by a little Pierrot Lunaire. But NPR's music library does not consist solely of Schoenberg and Berg. As I do not know what music was included in the original model for this algorithm, I have no idea what it was trained on. Regardless, it had a very hard time distinguishing bad singing from the absence of a tonal center. But Machine Learning!, right? And of course, there was non-Western music in my dataset, which raised no end of problems. My parameters were wide for NPR's sake, but I couldn't help thinking that the sheer existence of this algorithm highlights a human (Western) bias around how we supposedly listen.
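
For the curious, here is a rough sketch of the shape that kind of test takes in code. It uses Essentia's MonoLoader and KeyExtractor as a stand-in for a tonality check; it is not the classifier I actually evaluated, and the 0.5 threshold is something I invented purely for illustration:

```python
# A crude sketch, not the production tonal/atonal model: estimate a key and
# treat the strength of that estimate as a proxy for "has a tonal center."
import essentia.standard as es

def crude_tonality_check(path, threshold=0.5):
    # Load the track as mono audio at 44.1 kHz.
    audio = es.MonoLoader(filename=path, sampleRate=44100)()
    # KeyExtractor returns an estimated key, its mode, and a strength value
    # describing how well the audio fits that key profile.
    key, scale, strength = es.KeyExtractor()(audio)
    return {
        "key": f"{key} {scale}",
        "key_strength": float(strength),
        # The threshold is invented for illustration; a real evaluation would
        # have to ask whether one number can stand in for "tonality" at all.
        "guess": "tonal" if strength >= threshold else "atonal",
    }

print(crude_tonality_check("pierrot_lunaire_excerpt.wav"))
```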

This is all far away from the “happy” algorithm that Eric Harvey mentions in his Pitchfork article (note: I will be assessing that algorithm next week), but all of these things are interconnected. We are letting machines make “decisions” about what something is, as if that could be determined at all outside of a few clear cases (BPM, meter, etc.), and in doing so we are reshaping the way we listen, whether we know it or not. I myself have smart speakers (three Echo Dots scattered throughout my house), but as in all other circumstances, my listening is curated and decided solely by me, meaning no asking Alexa for “happy” music or anything of that ilk (though that will be a fun experiment).

This hearkens back to Anna Kijas’ keynote in my last post. At the moment, I’m writing about programmer bias and canon replication in streaming music: what happens when I ask Alexa to play music like Debussy, or “radiant” classical music? What you hear is no longer in your control; your listening experience has become frictionless, for lack of a better term. I think, subconsciously, many classical music listeners feel this (without, possibly, knowing just what it is they do feel) because, at the end of the day, classical music listeners are collectors and curators. However, I don’t think people see this lack of friction as a problem about which to be concerned. I do. Digital streaming is the way we are consuming music (I made a point of using the word “consuming,” a word that comes both from capitalist terminology and from eating, from devouring), and if we don’t answer these questions and address these issues now, they may become impossible to rectify.


My next big project is addressing NPR’s classical music metadata and library organization system, and that’s a doozy. I already have a lot to say, but I will go into it more next time. ❋

Exspiravit ex machina

Getting this started has proved more difficult than I initially envisioned; who knows why. I say this because I have been completely overtaken by this work and the questions that have arisen from it, so, naturally, writing about it should be easy, right?

Right.

Let's start with a little background (or perhaps quite a bit of it): when I accepted this internship at NPR, I had no idea, truly, what I was in for. I've spent the last two years immersed in digital humanities and librarianship, realizing that this space was perfect for me. It was a weird combination of all the things I love; it addressed the many and myriad questions I had about being a scholar in the future, whatever that means; and it allowed me to focus on the things at which I'm really quite good: workflows, information management, metadata, academic technology, and so on. These were things that, over the years, I noticed musicology as a discipline had little interest in (something that did not make sense to me then and still doesn't), and this outlet was one I needed badly. So when I saw the call for interns and read the description, I knew it was something I needed to do. I had no idea just how much that decision would change my life.

Well that was dramatic.

My internship began and I learned about my daily tasks, things I was already familiar with, such as ingesting new promotional music into our in-house database. After a few weeks of general intern training and practice working with our systems, I moved on to the real meat: music information retrieval (MIR) mood tagging. First, though, I had to do some research, you know, real scholarly stuff. I read published papers by those foremost in the field, such as J. Stephen Downie, and read the documentation provided by Essentia, the algorithm library we use to assess our tracks. Scintillating stuff that led to the deepest of rabbit warrens. It was here that I learned about digital musicology, a term I had heard but with which I had not engaged. I am still learning quite a bit about it, but what I have gleaned so far is that…there are not a lot of musicologists involved in digital musicology. That might sound odd to you; it sounded odd to me at first. Let's spend a little time on that, shall we?
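
To give a sense of what “assessing our tracks” looks like in practice, here is a minimal sketch of running Essentia's MusicExtractor over a single file from Python. The high-level mood and tonal/atonal classifiers ship as separate models, so this only touches the descriptors the default extractor computes, and descriptor names can vary between Essentia versions:

```python
# Minimal sketch: run Essentia's MusicExtractor on one file and look at what
# it reports. The high-level mood/tonal classifiers need separately
# distributed models, so only the default descriptors appear here.
import essentia.standard as es

features, frame_features = es.MusicExtractor()("some_track.mp3")

# List everything the extractor computed for this track.
for name in sorted(features.descriptorNames()):
    print(name)

# A couple of the "easy" descriptors -- the ones that feel measurable in a way
# that "happy" does not.
print("BPM:", features["rhythm.bpm"])
print("Average loudness:", features["lowlevel.average_loudness"])
```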

Digital musicology, along with music information retrieval, touches on a number of fields: music theory, acoustics, music cognition/perception, music psychology, programming, music science, library studies, and more. I suggest the Frontiers in Digital Humanities page linked above for more information. And to be fair, there are musicologists engaged in this work, asking really interesting questions. But when MIR is put to work in the real world, say, in music streaming services, the specific tools that musicologists have as humanists are shelved in favor of the tools of theorists and programmers.

Enter yours truly.

In the process of undertaking a massive assessment of several MIR algorithms, I found myself asking lots of epistemological questions — humanist questions — that seemed unanswered. What is tonality, and how do we define it for an algorithm? What should be included in our training models? What biases are represented and replicated in our algorithms? Musicologist Anna Kijas touches on these things beautifully in her Medium post, taken from her keynote at the very recent Music Encoding Conference (at which my project supervisor was present), and I highly suggest reading it. I will get into all of the problems I have faced and am facing, but this is already quite long for a blog post. (I know, it's my blog, I can do what I like. Point taken.) Plus, I feel it's only fair to give those problems the space they deserve.