Imani Danielle Mosley

bassoonist | musicologist | digital humanist


machines care if you listen

A lot has happened since my last entry. The questions, the conundrums…they grow. On Twitter we laugh, we talk, we use the emoji of a pensive face, raised brow, and a hand up to its chin (you know the one: 🤔), and we catch the eye of Amazon’s taxonomist (#nometadatanofuture). But what lets me know that I’m on the right track with all of this is the stream of articles that keep coming out about machine learning, metadata, and music.

Pitchfork’s article from last week, “How Smart Speakers Are Changing the Way We Listen to Music,” asks what happens when we remove the friction from listening:

Though any streaming platform user is deeply familiar with mood and activity-geared playlists, the frictionless domestic landscape of voice commanded speakers has led to a surge in such requests. “When people say, ‘Alexa, play me happy music,’ that’s something we never saw typed into our app, but we start to see happening a lot through the voice environment,” Redington explains.

The article goes on to discuss how music streaming services determine what “happy” is (metadata) and, briefly, how that happens (machine learning! human curation!), but what it doesn’t do is ask what it means to have machines (and humans) decide for the rest of us what happy is, or if and how it manifests in a song. I find the whole idea incredibly invasive, even more so now after doing algorithm testing.
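To make the metadata point concrete: here is a minimal sketch of what a mood request might reduce to under the hood, assuming tracks arrive with mood tags already attached upstream (by models, curators, or both), so that “play me happy music” becomes a simple tag filter. All of the names here, the tracks, the tags, the `play_mood` function, are invented for illustration; I don’t know what any particular service actually implements.

```python
# A minimal sketch: tracks carry mood tags assigned upstream, and a voice
# request resolves to a tag filter. Everything here is hypothetical.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Track:
    title: str
    artist: str
    moods: Set[str]  # assigned by an ML model, a human curator, or both

CATALOG = [
    Track("Golden Hour", "Hypothetical Artist A", {"happy", "upbeat"}),
    Track("Grey Morning", "Hypothetical Artist B", {"melancholy", "calm"}),
]

def play_mood(catalog: List[Track], mood: str) -> List[Track]:
    """Return every track someone, somewhere, decided matches this mood."""
    return [track for track in catalog if mood in track.moods]

for track in play_mood(CATALOG, "happy"):
    print(track.title)  # -> Golden Hour
```

The lookup itself is trivial; the invasiveness I’m describing lives entirely in who assigns those `moods` sets, and on what grounds.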

The first algorithm I tested was “tonal/atonal,” an algorithm supposedly designed to tell us if a song is…I don’t know. If the song has a key center? Perhaps. But this seems worse than useless, as the majority of music would be classified in some way as “tonal.” In explaining this to my co-workers, I invoked the most typically musicological definition of atonality that I could, accompanied by a little Pierrot Lunaire. But NPR’s music library does not consist solely of Schoenberg and Berg. Since I don’t know what music was included in the original model for this algorithm, I have no idea what it was trained on. Regardless, it had a very hard time distinguishing bad singing from the absence of a tonal center. But Machine Learning!, right? And of course, there was non-Western music in my dataset, which raised no end of problems. My parameters were wide for NPR’s sake, but I couldn’t help thinking that the sheer existence of this algorithm highlights a human (Western) bias around how we supposedly listen.
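For readers wondering what a machine could even be measuring here, a classic proxy for “having a key center” is key-profile correlation: average a track’s chroma vector and see how well it matches any of the 24 rotated Krumhansl-Kessler major/minor profiles. The sketch below is my own toy illustration of that idea, not the classifier I actually tested; the audio file name and the 0.6 threshold are invented.

```python
# A toy "tonalness" score: correlate a track's average chroma vector with
# the 24 rotated Krumhansl-Kessler key profiles. A high best correlation
# suggests a clear key center; a low one suggests weak or absent tonality.
# This is an illustrative proxy, not the algorithm tested at NPR.
import numpy as np
import librosa

MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def tonalness(path, threshold=0.6):  # threshold is arbitrary
    y, sr = librosa.load(path)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1)
    scores = [np.corrcoef(np.roll(profile, k), chroma)[0, 1]
              for profile in (MAJOR, MINOR) for k in range(12)]
    best = max(scores)
    return best, ("tonal" if best >= threshold else "atonal")

score, label = tonalness("pierrot_lunaire_excerpt.wav")  # hypothetical file
print(f"{label} (key-profile correlation = {score:.2f})")
```

Even this toy version exposes the bias I mean: the Krumhansl-Kessler profiles were derived from probe-tone experiments with Western tonal listeners, so non-Western repertoire gets scored against a yardstick that was never meant for it.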

This is all far away from the “happy” algorithm that Eric Harvey mentions in his Pitchfork article (note: I will be assessing that algorithm next week), but all of these things are interconnected. We are letting machines make “decisions” about what something is, as if that could be determined outside of a few clear parameters (BPM, meter, etc.), and in doing so we are reshaping the way we listen, whether we know it or not. I myself have smart speakers (three Echo Dots scattered throughout my house), but as in all other circumstances, my listening is curated and decided solely by me, meaning no asking Alexa for “happy” music or anything of that ilk (though that will be a fun experiment).

This hearkens back to Anna Kijas’ keynote in my last post. At the moment, I’m writing about programmer bias and canon replication in streaming music: what happens when I ask Alexa to play music like Debussy or “radiant” classical music? What you hear is no longer in your control; your listening experience has become, for lack of a better term, frictionless. I think, subconsciously, many classical music listeners feel this (without, possibly, knowing just what it is they feel), because at the end of the day, classical music listeners are collectors and curators. A toy sketch of that canon-replication mechanism follows below.

However, I don’t think people see this lack of friction as a problem about which to be concerned. I do. Digital streaming is the way we are consuming music (I made a point to use the word “consuming,” coming both from capitalist terminology and from eating and devouring), and if we don’t answer these questions and address these issues now, they may become impossible to rectify.
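What might “play music like Debussy” actually do? One common pattern in recommenders (an assumption on my part; I don’t know what any given service implements) is to rank by item similarity blended with popularity, which is exactly how a canon replicates itself. Every embedding and play count below is invented.

```python
# A hypothetical sketch of why "play music like Debussy" can replicate the
# canon: blending similarity with popularity means already-canonical names
# win. All numbers are invented for illustration.
import numpy as np

composers = ["Debussy", "Ravel", "Lili Boulanger", "Takemitsu"]
embeddings = np.array([[0.90, 0.10],   # toy 2-D "style" vectors
                       [0.85, 0.20],
                       [0.80, 0.25],
                       [0.70, 0.40]])
play_counts = np.array([1_000_000, 800_000, 20_000, 60_000])

query = embeddings[0]  # the seed: "like Debussy"
similarity = embeddings @ query / (
    np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query))
score = similarity * np.log1p(play_counts)  # popularity-weighted ranking

for i in np.argsort(-score)[1:]:  # skip the seed itself
    print(f"{composers[i]}: {score[i]:.2f}")
```

In this toy feature space, Lili Boulanger sits closer to Debussy than Takemitsu does, but the popularity weight buries her anyway: canon replication in a few lines of arithmetic.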

My next big project is addressing NPR’s classical music metadata and library organization system, and that’s a doozy. I already have a lot to say, but I will go into it more next time. ❋