
DonHopkins · today at 11:14 AM

A lot of this discussion makes more sense if you know the history of The Echo Nest and their acquisition by Spotify.

The Echo Nest was one of the most interesting music-tech companies ever built: a music intelligence platform spun out of MIT that analyzed audio, metadata, web text, artist similarity, genre structure, and playlist construction. Spotify bought them in 2014 specifically to strengthen music discovery and recommendation. At the time, Spotify said the deal would let it use The Echo Nest's "in depth musical understanding and tools for curation", and even said the Echo Nest API would remain "free and open" for developers.

https://en.wikipedia.org/wiki/The_Echo_Nest

https://news.cision.com/spotify/r/spotify-acquires-the-echo-...

If you ever used the old Echo Nest APIs, Remix SDK, demos, Music Hack Day projects, or Paul Lamere's experiments, that was a golden era. Echo Nest had open APIs for artist similarity, track analysis, playlisting, "taste profiles", ID mapping across services, and beat/segment-level music analysis. Paul Lamere's whole ecosystem of demos came out of that world: Boil the Frog, Sort Your Music, Organize Your Music, playlistminer, and later Smarter Playlists. His GitHub still points to a lot of that lineage, and his blog is still active. In fact, he posted just this month about rebuilding Smarter Playlists after ten years of use.

https://github.com/plamere

https://musicmachinery.com/
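For anyone who never used it, the analysis the Echo Nest returned was a JSON document of bars, beats, and segments, each with a start time, duration, and confidence (segments also carried pitch and timbre vectors). A minimal sketch of consuming that kind of payload; the field names here are simplified from memory, not an exact schema:

```python
# Sketch of working with Echo Nest-style beat-level analysis data.
# The dict fields ("start", "duration", "confidence") are illustrative
# stand-ins for the real schema, not an exact reproduction of it.

def average_tempo(beats):
    """Estimate tempo (BPM) from the gaps between beat start times."""
    if len(beats) < 2:
        raise ValueError("need at least two beats")
    starts = [b["start"] for b in beats]
    intervals = [b - a for a, b in zip(starts, starts[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return 60.0 / mean_interval

# Mock analysis: a beat every 0.5 seconds, i.e. 120 BPM.
beats = [{"start": 0.5 * i, "duration": 0.5, "confidence": 0.9}
         for i in range(16)]
print(round(average_tempo(beats)))  # prints 120
```

Everything from Sort Your Music's tempo column to the Infinite Jukebox's beat grid was built on data shaped roughly like this.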

The sad part is that the open developer platform mostly did not survive the acquisition. By 2016, developers were being told that the Echo Nest API would stop issuing new keys and then stop serving requests, with migration to Spotify’s API instead. Community discussions at the time also noted that some Echo Nest capabilities, especially things like Rosetta-style cross-service mapping, were not really carried over.

https://github.com/beetbox/beets/issues/1920

That's also why Spotify's current AI DJ is so frustrating. The problem is that "AI DJ" is not the same thing as a system that deeply understands musical structure, discography semantics, performance history, or classical work/movement hierarchy. It's a recommendation + narration layer, not a true MIR-native musical intelligence system.

If you're interested in the research side of this field, the conference is ISMIR: the International Society for Music Information Retrieval, which is literally dedicated to computational tools for processing, searching, organizing, and accessing music-related data. That community is still very active. The ISMIR site describes MIR exactly in those terms, and the 2010 Utrecht conference included Paul Lamere's tutorial, "Finding A Path Through The Jukebox -- The Playlist Tutorial."

https://ismir.net/

https://news.ycombinator.com/item?id=36482468

>gffrd on June 26, 2023 | on: Show HN: Mofi – Content-aware fill for audio to ch...

>Yes! It was "Infinite Jukebox," created by Paul Lamere ... it was awesome because it would analyse a track, then visualize its "components" and you could watch as the new "infinite" track looped back on itself and jumped from point-to-point in the original track to create an everlasting one. He created some excellent products from the Rdio API, and later Spotify ... and I believe his analysis engine ended up being the foundation upon which Spotify's _play more tracks like these_ capability is based.

>Looks like he's moved over to publish on Substack -- there's a recent(ish) post reflecting on 10 years of Infinite Jukebox:

https://musicmachinery.substack.com/p/the-infinite-jukebox-1...

>rahimnathwani on June 26, 2023

>However, that wasn't the end of the Infinite Jukebox. An enterprising developer: Izzy Dahanela made her own hack on top of mine. To make it work without using uploaded content, she matches up the Echo Nest / Spotify music analysis with the corresponding song on YouTube. She hosts this at eternalbox.dev. It runs just as well as it ever did, 10 years later.
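The core trick behind the Infinite Jukebox is simple to sketch: compare every beat's feature vector (timbre and pitch from the analysis) against every other beat, treat close pairs as seamless jump points, then play through the track while occasionally taking a jump edge instead of the next beat. A toy version, with made-up feature vectors standing in for the real analysis:

```python
import math
import random

def jump_table(features, threshold):
    """For each beat, list the other beats whose feature vectors are
    close enough to jump to without an audible seam."""
    table = {}
    for i, fi in enumerate(features):
        table[i] = [
            j for j, fj in enumerate(features)
            if j != i and math.dist(fi, fj) < threshold
        ]
    return table

def infinite_walk(table, n_beats, steps, rng):
    """Play beats in order, sometimes taking a jump edge instead of
    advancing; wrap to the start if we run off the end of the track."""
    path, beat = [], 0
    for _ in range(steps):
        path.append(beat)
        jumps = table[beat]
        if jumps and rng.random() < 0.2:
            beat = rng.choice(jumps)
        else:
            beat = (beat + 1) % n_beats
    return path

# Toy "timbre" vectors: beats 0/4 and 2/6 sound alike, so jumps exist.
features = [(0, 0), (1, 0), (2, 0), (3, 0),
            (0, 0.1), (1, 9), (2, 0.1), (3, 9)]
table = jump_table(features, threshold=0.5)
path = infinite_walk(table, len(features), steps=32, rng=random.Random(42))
```

The real thing adds weighting by distance, visualization of the jump arcs, and gapless audio scheduling, but this is the skeleton: a similarity graph over beats, traversed forever.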

>DonHopkins on June 28, 2023 | on: Show HN: Mofi – Content-aware fill for audio to ch...

>I was working on some music retrieval stuff in 2010, so I joined the EchoNest developer program and played around with their web apis that let you upload music and download an analysis that you could use in all kinds of cool ways. They had an SDK with some great demos and example code. I discussed it with Eric Swenson and Paul Lamere, and had the chance to hang out with Paul Lamere and Ben Fields at ISMIR 2010 (the International Society for Music Information Retrieval conference) in Utrecht, where they gave a tutorial about playlisting:

https://ismir2010.ismir.net/program/tutorials/index.html#tut...

Finding a path through the Jukebox: The Playlist Tutorial:

https://musicmachinery.com/2010/08/06/finding-a-path-through...

>Tutorial 4: Finding A Path Through The Jukebox -- The Playlist Tutorial. The simple playlist, in its many forms -- from the radio show, to the album, to the mixtape has long been a part of how people discover, listen to and share music. As the world of online music grows, the playlist is once again becoming a central tool to help listeners successfully experience music. Further, the playlist is increasingly a vehicle for recommendation and discovery of new or unknown music. More and more, commercial music services such as Pandora, Last.fm, iTunes and Spotify rely on the playlist to improve the listening experience. In this tutorial we look at the state of the art in playlisting. We present a brief history of the playlist, provide an overview of the different types of playlists and take an in-depth look at the state-of-the-art in automatic playlist generation including commercial and academic systems. We explore methods of evaluating playlists and ways that MIR techniques can be used to improve playlists. Our tutorial concludes with a discussion of what the future may hold for playlists and playlist generation/construction.

>[...]
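The "path through the jukebox" framing is quite literal in projects like Boil the Frog, which finds a chain of artists connecting two endpoints through the similarity graph. Under the hood that is essentially a breadth-first search; here is a toy sketch over a hypothetical hand-built similarity graph (the real one came from Echo Nest artist-similarity data):

```python
from collections import deque

def boil_the_frog_path(similar, start, goal):
    """Shortest chain of artists from start to goal, where each step
    follows an 'artists also like' edge: a BFS over the similarity graph."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in similar.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connecting path in this graph

# Tiny hypothetical similarity graph; the artist edges are placeholders.
similar = {
    "Miles Davis": ["Herbie Hancock"],
    "Herbie Hancock": ["Miles Davis", "Daft Punk"],
    "Daft Punk": ["Herbie Hancock", "Justice"],
    "Justice": ["Daft Punk"],
}
path = boil_the_frog_path(similar, "Miles Davis", "Justice")
```

A playlist generated this way drifts gradually between styles, which is exactly the "boiling the frog" effect the demo's name refers to.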

Some of the most interesting Echo Nest descendants are still around in one form or another. Paul Lamere's current/public projects include Smarter Playlists, and his GitHub still highlights SortYourMusic, OrganizeYourMusic, playlistminer, and BoilTheFrog. Glenn McDonald’s Every Noise at Once is another major descendant of that tradition: an enormous map of music genre space. Glenn's own site still describes it as an "inexorably expanding universe of music-processing experiments", and the public genre pages now explicitly say they're a long-running snapshot based on Spotify data through 2023-11-19. After Spotify's layoffs in 2023, TechCrunch reported that Glenn lost access to the internal data needed to keep Every Noise fully updated, which is why it now feels more archival than alive.

Back in 1998 when I was working on The Sims 1, I proposed in my review of the design document something I called "Moody Music": essentially a soundtrack plus a synchronized semantic/emotional control track that could affect gameplay over time. The idea was that music wouldn't just decorate the simulation; it would change it: influencing mood, motives, relationships, skills, timing, and even triggering events at specific musical moments. I wrote that up in my review of the 1998-08-07 Sims design document, along with the broader idea of letting the game recognize a player's own CDs and fetch associated "moody tracks" from the network.

Don’s review of The Sims Design Document, Draft 3 – 8/7/98:

https://donhopkins.com/home/TheSims/TheSimsDesignDocumentDra...

>I have some ideas about how the music could effect the game, that I will write up more completely later. In a nutshell, the people in the house could have a cd or record collection to choose from, each record an object that has the sound (audio wave and/or midi) and a “moody” track synchronized with the music. Playing the music also plays the moods into the environment that the people pick up on. Music can subtly effect how people react to the environment, objects, and each other. It can effect their motives and even their skills temporarily. For example, you might be able to clean the house better and faster if you put on some up tempo bouncy music. The player should be able to assume the role of disc jockey on the radio, and play from another larger library of music and commercials, that effect the peoples moods and buying habits. The TV of course is another source of mood altering temporal media, with commercials and shows that should effect different people differently. But the most important part of this idea is instead of the game effecting the music that’s played, the music effects how the game plays! The ultimate way for the user to effect the game via music, is to insert one of their own CD’s into their real computer’s CDROM drive, and the game would recognize it, and start playing it (maybe with a simple cd player interface to select the song). There could be a database associating the unique ID number of the CD with a table of contents and “moody” tracks that tell how the song effects the peoples emotions over time, with "percussion" events at dramatic moments of the music that can trigger arbitrary events in the game (like provoking a fight that was brewing, or triggering an orgasm at just the right place in the song). We hire monkeys to listen to well known CD’s, and enter time synchronized tracks with semantic meanings in Max (like note tracks, and user defined numeric tracks) or some other timeline editing tool.
>Put the database up on the web for instant retrieval, so when somebody sticks in a new CD, it downloads our “moody” tracks that go with it, and it starts playing and effecting their game! Streaming emotions over the net! Eventually there should be an end-user tool so people can record their own responses to music as moody tracks they can use in our games. This mechanism could be used in all kinds of games, to varying degrees of effect. I’m not saying that music should be the only way to control the game – it’s more like a subtle background effect, but there certainly could be a scenario where you try to accomplish some task (like taming a wild beast) by using only your musical taste and timing. The real bottom line benefit is that you get to listen to your OWN cd collection of music you want to hear, instead of being driven crazy by the repetitive music bundled with the game.
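For concreteness, here is one way the "moody track" idea could be represented. This is purely a sketch of the proposal above, not anything that shipped in The Sims, and all the names are invented: a time-sorted list of (timestamp, motive deltas) events applied to a sim as playback time advances, with a large "percussion event" delta at a dramatic musical moment.

```python
# Hypothetical "moody track" representation: (seconds, motive deltas)
# events synchronized to a song, applied as playback time advances.
# Motive names and the -100..100 range are assumptions for illustration.

def apply_moody_track(motives, track, t_prev, t_now):
    """Apply every event whose timestamp falls in (t_prev, t_now]."""
    for t, deltas in track:
        if t_prev < t <= t_now:
            for motive, delta in deltas.items():
                # Clamp motives to a -100..100 range.
                motives[motive] = max(-100, min(100,
                                      motives.get(motive, 0) + delta))
    return motives

# An up-tempo song that gradually lifts energy and fun, with a
# "percussion event" at the drop (t=30) that spikes mood.
track = [
    (5.0,  {"energy": +5}),
    (15.0, {"energy": +5, "fun": +10}),
    (30.0, {"fun": +25, "social": +10}),  # dramatic moment, big bump
]

motives = {"energy": 0, "fun": 0}
motives = apply_moody_track(motives, track, t_prev=0.0, t_now=20.0)
# energy 10, fun 10
motives = apply_moody_track(motives, track, t_prev=20.0, t_now=35.0)
# fun 35, social 10
```

The same event list could just as easily carry triggers for arbitrary game events at musical moments, which is the "percussion event" half of the original proposal.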

In hindsight it was quite adjacent to MIR, affective computing, adaptive soundtrack systems, and some of the ambitions that Echo Nest represented. That's why I was so excited about The Echo Nest in 2010, when I was working with Will Wright at the Stupid Fun Club on MediaGraph, a spatial music organization and navigation system.

MediaGraph Music Navigation with Pie Menus: a prototype developed for Will Wright's Stupid Fun Club

https://www.youtube.com/watch?v=2KfeHNIXYUc

>This is a demo of a user interface research prototype that I developed for Will Wright at the Stupid Fun Club. It includes pie menus, an editable map of music interconnected with roads, and cellular automata.

>It uses one kind of nested hierarchical pie menu to build and edit another kind of geographic networked pie menu.