Every week, I send out a development log newsletter for Loud Numbers, the data sonification podcast that I'm making with Miriam Quick. The latest edition is a round-up of all the work we've done on the project in 2020, and what we've learned along the way. I'm cross-posting it here for posterity.
Hi there, and welcome to the last Loud Numbers development log newsletter of 2020. In case you’re a new subscriber, this is normally where we (Duncan & Miriam) talk about what we’ve been working on over the past week as part of our quest to make the world’s first data sonification podcast.
This week will be a bit different, though. We’re going to run through everything we’ve learnt about sonification this year. Which is pretty much everything we know about sonification, because we didn’t really start working on it until this year. Make yourself a cup of tea and strap in, because this might be a bit longer than usual.
Our greatest discovery of the year has been Sonic Pi - the coding environment where we make the raw forms of almost all of our sonifications. It’s based on Ruby, which neither of us had ever coded in before, but our experience in other languages and the fantastic tutorial allowed us to get up to speed quite fast.
Our first attempts at sonification were pretty ugly, but that was always going to be the case. We knew we’d need to push through the phase where we mapped everything to pitch and amplitude and it all sounded like a 70s arcade machine going wrong.
What we didn’t know is that our coding technique would improve substantially too. After creating five full sonification systems, we’ve now built up a solid framework of proto-modules that load data, process it into Sonic-Pi-friendly data structures, normalise it, and then apply sonification mappings.
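To make the stages concrete, here's a minimal plain-Ruby sketch of that load → process → normalise → map pipeline. The function names and sample data are invented for illustration; this isn't our actual framework code, just the shape of it.

```ruby
require "csv"

# Stage 1: load raw values from one column of a CSV file.
# (Illustrative only; our real loaders handle messier data.)
def load_data(path, column)
  CSV.read(path, headers: true).map { |row| row[column].to_f }
end

# Stage 2: normalise values to the 0.0–1.0 range.
def normalise(values)
  min, max = values.minmax
  values.map { |v| (v - min) / (max - min) }
end

# Stage 3: apply a sonification mapping — here, scale the
# normalised values onto a range of MIDI note numbers.
def map_to_notes(normalised, low: 48, high: 84)
  normalised.map { |v| (low + v * (high - low)).round }
end

temps = [2.1, 5.4, 9.8, 14.2, 9.1]   # made-up sample data
notes = map_to_notes(normalise(temps))
# In Sonic Pi you'd then step through `notes` with `play` and `sleep`.
```

The payoff of splitting it up like this is that only stage 3 changes between sonifications; the loading and normalising code gets reused every time.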
We’re primarily focusing on getting our first set of sonifications released, but once that’s complete we’d like to clean up our framework, document it properly, and then share it with the community so others can use it for their own sonifications. We’ll likely do that with a set of beta testers, so if that’s something you’re interested in then stay tuned for more information in 2021.
The second stage of our sonification pipeline is Logic, where we turn the raw sonifications into something that sounds a bit more like music.
This year we got a lot better at using the program to create sounds and build tracks. We’re now much more familiar with Logic’s vast library of built-in sounds, and better at knowing which kinds of sound will work in each role. For most of our sonifications, we build a rough working version in Sonic Pi first, often using sounds made in Logic and pulled in as external samples. Then we either export tracks to audio files (if they’re based on data) and add effects in Logic, or, if they’re purely musical, rebuild them in Logic using a more refined sound set. The end result is always mixed down in Logic, on good old KRK studio monitors.
Recently, we went back to the first sonification track that we completed and listened to it again with fresh ears. It… was not as good as we remembered. It was a huge pile of dense sound mappings, with so much going on that it was almost impossible to pull out the story we were trying to tell.
We’ve got some ideas to rework it with more clarity in mind. And we believe that it’s possible to combine clarity with complexity. But going back to that first track made us realise how our sonification style has evolved over the course of 2020.
We started out by trying to find as many tangentially connected datasets as possible and mapping all of them to different sounds. But over the year, two things have happened. First, we’ve become more confident - now we just map one or two datasets, and fill in the rest with musical elements that merely contribute to the mood and genre. Second, we only use sound mappings that we can explain in a single sentence - “The longer the note, the higher the temperature”, for example.
By doing this, we get to keep the clarity of story while still evoking mood and genre in the way we want to. And as an added bonus, we don’t need to come up with a zillion different sonification mappings and make sure that they all work well with each other.
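A one-sentence mapping like “the longer the note, the higher the temperature” really is just one small function. Here's a hypothetical Ruby version (the parameter names, beat lengths, and temperatures are invented, not from any of our actual tracks):

```ruby
# "The longer the note, the higher the temperature":
# normalise the temperature, then scale it onto note durations in beats.
def temp_to_duration(temp, min_t:, max_t:, shortest: 0.25, longest: 2.0)
  t = (temp - min_t).to_f / (max_t - min_t)  # normalise to 0..1
  shortest + t * (longest - shortest)        # scale onto beat lengths
end

temps = [4.0, 12.0, 20.0]                    # made-up temperatures in °C
durations = temps.map { |t| temp_to_duration(t, min_t: 4.0, max_t: 20.0) }
# In Sonic Pi, roughly: play :c4, sustain: duration; sleep duration
```

If you can't write the mapping down this simply, that's usually a sign the listener won't be able to hear it either.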
We didn’t have a strong design philosophy in mind when we began this project. We knew we wanted to make data sound like music, but that was about as far as it went. Now, though, after spending a year with the artform and reading around the subject, we have a much clearer idea of what we want to achieve with this project.
First, we’ve realised that we’re always shooting for that sweet spot where a data story overlaps with a musical track, exploring the space between data structures and musical structures – where do they complement each other, and where are they in conflict? Our end goal is always to create something that’s satisfying to listen to, something you’d press play on more than once.
Second, we’ve realised that creating tracks with a strong mood is centrally important to us, because that’s how people remember things. Doing this effectively means tapping into the meanings we associate with sounds, from airhorns to funeral bells. We believe sonification can transform data into an experience that’ll stay with you.
Third, all of the tracks we’ve made so far build on pre-existing genres, from baroque counterpoint to UK jungle, taking styles people are already familiar with and putting a data spin on them. This is perhaps unusual in sonification, but it feels natural for us because we both have quite diverse musical tastes. Music genre provides another creative constraint on top of the fact that, y’know, trying to write tracks around data is already quite constraining. But it’s also liberating, opening up sound worlds rich with memory and resonance.
We began this project with more than a little arrogance. We felt that a lot of the sonification work on the web wasn’t very good, and we thought that we could do a lot better. We’re a little ashamed of those feelings now, even though they were a powerful motivator to start the project in the first place.
Instead, as we’ve worked on Loud Numbers over this year, we’ve come to realise that there’s actually loads of great sonification work out there that isn’t reaching the audience it deserves. As is common in every artform, the best work isn’t necessarily the stuff that floats to the surface.
We think there’s a real opportunity to bring that work to a wider audience, and that’s something we’re passionate about working on - again, once our first set of sonifications is complete and published. Look out for more from us on that in 2021.
We want to shout out to a few people in particular who’ve supported and/or inspired us along the way. They’re all doing fascinating things with data and they deserve your attention:
- Sara Lenzi
- Jordan Wirfs-Brock
- Mike Brondbjerg
- Lindsay Diamond
- Shawn Graham
- Nightingale & Jason Forrest
- Stefanie Posavec
In addition, a huge thanks to everyone who has read this devlog over the course of the year. It’s genuinely been one of the most useful things we’ve done. Sending a weekly newsletter not only forces you to keep a project moving forward, but also lets you capture thoughts and reflections along the way.
That’s all from us for 2020. We hope you have a wonderful and restful holiday season. See you in 2021!