Tenday Notes 11 August - 20 August 2023

Every ten days or so, I share a quick digest of what I've been working on and reading. Here's the latest. More in the series here.


I spent last weekend in Stockholm - my first visit to the capital in about five years. It hasn’t changed too much, except that several people I know there have left. Slussen is still under construction, somehow.

I was there for a workshop about coding for the Norns music computer, run by Jonathan ‘Jaseknighter’ Snyder and Zack ‘Infinitedigits’ Scholl - two prominent members of the wonderful Lines community, which just so happens to be the best music community on the internet. We defined musical problems and solutions. We designed esoteric notation systems. We coded in Lua and SuperCollider. It was lovely. But even lovelier was the opportunity to meet and learn from the instructors and other participants. When you’re into very niche technology platforms, it’s a real joy to meet other human beings IRL who are also into those things.

I used some of the coding time to make a couple of quality-of-life updates to my Norns scripts. My Loud Numbers sonification script now preserves any data files you’ve loaded when the script is updated, so you don’t lose your numbers. And my Grid of Points quantised keyboard got some sound design work - it now has more of a distinct “voice” of its own, blending gently between sine and square waves depending on where on the grid you play. There's more work I want to do here.
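If you're curious what that blend mapping looks like, here's a minimal hypothetical sketch of a Norns grid handler in Lua - not the actual Grid of Points code. It assumes an engine that exposes a blend command alongside the common hz pitch command; both the command names and the pitch mapping here are stand-ins.

```lua
-- Hypothetical sketch, not the actual Grid of Points code.
-- Maps grid column to a sine/square crossfade amount and sends
-- it to an assumed engine "blend" command on key press.
local g = grid.connect()

local function col_to_blend(x)
  -- columns 1..16 -> 0.0 (pure sine) .. 1.0 (pure square)
  return (x - 1) / 15
end

g.key = function(x, y, z)
  if z == 1 then -- key down
    engine.blend(col_to_blend(x)) -- assumed engine command
    engine.hz(110 * 2 ^ ((x - 1) / 12)) -- toy chromatic pitch mapping
  end
end
```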

I also took the opportunity to meet up with my friends at Mojang and Data4Change, and ran into another dataviz acquaintance on the street. It’s a small big city.

Finally, a small shout-out to our workshop host, Blivande, an amazing collaborative community based in Stockholm’s Frihamnen port. If I lived there, I would definitely want to be a member. Maybe I should sign up to the Stockholm housing queue. Just in case…


Otherwise things are going fairly smoothly right now. I'm doing a day a week for Possible. I'm helping Drawdown out with some infographics. I'm working with Infogr8 on an update for the American Opportunity Index. I just finished writing a long read for datajournalism.com, which I'll say more about when it goes live. Loud Numbers continues. Elevate continues. My weekly column for the official Minecraft website continues. And I'm in negotiations about a fairly chunky project for the autumn.

Oh, and I'm about to take two weeks off to visit my family back in the UK. For the first time I'm going to try to do it by train - which means going down to Malmö on Friday night, staying over, a full day of travel to Brussels the next day, another overnight stay, and then the Eurostar to London the following morning. Fingers crossed it goes smoothly 😬


Engadget has a nice explainer on why we can't use artificial intelligence to translate animal calls into English (or any other language). Here's a snippet:

Training language models isn’t simply a matter of shoving in large amounts of data. When training a model to translate an unknown language into what you’re speaking, you need to have at least a rudimentary understanding of how the two languages correlate with one another so that the translated text retains the proper intent of the speaker.
“The strongest kind of data that we could have is what's called a parallel corpus,” Dr. Goodman explained, which is basically having a Rosetta Stone for the two tongues. In that case, you’d simply have to map between specific words, symbols and phonemes in each language — figure out what means “river” or “one bushel of wheat” in each and build out from there.
Without that perfect translation artifact, so long as you have large corpuses of data for both languages, “it's still possible to learn a translation between the languages, but it hinges pretty crucially on the idea that the kind of latent conceptual structure,” Dr. Goodman continued, which assumes that both cultures’ definitions of “one bushel of wheat” are generally equivalent.

Read the rest.

Why humans can’t use natural language processing to speak with the animals | Engadget
We’ve already got machine-learning and NLP that can translate speech into any number of languages.
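As an aside, the parallel corpus idea in that quote is easy to see in miniature. Here's a toy Lua sketch - invented data, and nothing like a real alignment model - that guesses a word-level mapping by counting which words co-occur across aligned sentence pairs, using French as the stand-in second language:

```lua
-- Toy word alignment from a tiny invented parallel corpus.
-- Real translation models are far more sophisticated; this is
-- just the co-occurrence intuition from the quote above.
local corpus = {
  { en = "one bushel of wheat", fr = "un boisseau de blé" },
  { en = "golden wheat",        fr = "blé doré" },
  { en = "one river",           fr = "une rivière" },
}

-- count how often each English word appears alongside each French word
local counts = {}
for _, pair in ipairs(corpus) do
  for en_word in pair.en:gmatch("%S+") do
    counts[en_word] = counts[en_word] or {}
    for fr_word in pair.fr:gmatch("%S+") do
      counts[en_word][fr_word] = (counts[en_word][fr_word] or 0) + 1
    end
  end
end

-- best guess for "wheat": the French word it co-occurs with most
local best, best_n = nil, 0
for fr_word, n in pairs(counts["wheat"] or {}) do
  if n > best_n then best, best_n = fr_word, n end
end
print("wheat ->", best) -- "blé" in this tiny corpus
```

With only three sentence pairs the guess is fragile, which is the article's point: without a Rosetta Stone or a shared conceptual structure, co-occurrence alone doesn't get you far.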

Short one this time, but sometimes that's how it goes. I'll miss the next entry as I'll be travelling, so expect to hear from me again in mid-September.

See you then!

Duncan