Data sonification – using sound to make data easier to understand – fascinates me. I think it’s an incredibly powerful way to grasp something quickly and instinctively, more so than looking at the numbers alone.
The rising numbers of the coronavirus outbreak in the UK have become difficult to comprehend, so I wanted to use sound to create a more meaningful interaction with the statistics.
Listen out for the following:
Harp = total cases | Violin = R number | Church organ = new cases per month.
I used TwoTone for the data sonification after cleaning the data, then exported each ‘data song’ file into Logic Pro X to mix. The piece rises in pitch towards the end because of the climbing case numbers. However, I felt it needed something extra to make it more ‘listenable’ and therefore easier to understand, rather than just a series of musical notes. So the challenge: how do we balance any accompanying instruments, adding ambience and atmosphere without obstructing the data?
The hardest thing was choosing how to orchestrate the data itself: fast-moving numbers seemed to need a more percussive sound, while the slowly shifting R number called for something more constant and sustained. I suspect there are some innate rules of data sonification I’m tapping into here, which might be interesting to research further.
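TwoTone does all of this point-and-click in the browser, but the core of the mapping is simple enough to sketch in code. Here’s a minimal illustration in Python using the midiutil library – my own reconstruction of the principle, with made-up numbers, not TwoTone’s internals or the real dataset:

```python
# A minimal sketch of data-to-pitch mapping, assuming the 'midiutil'
# library and some made-up weekly case counts (illustrative only).
from midiutil import MIDIFile

weekly_cases = [120, 340, 900, 2100, 4800, 9500]  # hypothetical data

# Map each value onto a pentatonic scale so rising data sounds musical.
scale = [60, 62, 64, 67, 69, 72, 74, 76]  # C major pentatonic, extended upward
lo, hi = min(weekly_cases), max(weekly_cases)

midi = MIDIFile(1)
midi.addTempo(track=0, time=0, tempo=90)

for beat, value in enumerate(weekly_cases):
    # Bigger number -> higher note on the scale.
    idx = round((value - lo) / (hi - lo) * (len(scale) - 1))
    midi.addNote(track=0, channel=0, pitch=scale[idx],
                 time=beat, duration=1, volume=100)

with open("data_song.mid", "wb") as f:
    midi.writeFile(f)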
Finally, I used the Deep Music Visualizer to generate an #AI video that responds to the pitch and tempo, then moved into Final Cut Pro X to edit on the captions and statistics, helping listeners see how the changes in pitch correlate to the numbers.
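If you’re wondering what ‘responds to pitch and tempo’ means in practice, visualisers like this work from extracted audio features. Here’s a quick sketch of the kind of analysis involved, using the librosa library – my own illustration, not the visualiser’s actual code, and the filename is hypothetical:

```python
# Extracting the tempo and pitch-class features a visualiser can react to.
# An illustration with librosa, not the Deep Music Visualizer's own code;
# the filename is hypothetical.
import librosa

y, sr = librosa.load("data_song.mp3")

# Global tempo estimate and beat positions (could drive motion speed).
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)

# Chroma: energy in each of the 12 pitch classes over time
# (could drive which colours or image classes the visuals lean towards).
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)

print(f"Estimated tempo: {float(tempo):.1f} BPM")
print(f"Chroma shape: {chroma.shape}")  # (12, number_of_frames)
```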
I hope that the next time I turn coronavirus data into music, the piece ends up lower at the end.
The 24-hour stream of AI-generated heavy metal on YouTube completely fascinated me. It’s the work of the eccentric Dadabots – half of whom I’ve regularly collaborated with on various strange musical projects – and their output inspired me to start my own journey into intersecting music with machine learning.
I’ve been composing since I was a kid, on whatever platform I could find. Being classically trained with a music degree, yet hungry for as much new music as possible, makes for a strange hybrid: a musician and performer trying to understand a technologist’s world.
Amid much struggle, general frustration and many false starts, the stubbornness and late-night wrangling paid off. I had my first track, and I plucked up the courage to share some of my experiments online.
So, here’s one of my first flirtations with Music and Machine Learning on Instagram – the Beatles singing ‘Call Me Maybe’ – because for some reason I thought it needed to exist. And, buoyed by my coding success, I learned how to generate some eye-bending video based on pitch and tempo too.
Each track takes quite a few hours to generate – even 45 seconds or so is a whole evening of attention. The way I’ve been doing it involves heavily supervising the code: every few seconds of audio, I intervene to suggest a new direction, nudging the algorithm towards where I want the piece to go. A lot of the decisions I’m making are not technical – they’re based on my musical knowledge. Then I listen repeatedly to the slowly lengthening audio to check whether a recognisable tune is emerging. Does it sound like something a human could sing? On top of that, the ‘upsampling’ process, where some of the noise is removed, can take many hours, and a lot of the time I’ll crash out of the virtual machine I’m using because I’m on the free tier. Sometimes I’ll lose everything.
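For the technically curious: the tool behind all this is OpenAI’s Jukebox (its sampling entry point lives at jukebox/sample.py in the repo). My co-composing workflow boils down to a loop like the sketch below – generate a few short continuations, audition them, keep the one that fits, repeat. The helper functions are hypothetical stand-ins; this is a schematic of the process, not Jukebox’s actual API:

```python
# A schematic of my human-in-the-loop co-composing session.
# 'generate_continuations' and 'play' are hypothetical stand-ins for the
# real Jukebox sampling and playback steps - workflow, not the actual API.
import random

def generate_continuations(piece, n=4):
    # Stand-in: the real model proposes n short audio continuations.
    return [piece + [random.choice("ABCDEFG")] for _ in range(n)]

def play(piece):
    # Stand-in: in practice I listen to the rendered audio here.
    print("".join(piece) or "(silence)")

piece = []
for step in range(8):                     # each step adds a few seconds
    options = generate_continuations(piece)
    for i, option in enumerate(options):  # audition every candidate
        print(f"Option {i}: ", end="")
        play(option)
    # The musical (not technical) decision: which direction fits the tune?
    choice = int(input("Keep which option? "))
    piece = options[choice]

play(piece)  # the slowly lengthening track
```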
Sounds frustrating, and it’s even more annoying in practice. Yet I find the ultimately infuriating nature of co-composing this way rather addictive. And, wow, when it actually does work, the results are incredibly rewarding.
So, ‘my’ new song, made up of thousands of tiny bites of Beatles, was compiled. And it is undeniably the Beatles singing ‘Call Me Maybe’ – so much so that a few of my friends thought it could easily be a demo tape or an unheard song, if not for the lyrics.
My work received admiration from those familiar with AI music generation – they could tell how much effort was required to create it. As well as praise, this short tune also stirred unsettling feelings in others, which weirdly excited me. To have made something so conversation-worthy, especially in a field as wide as AI and machine learning, felt like I was onto something: my musical approach could add value in its own way.
Here’s another one – Queen singing ‘Let It Go’.
So why do I think this might make you question your musical tastes? Well, many of us are quite specific about the music we like. But if a fifty-year-old Beatles recording can be rehashed for a 21st-century audience, would this track encourage a non-Beatles listener to explore more of this kind of music? Or persuade a devout 1960s music fan to venture outside their comfort decade into the world of sugary pop? I think it might do both.
Here’s U2 singing ‘Bat Out Of Hell’.
I’m surprised how much the original artist maintains their presence in each of these examples. And I’m somewhat tickled that the processing and supervision of each track makes this a very labour-intensive activity – not unlike standard music production.
It’s a brand-new composing method, and I’m in awe of the sheer amount of work that must have gone into creating this program – the brilliant minds behind it conceived and built a formidable tool for co-creation.
It even seems possible to train the AI on any kind of music, as long as the artist has made enough material to be sampled adequately. That’s great news for those of us keen to create cross-cultural artworks: although there are thousands of artists in the current Jukebox library, the content does appear to skew towards English-language music – a useful reminder that bias is built into every system with humans at one end of it. So one of my next quests will be to see whether I can create my own training set (which might prove taxing on the free tier).
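A first sanity check on that quest is simply how much audio I actually have. A tiny sketch for totting up a would-be training set, using the soundfile library (the folder name is hypothetical):

```python
# Totting up how much audio a would-be training set contains.
# Uses the 'soundfile' library; the folder name is hypothetical.
from pathlib import Path
import soundfile as sf

total_seconds = sum(sf.info(str(path)).duration
                    for path in Path("my_training_set").glob("*.wav"))

print(f"{total_seconds / 3600:.1f} hours of audio")
```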
Finally, from a musical perspective, human composers still have quite a few advantages over machines, though generating music with AI is like a whole band writing all its parts at once, which can be very satisfying, if erratic. Sometimes the algorithm is temperamental – and doesn’t work at all. Other times, sublimely beautiful chords and ad-libs come out. No one can know whether the next track is a hit or a miss.
Even controlling the output is gloriously elusive: for example, I can’t force a tune to go up or down at any point (though I can choose whichever of the alternatives fits roughly where I’d like the tune to go). I don’t have much choice over the rate or meter of the lyrics, though there is some leeway when paginating them in the code. And changing the rate of intervention also affects what’s being generated. In short: the illusion of pulling order from chaos – a pleasing reflection of what composing music means to me.
In quite a few instances the AI has surprised me musically, and that is intriguing enough on its own for me to want to continue creating and co-composing with a machine. With so many possibilities in this field right now, I’m looking forward to exploring more.
Which artists do you want to hear? What do you think of AI-generated music? Feel free to comment below.
I’ve been away from work and the internet for a while working on a massive project – building a baby! Having always been completely fascinated by music’s power to move us and change our perceptions, I thought there would be lots of music specifically for giving birth – but nothing came up. Which was ironic, I thought, considering that for conceiving a baby there are any number of musical accompaniments available! So I created music especially for me and anyone else going through the intense experience of labour, birth and early parenthood.
This album is very special to me – I wanted music with calmness at its heart to support the incredible and inevitable journey from pregnancy to parenthood, but I think it’s also a very enjoyable listen if I just want to zone out and remember how to breathe.
Talking of which, this instrumental music complements all kinds of breathing rhythms during the stages of labour and birth. I composed it while pregnant and only completed it half an hour before going into labour!
Originally I wasn’t going to release this music or talk publicly about my experience (it’s so personal!) but below I explain exactly why I chose to do so.
So yes, I had a baby – he is awesome. And for anyone interested in why I wrote such an album, I’ve shared my very personal story about conquering a lifelong phobia of giving birth below the track listing. If you’re not interested that’s fine too – in any case it’s good to be back, baby! Normal streaming, tweeting and writing about music, inventions, technology and synaesthesia will soon resume.
Relaxing Music for Giving Birth Tracklist
TRACK 1: Incarnation. Labour, Birth and Calm
Over an hour long and designed to repeat seamlessly throughout labour – it will play on multiple devices and speakers without sounding out of tune or out of time. I didn’t want music to get in the way of my breathing or the physiological process of giving birth; that was actually the hardest part, keeping it musical and rhythmic while still leaving space for getting into the zone. I had this track on repeat throughout my labour.
Plus – bonus tracks for after the big day, giving the new family a gentle soundtrack for those first utterly indescribable weeks…
TRACK 2: Serenity. Soundtrack to a Contented Baby
Serene piano sounds soothe and support a calm environment. The track contains a unique ‘white noise blanket’ to soften any unexpected sudden sounds from outside that might startle a new baby. It certainly seems to soothe my little one!
TRACK 3: Relaxation. Sleepy Parents, Sleepy Baby
Encourages even breathing and aims to elongate those rare moments of calmness and sleep – not that you’ll get much sleep over the first months of parenthood! I found this really helped my little one settle. A sleepy soundtrack to chill out with the new arrival. For babies AND parents.
No apologies for the massive wall of text below!
HOW THIS ALBUM WAS BORN
(NB: don’t worry, there are no scary bits. It contains music, a little tech and the deeply personal story behind the album.)
I’ve recently started playing with an app called Periscope, giving interactive music concerts at my stage piano.
My ‘cast’ (if that’s what it’s called?) includes talks on music theory, breaking down similarities between familiar tunes and, of course, playing the odd request – like a classical version of Michael Jackson’s ‘Human Nature’, or a mashup of Journey’s ‘Don’t Stop Believin’’ and The Commodores’ ‘Easy Like Sunday Morning’.
It’s really fun – and my use case seems to be unique enough to have earned a mention in the Daily Telegraph‘s round-up of the technology.
So, what do I get up to when I cast? You can find out on Mondays at 20:30 UK time! At least, that’s the plan…
Essentially I’m talking about classical music theory using contemporary tunes – why is something catchy? Which songs sound similar? What makes a tune feel good? The session mixes live composition and conversation – content creation and audience interaction in real time.
Giving the audience access to the creative process, and a chance to communicate, is pretty much the exact opposite of a traditional classical concert, where I’d be on a stage, far away from the listeners.
I believe it’s possible to demystify music without dissecting it, and it’s such fun to explain what’s happening while playing some of the most memorable songs on the planet. I think this kind of informal, direct broadcast is a great proving ground until I have my own big-budget show, with a huge grand piano and some notable musical guests to riff with.
Until then, viewers who make the effort to interact and contribute positively are going to shape how this cast evolves. How exciting! What works, what doesn’t, what do people want more of? I’m finding out every day. I’d hope to keep the audience interactivity if a big TV company wants to fund the huge grand piano and notable musical guests version.
For those of you reading this on Thursday 9th April 2015, there’s a replay available until 22:30 tonight, but for now you’ll need to download the app on an Apple device to watch. They say there’s an Android version coming. And, if you do visit, please ignore my faffing with the cables at the start – it definitely gets better.
At about 1am this morning I think I solved the problem of getting a decent audio feed in while listening at the same time, so Monday’s cast should have really rather good sound quality.
Oh, and a final note from the technology presenter in me: streaming from mobiles isn’t new – apps like Seesmic and Qik did this many years ago. But now data is cheaper, connections are generally faster, and social media makes everything more immediate. The tech is ripe for mass adoption.
A notable alternative, Meerkat, has some big names endorsing it – Madonna released a video on the platform recently. I’ll let you know if I get a chance to try it out. And I’m sure there are other players in this area. Over the next few months we’ll see whether a single platform gains dominance or these apps can co-exist. Interesting interactive times!