The AI Song Contest

A code – be it a chord, melody or rhythm held in someone’s head or on a disk – can be turned into puffs of air to create this exquisite, yet ultimately unquantifiable substance we know as music. So how can a machine help us write music for humans?

The gorgeous accidents of a musician’s fingers slipping across their instrument, falling into a new collection of notes – or that warm feeling when a voice breaks between the notes of a melody, making the whole tune sweeter – this is the intimate sound of a flawed and beautiful musician: their power, their individuality, their magic – the song inside the tune, as Christina Aguilera once said.

So how can you go from that mystery of music to asking a machine to write a hit song? Well, my team, ‘Smorgasborg’, one of 38 entries to this year’s AI Song Contest, decided to explore this question.

Voting closes on July 1st – listen to as many entries as you like – and vote up the ones you enjoy the most!

This year’s participants are a fantastic collection of artists, technologists and machine learning enthusiasts, all of whom approached the brief differently – make music, using machines! Now, as a musician learning the elements of coding, I approached the challenge from a musical perspective – choosing to discover “How can we use accessible AI tools to augment the human creative process rather than replace it?”

I’d taught myself AI composing with Style Transfer during my cancer treatment and subsequent shielding here in the UK. It’s surprising how much time there is for learning when utterly stationary, or unable to leave the house, or indeed when it’s impossible to sleep through the night… so why not stay up and run code every few minutes?

My initial results were interesting but not what I would call ‘easy listening’. This time would be different.

Like many musicians, I miss jamming with real people in real time, that physiological response to sound that makes us stand with others in front of massive speakers ‘in flow state’ – that feeling of being somewhere or nowhere – ‘in the zone’.

AI’s potential goes much further than just audio generating audio – which is what I’d been doing, and it didn’t feel enough like a collaboration. After all, we could have asked the machine to produce random sounds and then selected the best. Instead, we split the song – some elements created by AI, some by the musician – and NO autotune!

Here’s a flowchart showing the relationship between human- and machine-generated content.

My team discussed many issues – even at the level of deciding what popular music is; dominant music theory, for example, is based on the Western European tradition. This was partly why we elected to start with AI lyric generation rather than music, as it kept orchestral and melodic options open. We felt AI-generated music would have been more likely to produce Western diatonic sounds, as the data sets currently contain little music outside these traditions, and we wanted to explore some influences beyond the popular Western sound.

Here are a few of the seeding prompts: for a while I was tempted to go with really simple songs. Then, during a team call, I read through a few that were strangely poetic.

Example Prompts | Completions from GPT-3 (safe outputs) follow:

Do people want a catchy tune, a riff that can fly them to the moon? | Or would they rather just see me in my underpants?…

dance dance I wanna | see you move dance dance dance! (Oh!) It’s time to make it right (Oh!) We’re gonna make it…

write a love song about 2 machines | falling in luv. 1 hour later. BAM BAM BAM BAM BAM BAM “i just wan 2 say, that’s one of the best songs i’ve ever heard” “I KNOW RIGHT” “i mean, i would probably pay like $1.99 just to get that song on my phone.” “do it, do it now, i’ll wait” BAM BAM BAM BAM BAM BAM “i did it” “KEEP THE CHANGE…” 

After further experimentation, the below completion caught our attention.  

feel the bass dance all night headphone on smartphone light walking like a zombie in the dark pulsating body nightlife make you move vibrating body sending shiver to the core…

These words sounded most like a story; they had evocative phrases like ‘walking like a zombie’. We fed this modified prompt back into the engine and generated our lyrics, which remained unchanged throughout the songwriting process.

feel the bass dance all night headphones on smartphone light walking like a zombie in the dark | ‘til the streetlight burns the sunrise behind your back you were here now you’re gone There’s a girl I know who knows what she wants.
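For the technically curious, the lyric generation itself boils down to a single completion call. Below is a minimal sketch using the Python client from the GPT-3 beta era – the engine name, temperature and token limit are illustrative assumptions, not a record of our actual settings.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # issued with a beta.openai.com account

seed = ("feel the bass dance all night headphones on smartphone light "
        "walking like a zombie in the dark")

# Ask GPT-3 to continue the seed prompt; the parameters here are illustrative guesses.
response = openai.Completion.create(
    engine="davinci",   # the base GPT-3 model available in the beta
    prompt=seed,
    max_tokens=64,      # roughly a verse's worth of words
    temperature=0.9,    # higher temperature tends to give stranger, more 'poetic' lines
)

print(seed + response.choices[0].text)
```

Running a call like this a handful of times and keeping the most evocative completions is essentially the whole lyric workflow.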

I was inspired by the words and created a first draft of the melody, but was getting stuck on ‘Shine like a diamond dust in the universe’. We wanted to use the lyrics verbatim to stay faithful to the AI, but were stumped on how to parse this particular musical phrase. So we used OpenAI’s Jukebox, trained on Adele, to suggest various new melodic lines.

At first I used a model to output 30 seconds of music at a time, but my first attempts were frustrating – it didn’t create tunes that made theoretical sense! After more false starts, I realised co-composing suited me better, given my mainly musical background. Supervising every 4–6 seconds added my own musical preferences to the generative result.
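To give a flavour of that co-composing loop, here’s a rough sketch in plain Python. This is not the actual Jukebox code – sample_continuations() is a hypothetical stand-in for whatever model call produces candidate audio, and in reality each choice is made by ear rather than typed at a prompt.

```python
import numpy as np

SR = 44100          # audio sample rate
CHUNK_SECONDS = 5   # roughly the 4-6 second supervision window described above
N_CANDIDATES = 4    # how many continuations to audition at each step

def sample_continuations(context, n):
    """Hypothetical stand-in for the generative model: return n candidate
    audio chunks that continue `context` (here just quiet noise)."""
    return [np.random.randn(SR * CHUNK_SECONDS) * 0.01 for _ in range(n)]

def co_compose(seed, target_seconds):
    piece = seed
    while len(piece) / SR < target_seconds:
        candidates = sample_continuations(piece, N_CANDIDATES)
        # In practice each candidate is auditioned by ear; the human picks the one
        # that best fits where they want the tune to go.
        choice = int(input(f"Pick a continuation (0-{N_CANDIDATES - 1}): "))
        piece = np.concatenate([piece, candidates[choice]])
    return piece

song = co_compose(np.zeros(SR), target_seconds=30)
```

The point of the sketch is the shape of the loop: generate a few seconds, let the human choose, append, repeat.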

After 21 attempts (and more crashes!), attempt 22 inspired me to re-scan the lyric lines –

|| Shine like a diamond  ||  Dust in the universe became

|| Shine || Like a diamond dust  ||  In the universe.

Yes! I gleefully thanked the program out loud even though it was 1:30am, then recorded a guide melody and piano accompaniment into Logic Pro X. I felt no need to upsample, as I wasn’t planning to output audio and just needed to hear the melodic lines.

Google’s NSynth – one of the settings used with Ableton | Imaginary Soundscape – with the image used to generate fireworks in the chorus

The bass, piano and pad are all generated via NSynth sounds. I was inspired by teammate Leila saying the song was set “in space” and chose sounds based on this thought – resulting in ethereal, floating pads with sharp shards of passing comet dust! Continuing the theme, we also used AI-generated audio from Imaginary Soundscape, an online engine designed to suggest soundscapes for (normally Earth) landscapes. We fed it an image of space from Unsplash and the AI returned audio – fireworks! You can hear these alongside the chorus.

If you’d like to help us become Top of the Bots – please vote here – no need to register! Team Smorgasborg is excited to be part of the AI Song Contest!

A selection of AI tools used in the creative process: we also used Deep Music Visualizer and Tokkingheads for the music video

GPT-3 – Lyric generation from word prompts and questions  https://beta.openai.com 

Jukebox (OpenAI) – neural-network style transfer for solving musical problems https://jukebox.openai.com

NSYNTH – machine learning based synthesiser for sound generation  https://magenta.tensorflow.org/nsynth 

Imaginary Soundscape –  AI generated soundscapes from images   https://www.imaginarysoundscape.net

DreamscopeApp – deep dream image generator https://dreamscopeapp.com/deep-dream-generator 

Music video for Team Smorgasborg: LJ, Dav and Leila.

“I knew the song was finished when Logic gave me the “System Overload” message.”

– very late at night

Covid-19 in the UK – sonified

Data sonification – using sound to make data easier to understand – is fascinating, and I think an incredibly powerful way to grasp something quickly and instinctively – more so than looking at the numbers alone.

Coronavirus Data from the UK made into music

The rising numbers of the coronavirus outbreak in the UK have become difficult to comprehend, so I wanted to use sound to create a more meaningful interaction with the statistics.

Listen out for the following:

Harp = Total cases | Violin = R number | Church Organ = new cases per month.

I used TwoTone for the data sonification after cleaning the data, then exported each ‘data song’ file into Logic Pro X to mix. The piece rises at the end because of the climbing case numbers. However, I felt it needed something extra to make it more ‘listenable’ – and therefore easier to understand – rather than just a series of musical notes. So the challenge: how do we balance any accompanying instruments, adding ambience and atmosphere without obstructing the data?
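For the curious, the mapping a tool like TwoTone performs is roughly the one sketched below – data values scaled onto a pitch range and written out as MIDI. This is an illustration only, not TwoTone’s actual code; the CSV filename and column name are hypothetical, and General MIDI program 46 stands in for the harp part.

```python
import csv
import pretty_midi

def sonify(values, program=46, low_pitch=48, high_pitch=84, note_length=0.25):
    """Map a series of data values onto MIDI pitches: bigger numbers = higher notes."""
    pm = pretty_midi.PrettyMIDI()
    inst = pretty_midi.Instrument(program=program)  # 46 = Orchestral Harp in General MIDI
    lo, hi = min(values), max(values)
    for i, v in enumerate(values):
        pitch = int(low_pitch + (v - lo) / ((hi - lo) or 1) * (high_pitch - low_pitch))
        start = i * note_length
        inst.notes.append(pretty_midi.Note(velocity=90, pitch=pitch,
                                           start=start, end=start + note_length))
    pm.instruments.append(inst)
    return pm

# Hypothetical cleaned data file with one row per day and a "total_cases" column.
with open("uk_covid_daily.csv") as f:
    cases = [float(row["total_cases"]) for row in csv.DictReader(f)]

sonify(cases).write("total_cases_harp.mid")  # then imported into Logic Pro X to mix
```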

The hardest thing was choosing how to orchestrate the data itself: for fast-moving numbers I felt the sound needed to be more percussive, but for the R number I felt there needed to be a more constant sound. I suspect there are some innate rules for data sonification I’m tapping into here, which might be interesting to research further.

Finally, I used Deep Music Visualizer to generate an #AI video which responds to the pitch and tempo, then moved to Final Cut Pro X to add the captions and statistics, helping listeners see how the changes in pitch correlate with the numbers.
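As an aside, the pitch and tempo that the visualiser reacts to can be estimated from the finished mix with standard audio tools. Here’s a hedged sketch using librosa – the filename is a placeholder, and this is not Deep Music Visualizer’s own code.

```python
import librosa

# Placeholder filename for the mixed-down sonification.
y, sr = librosa.load("sonification_mix.wav")

# Global tempo estimate (BPM) plus beat positions, derived from the onset envelope.
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)

# Frame-by-frame fundamental-frequency estimate within a rough orchestral range.
f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"))

print(f"Estimated tempo: {float(tempo):.1f} BPM across {len(f0)} pitch frames")
```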

I hope that the next time I turn coronavirus data into music, the piece ends up lower at the end.

The UK #Coronavirus data was correct as of 20 Dec. Stats taken from Our World in Data.

3 Surprising AI Music Mashups that will make you question your musical tastes

The 24-hour stream of AI-generated heavy metal on YouTube completely fascinated me – created by the eccentric Dadabots, half of whom I’ve regularly collaborated with on various strange musical projects. Their outputs inspired me to start my own journey of intersecting music with machine learning.

I’ve been composing since I was a kid, on whatever platform I could find. Being classically trained with a music degree while hungry for as much new music as possible makes for a strange hybrid: a musician and performer trying to understand a technologist’s world.

Amid much struggle, general frustration and many false starts, the stubbornness and late-night wrangling paid off. I had my first track and plucked up the courage to share some of my experiments online.

So, here’s one of my first flirtations with Music and Machine Learning on Instagram – the Beatles singing ‘Call Me Maybe’ – because for some reason I thought it needed to exist. And, buoyed by my coding success, I learned how to generate some eye-bending video based on pitch and tempo too.

Each track takes quite a few hours to generate – even 45 seconds or so is a whole evening of attention. The way I’ve been doing it involves heavily supervising the code: I need to intervene every few seconds to suggest a new direction for the algorithm so that it fits where I want the music to go. A lot of the decisions I’m making are not technical – they’re based on my musical knowledge. Then I listen repeatedly to the slowly lengthening audio to see if there’s a recognisable tune being created. Is it sounding like something a human can sing? Plus the ‘upsampling’ process, where some of the noise is removed, can take many hours. A lot of the time I’ll crash out of the virtual machine I’m using because I’m on the free tier. Sometimes I’ll lose everything.

Sounds frustrating, and it’s even more annoying in practice. Yet I find the ultimately infuriating nature of co-composing this way rather addictive. And, wow, when it actually does work, the results are incredibly rewarding.

So ‘my’ new song, made up of thousands of tiny bites of Beatles, was compiled. And it is undeniably the Beatles singing ‘Call Me Maybe’ – so much so that a few of my friends thought it could easily be a demo tape or an unheard song, if not for the lyrics.

My work received admiration from those familiar with AI music generation – they could tell how much effort was required to create it. As well as praise, this short tune also stirred unsettling feelings in others, which weirdly excited me. To have made something so conversation-worthy, especially in a field as wide as AI and machine learning, felt like I was onto something – that my musical approach could add value in its own way.

Here’s another one – Queen singing ‘Let It Go’.

So why do I think this might make you question your musical tastes? Well, many of us are quite specific about the music we like. But if a fifty-year-old Beatles recording can be rehashed for a 21st-century audience, would this track encourage a non-Beatles listener to explore more of this kind of music? Or would a devout 1960s music fan be persuaded to venture outside their comfort decade into the world of sugary pop music? I think it might.

Here’s U2 singing ‘Bat Out Of Hell’.

I’m surprised how much the original artist maintains their presence in each of these examples. And I’m somewhat tickled that the processing and supervision of each track makes this a very labour-intensive activity – not unlike standard music production.

As for this new composing method, I am in awe of the sheer amount of work that must have gone into creating the program, and of the brilliant minds who conceived and built such a formidable tool for co-creation.

It even seems possible to train the AI on any kind of music, as long as the artist has made enough material to be sampled adequately. That’s great news for those of us keen to create cross-cultural artworks – even though there are thousands of artists in the current Jukebox library, the content does appear to skew toward English-language music, a useful reminder that bias is built into every system with humans at one end of it. So one of my next quests will be to see whether I can create my own training set (which might prove taxing on the free tier).

Finally, from a musical perspective, human composers still have quite a few advantages over machines, though generating music with AI is like a whole band writing all its parts at once, which can be very satisfying, if erratic. Sometimes the algorithm is temperamental – and doesn’t work at all. Other times, sublimely beautiful chords and ad-libs come out. No one can know whether the next track is a hit or a miss.

Even controlling the output is gloriously elusive: for example I can’t force a tune to go up or down at any point (though I can choose one of the alternatives that fits roughly where I’d like the tune to go). I don’t have much choice over the rate or meter of the lyrics – though there is some leeway when paginating them in the code. And changing the rate of intervention also affects what’s being generated – in short, the illusion of pulling order from chaos, a pleasing reflection of what composing music means to me.

In quite a few instances the AI has surprised me musically, and that is intriguing enough on its own for me to want to continue creating and co-composing with a machine. With so many possibilities in this field right now, I’m looking forward to exploring more.

Virtual Presenting

Virtual conferences and events are becoming the norm; here are a few things I’ve done at home this month…

AI for Good virtual summit – video conference call hosted by LJ Rich for the United Nations
Hosting the AI For Good United Nations | Global Esports Federation panel – Virtually
Remote Recording for BBC Click Week 1 – working while shielding.
Remote Recording for BBC Click Week 2 – more time for set dressing as the equipment was already working. Eagle-eyed viewers may notice Bender the Robot in the background.

Getting into Flow states during difficult times – like going through Breast Cancer

Stories of my disappearance are greatly exaggerated.

I’ve just been diagnosed with breast cancer, so now looking forward to playlists such as “Now that’s what I call Chemo!” followed by the hotly anticipated “Last Night a Surgeon Saved my Life” slated for release in the new year. *

I’ll still present and perform occasionally while going through treatment. We’ll play it by ear, depending on how everything progresses and on energy levels – short-notice bookings might work. I’m optimistic that normal service will slowly resume if the results are encouraging.

To those of you unlucky enough to have some experience of the disease (either yourself, or someone you know) I send positivity and love.

BACKSTORY

My mum died of cancer when I was a child – details of her medical treatment were whispered in hushed tones in separate rooms, out of earshot and in secrecy. Yet attempts to conceal her terminal diagnosis were pointless as I knew what was going on. With nobody to ask, I felt powerless.

It’s why this very personal post exists: to try the opposite track. Knowledge (and music) has always been a great comfort, and my curiosity to learn has helped me overcome so many obstacles, achieve so much and connect with frankly extraordinary people while performing or presenting. I’ve explained things as openly and as age-appropriately as possible to my toddler. Telling children early on is advised by Macmillan and other cancer charities – such a difficult conversation to have, but it was taken well and felt like the best decision for him.

FORWARD STORY

I may be dealing with cancer now, but as anyone different has always known, there’s simply so much more to us than how we look, our personal struggles, or what equipment we use to access the world. We are still artists, engineers, writers, thinkers, comics, poets, AI magicians, coders, hackers, lockpickers, DJs, storytellers. We laugh at great jokes, cry at terrible movies on planes (well, OK, I do) and enjoy connecting with others through our love of creativity and unconventional thinking. Best of all, we love to collaborate with those who challenge and excite us. We crave pure connections which at their best transcend physical and mental capabilities, creating that elusive and magical experience of flow for audiences and performers to share.

In short, you are welcome to ask cancer questions, which I will answer if I feel up to it… or we can talk about stuff I still really love, like music, technology and performing. Any great, fun audiobooks with happy endings? Wholesome comedy video clips and podcasts? Let me know!

Want to lend me some Interesting VR/Immersive kit for a day to try out during or after a chemo treatment in London? I’m down for any entertaining tech distractions during my many medically enhanced hours over the next weeks and months. Some things feel rather unpleasant, so if you think it might be fun to see how creative tech could help make procedures more palatable – that sounds kind of awesome to me too. Interested? Do click here to contact.

Finally, to everyone reading this far, thank you so much for our connection – whether it’s IRL or online  – however brief, however close, through TV, presenting or hacker / music circles, please know that many of you have been responsible for so many of my happiest thoughts. Your impact has been so positive in helping me want to be better – authentic, inspiring, thoughtful, uplifting – and kind. I wish you all the very best.

Love, LJ

P.S *These playlists don’t exist – yet. Want me to make some?

Space: The Filthy Frontier!

An exciting use of knowledge from my time in the NASA Datanauts program: here’s a 4-minute story about sustainability in space for the BBC. Watch to find out how to clean up space – because we can’t use a vacuum cleaner in a vacuum…

I scored all the music for this feature – I really enjoy doing this when I have the time!

Happy.

Those asking about my health:

Thank you! I may be in lockdown but I’m now IN REMISSION from Breast Cancer!

Still recovering from surgery, some procedures remain, but they’ve got to be easier than chemo! Sending love and see you online soon x

Under new management!

…excited to announce that I’m now exclusively represented by JLA Management.

It’s great timing as I’m being asked to present and perform at so many incredible events!

Do please contact Hannah Oldman on +442079072800 for further information.

United Nations Performance May 2019

This Week: The Musical

Oooooh!!!!!

I’ve been super busy because I have a new show!! It’s called This Week: The Musical and we are soft launching it today… you’re some of the first humans to hear it!

Listening links right here:  
iTunes:     

This Week: The Musical is a comedy tech news podcast, complete with songs and sketches and special guests. Totally free to listen and very silly.

Leila Johnston and I have wanted to make something like this for ages!

So if you’d like to listen to our podcast right now, that would be utterly amazing.

This week’s episode includes songs about Spice Girls being replaced by machines and also an interview with the formidable Gretchen Greene, an artist who’s building her own AI to assist her in her creations.

p.s. if you do like it, please do subscriiiiiiibe and/or chat about it with us on #ThisweekTM. We live at www.the2ljs.com and have a Facebook page too. Plus we’re on Instagram and Twitter @the2ljs
 

Yay! Thanks for reading this far, I love you all dearly. I hope you like it 🙂

This Week: The Musical

Tech News Comedy Podcast with songs, sketches, special guests and silliness. Includes against-the-clock innovation and a rogue pikachu.

LJ x

 

What’s The Best World Cup England Football Song?

OK, I enjoy the World Cup and understand the offside rule, but I generally prefer playing football to watching it. Still, when my friend from Great British Chefs posted a query – what’s the best World Cup England football song, “World in Motion” or “Three Lions”? – I found myself thinking about it for days. So much so that my response simply does not fit into a social media reply… Mecca, here’s your answer.

“World in Motion” works brilliantly as a standalone pop song, but is it ‘catchy’?  

Harmonically, “World In Motion” is complex (apart from the ‘rap’) and there’s a lot going on in terms of chords, percussion and orchestration. It’s interesting, but it also demands effort from the listener: cognitively, you have to think about where the song is going and pay attention to it.

Melodically, “World In Motion” is quite gentle – there are no descending melodies or musical ‘jumps’; it’s tuneful but stays in one place. Yes, the rap segment was distinctive and served to make the song very memorable – but not for musical reasons.

“World In Motion” is also a little slow for singing along – most people sing football songs while excited, so their heart rate will be higher – faster tempo songs will come to mind more easily in a heightened state like this!

Apart from tempo, both songs have a lot in common – two choruses: ‘Love’s got the World in Motion / Let’s hear it for England’ and ‘Three Lions on a shirt / It’s Coming Home’; both are largely in major (happy) keys, and both are quite sparsely orchestrated. Neither song requires particularly good vocal skills.

And so, to catchiness – have you heard anyone singing “World In Motion” in the street? Personally I haven’t, but “It’s Coming Home” is everywhere.

So what’s happening here? How are they different?

I think the answer lies in melody. Despite great orchestration and production of New Order’s “World in Motion”, the “Three Lions” musical phrases are much more likely to be sung in the wild. They are easier to remember, easier to sing along to and more physically pleasant to sing out loud.

I’ll tell you how this works: the descending tune of the ‘Three Lions on a shirt’ call is followed by an upward response, ‘Jules Rimet still gleaming’. Wow – super catchy and melodically pleasant. It’s a classic gospel-style call and response: you know where it’s going, there’s a clear musical path you only need to hear once to sing along, and you’re excited to hear it resolve. A great pop example of this is ‘Twist and Shout’.

The second is the clincher: ‘It’s coming home, it’s coming home, it’s coming…’ – what an anthem! There’s a built-in melodic jump – you can hear these in a lot of ABBA songs (‘The Winner Takes It All’, ‘Take a Chance on Me’) and in many existing football chants. There’s also repetition – this simple phrase is a natural earworm, and there’s no effort involved in remembering it even after a single hearing. Listening to this phrase takes low cognitive effort in contrast to “World In Motion”, plus there’s yet another embedded ‘call and response’ too.

Finally, ‘It’s Coming Home’ is a semi-unresolved melody: getting to the end of the phrase, you want to sing it again. You can hear this unresolved effect at work in notable examples like ‘Song 2’ by Blur, ‘My Sharona’ and ‘Livin’ on a Prayer’. Catchy? Absolutely. And now you know why my considered answer is “Three Lions” – musically the most suited football song for England’s World Cup 2018 journey.