A code – be it a chord, melody or rhythm in someone’s head or on a disk – can be turned into puffs of air to create this exquisite yet ultimately unquantifiable substance we know as music. So how can a machine help us write music for humans?

The gorgeous accidents of a musician’s fingers slipping across their instrument, falling into a new collection of notes – or that warm feeling when a voice breaks between notes of a melody, making the whole tune sweeter – this is the intimate sound of a flawed and beautiful musician: their power, their individuality, their magic – the song inside the tune, as Christina Aguilera once said.

So how can you go from that mystery of music to asking a machine to write a hit song? Well, my team, ‘Smorgasborg’, one of 38 entries in this year’s AI Song Contest, decided to explore this question.

Voting closes on July 1st – listen to as many entries as you like – and vote up the ones you enjoy the most!

This year’s participants are a fantastic collection of artists, technologists and machine learning enthusiasts, all of whom approached the brief – make music using machines! – differently. As a musician learning the elements of coding, I approached the challenge from a musical perspective, choosing to explore: “How can we use accessible AI tools to augment the human creative process rather than replace it?”

I’d taught myself AI composition using style transfer during my cancer treatment and subsequent shielding here in the UK. It’s surprising how much time there is for learning when utterly stationary, or unable to leave the house, or indeed when it’s impossible to sleep through the night… so why not stay up and run code every few minutes?

My initial results were interesting but not what I would call ‘easy listening’. This time would be different.

Like many musicians, I miss jamming with real people in real time, that physiological response to sound that makes us stand with others in front of massive speakers ‘in flow state’ – that feeling of being somewhere or nowhere – ‘in the zone’.

AI’s potential goes much further than audio generating audio – which is what I’d been doing – and that alone didn’t feel enough like a collaboration. After all, we could have asked the machine to produce random sounds and then selected the best. Instead, we split the song: some elements created by AI, some by the musician – and NO autotune!

Here’s a flowchart showing the relationship between human- and machine-generated content.

My team discussed many issues – even at the level of deciding what popular music is. For example, dominant music theory is based on the Western, European tradition. This was partly why we elected to start with AI lyric generation rather than music, as it kept orchestral and melodic options open. We felt AI-generated music would have been more likely to produce Western diatonic sounds, as the data sets currently contain little music outside these descriptors. We wanted to explore some influences outside the popular Western sounds.

Here are a few seeding prompts in bold below. For a while I was tempted to go with really simple songs. Then, during a team call, I read through a few that were strangely poetic.

Example prompts, with the GPT-3 completions (safe outputs) after the ‘|’:

“Do people want a catchy tune, a riff that can fly them to the moon? | Or would they rather just see me in my underpants?…”

“dance dance I wanna | see you move dance dance dance! (Oh!) It’s time to make it right (Oh!) We’re gonna make it…”

write a love song about 2 machines | falling in luv. 1 hour later. BAM BAM BAM BAM BAM BAM “i just wan 2 say, that’s one of the best songs i’ve ever heard” “I KNOW RIGHT” “i mean, i would probably pay like $1.99 just to get that song on my phone.” “do it, do it now, i’ll wait” BAM BAM BAM BAM BAM BAM “i did it” “KEEP THE CHANGE…” 

After further experimentation, the completion below caught our attention.

“feel the bass dance all night headphone on smartphone light walking like a zombie in the dark pulsating body nightlife make you move vibrating body sending shiver to the core….”

These words sounded most like a story; they had evocative phrases like ‘walking like a zombie’. We fed this modified prompt back into the engine and generated our lyrics, which remained unchanged throughout the songwriting process.

feel the bass dance all night headphones on smartphone light walking like a zombie in the dark | ‘til the streetlight burns the sunrise behind your back you were here now you’re gone There’s a girl I know who knows what she wants.
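The loop described above – generate completions, pick the most evocative, feed it back as the next prompt – can be sketched as a few lines of Python. This is an illustrative sketch only: the `complete` function here is a hypothetical stand-in returning placeholder lines, whereas in our actual process GPT-3 was queried via the beta.openai.com interface.

```python
import random

# Hypothetical stand-in for a GPT-3 call; in practice this would hit
# the OpenAI completions endpoint. The candidate fragments are
# placeholders, not real model output.
def complete(prompt, n=3):
    stock = [
        "walking like a zombie in the dark",
        "pulsating body nightlife make you move",
        "see you move dance dance dance",
    ]
    return [f"{prompt} {random.choice(stock)}" for _ in range(n)]

def seed_and_select(prompt, rounds=2, picker=lambda cands: cands[0]):
    """Generate candidates, keep one, and feed it back as the next
    prompt. `picker` is where human taste comes in - in the real
    workflow a person reads every candidate and chooses by feel."""
    for _ in range(rounds):
        candidates = complete(prompt)
        prompt = picker(candidates)
    return prompt

lyric = seed_and_select("feel the bass dance all night")
print(lyric)
```

The human stays in the loop as the `picker`: the machine proposes, the musician disposes.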

I was inspired by the words and created a first draft of the melody, but got stuck on ‘Shine like a diamond dust in the universe’. We wanted to use the lyrics verbatim to stay faithful to the AI, but were stumped on how to phrase this particular musical line. So we used OpenAI’s Jukebox, trained on Adele, to suggest various new melodic lines.

At first I used a model that output 30 seconds of music at a time, but my first attempts were frustrating – it didn’t create tunes that made theoretical sense! After more false starts, I realised co-composing suited me better, given my mainly musical background. Supervising the output every 4–6 seconds let me fold my own musical preferences into the generative result.
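That supervised, chunk-by-chunk workflow has a simple shape, sketched below. The generator here is a hypothetical stub emitting note names rather than audio; a Jukebox-style model would instead propose a few seconds of sound per step, with the human listening and choosing.

```python
import random

# Illustrative sketch of the co-composing loop: the model extends the
# piece a short chunk at a time, and a human picks which continuation
# to keep. The note-name generator is a made-up stand-in for audio.
NOTES = ["C", "D", "E", "F", "G", "A", "B"]

def generate_continuations(context, n_candidates=4, chunk_len=4):
    """Stand-in for the model: propose several short chunks."""
    return [[random.choice(NOTES) for _ in range(chunk_len)]
            for _ in range(n_candidates)]

def co_compose(n_chunks=5, choose=lambda cands: cands[0]):
    """Build a piece chunk by chunk; `choose` is the supervision
    step - in the real session, listening and picking by ear."""
    piece = []
    for _ in range(n_chunks):
        candidates = generate_continuations(piece)
        piece.extend(choose(candidates))
    return piece

melody = co_compose()
print(len(melody))
```

Swapping the chunk length from 30 seconds down to 4–6 seconds is what turned frustration into collaboration: more checkpoints, more chances to steer.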

After 21 attempts (and more crashes!), attempt 22 inspired me to re-scan the lyric lines –

|| Shine like a diamond  ||  Dust in the universe became

|| Shine || Like a diamond dust  ||  In the universe.

Yes! I gleefully thanked the program out loud even though it was 1:30 AM, and sang a guide melody and piano accompaniment into Logic Pro X. I felt no need to upsample, as I wasn’t planning to output the audio and just needed to hear the melodic lines.

Google’s NSynth – one of the settings used with Ableton | Imaginary Soundscape – with the image used to generate fireworks in the chorus

The bass, piano and pad are all generated via NSynth sounds. I was inspired by teammate Leila saying the song was set “in space” and chose sounds based on this thought – resulting in ethereal, floating pads with sharp shards of passing comet dust! Continuing the theme, we also used AI-generated audio from Imaginary Soundscape, an online engine designed to suggest soundscapes for (normally earthly) landscapes. We gave it an image of space from Unsplash and the AI returned audio – fireworks! You can hear these alongside the chorus.
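NSynth’s trick is blending instruments by interpolating between their learned latent embeddings, rather than crossfading the audio itself. A minimal sketch of that idea, assuming made-up three-number embeddings (the real model uses a WaveNet autoencoder over much larger per-frame embeddings):

```python
# NSynth blends timbres by mixing latent vectors, then decoding the
# mix back to audio. The embeddings below are invented toy values.
def interpolate(z_a, z_b, t):
    """Linear mix of two latent vectors, t in [0, 1]."""
    return [(1 - t) * a + t * b for a, b in zip(z_a, z_b)]

pad = [0.0, 1.0, 0.5]    # hypothetical embedding for a floating pad
bass = [1.0, 0.0, 0.5]   # hypothetical embedding for a bass

halfway = interpolate(pad, bass, 0.5)
print(halfway)  # [0.5, 0.5, 0.5]
```

Sliding `t` is what lets you land on sounds between instruments – the ethereal in-between territory that suited a song set in space.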

If you’d like to help us become Top of the Bots – please vote here – no need to register! Team Smorgasborg is excited to be part of the AI Song Contest!

A selection of AI tools used in the creative process: we also used Deep Music Visualizer and Tokkingheads for the music video

GPT-3 – lyric generation from word prompts and questions: https://beta.openai.com

Jukebox (OpenAI) – neural-network style transfer for solving musical problems: https://jukebox.openai.com

NSynth – machine-learning-based synthesiser for sound generation: https://magenta.tensorflow.org/nsynth

Imaginary Soundscape – AI-generated soundscapes from images: https://www.imaginarysoundscape.net

DreamscopeApp – deep dream image generator: https://dreamscopeapp.com/deep-dream-generator

Music Video for Team Smorgasborg, LJ, Dav and Leila.

“I knew the song was finished when Logic gave me the “System Overload” message.”

-very late at night
