I scored all the music for this feature; I really enjoy doing this when I have the time!
The AI Song Contest
A code – be it a chord, melody or rhythm stored in someone’s head or on a disk – can be output as puffs of air to create this exquisite, yet ultimately unquantifiable substance we know as music. So how can a machine help us write music for humans?
The gorgeous accidents of a musician’s fingers slipping across their instrument, falling into a new collection of notes – or that warm feeling when a voice breaks between the notes of a melody and makes the whole tune sweeter – this is the intimate sound of a flawed and beautiful musician: their power, their individuality, their magic – the song inside the tune, as Christina Aguilera once said.
So how can you go from that mystery of music to asking a machine to write a hit song? Well, my team ‘Smorgasborg’, one of 38 entries to this year’s AI Song Contest, decided to explore this question.
Voting closes on July 1st – listen to as many entries as you like – and vote up the ones you enjoy the most!
This year’s participants are a fantastic collection of artists, technologists and machine learning enthusiasts, all of whom approached the brief – make music, using machines! – differently. Now, as a musician learning the elements of coding, I approached the challenge from a musical perspective, choosing to discover “How can we use accessible AI tools to augment the human creative process rather than replace it?”
I’d taught myself AI composing with Style Transfer during my cancer treatment and subsequent shielding here in the UK. It’s surprising how much time there is for learning when utterly stationary, or unable to leave the house, or indeed when it’s impossible to sleep through the night… so why not stay up and run code every few minutes?
My initial results were interesting but not what I would call ‘easy listening’. This time would be different.
Like many musicians, I miss jamming with real people in real time, that physiological response to sound that makes us stand with others in front of massive speakers ‘in flow state’ – that feeling of being somewhere or nowhere – ‘in the zone’.
AI’s potential goes much further than just audio generating audio – which is what I’d been doing – and it just didn’t feel enough like a collaboration. After all, we could have asked the machine to produce random sounds and then selected the best. Instead, we split the song – some elements created by AI, some by the musician – and NO autotune!
My team discussed many issues – even at the level of deciding what popular music is – for example, dominant music theory is based on Western, European tradition. This was partly why we elected to start with AI lyric generation rather than music, as it kept orchestral and melodic options open. We felt AI music would have been more likely to generate western diatonic sounds, as the data sets are currently largely lacking in music outside these descriptors. We wanted to explore some influences outside the popular western sounds.
Here are a few of our seeding prompts with their completions below. For a while I was tempted to go with really simple songs. Then, during a team call, I read through a few that were strangely poetic.
Example Prompts | Completions from GPT-3 (safe outputs) follow:
“Do people want a catchy tune, a riff that can fly them to the moon? | Or would they rather just see me in my underpants?…”
“dance dance I wanna | see you move dance dance dance! (Oh!) It’s time to make it right (Oh!) We’re gonna make it…”
“write a love song about 2 machines | falling in luv. 1 hour later. BAM BAM BAM BAM BAM BAM “i just wan 2 say, that’s one of the best songs i’ve ever heard” “I KNOW RIGHT” “i mean, i would probably pay like $1.99 just to get that song on my phone.” “do it, do it now, i’ll wait” BAM BAM BAM BAM BAM BAM “i did it” “KEEP THE CHANGE…”
After further experimentation, the below completion caught our attention.
“feel the bass dance all night headphone on smartphone light walking like a zombie in the dark pulsating body nightlife make you move vibrating body sending shiver to the core….”
These words sounded most like a story; they had evocative phrases like ‘walking like a zombie’. We fed this modified prompt back into the engine and generated our lyrics, which remained unchanged throughout the songwriting process.
“feel the bass dance all night headphones on smartphone light walking like a zombie in the dark | ‘til the streetlight burns the sunrise behind your back you were here now you’re gone There’s a girl I know who knows what she wants.
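For the technically curious, here’s a minimal sketch of what that prompt-and-complete loop might look like against the OpenAI beta API of the time. The engine name, sampling parameters and placeholder key are assumptions for illustration, not our exact settings – the real work was in reading the candidates and feeding the promising ones back in.

```python
import openai  # OpenAI's Python client, as used with the beta API

openai.api_key = "YOUR_API_KEY"  # placeholder

seed = ("feel the bass dance all night headphones on "
        "smartphone light walking like a zombie in the dark")

# Ask for several candidate continuations; the human curation step --
# reading them and picking the evocative ones -- is where the songwriting happens.
response = openai.Completion.create(
    engine="davinci",   # assumption: the general-purpose GPT-3 engine
    prompt=seed,
    max_tokens=60,      # roughly a verse's worth of text
    temperature=0.9,    # higher temperature = more surprising phrases
    n=5,                # five candidates to choose between
)

for i, choice in enumerate(response["choices"]):
    print(f"--- candidate {i + 1} ---")
    print(choice["text"].strip())
```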
I was inspired by the words and created a first draft of the melody, but was getting stuck on ‘Shine like a diamond dust in the universe’. We wanted to use the lyrics verbatim to stay faithful to the AI, but were stumped on how to parse this particular musical phrase. So we used OpenAI’s Jukebox, trained on Adele, to suggest various new melodic lines.
At first I used a model to output 30 seconds of music at a time – but my first attempts were frustrating – it didn’t create tunes that made theoretical sense! After more false starts, I realised co-composing suited me more, given my mainly musical background. Supervising every 4–6 seconds added my own musical preferences to the generative result.
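To give a feel for what supervising every few seconds looked like, here’s an illustrative human-in-the-loop sketch in Python. This is not Jukebox’s actual interface – sample_candidates() is a hypothetical stand-in, and with Jukebox you’d audition real audio clips rather than printed labels.

```python
import random

TARGET_SECTIONS = 16  # hypothetical length for the guide melody

def sample_candidates(context, n=4):
    """Stand-in for a generative model's sampling step (hypothetical)."""
    return [f"{context[-1]}-variation-{random.randint(0, 99)}" for _ in range(n)]

song = ["sung guide melody"]  # the human seed
while len(song) < TARGET_SECTIONS:
    candidates = sample_candidates(song)
    for i, clip in enumerate(candidates):
        print(f"[{i}] {clip}")  # in practice: listen to each 4-6 second clip
    # The human picks, the model proposes -- my preferences steer the result.
    # Here we auto-pick the first candidate so the sketch runs unattended.
    song.append(candidates[0])

print(" | ".join(song))
```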
After 21 attempts (and more crashes!), attempt 22 inspired me to re-scan the lyric lines –
|| Shine like a diamond || Dust in the universe became
|| Shine || Like a diamond dust || In the universe.
Yes! I gleefully thanked the program out loud even though it was 01:30AM, and sang a guide melody and piano accompaniment into Logic Pro X. I felt no need to upsample as I wasn’t planning to output audio and just needed to hear the melodic lines.
The bass, piano and pad were all generated via NSynth sounds. I was inspired by teammate Leila saying the song was set “In Space” and chose sounds based on this thought – resulting in ethereal and floating pads, with sharp shards of passing comet dust! Continuing the theme, we also used AI-generated audio from Imaginary Soundscape, an online engine designed to suggest soundscapes for (normally earthly) landscapes. We gave it an image of space from Unsplash and the AI returned audio – fireworks! You can hear these alongside the chorus.
If you’d like to help us become Top of the Bots – please vote here – no need to register! Team Smorgasborg is excited to be part of the AI Song Contest!
A selection of AI tools used in the creative process (we also used Deep Music Visualizer and Tokkingheads for the music video):
GPT-3 – Lyric generation from word prompts and questions https://beta.openai.com
Jukebox (OpenAI) – neural networks style transfer for solving musical problems https://jukebox.openai.com
NSYNTH – machine learning based synthesiser for sound generation https://magenta.tensorflow.org/nsynth
Imaginary Soundscape – AI generated soundscapes from images https://www.imaginarysoundscape.net
DreamscopeApp – deep dream image generator https://dreamscopeapp.com/deep-dream-generator
Under new management!
…excited to announce that I’m now exclusively represented by JLA Management.
It’s great timing as I’m being asked to present and perform at so many incredible events!
Do please contact Hannah Oldman on +442079072800 for further information.
United Nations Performance May 2019
Diving into Periscope – interactive streaming with a musical edge
I’ve recently started playing with an app called Periscope, giving interactive music concerts at my stage piano.
My ‘cast’ (if that’s what it’s called?) includes talks on music theory, breaking down similarities in familiar tunes and of course playing the odd request – like a classical version of Michael Jackson’s ‘Human Nature’ – or a mashup between Journey’s ‘Don’t Stop Believing’ and The Commodores’ ‘Easy Like Sunday Morning’.
It’s really fun – and my use case seems to be unique enough to get a mention in the Daily Telegraph‘s round-up of the technology.
So, what do I get up to when I cast? You can find out on Mondays at 20:30 UK time! At least, that’s the plan…
Essentially I’m talking about classical music theory using contemporary tunes – why is something catchy? What songs sound similar? What bits make a tune feel good? The session is mixed with live composition and conversation – content creation and audience interaction in real time.
Giving the audience access to the creative process and also a chance to communicate is pretty much exactly opposite to a traditional classical concert, where I’d be on a stage, far away from the listeners.
I believe it’s possible to demystify music without dissecting it; it’s so much fun to explain what’s happening while playing some of the most memorable songs on the planet. I think this kind of informal direct broadcast is a great proving ground until I have my own big-budget show with a huge grand piano and some notable musical guests to riff with.
Until then, viewers who make the effort to interact and contribute positively are going to shape how this cast evolves. How exciting! What works, what doesn’t, what do people want more of? I’m finding out every day. I’d hope to keep the audience interactivity if a big TV company wants to fund the huge grand piano and notable musical guests version.
For those of you reading this on Thursday 9th April 2015, there’s a replay available until 22:30 tonight, but you’ll need to download the app on an Apple device to watch at the moment. They say there’s an Android version coming. And, if you do visit, please ignore my faffing with the cables at the start, it definitely gets better.
At about 1AM this morning, I think I solved the problem of getting a decent audio feed in and listening at the same time, so Monday’s cast should have really rather good sound quality.
Oh and a final note from the technology presenter in me – streaming from mobiles has been available before – apps like Seesmic and Qik did this many years ago. But now data is cheaper, social media makes things more immediate, plus our connections are generally faster. This means the tech is ripe for mass adoption.
A notable alternative, Meerkat, has some big names endorsing it – Madonna released a video on that platform recently. I’ll let you know if I get a chance to try it out. And I’m sure there are other players in this area. Over the next few months we’ll get to see whether a single platform gains dominance, or if these apps can co-exist. Interesting interactive times!
In which I co-write a musical…
Date: Mon 23 March 2015. Event: my first ever full-length musical!
I’m super-excited to call myself a London Theatre Impresario! A date, a venue, a show, tickets on sale!
Scary, as I’m currently only about 70% of the way through writing every single note of the upcoming musical adventure, which will be held in conjunction with the inimitable Hack Circus in exactly one month’s time.
We’ve written a story to go with some expert talks, and some rather spiffing tunes as well, even if I say so myself. Leila happens to be a genius librettist in my opinion, and I’m hoping my tunes and orchestration will do those fantastic lyrics justice. She and I have hatched a musical monster of a night out… What a nerve-wracking but satisfying experience this is!
- I need to finish building the scores and orchestration
- we need to finalise the order of songs and talks and audience interactive bits
- I’ll have to learn and sing pretty much all the songs on the night
- Leila and I have to pre-record some short bits of audio to give the on-stage performers a break
- I’m trying to build a home-made instrument that may or may not work on the night
- Eek I need to do all the audio show control while performing (Ableton I’m looking at you)
- people are actually buying tickets, so it has to be good – we have a paying audience!
ONE NIGHT ONLY
Talking of which: if you’re in London that evening AND want to be among the first to see this quite peculiar and creative take on the musical genre, read on. From the HC site:
We will be travelling in a unique sound-powered tunnelling vessel, currently under development. Please bear in mind: we really don’t know what we will find. We need a strong healthy team. It might be worth getting down to the gym now if you can.
Bring a torch. This is very important. We are expecting it to be dark.
We will be guided on our journey by three experts: monster aficionado sci-fi author Chris Farnell, historian and volcano enthusiast Ralph Harrington and shark-mad comics legend Steve White – but who knows who (or what) else we might encounter?
TICKETS STILL AVAILABLE!
OK, I’m intrigued, tell me more:
(link for mobile users: http://www.hackcircus.com/underworlds )
DID I MENTION TICKETS ARE ON SALE?
Yes!! I’m ready and can’t think of a better way to spend a Monday night!
(link for mobile users: http://www.eventbrite.co.uk/e/hack-circus-underworlds-tickets-15756232315?aff=es2&rank=1 )
^^This link goes to the EventBrite Ticket Page if you’d like to buy a ticket or two.
**For some reason, none of my links work. Head over to hackcircus dot com forward slash underworlds. Oh the humanity…
How to win the TechTent Xmas Quiz without cheating
The odds were most certainly not in my favour.
I was asked to take part in the Tech Tent annual quiz all about the tech news of 2014, pitted against none other than the people who actually make the news – the BBC Technology website team. This sounded to me like guaranteed failure – these people know everything about the stories of the year. Not only would they have written the articles, they would also have researched the stories thoroughly and (worse!) know all the background and peripheral information.
How could I game the system without cheating? My plan below actually worked…
1. UNDERSTANDING THE FORMAT – where the scoring happens
The email came through with the rules: there will be “12 questions, each relating to a big tech news story during 2014 – there’s one question for each month BUT they will not come in order. The order was selected by random draw. Most questions are in two parts offering two points, with an extra bonus point available for guessing in which month the story happened. FWIW, I doubt we’ll get through all 12 questions in the limited time we have. If one team fails to answer a question or part of a question correctly, that q or part of question will be offered to the other team.” It made sense to memorise the news stories and their months, since according to the format these alone represented 12 points straight away.
2. CREATE DATA SET A
I gleaned the main events in technology from Wikipedia by month with a definite nod towards business and consumer electronics. I based the emphasis on quizmaster Rory’s preferences and my knowledge of the stories producer Jat tends to choose for the Tech Tent podcast.
3. CREATE DATA SET B AND SEE WHAT STORIES APPEAR IN BOTH A AND B
I cut and pasted the Click news scripts from the whole of 2014 – collating all the Click newsbelts by month made it really easy to work out the most likely questions in each segment. I amalgamated this with the Wikipedia list, The Verge’s year in review and other sources, then applied weighting depending on how much information was available to ask deeper questions around a topic.
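As a toy illustration of that cross-referencing step (the story names are real 2014 stories, but the data and weights here are made up for the example, not my actual notes):

```python
from collections import Counter

# Data set A: headline stories by month (gleaned from Wikipedia)
wikipedia = {
    "Feb": {"Flappy Bird pulled", "Facebook buys WhatsApp"},
    "May": {"Apple buys Beats"},
    "Aug": {"Amazon buys Twitch"},
}

# Data set B: stories as they appeared in the Click newsbelts, with repeats
click_newsbelts = {
    "Feb": ["Flappy Bird pulled", "Flappy Bird pulled", "Facebook buys WhatsApp"],
    "May": ["Apple buys Beats"],
    "Aug": ["Amazon buys Twitch", "Amazon buys Twitch"],
}

# Stories appearing in both sets, weighted by coverage, are the likeliest questions
for month, stories in wikipedia.items():
    coverage = Counter(click_newsbelts.get(month, []))
    candidates = {s: coverage[s] for s in stories if coverage[s] > 0}
    for story, weight in sorted(candidates.items(), key=lambda kv: -kv[1]):
        print(f"{month}: {story} (weight {weight})")
```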
4. FURTHER ANALYSIS TO PINPOINT MOST LIKELY STORIES
Based on my analysis, some stories were more likely than others to work as quiz questions – for example, Twitch and Apple buying Beats were big stories in otherwise quiet months, so it made sense to drill down into those. Flappy Bird was also pretty much guaranteed to be a question based on the analysis – and it helped that Rory did quite a few spots on Flappy Bird in 2014, so again, a likely candidate.
5. EDITORIAL CONSIDERATION
I could also discount any stories that weren’t appropriate for a Christmas quiz tone, which reduced the viable stories further.
6. EXPERT AREAS
It made sense to look at key stories like the Apple launch, top tech mergers and acquisitions, and crypto-currency as big stories of 2014. So I memorised as many numbers and month/story combos as possible to maximise those easy points. Knowing about the business of tech meant that I could at least make informed guesses when I wasn’t sure of an answer – which sometimes paid off and sometimes didn’t (Google nearly bought Twitch – correct; the tablet-sized device launch was the Nokia tablet, not Microsoft’s Surface Pro tablet – nearly).
7. A BIT OF LUCK
As soon as I realised the last question (the number of World Cup tweets) was directed at the Tech Website team first, it was easy to make an N+1 or N-1 call: if the other team guesses N, bidding just above wins whenever the true figure is higher, and just below wins whenever it’s lower, so one bid covers a whole side of the range rather than a single point. That led to eventual (if messy) victory! (It made up for correctly stating the Flappy Bird developer was making £32,000 per week ($50,000), which was not picked up!)
Yes! I emerged victorious! Dave and Zoe from the BBC Tech Website team were worthy adversaries – because of the calibre of my opponents, I had to up my game to even stand a chance against them. Thus I learned that when the odds aren’t great, it’s still worth doing the best you can – a great lesson to bring into 2015. Hurrah, and Happy New Year!
The Sound of Stars
Music inspired by colour/art
An artist I know called Debbie Davies made a giant light-up interactive star for the Burning Man festival this year. I never got to see it in real life, but the pictures were enough to trigger the most amazing melody in me. I was so utterly happy to hear that she loved the composition. I can’t really explain why I wasn’t surprised.
Debbie, this is your star. I told you it was beautiful!
Photo credit duncan.co
Last Sunday I took part in a very unconventional story/experiential theatre event run by a friend of mine. I live-composed the music. The event involved taking an audience on a fictional journey into space. Then (whilst in that context) they were given amazing lectures by real rocket scientists (Dr David McKeown) and artists (Toby Harris, Sinead McDonald, Jeffrey Roe and more). The talks were diverse but relevant. Imagine hearing what space travel would feel like – as if you and the whole theatre were in fact in a spaceship, travelling through space. Kate Genevieve, a visual artist, talked about the messages sent with the Voyager space probe. A man from SETI (Alan Penny) informed us of the best way to survive first contact, in a suitably realistic manner. There was more, but I’ll get to that nearer the end.
And I? I powered the spaceship with music.
It’s all because of Leila Johnston – Hack Circus is her thing. She asked if I would like to create music to simulate ‘hypersleep’ during extended space travel; I went one step further and wrote the following email reply…
I have a great job of being the hyperspace engineer – the piano keyboard is in fact my console muwahaha
Leila responded with:
Oh I love that. Yes! Play us into hyperspace! What a lovely lovely idea.
This really captured my imagination, so much so that I appeared to send the following response:
Yes, the equations are quite complicated to most people. But hyperspace mathematics calculations actually have more in common with Bach fugues than physics, turns out those aren’t musical pieces but formulae all along. A fantastically complicated spatial equation can be surprisingly easy to solve musically which is why the keyboard is my usual preference for transport consoles. Bach was a hyperspace engineer from the future who got stuck in a time travel incident. Before he got transferred back to his timeline he enjoyed annotating his equations in musical form and confusing the natives.
Though the sustain pedal just puts the kettle on
I found this in my ‘sent’ items the next morning, and hurriedly dashed off an apology for sleep-emailing. Clearly I had really taken my role as Hyperspace Engineer to heart.
For some reason this didn’t put her off, and the event was quite literally a blast.
There were cabin crew. There were flashing lights. There was hazard tape. Dr Lewis Dartnell, an astrobiologist, played some amazing sounds from space that triggered my synaesthesia like you wouldn’t believe.
The original sound of Saturn’s rings, courtesy of the Cassini Radio & Plasma Wave Science team:
“…the sounds produced are exciting! You can listen to the sound of passing through the ring dust by clicking here. Listen” (NB: the ‘Listen’ link opens a video file)
Wow! What magical unearthly sounds! What a weird recording! I had to share how wonderful it felt to absorb these strange vibrations! I attempted to convey my synaesthetic response to the sound of Saturn’s rings – what I hear when I hear them… and this is the result.
For those interested, here’s the mission page. Off topic, I’m joyous to report that the Sun in our solar system plays a giant Major 7th.
Remote Controlling a Digger using Virtual Reality
It really did feel like I was ‘inside the machine’ even though the resolution was low!
I composed the music especially for this feature; there is something very pleasing about a digger in C.
Full story here: http://www.bbc.co.uk/news/technology-28425844
Modern Glitching – Auditory Enhancement of Reality with Music
I recently gave a talk at TEDxTokyo 2014 about a musical device I built with the aim of giving other people the chance to hear the world as musically as I do. To date, around fifty people have tried my mobile composing inspiration rig with me – mostly with very enthusiastic responses afterwards, and in the more musical/auditory types there’s also a degree of joyful disorientation.
Some of the background to what I think is going on: Around twenty years ago, psychology professor Diana Deutsch discovered what she called the Speech to Song illusion. Essentially, a spoken phrase repeated often enough starts to take on musical qualities. There’s a great Radiolab episode which explains the phenomenon.
I, however, don’t require repetition in order to hear spoken phrases as musical. Speech is intrinsically musical for me, and so is the rest of the world – from cars passing to people typing. I really wanted to share my experience, as I find it very beautiful.
It was the day after winning a prize at MusicTechFest’s Boston Hackathon event, which I took part in and filmed for the BBC. I was sharing a small apartment with a bunch of other music obsessives. The day before I left, instead of packing neatly as normal I optimistically chucked everything I could see into my case and hoped for the best.
The idea of adjusting auditory experience, or adding a ‘glitch’ to reality – at least in aural terms – is not new. But glitching with modern tech sounded like a great way to reveal the music I hear all the time – plus I wanted to add a more classical compositional element to the practice.
Sean Manton and CJ Carr (who was familiar with glitching) were two other music hackers I met at the Hackathon. They were instrumental in my sleep-deprived electronic inspiration.
So, grabbing my iPad and headphone splitters, I built the first iteration of a device that messed with the ambient sound in the room in real time, in a pleasurable manner. Raw audio was changed in real time and enhanced with sound effects, and crucially, I added basic musical elements and phrases that would play simultaneously. A while later, I got the thing working in a way I liked and emerged from my room, eager to try it on other musical/technical people. My ideal system would allow me to sing and play melody and harmony, but I was nowhere near doing that yet.
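If you’d like a feel for the core trick, here’s a minimal sketch of the idea in Python using the sounddevice and numpy libraries – an assumption on my part, since the original rig was built from off-the-shelf iPad apps and splitters. Live input is echoed back with a feedback delay, so ambient sounds repeat and start to take on the musical quality of Deutsch’s illusion. (Use headphones, or the feedback will run away with itself.)

```python
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100
DELAY_SECONDS = 0.4  # echo spacing -- sets the perceived tempo
FEEDBACK = 0.6       # how slowly the repetitions die away

delay_buf = np.zeros(int(SAMPLE_RATE * DELAY_SECONDS), dtype=np.float32)
pos = 0

def callback(indata, outdata, frames, time, status):
    """Mix live input with a delayed copy of itself, sample by sample."""
    global pos
    mono = indata.mean(axis=1)  # fold input to mono
    out = np.empty(frames, dtype=np.float32)
    for i in range(frames):
        echoed = delay_buf[pos]
        out[i] = mono[i] + echoed                 # dry signal + echo
        delay_buf[pos] = mono[i] + echoed * FEEDBACK
        pos = (pos + 1) % len(delay_buf)
    outdata[:] = out[:, None]

with sd.Stream(samplerate=SAMPLE_RATE, channels=1, callback=callback):
    input("Glitching -- press Enter to stop\n")
```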
So, extra headphones bought, splitter in, time to try my rudimentary iPad device on CJ and Sean in a quiet teahouse. It was so much fun! The sounds of tea being made, the door opening, teaspoons hitting cups, amplified and enhanced by repetition! Those sounds were unexpected, made musical and wonderfully tingly. I sang along to the notes in the cafe to accentuate them. The staff at the teahouse got interested, all they could hear was us singing and hitting teaspoons and laughing. So we asked if they wanted to try it then wired them in to see their response – they liked it – a lot.
Going mobile was more interesting – we were physically connected by our headphone cables, so it took a while to manoeuvre through the door, but together we emerged, wired up, out into the wild. And, once our headphones were in, we pretty much stayed ‘glitched in’ for at least 5 hours straight. I could hear the music I normally hear, but amplified! Wow! I sang in joyous harmony with the world for my cohorts, who joyously joined in. An ear-opening experience indeed, and I expect we were a strange sight, connected together by cables, singing and swaying – especially as only we could hear the glorious harmonic results of our musical musings.
What followed: glitching around a bookshop, glitching through a delicious dinner at a noodle restaurant until we got chucked out at closing time, and (my favourite) glitching on public transport all over Boston. Some time during the evening, I added a recorded drum loop to the experience – a low-tech but incredibly effective way to turn the world into a very funky soundtrack. Rhythm, along with the harmony generated by reality, transformed a run-of-the-mill walk through a city into a musical recital!
Now, without our headphones in, the world seemed dry and desolate. And, after trying this on six other people with the persuasive line ‘Hey, you wanna do some digital drugs, guys?’ to gratifying results, it didn’t take long for us to ascertain this was indeed a pleasurable and slightly psychedelic auditory experience – not only as a participant, but also as a listener. The three of us decided to take modern glitching further with a bit more technological clout.
A quick stop on the way back to the hacker apartment meant we now had extra kit. And, by 0100, Sean had plugged his Raspberry Pi computer into the TV and was programming on the Pi with Pure Data. We made some tea and ate bread with the most delicious honey (the honey was in Bb major 6th) and kept working. By then it was 0300 and my taxi was due to arrive at 0615 – we only had a few hours left!
We all wanted to add fine-grained control to this strange and wonderful auditory experience. CJ had brought his FM transmitter and binaural microphone/headphones, and we plugged everything into my Mac. I wanted to do more than just sing the city; I wanted to play it too. That meant configuring something that could take multiple inputs – MIDI and audio at the same time.
Finally at 0400, and full of incredible quantities of tea, bread and honey, we were now running a glitching instance on Ableton Live, with a binaural microphone / headphone setup and my iRig Keys midi controller hooked up. I started building musical stems right then and there.
The latest version does more than just repetition: my new glitching device can harmonise and play with the world in a much deeper way. I walk around a city first to find out what key it’s in and compose something beautiful that goes with the natural sounds around me. Then I load those compositions up and trigger them when I hear something in the right key – so a motorbike going past in B flat means I trigger my ‘B flat, traffic’ piano composition. The main problem is that the laptop gets really hot; also, I’m covered in wires, so it looks a little strange.
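Here’s a hedged sketch of that trigger logic, assuming Python with the librosa and numpy libraries – the file names and the stem table are hypothetical examples, not my actual set-up. It estimates the dominant pitch class of a short ambient clip from a chroma histogram, then picks the pre-composed stem written in that key.

```python
import numpy as np
import librosa

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]

# Hypothetical pre-composed stems, one per key
STEMS = {"A#": "bflat_traffic_piano.wav",   # 'B flat, traffic'
         "C":  "c_major_crowd_pads.wav"}

def dominant_pitch_class(path):
    """Estimate the strongest pitch class in a clip via a chroma histogram."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)  # shape: 12 x frames
    return PITCH_CLASSES[int(np.argmax(chroma.mean(axis=1)))]

key = dominant_pitch_class("motorbike_passing.wav")   # hypothetical clip
if key in STEMS:
    print(f"Heard something in {key} -- triggering {STEMS[key]}")
```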
And this is what glitching sounds like – some of these examples have music in, others don’t.
The tech is still very much hacked together, but there’s more documented in the talk.
WHAT HAPPENS NEXT?
CJ, Sean and I are all enthusiastic about sharing the joys of glitching – and we’re all working on versions of glitching devices. We’re hoping to create a resource online for anyone interested to play with the idea in their own way. I’m going to list everything I use in my hacked-together inelegant solution in another post.
I want an app that does this! I want to create a glitchpad! Beautiful musical stems that trigger automatically as friends walk through a city with the app! I want to be invited to perform ‘glitching’ concerts in cities around the world!
(for reference, I’ve reposted the TEDxTokyo video here)
More on this story as it unfolds….