The AI Song Contest

A code – be it a chord, melody or rhythm held in someone’s head or on a disk – can be turned into puffs of air to create this exquisite, yet ultimately unquantifiable substance we know as music. So how can a machine help us write music for humans?

The gorgeous accidents of a musician’s fingers slipping across their instrument, falling into a new collection of notes – or that warm feeling when a voice breaks between the notes of a melody, making the whole tune sweeter – this is the intimate sound of a flawed and beautiful musician: their power, their individuality, their magic – the ‘song inside the tune’, as Christina Aguilera once said.

So how can you go from that mystery of music to asking a machine to write a hit song? Well, my team ‘Smorgasborg’, one of 38 entries to this year’s AI Song Contest, decided to explore this question.

Voting closes on July 1st – listen to as many entries as you like – and vote up the ones you enjoy the most!

This year’s participants are a fantastic collection of artists, technologists and machine learning enthusiasts, all of whom approached the brief differently – make music, using machines! Now, as a musician learning the elements of coding, I approached the challenge from a musical perspective, setting out to discover: “How can we use accessible AI tools to augment the human creative process rather than replace it?”

I’d taught myself AI composing with Style Transfer during my cancer treatment and subsequent shielding here in the UK. It’s surprising how much time there is for learning when utterly stationary, or unable to leave the house, or indeed when it’s impossible to sleep through the night… so why not stay up and run code every few minutes?

My initial results were interesting but not what I would call ‘easy listening’. This time would be different.

Like many musicians, I miss jamming with real people in real time, that physiological response to sound that makes us stand with others in front of massive speakers ‘in flow state’ – that feeling of being somewhere or nowhere – ‘in the zone’.

AI’s potential goes much further than just audio generating audio – which is what I’d been doing – and that alone just didn’t feel enough like a collaboration. After all, we could have asked the machine to produce random sounds and then selected the best. Instead, we split the song – some elements created by AI, some by the musician – and NO autotune!

Here’s a flowchart showing the relationship between human- and machine-generated content.

My team discussed many issues – even at the level of deciding what popular music is – for example, dominant music theory is based on the Western, European tradition. This was partly why we elected to start with AI lyric generation rather than music, as it kept our orchestral and melodic options open. We felt AI music generation would have been more likely to produce Western diatonic sounds, as the training data sets currently contain little music outside these traditions, and we wanted to explore some influences beyond the popular Western sound.

Here are a few seeding prompts below. For a while I was tempted to go with really simple songs. Then, during a team call, I read through a few that were strangely poetic.

Example Prompts | Completions from GPT-3 (safe outputs) follow:

“Do people want a catchy tune, a riff that can fly them to the moon? | Or would they rather just see me in my underpants?…”

“dance dance I wanna | see you move dance dance dance! (Oh!) It’s time to make it right (Oh!) We’re gonna make it…”

write a love song about 2 machines | falling in luv. 1 hour later. BAM BAM BAM BAM BAM BAM “i just wan 2 say, that’s one of the best songs i’ve ever heard” “I KNOW RIGHT” “i mean, i would probably pay like $1.99 just to get that song on my phone.” “do it, do it now, i’ll wait” BAM BAM BAM BAM BAM BAM “i did it” “KEEP THE CHANGE…” 

After further experimentation, the below completion caught our attention.  

“feel the bass dance all night headphone on smartphone light walking like a zombie in the dark pulsating body nightlife make you move vibrating body sending shiver to the core…”

These words… sounded most like a story; they had evocative phrases like ‘walking like a zombie’. We fed this modified prompt back into the engine and generated our lyrics, which remained unchanged throughout the songwriting process.

feel the bass dance all night headphones on smartphone light walking like a zombie in the dark | ‘til the streetlight burns the sunrise behind your back you were here now you’re gone There’s a girl I know who knows what she wants.
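
For the technically curious, here’s a minimal sketch of how a seed prompt like the one above could be sent to GPT-3, assuming the beta-era openai Python package – the engine name, temperature and token budget are illustrative assumptions, not our exact settings:

```python
# A rough sketch of prompting GPT-3 for lyric completions with OpenAI's
# beta-era Python client. The engine name, temperature and token budget
# below are illustrative assumptions, not our exact settings.
import openai

openai.api_key = "YOUR_API_KEY"  # issued at beta.openai.com

seed = ("feel the bass dance all night headphones on "
        "smartphone light walking like a zombie in the dark")

response = openai.Completion.create(
    engine="davinci",   # the original base GPT-3 model
    prompt=seed,
    max_tokens=64,      # roughly a verse's worth of lyrics
    temperature=0.9,    # higher temperature, more surprising lines
    n=5,                # several candidates to read through on a team call
)

for i, choice in enumerate(response.choices):
    print(f"--- candidate {i + 1} ---")
    print(choice.text.strip())
```

Reading through a handful of candidates per call is exactly how lines like ‘walking like a zombie’ surfaced in the first place.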

I was inspired by the words and created a first draft of the melody, but was getting stuck on ‘Shine like a diamond dust in the universe’. We wanted to use the lyrics verbatim to stay faithful to the AI, but were stumped on how to parse this particular musical phrase. So we used OpenAI’s Jukebox, trained on Adele, to suggest various new melodic lines.

At first I used a model to output 30 seconds of music at a time, but my first attempts were frustrating – it didn’t create tunes that made theoretical sense! After more false starts, I realised co-composing suited me better, given my mainly musical background. Supervising every 4–6 seconds of output let me add my own musical preferences to the generative result.
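
In code terms, the workflow looked something like the loop below – a sketch only, where generate_continuation and choose are hypothetical stand-ins for the model’s sampling step and my own ear, not real API calls:

```python
# A sketch of the human-in-the-loop workflow, not a real API:
# generate_continuation() stands in for a model sampling step (for us,
# Jukebox rendering the next few seconds) and choose() stands in for
# the musician auditioning the options.
import random

CHUNK_SECONDS = 5          # supervise every 4-6 seconds of audio
CANDIDATES_PER_CHUNK = 4   # how many continuations to audition each time

def co_compose(generate_continuation, choose, target_chunks=6):
    """Grow a track chunk by chunk, letting the human pick each step."""
    track = []
    for _ in range(target_chunks):
        # The model proposes several possible continuations...
        candidates = [generate_continuation(track, CHUNK_SECONDS)
                      for _ in range(CANDIDATES_PER_CHUNK)]
        # ...and the musician's ear, not the machine, decides.
        track.append(choose(candidates))
    return track

# Stand-ins so the sketch runs: random "audio" and a picker that takes
# the first candidate. In practice these were Jukebox and me at 1:30 AM.
fake_model = lambda track, seconds: [random.random() for _ in range(seconds)]
print(len(co_compose(fake_model, choose=lambda c: c[0])), "chunks kept")
```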

After 21 attempts (and more crashes!), attempt 22 inspired me to re-scan the lyric lines –

|| Shine like a diamond || Dust in the universe

became

|| Shine || Like a diamond dust || In the universe.

Yes! I gleefully thanked the program out loud, even though it was 1:30 AM, and sang a guide melody and piano accompaniment into Logic Pro X. I felt no need to upsample, as I wasn’t planning to output the audio and just needed to hear the melodic lines.

Google’s NSynth – one of the settings used with Ableton | Imaginary Soundscape – with the image used to generate fireworks in the chorus

The bass, piano and pad are all generated via NSynth sounds. I was inspired by teammate Leila saying the song was set “in space” and chose sounds based on this thought – resulting in ethereal, floating pads with sharp shards of passing comet dust! Continuing the theme, we also used AI-generated audio from Imaginary Soundscape, an online engine designed to add suggested soundscapes to (normally earthly) landscape images. We fed it an image of space from Unsplash and the AI returned audio – fireworks! You can hear these alongside the chorus.

If you’d like to help us become Top of the Bots – please vote here – no need to register! Team Smorgasborg is excited to be part of the AI Song Contest!

A selection of AI tools used in the creative process: we also used Deep Music Visualizer and Tokkingheads for the music video

GPT-3 – Lyric generation from word prompts and questions  https://beta.openai.com 

Jukebox (OpenAI) – neural network style transfer for solving musical problems https://jukebox.openai.com 

NSYNTH – machine learning based synthesiser for sound generation  https://magenta.tensorflow.org/nsynth 

Imaginary Soundscape –  AI generated soundscapes from images   https://www.imaginarysoundscape.net

DreamscopeApp – deep dream image generator https://dreamscopeapp.com/deep-dream-generator 

Music video for Team Smorgasborg: LJ, Dav and Leila.

“I knew the song was finished when Logic gave me the “System Overload” message.”

– very late at night

3 Surprising AI Music Mashups that will make you question your musical tastes

The 24-hour stream of AI-generated heavy metal on YouTube completely fascinated me. It was created by the eccentric Dadabots, half of whom I’ve regularly collaborated with on various strange musical projects. Their outputs inspired me to start my own journey of intersecting music with machine learning.

I’ve been composing since I was a kid, on whatever platform I could find. Being classically trained with a music degree, while hungry for as much new music as possible, makes for a strange hybrid: a musician and performer trying to understand a technologist’s world.

Amid much struggle, general frustration and many false starts, the stubbornness and late-night wrangling paid off. I had my first track, and plucked up the courage to share some of my experiments online.

So, here’s one of my first flirtations with Music and Machine Learning on Instagram – the Beatles singing ‘Call Me Maybe’ – because for some reason I thought it needed to exist. And, buoyed by my coding success, I learned how to generate some eye-bending video based on pitch and tempo too.

Each track takes quite a few hours to generate – even 45 seconds or so is a whole evening of attention. The way I’ve been doing it involves heavily supervising the code: I need to intervene every few seconds to suggest a new direction for the algorithm so that it fits where I want it to go. A lot of the decisions I’m making are not technical – they’re based on my musical knowledge. Then I listen repeatedly to the slowly lengthening audio to see if there’s a recognisable tune being created. Is it sounding like something a human can sing? Plus the ‘upsampling’ process, where some of the noise is removed, can take many hours. A lot of the time I’ll crash out of the virtual machine I’m using because I’m on the free tier. Sometimes I’ll lose everything.
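
One lesson from all those crashes: checkpoint obsessively. Here’s a minimal sketch of the kind of thing I mean, assuming numpy and illustrative folder and file names – save the audio-so-far after every supervised step so a dead VM doesn’t erase the whole evening:

```python
# A minimal sketch of crash insurance, assuming numpy; the folder and
# file names are illustrative. Save the audio-so-far after every
# supervised step so a dead VM doesn't erase the whole evening.
import numpy as np
from pathlib import Path

CHECKPOINT_DIR = Path("checkpoints")
CHECKPOINT_DIR.mkdir(exist_ok=True)

def save_checkpoint(audio, step, sample_rate=44100):
    """Write the accepted audio-so-far to disk."""
    np.savez(CHECKPOINT_DIR / f"take_{step:03d}.npz",
             audio=audio, sample_rate=sample_rate)

def load_latest_checkpoint():
    """After a crash, resume from the most recent saved take."""
    saved = sorted(CHECKPOINT_DIR.glob("take_*.npz"))
    if not saved:
        return None
    data = np.load(saved[-1])
    return data["audio"], int(data["sample_rate"])
```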

Sounds frustrating, and it’s even more annoying in practice. Yet I find the ultimately infuriating nature of co-composing this way rather addictive. And, wow, when it actually does work, the results are incredibly rewarding.

So, ‘my’ new song made up of thousands of tiny bites of Beatles was compiled. And it is undeniably the Beatles singing ‘Call Me Maybe’ – so much so that a few of my friends thought this could easily be a demo tape or an unheard song if not for the lyrics.

My work received admiration from those familiar with AI music generation – they could tell how much effort was required to create it. As well as praise, this short tune also generated unsettling feelings in others – which weirdly excited me. To have made something so conversation-worthy, especially in a field as wide as AI and machine learning, felt like I was onto something: my musical approach could add value in its own way.

Here’s another one – Queen singing ‘Let It Go’.

So why do I think this might make you question your musical tastes? Well, many of us are quite specific about the music we like. But if a fifty-year-old Beatles recording can be rehashed for a 21st-century audience, would this track encourage a non-Beatles listener to explore more of this kind of music? Or would a devout 1960s music fan be persuaded to venture outside their comfort decade into the world of sugary pop? I think it could do both.

Here’s U2 singing ‘Bat Out Of Hell’.

I’m surprised how much the original artist maintains their presence in each of these examples. And I’m somewhat tickled that the processing and supervision of each track makes this a very labour-intensive activity – not unlike standard music production.

As a new composing method it leaves me in awe of the sheer amount of work that must have gone into creating this program, and of the brilliant minds behind it who conceived and created such a formidable tool for co-creation.

It even seems possible to train the AI on any kind of music, as long as the artist has made enough material to be sampled adequately – great news for those of us keen to create cross-cultural artworks. Even though there are thousands of artists in the current Jukebox library, the content does appear to skew toward English-speaking music – a useful reminder that bias is built into every system with humans at one end of it. So one of my next quests will be to see whether I can create my own training set (which might prove taxing on the free tier).

Finally, from a musical perspective, human composers still have quite a few advantages over machines, though generating music with AI is like a whole band writing all its parts at once, which can be very satisfying, if erratic. Sometimes the algorithm is temperamental – and doesn’t work at all. Other times, sublimely beautiful chords and ad-libs come out. No one can know whether the next track is a hit or a miss.

Even controlling the output is gloriously elusive: for example, I can’t force a tune to go up or down at any point (though I can choose one of the alternatives that fits roughly where I’d like the tune to go). I don’t have much choice over the rate or meter of the lyrics – though there is some leeway when paginating them in the code. And changing the rate of intervention also affects what’s being generated – in short, the illusion of pulling order from chaos, a pleasing reflection of what composing music means to me.

In quite a few instances the AI has surprised me musically, and that is intriguing enough on its own for me to want to continue creating and co-composing with a machine. With so many possibilities in this field right now, I’m looking forward to exploring more.

What’s The Best World Cup England Football Song?

OK, I enjoy the World Cup, and understand the offside rule, but generally prefer playing football to watching it. But when my friend from Great British Chefs posted a query – What’s the best World Cup England football song: ‘World in Motion’ or ‘Three Lions’? – I found myself thinking about it for days. So much so that my response simply does not fit into a social media reply… Mecca, here’s your answer.

“World in Motion” works brilliantly as a standalone pop song, but is it ‘catchy’?  

Harmonically, “World In Motion” is complex (apart from the ‘rap’) and there’s a lot going on in terms of chords, percussion and orchestration. It’s interesting, but it also demands effort from the listener: cognitively, you have to think about where the song is going and pay attention to it.

Melodically, “World In Motion” is quite gentle – there are no descending melodies or musical ‘jumps’; it’s tuneful but stays in one place. Yes, the rap segment was distinctive and served to make the song very memorable – but not for musical reasons.

“World In Motion” is also a little slow for singing along – most people sing football songs while excited, so their heart rate will be higher, and faster-tempo songs come to mind more easily in a heightened state like this!

Apart from tempo, both songs have a lot in common: two choruses each (‘Love’s got the World in Motion / Let’s hear it for England’ and ‘Three Lions on a shirt / It’s Coming Home’), both largely in major (happy) keys, and both quite sparsely orchestrated. Neither song requires particularly good vocal skills.

And so, to catchiness –  have you heard anyone singing “World In Motion” in the street? Personally I haven’t, but “It’s Coming Home” is everywhere.

So what’s happening here? How are they different?

I think the answer lies in melody. Despite the great orchestration and production of New Order’s “World in Motion”, the “Three Lions” musical phrases are much more likely to be sung in the wild. They are easier to remember, easier to sing along to and more physically pleasant to sing out loud.

I’ll tell you how this works: the descending tune of the “Three Lions on a shirt” call is followed by an upward response, “Jules Rimet still gleaming”. Wow, super catchy and melodically pleasant – it’s a classic gospel-style call and response: you know where it’s going, there’s a clear musical path that you only need to hear once to sing along, and you’re excited to hear it resolve. A great pop example of this is ‘Twist and Shout’.

The second is the clincher: “It’s coming home, it’s coming home, it’s coming…” – what an anthem! There’s a built-in melodic jump – you can hear these in a lot of Abba songs (‘The Winner Takes It All’, ‘Take a Chance on Me’) and many existing football chants. There’s also repetition – this simple phrase is a natural ear-worm; there’s no effort involved in remembering it even after hearing it once. Listening to this phrase takes low cognitive effort in contrast to “World In Motion”, plus there’s yet another embedded call and response too.

Finally, ‘It’s Coming Home’ is a semi-unresolved melody. Getting to the end of the phrase, you want to sing it again. You can hear this unresolved effect at work in notable examples like ‘Song 2’ by Blur, ‘My Sharona’ and ‘Livin’ on a Prayer’. Catchy? Absolutely. And now you know why my considered answer is “Three Lions”, musically the song most suited to England’s World Cup 2018 journey.

Relaxing Music for Giving Birth

Available now for download on iTunes, Amazon and CDBaby

I’ve been away from work and the internet for a while, working on a massive project – building a baby! Having always been completely fascinated by music’s power to move us and change our perceptions, I thought there would be lots of music specifically for giving birth – but nothing came up. Which was ironic, I thought, considering that for conceiving a baby there are any number of musical accompaniments available! So I created music especially for me and anyone else going through the intense experience of labour, birth and early parenthood.

This album is very special to me – I wanted music that had calmness at its heart to support the incredible and inevitable journey from pregnancy to parent, but I think it’s also a very enjoyable listen if I just want to zone out and remember how to breathe.

Talking of which, this instrumental music complements all kinds of breathing rhythms during stages of labour and birth. I composed it while pregnant and only completed it half an hour before labour!

Originally I wasn’t going to release this music or talk publicly about my experience (it’s so personal!) but below I explain exactly why I chose to do so.

So yes, I had a baby – he is awesome. And for anyone interested in why I wrote such an album, I’ve shared my very personal story about conquering a lifelong phobia of giving birth below the track listing. If you’re not interested that’s fine too –  in any case it’s good to be back, baby! Normal streaming, tweeting and writing about music, inventions, technology and synaesthesia will soon resume.

Relaxing Music for Giving Birth Tracklist


TRACK 1: Incarnation. Labour, Birth and Calm
Over an hour long, it can be repeated seamlessly throughout labour, and will play on multiple devices and speakers without sounding out of tune or out of time. I didn’t want music to get in the way of my breathing or the physiological process of giving birth – this was actually the hardest part: making sure it was musical and rhythmic while still leaving space for getting into the zone. I had this track on repeat throughout my labour.

Plus – bonus tracks for after the big day, to give the new family a gentle soundtrack for those first utterly indescribable weeks…

TRACK 2: Serenity. Soundtrack to a Contented Baby

Serene piano sounds soothe and support a calm environment – this track contains a unique ‘white noise blanket’ to soften any unexpected sudden sounds from outside that might startle a new baby. It seems to soothe my little one!

TRACK 3: Relaxation: Sleepy Parents, Sleepy Baby

Encouraging even breathing, this track aims to elongate any rare moments of calmness and sleep – not that you’ll get much sleep over the first months of parenthood! I found it really helped my little one settle. A sleepy soundtrack to chill out with the new arrival – for babies AND parents.

No apologies for massive wall of text below!

HOW THIS ALBUM WAS BORN

(NB: don’t worry, there are no scary bits. It contains music, a little tech and a deeply personal story behind the album.)


Diving into Periscope – interactive streaming with a musical edge

I’ve recently started playing with an app called Periscope, giving interactive music concerts at my stage piano.

My ‘cast’ (if that’s what it’s called?) includes talks on music theory, breaking down similarities in familiar tunes and, of course, playing the odd request – like a classical version of Michael Jackson’s ‘Human Nature’, or a mashup of Journey’s ‘Don’t Stop Believin’’ and The Commodores’ ‘Easy Like Sunday Morning’.

It’s really fun – and my use case seems to be unique enough to have earned a mention in the Daily Telegraph‘s round-up of the technology.

So, what do I get up to when I cast? You can find out on Mondays at 20:30 UK time! At least, that’s the plan…

Essentially I’m talking about classical music theory using contemporary tunes – why is something catchy? What songs sound similar? What bits make a tune feel good? The session is mixed with live composition and conversation – content creation and audience interaction in real time.

Giving the audience access to the creative process, and also a chance to communicate, is pretty much the exact opposite of a traditional classical concert, where I’d be on a stage, far away from the listeners.

I believe it’s possible to demystify music without dissecting it – it’s such fun to explain what’s happening while playing some of the most memorable songs on the planet. I think this kind of informal direct broadcast is a great proving ground until I have my own big-budget show, with a huge grand piano and some notable musical guests to riff with.

Until then, viewers who make the effort to interact and contribute positively are going to shape how this cast evolves. How exciting! What works, what doesn’t, what do people want more of? I’m finding out every day. I’d hope to keep the audience interactivity if a big TV company wants to fund the huge grand piano and notable musical guests version.

For those of you reading this on Thursday 9th April 2015, there’s a replay available until 22:30 tonight, but you’ll need to download the app on an Apple device to watch for the moment. They say there’s an Android version coming. And, if you do visit, please ignore my faffing with the cables at the start – it definitely gets better.

At about 1 AM this morning, I think I solved the problem of getting a decent audio feed in while listening at the same time, so Monday’s cast should have really rather good sound quality.

Oh, and a final note from the technology presenter in me – streaming from mobiles has been possible before; apps like Seesmic and Qik did this many years ago. But now data is cheaper, social media makes things more immediate, and our connections are generally faster. This means the tech is ripe for mass adoption.

A notable alternative, Meerkat, has some big names endorsing it – Madonna released a video on the platform recently. I’ll let you know if I get a chance to try it out, and I’m sure there are other players in this area. Over the coming months we’ll get to see whether a single platform gains dominance, or whether these apps can co-exist. Interesting interactive times!

In which I co-write a musical…

Date: Mon 23 March 2015. Event: My first ever full-length musical!

TICKETS AVAILABLE!

I’m super-excited to call myself a London Theatre Impresario! A date, a venue, a show, tickets on sale!

Scary, as I’m currently only about 70% of the way through writing every single note of the upcoming musical adventure, which will be held in conjunction with the inimitable Hack Circus in exactly one month’s time.

We’ve written a story to go with some expert talks, and some rather spiffing tunes as well, even if I say so myself. Leila happens to be a genius librettist, in my opinion, and I’m hoping my tunes and orchestration will do those fantastic lyrics justice. She and I have hatched a musical monster of a night out… What a nerve-wracking but satisfying experience this is!

TERRIFYING REALITY:

  • I need to finish building the scores and orchestration
  • we need to finalise the order of songs, talks and audience-interactive bits
  • I’ll have to learn and sing pretty much all the songs on the night
  • Leila and I have to pre-record some short bits of audio to give us a break on stage
  • I’m trying to build a home-made instrument that may or may not work on the night
  • eek – I need to do all the audio show control while performing (Ableton, I’m looking at you); and
  • people are actually buying tickets, so it has to be good – we have a paying audience.

ONE NIGHT ONLY

Talking of which: if you’re in London that evening AND want to be among the first to see this quite peculiar and creative take on the musical genre, read on. From the HC site:

We will be travelling in a unique sound-powered tunnelling vessel, currently under development. Please bear in mind: we really don’t know what we will find. We need a strong healthy team. It might be worth getting down to the gym now if you can.

Bring a torch. This is very important. We are expecting it to be dark.

We will be guided on our journey by three experts: monster aficionado and sci-fi author Chris Farnell, historian and volcano enthusiast Ralph Harrington, and shark-mad comics legend Steve White – but who knows who (or what) else we might encounter?

From: Hack Circus

TICKETS STILL AVAILABLE!

OK, I’m intrigued, tell me more:

hackcircus.com/underworlds/ 

(link for mobile users: http://www.hackcircus.com/underworlds )

DID I MENTION TICKETS ARE ON SALE?

Yes!! I’m ready and can’t think of a better way to spend a Monday night!

SHUT UP AND TAKE MY MONEY!!!

(link for mobile users: http://www.eventbrite.co.uk/e/hack-circus-underworlds-tickets-15756232315?aff=es2&rank=1 )

^^This link goes to the EventBrite Ticket Page if you’d like to buy a ticket or two.

**For some reason, none of my links work. Head over to hackcircus dot com forward slash underworlds. Oh the humanity…

The Sound of Stars

Music inspired by colour/art

An artist I know called Debbie Davies made a giant light-up interactive star for the Burning Man festival this year. I never got to see it in real life, but the pictures were enough to trigger the most amazing melody in me. I was so utterly happy to hear that she loved the composition. I can’t really explain why I wasn’t surprised.

Debbie, this is your star. I told you it was beautiful!

Photo credit: duncan.co

A great way to compose!

Modern Glitching – Auditory Enhancement of Reality with Music

I recently gave a talk at TEDxTokyo 2014 about a musical device I built with the aim of giving other people the chance to hear the world as musically as I do. To date, around fifty people have tried my mobile composing-inspiration rig with me – mostly with very enthusiastic responses afterwards, and in the more musical/auditory types there’s also a degree of joyful disorientation.

Some background on what I think is going on: around twenty years ago, psychology professor Diana Deutsch discovered what she called the speech-to-song illusion. Essentially, a spoken phrase repeated often enough starts to take on musical qualities. There’s a great Radiolab episode which explains the phenomenon.

For me, no repetition is required to hear spoken phrases as musical. Speech is intrinsically musical to me, and so is the rest of the world – from cars passing to people typing. I really wanted to share my experience, as I find it very beautiful.

 

STORY TIME!

It was the day after winning a prize at MusicTechFest’s Boston hackathon, which I took part in and filmed for the BBC. I was sharing a small apartment with a bunch of other music obsessives. The day before I left, instead of packing neatly as normal, I optimistically chucked everything I could see into my case and hoped for the best.

The idea of adjusting auditory experience – adding a ‘glitch’ to reality, at least in aural terms – is not new. But glitching with modern tech sounded like a great way to reveal the music I hear all the time, plus I wanted to add a more classical, compositional element to the practice.

Sean Manton and CJ Carr (who was already familiar with glitching) were two other music hackers I met at the hackathon. They were instrumental in my sleep-deprived electronic inspiration.

So, grabbing my iPad and headphone splitters, I built the first iteration of a device that messed with the ambient sound in the room, in real time, in a pleasurable manner. Raw audio was changed on the fly and enhanced with sound effects and, crucially, I added basic musical elements and phrases that would play simultaneously. A while later, I got the thing working in a way I liked and emerged from my room, eager to try it on other musical/technical people. My ideal system would let me sing and play melody and harmony over the top, but I was nowhere near doing that yet.
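
For a flavour of the underlying trick, here’s a minimal sketch of a real-time repetition ‘glitch’ in Python, assuming the sounddevice and numpy packages – this is not the original iPad rig, just an illustration of feeding ambient sound through a feedback delay so the room starts to repeat itself:

```python
# A minimal sketch of a real-time "glitch": capture ambient audio,
# feed it through a feedback delay so sounds repeat and slowly fade,
# and play the mix back live. Not the original iPad rig.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100
DELAY_SECONDS = 0.4   # spacing between repetitions
FEEDBACK = 0.6        # how slowly the repeats die away (0..1)

delay = np.zeros(int(SAMPLE_RATE * DELAY_SECONDS), dtype=np.float32)
pos = 0  # current position in the circular delay buffer

def callback(indata, outdata, frames, time, status):
    """Mix the live microphone signal with its own decaying echoes."""
    global pos
    mono = indata[:, 0]
    # Indices into the circular buffer for this block (assumes the
    # block is shorter than the delay buffer, true for default sizes).
    idx = (pos + np.arange(frames)) % len(delay)
    echoes = delay[idx]
    outdata[:, 0] = mono + echoes
    # Write the mix back so each sound repeats, quieter every pass.
    delay[idx] = mono + echoes * FEEDBACK
    pos = (pos + frames) % len(delay)

with sd.Stream(samplerate=SAMPLE_RATE, channels=1, callback=callback):
    print("Glitched in – press Enter to return to dry reality.")
    input()
```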

So, extra headphones bought, splitter in, it was time to try my rudimentary iPad device on CJ and Sean in a quiet teahouse. It was so much fun! The sounds of tea being made, the door opening, teaspoons hitting cups, amplified and enhanced by repetition! Those sounds were unexpected, made musical and wonderfully tingly. I sang along to the notes in the cafe to accentuate them. The staff at the teahouse got interested; all they could hear was us singing and hitting teaspoons and laughing. So we asked if they wanted to try it, then wired them in to see their response – they liked it. A lot.

 

Going mobile was more interesting – we were physically connected by our headphone cables, so it took a while to manoeuvre through the door, but together we emerged, wired up, out into the wild. And, once our headphones were in, we pretty much stayed ‘glitched in’ for at least five hours straight. I could hear the music I normally hear, but amplified! Wow! I sang in joyous harmony with the world for my cohorts, who gleefully joined in. An ear-opening experience indeed, and I expect we were a strange sight, connected together by cables, singing and swaying – especially as only we could hear the glorious harmonic results of our musical musings.

What followed: glitching around a bookshop, glitching through a delicious dinner at a noodle restaurant until we got chucked out at closing time, and (my favourite) glitching on public transport all over Boston. Some time during the evening I added a recorded drum loop to the experience – a low-tech but incredibly effective way to turn the world into a very funky soundtrack. Rhythm, along with harmony generated by reality, transformed a run-of-the-mill walk through a city into a musical recital!

Now, without our headphones in, the world seemed dry and desolate. And, after trying this on six other people with the persuasive line ‘Hey, you wanna do some digital drugs, guys?’ to gratifying results, it didn’t take long for us to ascertain that this was indeed a pleasurable and slightly psychedelic auditory experience – not only as a participant, but also as a listener. The three of us decided to take modern glitching further, with a bit more technological clout.

A quick stop on the way back to the hacker apartment meant we now had extra kit. And, by 0100, Sean had plugged his Raspberry Pi computer into the TV and was programming it with Pure Data. We made some tea and ate bread with the most delicious honey (the honey was in B flat major 6th) and kept working. By then it was 0300 and my taxi was due to arrive at 0615 – we only had a few hours left!

We all wanted to add fine-grained control to this strange and wonderful auditory experience. CJ had brought his FM transmitter and binaural microphone/headphones, and we plugged everything into my Mac. I wanted to do more than just sing the city – I wanted to play it too. That meant configuring something that could take multiple inputs: MIDI and audio at the same time.

Finally, at 0400, full of incredible quantities of tea, bread and honey, we were running a glitching instance in Ableton Live, with a binaural microphone/headphone setup and my iRig Keys MIDI controller hooked up. I started building musical stems right then and there.

The latest version does more than just repetition: my new glitching device can harmonise and play with the world in a much deeper way. I walk around a city first to work out what key it’s in, and compose something beautiful that goes with the natural sounds around me. Then I load those sounds up – I can then trigger them when I hear something in the right key, so a motorbike going past in B flat means I trigger my ‘B flat, traffic’ piano composition. The main problems are that the laptop gets really hot, and I’m covered in wires, so it looks a little strange.
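
As an illustration of that key-triggered idea (not the actual Ableton rig), here’s a rough Python sketch, assuming the sounddevice and soundfile packages and a hypothetical stems/ folder: estimate the dominant pitch class of a second of ambient sound, and if a stem was composed in that key, play it:

```python
# A rough sketch of key-triggered stems. The stems/ folder and file
# names are hypothetical; the real rig runs in Ableton Live.
import numpy as np
import sounddevice as sd
import soundfile as sf

SAMPLE_RATE = 44100
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

# Hypothetical pre-composed stems, one per pitch class we care about.
STEMS = {"A#": "stems/b_flat_traffic_piano.wav"}  # the 'B flat, traffic' piece

def dominant_pitch_class(block):
    """Very rough key guess: loudest FFT bin, converted to a note name."""
    windowed = block * np.hanning(len(block))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(block), 1.0 / SAMPLE_RATE)
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    if peak <= 0:
        return None
    midi = int(round(69 + 12 * np.log2(peak / 440.0)))
    return NOTE_NAMES[midi % 12]

while True:
    # Listen to one second of the world...
    block = sd.rec(SAMPLE_RATE, samplerate=SAMPLE_RATE, channels=1)
    sd.wait()
    note = dominant_pitch_class(block[:, 0])
    print("heard:", note)
    # ...and answer in kind if a stem was composed in that key.
    if note in STEMS:
        audio, sr = sf.read(STEMS[note])
        sd.play(audio, sr)
        sd.wait()
```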

And this is what glitching sounds like – some of these examples have music in, others don’t.

The tech is still very much hacked together, but there’s more documented in the talk.

 

WHAT HAPPENS NEXT?

CJ, Sean and I are all enthusiastic about sharing the joys of glitching – and we’re all working on versions of glitching devices. We’re hoping to create a resource online for anyone interested to play with the idea in their own way.  I’m going to list everything I use in my hacked-together inelegant solution in another post.

I want an app that does this! I want to create a glitchpad! Beautiful musical stems that trigger automatically when friends walk through a city with the app! I want to be invited to perform ‘glitching’ concerts in cities around the world!

(for reference, I’ve reposted the TEDxTokyo video here)

More on this story as it unfolds….

Living with perfect pitch and Synaesthesia – what it’s really like

I was at a party last week, and a fellow dinner guest asked me what having perfect pitch was actually like. They wanted to know if it was just knowing what an ‘A’ was – and whether it could be learned. They were musical and seemed genuinely interested – so I decided for once to give them the full, no-holds-barred explanation. It’s complicated, and I get asked this a lot, hence this post.

Yes, having perfect pitch includes knowing whether something is an A or an A flat – but that’s also the case for excellent relative pitch, which can be learned with time and effort. For me, perfect (or absolute) pitch is more than that.

With the caveat that this is my personal experience, and it might be different for fellow sufferers/carriers, this is how it feels for me to be a composer with perfect pitch.

The train from Gothenburg to Stockholm is in B major – the trees outside are beautiful. Composing on the train is a wonderful experience. I love the sound of the train and how it interacts with the landscapes. Trees and lakes and the sea are generally in major keys, so it feels uplifting and inspiring to me! This music was recorded in two takes – first the strings, then the piano sound.

 

Now, I’d like you to imagine you’re chatting with your conversation partner. But instead of speaking and hearing the words alone, each syllable they utter has a note, sometimes more than one. They speak in tunes and I can sing back their melody. Once I know them a little bit, I can play along to their words as they speak them, accompanying them on the piano as if they’re singing an operatic recitative. They drop a glass on the floor, it plays a particular melody as it hits the tiles. I’ll play that melody back – on a piano, on anything. I can accompany that melody with harmony, chords – or perhaps compose a variation on that melody – develop it into a stupendous symphony filled with strings, or play it back in the style of Chopin, Debussy or Bob Marley. That car horn beeps an F major chord, this kettle’s in A flat, some bedside lights get thrown out because they are out of tune with other appliances. I can play along to every song on the radio whether or not I’ve heard it before, the chord progressions as open to me as if I had the sheet music in front of me. I can play other songs with the same chords and fit them with the song being played. Those bath taps squeak in E, this person sneezes in E flat. That printer’s in D mostly. The microwave is in the same key as the washing machine.

 

For me, perfect pitch is not about knowing notes; it’s about living in a world where everyone resonates and everything has music. Everything and everyone weaves together a fantastic audio symphony that I have no choice but to absorb. And if there isn’t music somewhere, I’ll add it in: if someone is walking along a station platform, I’ll almost unconsciously compose a tune fitting their footsteps – generally in the same key as the resonance of the station. While hearing a piece of music, I generally imagine a counter-melody to complement the melody everyone else hears. It was only on talking to my friend Jonathan in Boston that I found out these things are not typical.

It’s odd – though I’m a composer by nature, I also love encoding the melodies and harmonies I hear in music for other people to appreciate: for example, the sound of a sunrise, or the beautiful noise of an aeroplane at 33,000 feet in D major.

I recently live-composed some classical music for each of my friend’s children at a naming ceremony. They had very different personalities and I captured them each in a short musical piece recorded for posterity. It never fails to amaze me how many people agree with my musical perception of someone – even someone young! There must be something I pick up on that is there, intrinsically, inside everyone.

When I taste things, I also hear music, mainly chords – sugar and desserts are almost always in a major key, and chocolate and coffee are particularly complex sounds, with overtones and harmonics. I love broccoli and cauliflower, which are a cycle of fifths. Sushi tastes like power chords on an acoustic guitar. Lemon meringue pie is a concoction of A major chords and inversions, 7ths and minors. I’ve ‘played’ tastes to a bunch of very gifted musicians who agreed with my interpretation of doughnuts, eggs and the like. I love delicious food mainly because of the pleasurable sounds it generates for me. Roller coaster rides also kick my synaesthesia into overdrive – oh, the harmonies and melodies of weightlessness and acceleration! I’d love to live-compose in variable weight conditions like that!

For me every single piece of life is flooded with sound – so much so that I didn’t realise for many years that this is not the case for everyone.

Auditory is most certainly my main sense.

 

Finally a few other strange characteristics that may or may not be attributable to perfect pitch – listed below in case any fellow perfect-pitchers would like to add their comments!

Picking up a language is easy – once you hear which notes people associate with particular things, it’s generally just a question of working out which scale they are using. My grammar is almost always terrible, but I’ll pick up vocabulary quickly. Optimism and the desire to communicate take over once I’ve decoded where words begin and end. For me, a laugh is almost always in a major key and crying is almost always in a minor key, regardless of language.

Other traits!

  • difficulty recognising people visually – especially if I meet them again out of the original context. I can however (given enough of a sample) recognise people by gait or voice.
  • hardly ever get motion sickness
  • great sense of direction
  • rather clumsy if I’m not paying attention
  • ultra-high scoring spatial awareness and pattern recognition skills – but also incredibly unobservant in everyday situations
  • I’m rather good at opening locks!

Hopefully I’ve given you an insight into the condition. It’s clear to me now that we all encode the world differently – and in my case very intensively and musically. Though I feel surprisingly vulnerable sharing these thoughts with the wider world, it’s also a pleasure to finally explain what it’s like to have perfect pitch.

 

Music = sometimes better than words

(cross post from my soundcloud account)

 

Right now, a few people I really like are going through some tough times. I sometimes find it comparatively difficult to express my emotions in words – even with the richness of language, there are times when words are a poor substitute for music, a superconductor of emotion and meaning.

I truly know how it feels when no-one can reach me, and I don’t want to be reached. So this is for you, and anyone else currently undergoing adversity. I think that even if you, like me, are experienced in how to deal with hardship, it doesn’t make it any easier when it happens.

This composition represents, to me, wordless, deep support for my friends, and also for myself. It’s reassurance that even in the darkest of situations, where it might feel bleak and desolate, grey and hopeless – after some time has passed, a glimmer of a smile, a glance of understanding, a random act of kindness from a stranger could be all that’s needed to transform that grey world into something more habitable – infusing it, finally, with much-missed slivers of light and colour.

This music was live-composed in just one take.