I’ve recently started playing with an app called Periscope, using it to give interactive music concerts from my stage piano.
My ‘cast’ (if that’s what it’s called?) includes talks on music theory, breaking down similarities in familiar tunes and of course playing the odd request – like a classical version of Michael Jackson’s ‘Human Nature’ – or a mashup of Journey’s ‘Don’t Stop Believing’ and The Commodores’ ‘Easy Like Sunday Morning’.
It’s really fun – and my use case seems to be unique enough to have earned a mention in the Daily Telegraph‘s round-up of the technology.
So, what do I get up to when I cast? You can find out on Mondays at 20:30 UK time! At least, that’s the plan…
Essentially I’m talking about classical music theory using contemporary tunes – why is something catchy? What songs sound similar? What bits make a tune feel good? The session is mixed with live composition and conversation – content creation and audience interaction in real time.
Giving the audience access to the creative process and also a chance to communicate is pretty much exactly opposite to a traditional classical concert, where I’d be on a stage, far away from the listeners.
I believe it’s possible to demystify music without dissecting it – it’s so much fun to explain what’s happening while playing some of the most memorable songs on the planet. I think this kind of informal direct broadcast is a great proving ground until I have my own big budget show where I have a huge grand piano and some notable musical guests to riff with.
Until then, viewers who make the effort to interact and contribute positively are going to shape how this cast evolves. How exciting! What works, what doesn’t, what do people want more of? I’m finding out every day. I’d hope to keep the audience interactivity if a big TV company wants to fund the huge grand piano and notable musical guests version.
For those of you reading this on Thursday 9th April 2015, there’s a replay available until 22:30 tonight, but you’ll need to download the app on an Apple device to watch at the moment. They say there’s an Android version coming. And, if you do visit, please ignore my faffing with the cables at the start, it definitely gets better.
At about 1AM this morning, I think I solved the problem of getting a decent audio feed in and listening at the same time, so Monday’s cast should have really rather good sound quality.
Oh, and a final note from the technology presenter in me – streaming from mobiles isn’t new; apps like Seesmic and Qik did this many years ago. But now data is cheaper, social media makes things more immediate, and our connections are generally faster. This means the tech is ripe for mass adoption.
A notable alternative, Meerkat, has some big names endorsing it – Madonna released a video on that platform recently. I’ll let you know if I get a chance to try it out. And I’m sure there are other players in this area. Over the coming months we’ll get to see whether a single platform gains dominance, or whether these apps can co-exist. Interesting interactive times!
I recently gave a talk at TEDxTokyo 2014 about a musical device I built with the aim of giving other people the chance to hear the world as musically as I do. To date, around fifty people have tried my mobile composing inspiration rig with me – mostly with very enthusiastic responses afterwards, and in the more musical/auditory types there’s also a degree of joyful disorientation.
Some of the background to what I think is going on: Around twenty years ago, psychology professor Diana Deutsch discovered what she called the Speech to Song illusion. Essentially, a spoken phrase repeated often enough starts to take on musical qualities. There’s a great Radiolab episode which explains the phenomenon.
For me, I don’t require repetition in order to hear spoken phrases as musical. Speech is intrinsically musical for me, and so is the rest of the world – from cars passing to people typing. I really wanted to share my experience as I find it very beautiful.
The idea of adjusting auditory experience or adding a ‘Glitch’ to reality – at least in aural terms – is not a new process. But glitching with modern tech sounded like a great way to reveal the music I hear all the time – plus I wanted to add a more musical classical compositional element to the practice.
Sean Manton and CJ Carr (who was familiar with glitching) were two other music hackers I met at the Hackathon. They were instrumental in my sleep-deprived electronic inspiration.
So, grabbing my iPad and headphone splitters, I built the first iteration of a device that messed with the ambient sound in the room in real time in a pleasurable manner. Raw audio is changed in real time and enhanced with sound effects, and crucially, I added basic musical elements and phrases that would play simultaneously. A while later, I got the thing working in a way I liked and emerged from my room, eager to try it on other musical / technical people. My ideal system would allow me to sing and play melody and harmony, but I was nowhere near doing that yet.
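For the curious, the core of the repetition effect can be sketched in a few lines. This is a minimal illustration, not the actual device: it assumes audio arrives as a numpy array of samples, and simply mixes decaying, delayed copies of a sound over itself – which is roughly how a transient noise (a teaspoon hitting a cup, say) starts to feel rhythmic rather than incidental.

```python
import numpy as np

def repeat_glitch(samples, rate, delay_s=0.3, repeats=3, decay=0.6):
    """Mix decaying, delayed copies of the input over itself,
    so a one-off sound echoes rhythmically instead of vanishing."""
    delay = int(delay_s * rate)
    out = np.zeros(len(samples) + delay * repeats)
    out[:len(samples)] += samples          # the original sound
    gain = 1.0
    for i in range(1, repeats + 1):
        gain *= decay                      # each repeat is quieter
        start = i * delay
        out[start:start + len(samples)] += gain * samples
    return out

# A single click (impulse) at an 8 kHz sample rate becomes three
# evenly spaced, fading clicks - the simplest possible 'glitch'.
rate = 8000
click = np.zeros(rate // 2)
click[0] = 1.0
echoed = repeat_glitch(click, rate, delay_s=0.25, repeats=2, decay=0.5)
```

In practice the real-time version would run this continuously over small buffers from the microphone rather than over a finished array, but the delay-and-decay idea is the same.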
So, extra headphones bought, splitter in, time to try my rudimentary iPad device on CJ and Sean in a quiet teahouse. It was so much fun! The sounds of tea being made, the door opening, teaspoons hitting cups, amplified and enhanced by repetition! Those sounds were unexpected, made musical and wonderfully tingly. I sang along to the notes in the cafe to accentuate them. The staff at the teahouse got interested – all they could hear was us singing and hitting teaspoons and laughing. So we asked if they wanted to try it, then wired them in to see their response – they liked it – a lot.
Going mobile was more interesting – we were physically connected by our headphone cables, so it took a while to manoeuvre through the door but together we emerged, wired up, out into the wild. And, once our headphones were in, we pretty much stayed ‘glitched in’ for at least 5 hours straight. I could hear the music I normally hear but amplified! Wow! I sang in joyous harmony with the world for my cohorts, who joyously joined in. An ear-opening experience indeed, and I expect we were a strange sight, connected together by cables, singing and swaying – especially as only we could hear the glorious harmonic results of our musical musings.
What followed: glitching around a bookshop, glitching through a delicious dinner at a noodle restaurant until we got chucked out at closing time – and (my favourite) glitching on public transport all over Boston. Some time during the evening, I added a recorded drum loop to the experience – a low-tech but incredibly effective way to turn the world into a very funky soundtrack. Rhythm, along with the harmony generated by reality, transformed a run-of-the-mill walk through a city into a musical recital!
Now, without our headphones in, the world seemed dry and desolate. And, after trying this on six other people with the persuasive line ‘Hey, you wanna do some digital drugs, guys?’ to gratifying results, it didn’t take long for us to ascertain this was indeed a pleasurable and slightly psychedelic auditory experience – not only as a participant, but also as a listener. The three of us decided to take modern glitching further with a bit more technological clout.
A quick stop on the way back to the hacker apartment meant we now had extra kit. And, by 0100, Sean had plugged his Raspberry Pi computer into the TV – programming on the Pi with PureData. We made some tea and ate bread with the most delicious honey (the honey was in Bb major 6th) and kept working. By then it was 0300 and my taxi was due to arrive at 0615, we only had a few hours left!
We all wanted to add fine-grain control to this strange and wonderful auditory experience. CJ had brought his FM transmitter and binaural microphone/headphones, and we plugged everything into my Mac. I wanted to do more than just sing the city – I wanted to play it too. That meant configuring something that could take multiple inputs – MIDI and audio at the same time.
Finally at 0400, and full of incredible quantities of tea, bread and honey, we were now running a glitching instance on Ableton Live, with a binaural microphone / headphone setup and my iRig Keys midi controller hooked up. I started building musical stems right then and there.
The latest version does more than just repetition: my new glitching device can harmonise and play with the world in a much deeper way. I walk around a city first to work out what key it’s in, and compose something beautiful that goes with the natural sounds around me. Then I load those sounds up, ready to trigger when I hear something in the right key – so a motorbike going past in B flat means I trigger my ‘B flat, traffic’ piano composition. The main problems are that the laptop gets really hot and that I’m covered in wires, so it all looks a little strange.
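The ‘listen for the right key’ step can be approximated quite simply. Here’s a hedged sketch of one way to do it – find the loudest frequency in a chunk of audio with an FFT, convert it to a pitch class, and look it up in a table of prepared stems. The stem file name and the mapping are purely illustrative, not what I actually use:

```python
import numpy as np

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F',
              'F#', 'G', 'G#', 'A', 'A#', 'B']

def dominant_pitch_class(samples, rate):
    """Estimate the loudest frequency via FFT and map it to a
    pitch class (C..B), skipping the DC bin."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    peak = np.argmax(spectrum[1:]) + 1
    # MIDI note numbering: 69 = A4 = 440 Hz
    midi = int(round(69 + 12 * np.log2(freqs[peak] / 440.0)))
    return NOTE_NAMES[midi % 12]

# Hypothetical mapping from detected pitch class to a prepared stem.
STEMS = {'A#': 'bflat_traffic_piano.wav'}  # file name is made up

# Simulate one second of a motorbike drone near Bb3 (~233.08 Hz).
rate = 44100
t = np.arange(rate) / rate
motorbike = np.sin(2 * np.pi * 233.08 * t)
key = dominant_pitch_class(motorbike, rate)
stem = STEMS.get(key)   # the stem we'd cue up, if any
</n```

A real street is far messier than a single sine wave, of course – the actual rig leans on ears and musical judgement far more than on an FFT bin.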
And this is what glitching sounds like – some of these examples have music in, others don’t.
The tech is still very much hacked together, but there’s more documented in the talk.
WHAT HAPPENS NEXT?
CJ, Sean and I are all enthusiastic about sharing the joys of glitching – and we’re all working on versions of glitching devices. We’re hoping to create a resource online for anyone interested to play with the idea in their own way. I’m going to list everything I use in my hacked-together inelegant solution in another post.
I want an app that does this! I want to create a glitchpad! Beautiful musical stems that trigger automatically when friends walk through that city with this app! I want to be invited to perform ‘glitching’ concerts in cities around the world!
(for reference, I’ve reposted the TEDxTokyo video here)
So, I hosted a concert last month with the National Orchestra of Wales at St David’s Hall, Cardiff. It was amazing fun. After the event, I snuck onto the rather lovely grand piano to give my fingers a bit of exercise. Here’s my classical version of David Bowie’s ‘Changes’, a great song in any genre. Impromptu filming by Martin Daws, Young People’s Laureate for Wales, on his mobile.
I do rather love taking music and giving it a classical twist – I like to call the process ‘Classifying’.
Of course this music is copyright David Bowie – you can buy the original through this link
Right now, a few people I really like are going through some tough times. I sometimes find it comparatively difficult to express my emotions in words – even through the richness of language there are times when words are a poor substitute for music – a super-conductor of emotion and meaning.
I truly know how it feels when no-one can reach me, and I don’t want to be reached. So this is for you, and anyone else currently undergoing adversity. I think that even if you, like me, are experienced in how to deal with hardship, it doesn’t make it any easier when it happens.
This composition to me represents wordless, deep support to my friends, and also to myself. It’s reassurance that even in the most dark of situations, where it might feel bleak and desolate, grey and hopeless – after some time has passed, a glimmer of a smile, a glance of understanding, a random act of kindness from a stranger could be all that’s needed to transform that grey world into something more habitable – infusing it, finally, with much-missed slivers of light and colour.
There’s a composition I’ve had in my head for a while now. It refused to be recorded because I was unhappy when I made a mistake, missed the click track or couldn’t get the timing right – I gave myself many, many reasons not to capture it.
However, this morning all bets are off. I woke up at 6.00am realising that if I didn’t get this down, the perfect version of this beautiful tune would just remain contained in my head. I lay in bed for a few more hours before breaking ranks and hitting record here in my studio. So, here’s an imperfect version of the wonderful music that is never far from my world, complete with mistakes, fluffs and stuff I would like to change. Also, because the track is so complex, the MIDI is glitching in parts! The song has inspired me to get more RAM or, if necessary, a new machine.
I have decided the fluffs and mistakes make the song alive – it’s OK that we have scars, it shows we have experienced the world and it’s left its mark on us. I feel as if I have let out a huge breath I didn’t even know I was holding.
It was recorded in just one take, and I’m going to post it before I change my mind.
More than anything I want to perform my live compositions at a beautiful piano in a wonderful recital hall. I would create musical pieces, and as each composition continued, the music would be inspired by how the audience responded – a fabulous musical feedback loop!
At a recent event, the Music Tech Fest in Boston, I took part in a 24-hour hackathon.
Ostensibly I was filming it for the show I present on, BBC Click. But as well as recording it for the programme, the experience and the people I met that weekend left a deep and lasting impression on me.
For the first time, I was surrounded by those who live comfortably in the centre of the Venn diagram of music and technology – I found it an incredibly nourishing few days. I was able to talk openly about my synaesthesia and the very sensitive musical side of me that I don’t normally talk about during my day job. That’s since changed – this week’s show is all about music technology.
Plus I finally had the guts to do some live composition in front of people I hardly knew – and their response was incredibly positive, which led me down the path of putting live-composed piano music up on the web.
Each piece of music was recorded live, in just one take.
For years I spent a lot of time fixing every little thing in my compositions – a bit like proof-reading a book for spelling mistakes. But here I’ve deliberately left the mistakes in; this goes up completely untouched – I don’t even record to a click track. And actually, it feels exposed and fantastic all at once to send this out to the world.