Music-Powered Spaceship

Last Sunday I took part in a very unconventional story/experiential theatre event run by a friend of mine. I live-composed the music. The event took an audience on a fictional journey into space. Then (whilst in that context) they were given amazing lectures by a real rocket scientist (Dr David McKeown) and artists (Toby Harris, Sinead McDonald, Jeffrey Roe and more). The talks were diverse but relevant. Imagine hearing what space travel would feel like – as if you and the whole theatre were in fact in a spaceship, travelling through space. Kate Genevieve, a visual artist, talked about the messages sent with the Voyager space probe. A man from SETI (Alan Penny) informed us of the best way to survive first contact, in a suitably realist approach. There was more, but I’ll get to that nearer the end.

And I? I powered the spaceship with music.

It’s because of Leila Johnston – Hack Circus is her thing. She asked if I would like to create music to simulate ‘hypersleep’ during extended space travel. I went one step further and wrote the following email reply…

I have a great job of being the hyperspace engineer – the piano keyboard is in fact my console muwahaha

Leila responded with:

Oh I love that. Yes! Play us into hyperspace! What a lovely lovely idea.

This really captured my imagination, so much so that I appeared to send the following response:

Yes, the equations are quite complicated to most people. But hyperspace mathematics calculations actually have more in common with Bach fugues than physics – it turns out those weren’t musical pieces but formulae all along. A fantastically complicated spatial equation can be surprisingly easy to solve musically, which is why the keyboard is my usual preference for transport consoles. Bach was a hyperspace engineer from the future who got stuck in a time travel incident. Before he got transferred back to his timeline he enjoyed annotating his equations in musical form and confusing the natives.

Though the sustain pedal just puts the kettle on

I found this in my ‘sent’ items the next morning, and hurriedly dashed off an apology for sleep-emailing. Clearly I had really taken my role as Hyperspace Engineer to heart.

For some reason this didn’t put her off, and the event was quite literally a blast.

There were cabin crew. There were flashing lights. There was hazard tape. Dr Lewis Dartnell, an astrobiologist, played some amazing sounds from space that triggered my synaesthesia like you wouldn’t believe.

The original sound of Saturn’s rings, courtesy of the Cassini Radio & Plasma Wave Science team:
“…the sounds produced are exciting! You can listen to the sound of passing through the ring dust by clicking here.” (NB the ‘Listen’ link opens up a video file)

Wow! What magical unearthly sounds! What a weird recording! I had to share how wonderful it felt to absorb these strange vibrations! I attempted to convey my synaesthetic response to the sound of Saturn’s rings – what I hear when I hear them…  and this is the result.

 

For those interested, here’s the mission page. Off topic, I’m joyous to report that the Sun in our solar system plays a giant Major 7th.

I won at the MusicTechFest London Hackathon 2014!

THE CHALLENGE

MTV: Hack The Gig: Music fans’ current digital interface with live performances is generally passive and flat. Using the provided exclusive stems, recorded especially for the Music Tech Fest, invent completely new ways for audiences to experience a filmed concert. The winner will get to develop their project with the MTV UK Digital Media team.

  

MY ENTRY (it won!)

I had 24 hours to create something – and this is what I made:

Absorb more data in less time! Welcome to my ADHDTV concept (Always Deliver Heavy Data*)! Innovative ways to consume content – especially music content – with a real emphasis on the viewer/listener being able to directly influence what they are shown.

It’s a bit rough and ready (anything is with 24 hours’ continuous work) and of course these are just concepts that would benefit from a design approach and a bit more development of these ideas. The prize: develop this further with the MTV team, something I’m really excited about.

My concept (which hit me at 0330 after much caffeine) consists of the following premise:

We are bombarded with information and there’s loads of ways to filter it. But what if we want to have it all?  Watch everything, listen to everything? Why not say goodbye to curation and discretion, and instead drink from the firehose – and say hello to

 

Always Deliver Heavy Data TV or ADHDTV*

 

As it’s aimed at the ultra-connected, two-screen viewing ‘high consumption on mobile’ uber-trendy crowd, I’ve also come up with a hipster term that makes me both proud and disgusted with myself.  “Non-Linear Sideways-Relevant Transmedia”.

The basic premise is that the viewer is able to take a much more active role in the content they are served. This can be done through the following interactive modes. I mocked up each of these in Final Cut Express on my rather wobbly MacBook Pro. For the audio manipulation I used Logic Studio 9.

 

VIEWER MODES

ALL KILLER
Press a button to go straight to the chorus / hook.  

INVERSE DJ / BLURRED LINES
If the artist is being interviewed and you like the background music, toggle the volume between the interview and the music underneath. Could even swipe up or press a button (say, the spacebar) to transition to the music video attached to the background music.

OVERLOAD ME
Plays at 1.5, 2 and 4x speed without changing pitch. Listen to ALL the music!

PEER PLEASURE
Bits that other people have marked as good, much like highlighting a favourite section of text. People can skip to the bit that most people marked as pleasurable. 

EXTRAS SERVED instead of Searching
As well as songs by the artist, the option to switch to remixes, songs that sound really similar, or fan-made footage and tributes on YouTube – but in time (quantised to the beat), so the listener doesn’t experience ‘jarring’.

ANTISOCIAL
With this option engaged, viewers will only see clips of the artist talking about their music – any mention of social media, events, fan interaction or non-musical content won’t get viewed. Can also work the other way around. 

NB MTV provided the video: the artist’s name is Kwabs.
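The ‘quantised to the beat’ switching in EXTRAS SERVED can be sketched in a few lines. This is an illustrative sketch, not the hack’s actual code – it assumes a constant tempo, and the function name is made up:

```python
import math

# Schedule a content switch (e.g. jumping to a remix) for the next
# bar boundary rather than cutting immediately, so the listener
# doesn't experience 'jarring'. Assumes a constant tempo throughout.

def next_switch_time(position_s, bpm, beats_per_bar=4):
    """Return the playback time (in seconds) of the next bar boundary."""
    bar_len = (60.0 / bpm) * beats_per_bar   # seconds per bar
    bars_elapsed = position_s / bar_len
    return math.ceil(bars_elapsed) * bar_len
```

At 120 bpm in 4/4, a switch requested 10.1 seconds in would actually happen at 12 seconds – the start of the next bar.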

Links:

http://www.musictechfest.org
http://www.mtv.com/uk
http://www.kwabsmusic.com

*I am reliably informed by Adam John Williams that this is known as a ‘Backronym’ 

Remote Controlling a Digger using Virtual Reality

It really did feel like I was ‘inside the machine’ even though the resolution was low!

I composed the music especially for this feature; there is something very pleasing about a digger in C.

Full story here: http://www.bbc.co.uk/news/technology-28425844

Hack Circus / Synaesthesia Interview!

Click here to listen to an interview about my synaesthesia and music composing! This was for the wonderful Leila Johnston’s Hack Circus podcast …

And I scored some of the conversation!

Here’s the music on its own, which turned out rather nicely.

a great way to compose!

Modern Glitching – Auditory Enhancement of Reality with Music

I recently gave a talk at TEDxTokyo 2014 about a musical device I built with the aim of giving other people the chance to hear the world as musically as I do. To date, around fifty people have tried my mobile composing inspiration rig with me – mostly with very enthusiastic responses afterwards, and in the more musical/auditory types there’s also a degree of joyful disorientation.

Some of the background to what I think is going on: Around twenty years ago, psychology professor Diana Deutsch discovered what she called the Speech to Song illusion. Essentially, a spoken phrase repeated often enough starts to take on musical qualities. There’s a great Radiolab episode which explains the phenomenon.

For me, I don’t require repetition in order to hear spoken phrases as musical. Speech is intrinsically musical for me, and so is the rest of the world – from cars passing to people typing. I really wanted to share my experience as I find it very beautiful.

 

STORY TIME!

It was the day after winning a prize at MusicTechFest’s Boston Hackathon event, which I took part in and filmed for the BBC. I was sharing a small apartment with a bunch of other music obsessives. The day before I left, instead of packing neatly as normal, I optimistically chucked everything I could see into my case and hoped for the best.

The idea of adjusting auditory experience or adding a ‘Glitch’ to reality – at least in aural terms – is not a new process.  But glitching with modern tech sounded like a great way to reveal the music I hear all the time – plus I wanted to add a more musical classical compositional element to the practice.

Sean Manton and CJ Carr (who was familiar with glitching) were two other music hackers I met at the Hackathon. They were instrumental in my sleep-deprived electronic inspiration.

So, grabbing my iPad and headphone splitters, I built the first iteration of a device that plays with the ambient sound of the room in a pleasurable manner: raw audio is changed in real time and enhanced with sound effects, and, crucially, I added basic musical elements and phrases that play simultaneously. A while later, I got the thing working in a way I liked and emerged from my room, eager to try it on other musical/technical people. My ideal system would let me sing and play melody and harmony, but I was nowhere near that yet.

So, extra headphones bought, splitter in, time to try my rudimentary iPad device on CJ and Sean in a quiet teahouse. It was so much fun! The sounds of tea being made, the door opening, teaspoons hitting cups, amplified and enhanced by repetition! Those sounds were unexpected, made musical and wonderfully tingly. I sang along to the notes in the cafe to accentuate them. The staff at the teahouse got interested – all they could hear was us singing, hitting teaspoons and laughing. So we asked if they wanted to try it, then wired them in to see their response – they liked it, a lot.

 

Going mobile was more interesting – we were physically connected by our headphone cables, so it took a while to manoeuvre through the door, but together we emerged, wired up, out into the wild. And, once our headphones were in, we pretty much stayed ‘glitched in’ for at least 5 hours straight. I could hear the music I normally hear, but amplified! Wow! I sang in joyous harmony with the world for my cohorts, who happily joined in. An ear-opening experience indeed, and I expect we were a strange sight, connected together by cables, singing and swaying – especially as only we could hear the glorious harmonic results of our musical musings.

What followed: glitching around a bookshop, glitching through a delicious dinner at a noodle restaurant until we got chucked out at closing time – and (my favourite) glitching on public transport all over Boston. Some time during the evening, I added a recorded drum loop to the experience – a low-tech but incredibly effective way to turn the world into a very funky soundtrack. Rhythm, along with harmony generated by reality, transformed a run-of-the-mill walk through a city into a musical recital!

Now, without our headphones in, the world seemed dry and desolate. And, after trying this on six other people with the persuasive line ‘Hey, you wanna do some digital drugs, guys?’ with gratifying results, it didn’t take long for us to ascertain this was indeed a pleasurable and slightly psychedelic auditory experience – not only as a participant, but also as a listener. The three of us decided to take modern glitching further with a bit more technological clout.

A quick stop on the way back to the hacker apartment meant we now had extra kit. And, by 0100, Sean had plugged his Raspberry Pi computer into the TV – programming on the Pi with Pure Data. We made some tea and ate bread with the most delicious honey (the honey was in Bb major 6th) and kept working. By then it was 0300 and my taxi was due to arrive at 0615 – we only had a few hours left!

We all wanted to add fine-grain control to this strange and wonderful auditory experience. CJ had brought his FM transmitter and binaural microphone/headphones, and we plugged everything into my Mac. I wanted to do more than just sing the city – I wanted to play it too. That meant configuring something that could take multiple inputs – MIDI and audio at the same time.

Finally at 0400, and full of incredible quantities of tea, bread and honey, we were now running a glitching instance on Ableton Live, with a binaural microphone / headphone setup and my iRig Keys midi controller hooked up. I started building musical stems right then and there.

The latest version does more than just repetition: my new glitching device can harmonise and play with the world in a much deeper way. I walk around a city first to work out what key it’s in, and compose something beautiful that goes with the natural sounds around me. Then I load those sounds up, and I can trigger them when I hear something in the right key – so a motorbike going past in B flat means I trigger my ‘B flat, traffic’ piano composition. The main problem is that the laptop gets really hot; also, I’m covered in wires, so it looks a little strange.
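A minimal sketch of that key-matching trigger idea, assuming pitch detection is handled elsewhere – pitch classes follow the MIDI convention (C = 0 … B = 11), and the function names are illustrative rather than the actual rig:

```python
# Fire a pre-composed stem when a detected ambient pitch fits one of
# the keys I've composed for. Pitch classes: C=0, C#=1, ... B=11.

MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}   # semitone offsets from the tonic

def in_key(pitch_class, tonic):
    """True if the pitch class belongs to the major scale on `tonic`."""
    return (pitch_class - tonic) % 12 in MAJOR_SCALE

def stem_for(pitch_class, stems):
    """Return the first stem whose key contains the detected pitch."""
    for tonic, name in stems.items():
        if in_key(pitch_class, tonic):
            return name
    return None   # nothing composed for this pitch yet
```

So with `{10: 'B flat, traffic'}` loaded, a motorbike detected at B flat (pitch class 10) fires the ‘B flat, traffic’ stem, while a B natural (11) triggers nothing.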

And this is what glitching sounds like – some of these examples have music in, others don’t.

The tech is still very much hacked together, but there’s more documented in the talk.

 

WHAT HAPPENS NEXT?

CJ, Sean and I are all enthusiastic about sharing the joys of glitching – and we’re all working on versions of glitching devices. We’re hoping to create a resource online for anyone interested to play with the idea in their own way.  I’m going to list everything I use in my hacked-together inelegant solution in another post.

I want an app that does this! I want to create a glitchpad! Beautiful musical stems that trigger automatically when friends walk through that city with this app! I want to be invited to perform ‘glitching’ concerts in cities around the world!

(for reference, I’ve reposted the TEDxTokyo video here)

More on this story as it unfolds….

Musical CV: About me 2014

So, I’m a freelance presenter and music composer/hacker.  I do a lot for Click, the BBC’s tech show but I’ve also hosted BBC Orchestra events and most recently hosted my first Radio 3 show, which was great fun. I love doing projects where music and technology meet, so any excuse to do more is fallen upon with great joy.

These are the things I love.

1)     Music composition and performance –  I do a lot of classical piano and orchestral composition – including spontaneous classical piano composition in pretty much any style.  It just comes out like that, I can’t explain it, but I’m OK with showing it off now.  I really enjoy giving live recitals! https://soundcloud.com/ljrich/140420-flying-through-colour  – recently performed at BBC NBH much to the surprise of some of my work colleagues…

Here’s an informal performance from a few weeks ago:

 

2)      As well as presenting on TV (hard work but lots of fun) I enjoy hosting live events – a few weeks back I had the fabulous experience of hosting a classical orchestral concert in which the National Orchestra of Wales played the Doctor Who theme. I also give keynote speeches on technology and social trends. I grew the @BBCClick twitter account to nearly 2 million followers, so I used to give talks about how to do that – until I realised it’s much more fun to talk about future trends and music innovation, and host events instead.

 

3)      Music hacking – tech/music innovation  – I filmed a feature for the BBC in Boston which involved entering MusicTechFest‘s Hackathon competition and staying up for 24 hours – I won one of the top prizes! http://www.bbc.co.uk/news/technology-27067106

 

4)      The two things I liked most about my music degree were composition and critical music analysis. I do like explaining why songs work and sound good…  music theory, but with a contemporary twist. Here’s a radio pilot I made a while back

 

5)      I recently gave a talk at TEDxTokyo 2014 about a musical device I built with the aim of giving other people the chance to hear the world like I do. I built the first iteration of the device in my room while sharing a tiny apartment with a bunch of other music obsessives – the process is ‘Glitching’ – not a new technique, but certainly easier to do with today’s tech. I’ve augmented traditional glitching with musical inserts based on what key the world is in. People who try it report the practice as a pleasurable and slightly psychedelic auditory experience. More of the story is documented in the talk, and I’m working on an epic blog post which explains a lot more. I love classical composing in the wild! I want to do ‘glitching’ concerts in cities around the world!

 

6)      I’m very interested in new musical interfaces and software synthesisers too – these deserve their own blog post.

 

7)     And Finally… here’s a link to even more BBC stuff I get up to, and here’s a link to loads of free music I’ve composed.

 

Democratic Dance Music – an idea

EDIT: This is now ACTUALLY happening at the Science Museum! I have enlisted the help of Adam John Williams, Robert Wollner and Emi Mitchell to make this work. 

I devised the experiment (the first iteration written below) and I’ve also worked out the data points for collection. I’m composing the majority of music stems that will make up the musical segment of the feature.

Robert Wollner is creating a computer program that will let dancers enter data through their mobile phones. That data gets passed on to Adam and Emi. 

Adam is creating a live music computer program that will generate dance music based on my music stems and data from Rob’s program. Emi is working on visual display and how people are going to interact with their phones. 

As if that wasn’t enough, this is also going to be filmed by BBC Click!

Want to come along? Click Here for Science Museum Session Details. The earlier sessions will be much easier to get into. Just turn up 5 minutes before each time.

 

ORIGINAL BLOG POST FOLLOWS:

Who knows best about augmenting musical experience? The musician or the listener? I want to work out exactly the specifications for the perfect dance anthem with the help of the people on the dance floor.

Traditionally the DJ is expected to steer an audience into emotional rapture during their set. They decide whether to play a fast-paced, highly orchestrated sequence or a slow, textural ambient section. They are driving the experience, as it were. But might it be possible to determine how to make the ultimate ‘tingle-generating’ feel-good floor filler by gathering data directly from the audience? After all, they are the most emotionally invested in the experience.

There is some tech around already, I think – some bracelets log the audience’s passive response and biometric data, which is really cool – but I’m interested in what happens if we introduce conscious participation, so a simple button press would be all that’s required. It would be based entirely on someone’s conscious experience of the music. Then we could gather data based on the results!

 

EXPERIMENT

For this to work we’d need to generate live responsive dance music.

While dancing, each audience member/participant holds a ‘voting’ button. EDIT: this is now your smartphone!

Each person presses the button when they wish for the music to become more intense.

I’ve chosen 80% – but that number isn’t that important as long as it’s a clear majority. It would work like this: when 80% of the audience have pressed the button, it would indicate to the composers/performers that now is the time for ‘the drop’, i.e. adding greater orchestration, much to the pleasure of the listeners – in the case of dance music, this is when more drums, synths and particularly bass come in.

So, when 80% of the audience want the drop, 100% of the audience get it. I would be interested in finding out which group feels the most pleasurable response – the first 80%, who have asked for it, or the last 20%, who won’t be expecting it. And the final person pressing the button would get the full effect of the music being responsive to their request!

My thought would be to run an experiment in three parts:

  1.   No interaction at all.
  2.   Interaction but no feedback: i.e. the audience cannot see how close they are to the 80% required to trigger the drop, so there is no visible measure of anticipation.
  3.   Interaction and a real-time visible indicator, for example a screen showing how many people have asked for the drop – which means there is a visible measure of anticipation.
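The 80% consensus rule is simple enough to sketch – a hypothetical helper, with each dancer’s press counted once:

```python
# The drop fires once the share of dancers who have pressed their
# button reaches the threshold (80% by default). `pressed` is the
# set of dancer IDs who have voted so far.

def drop_triggered(pressed, crowd_size, threshold=0.8):
    """True once at least `threshold` of the crowd has pressed the button."""
    return crowd_size > 0 and len(pressed) / crowd_size >= threshold
```

With a crowd of 100, the 80th press triggers the drop for everyone – including the 20 who never asked for it.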

So, would the audience experience music differently if they were consciously involved in its creation? How much time would it take for an average crowd to ‘consent’ to the drop? And would it be fun, or would it reduce the experience to a button-pressing exercise? Should we use a different method of gathering data, such as a Kinect camera detecting a positive movement? EDIT: we are planning to measure how much your phone moves while you dance – this will be done using the phone’s motion sensor. We are calling it the ‘wiggle index’.
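One plausible way to compute a ‘wiggle index’ from the phone’s motion sensor – this is my guess at the metric, not necessarily how the actual build will do it – is the mean absolute deviation of accelerometer magnitude over a short window:

```python
import math

# A still phone gives a near-constant magnitude (just gravity), so it
# scores near zero; a dancing phone's magnitude varies, so it scores high.

def wiggle_index(samples):
    """samples: a list of (x, y, z) accelerometer readings."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(mags) / len(mags)
    return sum(abs(m - mean) for m in mags) / len(mags)
```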

 

WHY I WANT TO DO THIS

The experiment plays with the age-old musical idea of tension and resolution, and whether there’s a universal point at which people desire resolution or whether people are happy to have that point prescribed by musical creators. Here’s a great simple example to follow tension and resolution: ‘Twinkle Twinkle Little Star’ – along with a physically accurate version to remind you of the tune.

So, creating tension: (first note) ‘Twinkle Twinkle Little Star’ (goes up from the home note = tension, but still in the ‘home’ key, so not too tense).

‘How I wonder what you are’ (small key change to introduce distance, then back down to the first note: resolution).

Tension and resolution occur through rhythm, harmony and melody, and not just in short musical phrases (like Twinkle Twinkle) but also in longer forms – pop music’s verse/chorus, or classical music’s exposition/development/recapitulation.

I’ve greatly simplified this explanation for brevity, but I do believe that the best composers are creating layers upon layers of tension and resolution in different ways – in my opinion the most wonderful music is skilled in moving us between these states both in expected and unexpected ways. The tingle!

 

Video

Impromptu Classical Piano version of David Bowie’s “Changes”

So, I hosted a concert last month with the National Orchestra of Wales at St David’s Hall, Cardiff. It was amazing fun. After the event, I snuck onto the rather lovely grand piano to give my fingers a bit of exercise. Here’s my classical version of David Bowie’s ‘Changes’, a great song in any genre. Impromptu filming by Martin Daws, Young People’s Laureate for Wales, on his mobile.

I do rather love taking music and giving it a classical twist – I like to call the process ‘Classifying’.

Of course this music is copyright David Bowie – you can buy the original through this link

TEDxTokyo: Music, Technology and Synaesthesia

A few weeks ago, I gave a talk at TEDxTokyo on synaesthesia – and how it helps me compose music spontaneously – a few minutes in you’ll see me using the piano to describe my synaesthetic responses when entering a new city.

 

I also composed some music during my journey on the Shinkansen – Japan’s legendary bullet train.

 

 

Ever since I started composing more frequently, lots more opportunities to combine music with presenting have come up… I hosted my first show on Radio 3, Saturday Classics, plus I compered #NOWYourTurn, the BBC National Orchestra of Wales concert at St David’s Hall, Cardiff.

I am looking forward to doing more thoroughly enjoyable things!