Democratic Dance Music – an idea

EDIT: This is now ACTUALLY happening at the Science Museum! I have enlisted the help of Adam John Williams, Robert Wollner and Emi Mitchell to make this work. 

I devised the experiment (the first iteration written below) and I’ve also worked out the data points for collection. I’m composing the majority of music stems that will make up the musical segment of the feature.

Robert Wollner is creating a computer program that will let dancers enter data through their mobile phones. That data gets passed on to Adam and Emi. 

Adam is creating a live music computer program that will generate dance music based on my music stems and data from Rob’s program. Emi is working on visual display and how people are going to interact with their phones. 

As if that wasn’t enough, this is also going to be filmed by BBC Click!

Want to come along? Click Here for Science Museum Session Details. The earlier sessions will be much easier to get into – just turn up 5 minutes before each time.

 

ORIGINAL BLOG POST FOLLOWS:

Who knows best about augmenting musical experience? The musician or the listener? I want to work out the exact specifications for the perfect dance anthem with the help of the people on the dance floor.

Traditionally the DJ is expected to steer an audience into emotional rapture during their set. They decide whether to play a fast-paced, highly orchestrated sequence or a slow, textural ambient section. They are driving the experience, as it were. But might it be possible to determine how to make the ultimate ‘tingle-generating’ feel-good floor filler by gathering data directly from the audience? After all, they are the most emotionally invested in the experience.

There is already some tech around – bracelets which log the audience’s passive responses and biometric data, which is really cool – but I’m interested in what happens if we introduce conscious participation. A simple button press would be all that’s required, based entirely on someone’s conscious experience of the music. Then we could gather data based on the results!

 

EXPERIMENT

For this to work we’d need to generate live responsive dance music.

While dancing, each audience member/participant holds a ‘voting’ button. EDIT: this is now your smartphone!

Each person presses the button when they wish for the music to become more intense.

I’ve chosen 80% – but that number isn’t that important as long as it’s a clear majority. It would work like this: when 80% of the audience have pressed the button, it signals to the composers/performers that now is the time for ‘the drop’, i.e. adding greater orchestration, much to the pleasure of the listeners. In the case of dance music this is when more drums, synths, and particularly bass come in.

So, when 80% of the audience want the drop, 100% of the audience get it. I would be interested in finding out which group experiences the most pleasurable response – the first 80% who have asked for it, or the last 20% who won’t be expecting it. And the final person pressing the button would get the full effect of the music being responsive to their request!
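The voting mechanic above is simple enough to sketch. Here’s a minimal, hypothetical version in Python – names like `VoteTracker` are mine, not from the actual Science Museum build. Each participant can vote once, and the drop fires the instant the threshold is crossed:

```python
class VoteTracker:
    """Counts button presses and signals 'the drop' at a majority threshold."""

    def __init__(self, audience_size, threshold=0.8):
        self.audience_size = audience_size
        self.threshold = threshold
        self.voters = set()          # each participant votes at most once
        self.drop_triggered = False

    def register_vote(self, participant_id):
        """Record a vote; return True at the moment the threshold is crossed."""
        if self.drop_triggered:
            return False
        self.voters.add(participant_id)
        if len(self.voters) / self.audience_size >= self.threshold:
            self.drop_triggered = True
            return True
        return False
```

With an audience of ten, the eighth unique press (80%) is the one that triggers the drop – so the ‘final person’ mentioned above is whoever tips the count over the line.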

My thought would be to run an experiment in three parts:

  1.   No interaction at all.
  2.   Interaction but no feedback: i.e. the audience cannot see how close they are to the 80% required to trigger the drop, so there is no visible measure of anticipation.
  3.   Interaction and a real-time visible indicator, for example a screen showing how many people have asked for the drop, which means there is a visible measure of anticipation.

So, would the audience experience music differently if they were consciously involved in its creation? How much time would it take an average crowd to ‘consent’ to the drop? And would it be fun, or would it reduce the experience to a button-pressing exercise? Should we use a different method of gathering data, such as a Kinect camera detecting a positive movement? EDIT: we are planning to measure how much your phone moves while you dance – this will be done using the phone’s motion sensor. We are calling it the ‘wiggle index’.
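A ‘wiggle index’ of the kind mentioned in the edit could be as simple as measuring how far the phone’s acceleration strays from plain gravity. This is just my sketch of the idea, not the code we’re actually using:

```python
import math

def wiggle_index(samples):
    """Score how much a phone is moving from raw accelerometer readings.

    samples: list of (x, y, z) acceleration tuples in m/s^2.
    Returns the average absolute deviation of the acceleration magnitude
    from 1 g, so a phone lying still scores ~0 and dancing scores higher.
    """
    g = 9.81
    if not samples:
        return 0.0
    deviations = [abs(math.sqrt(x * x + y * y + z * z) - g)
                  for x, y, z in samples]
    return sum(deviations) / len(deviations)
```

A phone sitting flat on a table reads roughly (0, 0, 9.81) and scores near zero; any vigorous dancing pushes the magnitude away from 1 g and the index up.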

 

WHY I WANT TO DO THIS

The experiment plays with the age-old musical idea of tension and resolution, and whether there’s a universal point at which people desire resolution, or whether people are happy to have that point prescribed by musical creators. Here’s a simple example for following tension and resolution: ‘Twinkle Twinkle Little Star’ – along with a physically accurate version to remind you of the tune.

So, creating tension: (first note) ‘Twinkle twinkle little star’ (goes up from the home note = tension, but still in the ‘home’ key, so not too tense)

‘How I wonder what you are’ (a small key change to introduce distance, then back down to the first note: resolution).

Tension and resolution occurs through rhythm, harmony, and melody, and not just in short musical phrases (like Twinkle Twinkle) but also in longer forms, such as pop music (verse/chorus) or classical music (exposition/development/recapitulation).

I’ve greatly simplified this explanation for brevity, but I do believe that the best composers are creating layers upon layers of tension and resolution in different ways – in my opinion the most wonderful music is skilled in moving us between these states both in expected and unexpected ways. The tingle!

 

Remote Controlling a Digger using Virtual Reality

It really did feel like I was ‘inside the machine’ even though the resolution was low!

I composed the music especially for this feature – there is something very pleasing about a digger in C.

Full story here: http://www.bbc.co.uk/news/technology-28425844


Modern Glitching – Auditory Enhancement of Reality with Music

I recently gave a talk at TEDxTokyo 2014 about a musical device I built with the aim of giving other people the chance to hear the world as musically as I do. To date, around fifty people have tried my mobile composing inspiration rig with me – mostly with very enthusiastic responses afterwards; in the more musical/auditory types there’s also a degree of joyful disorientation.

Some of the background to what I think is going on: Around twenty years ago, psychology professor Diana Deutsch discovered what she called the Speech to Song illusion. Essentially, a spoken phrase repeated often enough starts to take on musical qualities. There’s a great Radiolab episode which explains the phenomenon.

For me, I don’t require repetition in order to hear spoken phrases as musical. Speech is intrinsically musical for me, and so is the rest of the world – from cars passing to people typing. I really wanted to share my experience as I find it very beautiful.

 

STORY TIME!

It was the day after winning a prize at MusicTechFest’s Boston Hackathon event, which I took part in and filmed for the BBC. I was sharing a small apartment with a bunch of other music obsessives. The day before I left, instead of packing neatly as normal, I optimistically chucked everything I could see into my case and hoped for the best.

The idea of adjusting auditory experience, or adding a ‘glitch’ to reality – at least in aural terms – is not new. But glitching with modern tech sounded like a great way to reveal the music I hear all the time – plus I wanted to add a more classical compositional element to the practice.

Sean Manton and CJ Carr (who was familiar with glitching) were two other music hackers I met at the Hackathon. They were instrumental in my sleep-deprived electronic inspiration.

So, grabbing my iPad and headphone splitters, I built the first iteration of a device that messed with the ambient sound in the room in real time in a pleasurable manner. Raw audio is changed in real time and enhanced with sound effects and, crucially, basic musical elements and phrases that play simultaneously. A while later, I got the thing working in a way I liked and emerged from my room, eager to try it on other musical/technical people. My ideal system would allow me to sing and play melody and harmony, but I was nowhere near doing that yet.
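A toy sketch of that core repetition trick: take the last chunk of incoming audio and echo it with fading volume, Speech-to-Song style, so an ambient sound starts to feel like a phrase. This is illustrative only – plain Python on a sample buffer, not the actual iPad rig:

```python
def glitch_repeat(samples, chunk_len, repeats=4, decay=0.7):
    """Echo the last `chunk_len` samples `repeats` times at fading volume.

    samples: a list of audio sample values (e.g. floats in -1.0..1.0).
    Returns a new, longer buffer: the original followed by decaying repeats
    of its tail - the repetition that makes a sound start to feel musical.
    """
    chunk = samples[-chunk_len:]
    out = list(samples)
    gain = 1.0
    for _ in range(repeats):
        gain *= decay           # each repeat is quieter than the last
        out.extend(s * gain for s in chunk)
    return out
```

In the real thing this would run on a live input stream rather than a finished buffer, but the repeat-and-fade shape is the same.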

So, extra headphones bought, splitter in, time to try my rudimentary iPad device on CJ and Sean in a quiet teahouse. It was so much fun! The sounds of tea being made, the door opening, teaspoons hitting cups – amplified and enhanced by repetition! Those sounds were unexpected, made musical and wonderfully tingly. I sang along to the notes in the cafe to accentuate them. The staff at the teahouse got interested – all they could hear was us singing, hitting teaspoons and laughing. So we asked if they wanted to try it, then wired them in to see their response – they liked it, a lot.

 

Going mobile was more interesting – we were physically connected by our headphone cables, so it took a while to manoeuvre through the door, but together we emerged, wired up, out into the wild. And, once our headphones were in, we pretty much stayed ‘glitched in’ for at least 5 hours straight. I could hear the music I normally hear but amplified! Wow! I sang in joyous harmony with the world for my cohorts, who happily joined in. An ear-opening experience indeed, and I expect we were a strange sight, connected together by cables, singing and swaying – especially as only we could hear the glorious harmonic results of our musical musings.

What followed: glitching around a bookshop, glitching through a delicious dinner at a noodle restaurant until we got chucked out at closing time, and (my favourite) glitching on public transport all over Boston. Some time during the evening, I added a recorded drum loop to the experience – a low-tech but incredibly effective way to turn the world into a very funky soundtrack. Rhythm, along with the harmony generated by reality, transformed a run-of-the-mill walk through a city into a musical recital!

Now, without our headphones in, the world seemed dry and desolate. And, after trying this on six other people with the persuasive line ‘Hey, you wanna do some digital drugs, guys?’ with gratifying results, it didn’t take long for us to ascertain that this was indeed a pleasurable and slightly psychedelic auditory experience – not only as a participant, but also as a listener. The three of us decided to take modern glitching further with a bit more technological clout.

A quick stop on the way back to the hacker apartment meant we now had extra kit. And, by 0100, Sean had plugged his Raspberry Pi computer into the TV, programming on the Pi in Pure Data. We made some tea and ate bread with the most delicious honey (the honey was in Bb major 6th) and kept working. By then it was 0300 and my taxi was due at 0615 – we only had a few hours left!

We all wanted to add fine-grain control to this strange and wonderful auditory experience. CJ had brought his FM transmitter and binaural microphone/headphones, and we plugged everything into my Mac. I wanted to do more than just sing the city – I wanted to play it too. That meant configuring something that could take multiple inputs: MIDI and audio at the same time.

Finally, at 0400, full of incredible quantities of tea, bread and honey, we had a glitching instance running in Ableton Live, with a binaural microphone/headphone setup and my iRig Keys MIDI controller hooked up. I started building musical stems right then and there.

The latest version does more than just repetition: my new glitching device can harmonise and play with the world in a much deeper way. I walk around a city first to work out what key it’s in, and compose something beautiful that goes with the natural sounds around me. Then I load those sounds up and can trigger them when I hear something in the right key – so a motorbike going past in B flat means I trigger my ‘B flat, traffic’ piano composition. The main problem is that the laptop gets really hot, and I’m covered in wires, so it looks a little strange.
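The key-matching part can be sketched too. Assuming something upstream has already detected the dominant frequency of an ambient sound (the hard part!), mapping it to a pitch class and picking a pre-composed stem is straightforward. This is illustrative only, not the actual rig:

```python
import math

# Pitch-class names in semitone order starting from C.
PITCH_CLASSES = ['C', 'C#', 'D', 'D#', 'E', 'F',
                 'F#', 'G', 'G#', 'A', 'A#', 'B']

def pitch_class(freq_hz):
    """Map a frequency to its nearest pitch class (equal temperament, A4 = 440 Hz)."""
    midi_note = round(69 + 12 * math.log2(freq_hz / 440.0))
    return PITCH_CLASSES[midi_note % 12]

def stem_for_sound(freq_hz, stems):
    """Return the pre-composed stem matching the sound's key, or None."""
    return stems.get(pitch_class(freq_hz))
```

So a motorbike droning at around 466 Hz maps to A#/B flat and would pull up the ‘B flat, traffic’ stem.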

And this is what glitching sounds like – some of these examples have music in, others don’t.

The tech is still very much hacked together, but there’s more documented in the talk.

 

WHAT HAPPENS NEXT?

CJ, Sean and I are all enthusiastic about sharing the joys of glitching – and we’re all working on versions of glitching devices. We’re hoping to create a resource online for anyone interested to play with the idea in their own way.  I’m going to list everything I use in my hacked-together inelegant solution in another post.

I want an app that does this! I want to create a glitchpad! Beautiful musical stems that trigger automatically when friends walk through that city with this app! I want to be invited to perform ‘glitching’ concerts in cities around the world!

(for reference, I’ve reposted the TEDxTokyo video here)

More on this story as it unfolds….

Musical CV: About me 2014

So, I’m a freelance presenter and music composer/hacker. I do a lot for Click, the BBC’s tech show, but I’ve also hosted BBC Orchestra events and most recently hosted my first Radio 3 show, which was great fun. I love doing projects where music and technology meet, so any excuse to do more is seized upon with great joy.

These are the things I love.

1)     Music composition and performance –  I do a lot of classical piano and orchestral composition – including spontaneous classical piano composition in pretty much any style.  It just comes out like that, I can’t explain it, but I’m OK with showing it off now.  I really enjoy giving live recitals! https://soundcloud.com/ljrich/140420-flying-through-colour  – recently performed at BBC NBH much to the surprise of some of my work colleagues…

Here’s an informal performance from a few weeks ago:

 

2)      As well as presenting on TV (hard work but lots of fun) I enjoy hosting live events – a few weeks back I had the fabulous experience of hosting a classical orchestral concert including the National Orchestra of Wales playing the Doctor Who Theme. I also give keynote speeches on technology and social trends. I grew the @BBCClick twitter account to nearly 2 million followers, so I used to give talks about how to do that until I realised it’s much more fun to talk about future trends, music innovation and host events instead.

 

3)      Music hacking – tech/music innovation – I filmed a feature for the BBC in Boston which involved entering MusicTechFest’s Hackathon competition and staying up for 24 hours – I won one of the top prizes! http://www.bbc.co.uk/news/technology-27067106

 

4)      The two things I liked most about my music degree were composition and critical music analysis. I do like explaining why songs work and sound good…  music theory, but with a contemporary twist. Here’s a radio pilot I made a while back

 

5)      I recently gave a talk at TEDxTokyo 2014 about a musical device I built with the aim of giving other people the chance to hear the world like I do. I built the first iteration of the device in my room while sharing a tiny apartment with a bunch of other music obsessives – the process is ‘glitching’ – not a new technique, but certainly easier to do with today’s tech. I’ve augmented traditional glitching with musical inserts based on what key the world is in. People who try it report the practice as a pleasurable and slightly psychedelic auditory experience. More of the story is documented in the talk, and I’m working on an epic blog post which explains a lot more. I love classical composing in the wild! I want to do ‘glitching’ concerts in cities around the world!

 

6)      I’m very interested in new musical interfaces and software synthesisers too – these deserve their own blog post.

 

7)     And finally… here’s a link to even more BBC stuff I get up to, and a link to loads of free music I’ve composed.

 

Video

Impromptu Classical Piano version of David Bowie’s “Changes”

So, I hosted a concert last month with the National Orchestra of Wales at St David’s Hall, Cardiff. It was amazing fun. After the event, I snuck onto the rather lovely grand piano to give my fingers a bit of exercise. Here’s my classical version of David Bowie’s ‘Changes’, a great song in any genre. Impromptu filming by Martin Daws, Young People’s Laureate for Wales, on his mobile.

I do rather love taking music and giving it a classical twist – I like to call the process ‘Classifying’.

Of course this music is copyright David Bowie – you can buy the original through this link

TedXTokyo: Music, Technology and Synaesthesia

A few weeks ago, I gave a talk at TEDxTokyo on synaesthesia – and how it helps me compose music spontaneously – a few minutes in you’ll see me using the piano to describe my synaesthetic responses when entering a new city.

 

I also composed some music during my journey on the Shinkansen – Japan’s legendary bullet train.

 

 

Ever since I started composing more frequently, lots more opportunities to combine music with presenting have come up… I hosted my first show on Radio 3, Saturday Classics, plus I compered #NOWYourTurn, the BBC National Orchestra of Wales concert at St David’s Hall, Cardiff.

I am looking forward to doing more thoroughly enjoyable things!

 

Non Standard Jazz

 

Marko, one of my friends, asked if I would live-compose some jazz…

So here it is, an early morning optimistic look at the day ahead. I rather enjoy the crunchy chords! This is one of those times where it feels like I’m just listening to what another part of my brain is playing. I’m uploading it, fluffs and all!

As ever, this was composed in just one take.

I’m really enjoying live-scoring. I can’t wait to start doing more while I’m on the move!

Composing with a head cold

(crosspost from my Soundcloud account)

I spent the day outside, it was beautiful and green and sunny. I am not a natural gardener – in fact for the first time I planted something in a pot!  I did enjoy putting the compost in, though the sound of the roots being pulled unsettled me! I loved the smell of jasmine flowers and the chords behind the greenery and fragrance.  I also ate a lot of sugary treats and drank the most delicious single-estate oolong tea, a riot of tastes and experiences!

But at the same time I’m also experiencing a head cold that has bunged up my ears in a very strange way. So this is the sound version of what my day felt like through my cold. I’m impatient to get back to full working order so I can keep on with composing!

Listening to this back, I can hear all the cupcakes and chocolate as well as the feel and taste of the wet compost and getting scratched by nature.

 

It’s been just over a month since I started putting my ‘live-composing’ online. Some FAQs:

1) how long does it take to compose?

It just comes out like that. I think about something, and play it – in one go. In fact, I don’t even feel like I think about it. It just – happens!

2) What kit do you use?

At home it’s currently Logic 9 on an old Mac Pro, with Ivory II synth for the piano, and an old Yamaha P90 or Roland FP2 with sustain pedal to ‘play’ the notes.

On the road it’s either a MacBook Pro running Ableton (which I’m just getting the hang of) or Logic 9, plus the iRig Keys (a small keyboard that has a sustain pedal input) and a sustain pedal. I also have an iPad set up with GarageBand, which is frustrating but OK in a pinch – kind of like a notepad.