An exciting use of knowledge from my time in the NASA Datanauts program: here’s a four-minute story about sustainability in space for the BBC. Watch to find out how to clean up space – because we can’t use a vacuum cleaner in a vacuum…
I scored all the music for this feature – I really enjoy doing this when I have the time!
A code – be it a chord, melody or rhythm in someone’s head or on a disk – can be output as puffs of air to create this exquisite, yet ultimately unquantifiable substance we know as music. So how can a machine help us write music for humans?
The gorgeous accidents of a musician’s fingers slipping across their instrument, falling into a new collection of notes – or that warm feeling when a voice breaks between notes of a melody, making the whole tune sweeter – this is the intimate sound of a flawed and beautiful musician: their power, their individuality, their magic – the song inside the tune, as Christina Aguilera once said.
So how can you go from that mystery of music to asking a machine to write a hit song? Well, my team, ‘Smorgasborg’, one of 38 entries to this year’s AI Song Contest, decided to explore this question.
Voting closes on July 1st – listen to as many entries as you like – and vote up the ones you enjoy the most!
This year’s participants are a fantastic collection of artists, technologists and machine learning enthusiasts, all of whom approached the brief – make music using machines! – differently. Now, as a musician learning the elements of coding, I approached the challenge from a musical perspective, choosing to discover: “How can we use accessible AI tools to augment the human creative process rather than replace it?”
I’d taught myself AI composing with Style Transfer during my cancer treatment and subsequent shielding here in the UK. It’s surprising how much time there is for learning when utterly stationary, or unable to leave the house, or indeed when it’s impossible to sleep through the night… so why not stay up and run code every few minutes?
Like many musicians, I miss jamming with real people in real time, that physiological response to sound that makes us stand with others in front of massive speakers ‘in flow state’ – that feeling of being somewhere or nowhere – ‘in the zone’.
AI’s potential goes much further than just audio generating audio – which is what I’d been doing – and it just didn’t feel enough like a collaboration. After all, we could have asked the machine to produce random sounds and then selected the best. But instead, we split the song – some elements created by AI, some by the musician – and NO autotune!
Here’s a flowchart showing the relationship between human- and machine-generated content.
My team discussed many issues – even at the level of deciding what popular music is. For example, dominant music theory is based on the Western, European tradition. This was partly why we elected to start with AI lyric generation rather than music, as it kept orchestral and melodic options open. We felt AI music would have been more likely to generate Western diatonic sounds, as the data sets currently largely lack music outside these descriptors. We wanted to explore influences outside the popular Western sounds.
Here are a few seeding prompts, in bold below. For a while I was tempted to go with really simple songs. Then, during a team call, I read through a few that were strangely poetic.
Example Prompts | Completions from GPT-3 (safe outputs) follow:
“Do people want a catchy tune, a riff that can fly them to the moon? | Or would they rather just see me in my underpants?…”
“dance dance I wanna | see you move dance dance dance! (Oh!) It’s time to make it right (Oh!) We’re gonna make it…”
“write a love song about 2 machines | falling in luv. 1 hour later. BAM BAM BAM BAM BAM BAM “i just wan 2 say, that’s one of the best songs i’ve ever heard” “I KNOW RIGHT” “i mean, i would probably pay like $1.99 just to get that song on my phone.” “do it, do it now, i’ll wait” BAM BAM BAM BAM BAM BAM “i did it” “KEEP THE CHANGE…”
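For anyone curious about the mechanics, here’s a minimal sketch of how a seed prompt can be sent for completion, assuming the openai Python client of the era; the model name, parameters and key handling are illustrative rather than our exact setup.

```python
import openai  # the legacy completion-style client available in 2020

openai.api_key = "YOUR_API_KEY"  # placeholder - use your own credentials

seed_prompt = "dance dance I wanna"  # one of the seeds shown above

# Ask the model to continue the seed; a high temperature keeps the
# completions surprising (and occasionally 'strangely poetic').
response = openai.Completion.create(
    engine="davinci",   # illustrative: a base GPT-3 engine of the time
    prompt=seed_prompt,
    max_tokens=64,      # roughly a verse's worth of text
    temperature=0.9,    # more randomness = weirder lyrics
    n=3,                # several candidates to audition
)

for choice in response["choices"]:
    print("---", choice["text"].strip())
```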
After further experimentation, the below completion caught our attention.
“feel the bass dance all night headphone on smartphone light walking like a zombie in the dark pulsating body nightlife make you move vibrating body sending shiver to the core….”
These words sounded most like a story; they had evocative phrases like ‘walking like a zombie’. We fed this modified prompt back into the engine and generated our lyrics, which remained unchanged throughout the songwriting process.
“feel the bass dance all night headphones on smartphone light walking like a zombie in the dark | ‘til the streetlight burns the sunrise behind your back you were here now you’re gone There’s a girl I know who knows what she wants.
I was inspired by the words and created a first draft of the melody, but was getting stuck on ‘Shine like a diamond dust in the universe’. We wanted to use the lyrics verbatim to stay faithful to the AI, but were stumped on how to parse this particular musical phrase. So we used OpenAI’s Jukebox, trained on Adele, to suggest various new melodic lines.
At first I used a model to output 30 seconds of music at a time, but my first attempts were frustrating – it didn’t create tunes that made theoretical sense! After more false starts, I realised co-composing suited me more, given my mainly musical background. Supervising the output every 4-6 seconds let me add my own musical preferences to the generative result.
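In code terms, the co-composing rhythm looked something like the loop below – a hypothetical sketch, not Jukebox’s real API: generate a few short candidate continuations, audition them, then accept one or resample the window.

```python
import random

def generate_continuations(context, duration_s, n):
    """Stand-in for a Jukebox-style sampling call (NOT the real API):
    return n candidate snippets that continue `context`."""
    return [f"{duration_s}s snippet, seed {random.random():.3f}" for _ in range(n)]

def co_compose(seconds_per_step=5, n_candidates=4, target_steps=8):
    song = []
    while len(song) < target_steps:
        candidates = generate_continuations(song, seconds_per_step, n_candidates)
        for i, c in enumerate(candidates):
            print(f"[{i}] {c}")
        pick = input(f"Pick 0-{n_candidates - 1}, or 'r' to resample: ").strip()
        if pick.lower() == "r":
            continue  # reject the whole batch and resample this window
        song.append(candidates[int(pick)])  # accept 4-6 seconds, move on
    return song
```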
After 21 attempts (and more crashes!), attempt 22 inspired me to re-scan the lyric lines –
|| Shine like a diamond || Dust in the universe became
|| Shine || Like a diamond dust || In the universe.
Yes! I gleefully thanked the program out loud even though it was 01:30AM, and sang a guide melody and piano accompaniment into Logic Pro X. I felt no need to upsample as I wasn’t planning to output audio and just needed to hear the melodic lines.
Google’s NSynth – one of the settings used with Ableton | Imaginary Soundscapes – with the image used to generate fireworks in the chorus
The bass, piano and pad are all generated via the NSynth sound. I was inspired by teammate Leila saying the song was set “In Space” and chose sounds based on this thought – resulting in ethereal and floating pads, with sharp shards of passing comet dust! Continuing the theme, we also used AI-generated audio from Imaginary Soundscape, an online engine designed to add suggested soundscapes to (normally earthly) landscapes. We used an image of space from Unsplash and the AI returned audio – fireworks! You can hear these alongside the chorus.
If you’d like to help us become Top of the Bots – please vote here – no need to register! Team Smorgasborg is excited to be part of the AI Song Contest!
A selection of AI tools used in the creative process: we also used Deep Music Visualizer and Tokkingheads for the music video
I’ve been away from work and the internet for a while working on a massive project – building a baby! Having always been completely fascinated by music’s power to move us and change our perceptions, I thought there would be lots of music specifically for giving birth – but nothing came up. Which was ironic, I thought, considering that for conceiving a baby there are any number of musical accompaniments available! So I created music especially for me and anyone else going through the intense experience of labour, birth and early parenthood.
This album is very special to me – I wanted music that had calmness at its heart to support the incredible and inevitable journey from pregnancy to parent, but I think it’s also a very enjoyable listen if I just want to zone out and remember how to breathe.
Talking of which, this instrumental music complements all kinds of breathing rhythms during stages of labour and birth. I composed it while pregnant and only completed it half an hour before labour!
Originally I wasn’t going to release this music or talk publicly about my experience (it’s so personal!) but below I explain exactly why I chose to do so.
So yes, I had a baby – he is awesome. And for anyone interested in why I wrote such an album, I’ve shared my very personal story about conquering a lifelong phobia of giving birth below the track listing. If you’re not interested that’s fine too – in any case it’s good to be back, baby! Normal streaming, tweeting and writing about music, inventions, technology and synaesthesia will soon resume.
Relaxing Music for Giving Birth Tracklist
TRACK 1: Incarnation. Labour, Birth and Calm. Over an hour long, it can be repeated seamlessly throughout labour, and will play on multiple devices and speakers without sounding out of tune or out of time. I didn’t want music to get in the way of my breathing or the physiological process of giving birth – this was actually the hardest part: making sure it was musical and rhythmic while still leaving space for getting into the zone. I had this track on repeat throughout my labour.
Plus – Bonus Tracks for after the big day to give the new family a gentle soundtrack for those first utterly indescribable weeks….
TRACK 2: Serenity. Soundtrack to a Contented Baby
Serene piano sounds soothe and support a calm environment – contains a unique ‘white noise blanket’ – to soften any unexpected sudden sounds from outside that might startle a new baby. It seems to soothe my little one!
TRACK 3: Relaxation: Sleepy Parents, Sleepy Baby
Encouraging even breathing; aims to elongate any rare moments of calmness and sleep – not that you’ll get much sleep over the first months of parenthood! I found this really helped my little one settle. A sleepy soundtrack to chill out with the new arrival. For babies AND parents.
No apologies for the massive wall of text below!
HOW THIS ALBUM WAS BORN
(NB don’t worry there are no scary bits. It contains music, a little tech and a deeply personal story behind the album)
I was asked to take part in the Tech Tent annual quiz all about the tech news of 2014, pitted against none other than the people who actually make the news – the BBC Technology website team. Sounded to me like a guaranteed failure – these people know everything about the stories of the year – not only would they have written the articles, they would also have researched the stories thoroughly and (worse!) know background and periphery information.
How could I game the system without cheating? My plan below actually worked…
1. UNDERSTANDING THE FORMAT – where the scoring happens. The email came through with the rules: there will be “12 questions each relating to a big tech news story during 2014 – there’s one question for each month BUT they will not come in order. The order was selected by random draw. Most questions are in two parts offering two points, with an extra bonus point available for guessing in which month the story happened. FWIW, I doubt we’ll get through all 12 questions in the limited time we have. If one team fails to answer a question or part of a question correctly, that q or part of question will be offered to the other team.” It made sense to memorise the news stories and their months, since according to the format these alone represented 12 points straight away.
2. CREATE DATA SET A
I gleaned the main events in technology from Wikipedia by month with a definite nod towards business and consumer electronics. I based the emphasis on quizmaster Rory’s preferences and my knowledge of the stories producer Jat tends to choose for the Tech Tent podcast.
3. CREATE DATA SET B AND SEE WHAT STORIES APPEAR IN BOTH A AND B
The Click news scripts from the whole of 2014 were cut and pasted – putting all the Click newsbelts together by month made it really easy to work out the most likely questions in each segment. I amalgamated this with Wikipedia, The Verge’s year in review and other sources, then applied weighting depending on the information available to ask deeper questions around a topic.
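As a sketch (with story names borrowed from this post standing in for the real lists), the overlap step amounts to a month-by-month set intersection:

```python
# Data set A: Wikipedia's 2014 tech events, keyed by month (example entries)
wikipedia_stories = {
    "Feb": {"Flappy Bird withdrawn"},
    "May": {"Apple buys Beats"},
    "Aug": {"Amazon buys Twitch"},
}

# Data set B: stories from the Click newsbelt scripts, also keyed by month
click_stories = {
    "Feb": {"Flappy Bird withdrawn", "Sochi tech"},
    "May": {"Apple buys Beats", "Surface Pro 3 launch"},
    "Aug": {"Amazon buys Twitch"},
}

# A story appearing in BOTH sets in the same month is a likely quiz question
likely_questions = {
    month: stories & click_stories.get(month, set())
    for month, stories in wikipedia_stories.items()
}
print(likely_questions)
```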
4. FURTHER ANALYSIS TO PINPOINT MOST LIKELY STORIES
Based on my analysis, some stories were more likely than others to work as questions for a quiz – for example, Twitch and Apple buying Beats were big stories in otherwise quiet months, so it made sense to drill down into those. Flappy Bird was also pretty much guaranteed to be a question based on the analysis – and it helped that Rory did rather a few spots on Flappy Bird in 2014, so again, a likely candidate.
5. EDITORIAL CONSIDERATION
I could also discount any stories that weren’t appropriate for a Christmas quiz tone, which reduced the viable stories further.
6. EXPERT AREAS
It made sense to look at key stories like the Apple launch, top tech mergers and acquisitions, and cryptocurrency as big stories of 2014. So I memorised as many numbers and month/story combos as possible to maximise those easy points. Knowing about the business of tech meant that I could at least make informed guesses when I wasn’t sure of an answer – which sometimes paid off and sometimes didn’t (Google nearly bought Twitch = correct; the tablet-sized device launch was the Nokia tablet, not Microsoft’s Surface Pro tablet – nearly).
7. A BIT OF LUCK
As soon as I realised the last question (number of World Cup tweets) was directed at the Tech Website team first, it was easy to make an N+1 or N-1 call: if their guess was N, calling just above or just below it covers the whole range on that side, automatically covering more ground than the first guess. That led to eventual (if messy) victory! (It made up for correctly stating the Flappy Bird developer was making £32,000 per week ($50,000), which was not picked up!)
Yes! Emerged victorious! Dave and Zoe from the BBC Tech Website team were worthy adversaries – because of the calibre of my opponents, I had to up my game to even stand a chance against them. Thus I learned that when the odds aren’t great, it’s still worth doing the best you can – a great lesson to bring into 2015. Hurrah! And Happy New Year!
So, I’m a freelance presenter and music composer/hacker. I do a lot for Click, the BBC’s tech show but I’ve also hosted BBC Orchestra events and most recently hosted my first Radio 3 show, which was great fun. I love doing projects where music and technology meet, so any excuse to do more is fallen upon with great joy.
These are the things I love.
1) Music composition and performance – I do a lot of classical piano and orchestral composition – including spontaneous classical piano composition in pretty much any style. It just comes out like that, I can’t explain it, but I’m OK with showing it off now. I really enjoy giving live recitals! https://soundcloud.com/ljrich/140420-flying-through-colour – recently performed at BBC NBH much to the surprise of some of my work colleagues…
Here’s an informal performance from a few weeks ago:
2) As well as presenting on TV (hard work but lots of fun) I enjoy hosting live events – a few weeks back I had the fabulous experience of hosting a classical orchestral concert including the National Orchestra of Wales playing the Doctor Who Theme. I also give keynote speeches on technology and social trends. I grew the @BBCClick twitter account to nearly 2 million followers, so I used to give talks about how to do that until I realised it’s much more fun to talk about future trends, music innovation and host events instead.
3) Music hacking – tech/music innovation – I filmed a feature for the BBC in Boston which involved entering MusicTechFest‘s Hackathon competition and staying up for 24 hours – I won one of the top prizes! http://www.bbc.co.uk/news/technology-27067106
4) The two things I liked most about my music degree were composition and critical music analysis. I do like explaining why songs work and sound good… music theory, but with a contemporary twist. Here’s a radio pilot I made a while back
5) I recently gave a talk at TEDxTokyo 2014 about a musical device I built with the aim of giving other people the chance to hear the world like I do. I built the first iteration of the device in my room while sharing a tiny apartment with a bunch of other music obsessives – the process is ‘Glitching‘ – not a new technique, but certainly easier to do with today’s tech. I’ve augmented traditional glitching with musical inserts based on what key the world is in. People doing it report the practice as a pleasurable and slightly psychedelic auditory experience. More of the story is documented in the talk, and I’m working on an epic blog post which explains a lot more. I love classical composing in the wild! I want to do ‘glitching’ concerts in cities around the world!
6) I’m very interested in new musical interfaces and software synthesisers too – these deserve their own blog post.
7) And finally… here’s a link to even more BBC stuff I get up to, and a link to loads of free music I’ve composed.
EDIT: This is now ACTUALLY happening at the Science Museum! I have enlisted the help of Adam John Williams, Robert Wollner and Emi Mitchell to make this work.
I devised the experiment (the first iteration written below) and I’ve also worked out the data points for collection. I’m composing the majority of music stems that will make up the musical segment of the feature.
Robert Wollner is creating a computer program that will let dancers enter data through their mobile phones. That data gets passed on to Adam and Emi.
Adam is creating a live music computer program that will generate dance music based on my music stems and data from Rob’s program. Emi is working on visual display and how people are going to interact with their phones.
As if that wasn’t enough, this is also going to be filmed by BBC Click!
Who knows best about augmenting musical experience? The musician or the listener? I want to work out exactly the specifications for the perfect dance anthem with the help of the people on the dance floor.
Traditionally the DJ is expected to steer an audience into emotional rapture during their set. They decide whether to play a fast-paced, highly orchestrated sequence, or a slow, textural ambient section. They are driving the experience, as it were. But might it be possible to determine how to make the ultimate ‘tingle-generating’ feel-good floor filler by gathering data directly from the audience? After all, they are the most emotionally invested in the experience.
There is some tech around already, I think – bracelets which log the audience’s passive response and biometric data, which is really cool – but I’m interested in what happens if we introduce conscious participation, so a simple button press would be all that’s required. It would be based entirely on someone’s conscious experience of the music. Then we could gather data based on the results!
EXPERIMENT
For this to work we’d need to generate live responsive dance music.
While dancing, each audience member/participant holds a ‘voting’ button. EDIT: this is now your smartphone!
Each person presses the button when they wish for the music to become more intense.
I’ve chosen 80% – but the exact number isn’t that important as long as it’s a clear majority. It would work like this: when 80% of the audience have pressed the button, it would indicate to the composers/performers that now is the time for ‘the drop’, i.e. adding greater orchestration, much to the pleasure of the listeners – in the case of dance music, this is when more drums, synths, and particularly bass come in.
So, when 80% of the audience want the drop, 100% of the audience get it. I would be interested in finding out which group feels the most pleasurable response – the first 80%, who have asked for it, or the last 20%, who won’t be expecting it. And the final person pressing the button would get the full effect of the music being responsive to their request!
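Here’s a minimal sketch of that trigger logic, assuming a simple vote counter; the names and the `trigger_drop` hook are mine for illustration, not the team’s actual code.

```python
class DropVote:
    """Count button presses and fire 'the drop' once a clear majority asks."""

    def __init__(self, crowd_size, threshold=0.8):
        self.crowd_size = crowd_size
        self.threshold = threshold
        self.voters = set()      # one vote per person, no double-counting
        self.dropped = False

    def press(self, person_id):
        if self.dropped:
            return
        self.voters.add(person_id)
        if len(self.voters) / self.crowd_size >= self.threshold:
            self.dropped = True
            self.trigger_drop()

    def trigger_drop(self):
        # In a live build this would cue the music engine to bring in
        # more drums, synths and bass; here we just announce it.
        print(f"THE DROP! ({len(self.voters)}/{self.crowd_size} asked for it)")

votes = DropVote(crowd_size=100)
for person in range(80):   # the 80th press tips it over the threshold
    votes.press(person)
```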
My thought would be to run the experiment in three parts:
1. No interaction at all.
2. Interaction but no feedback: i.e. the audience cannot see how close they are to the 80% required to trigger the drop, so there is no visible measure of anticipation.
3. Interaction plus a real-time visible indicator, for example a screen showing how many people have asked for the drop – which means there is a visible measure of anticipation.
So, would the audience experience music differently if they were consciously involved in its creation? How much time would it take for an average crowd to ‘consent’ to the drop? And would it be fun, or would it reduce the experience to a button-pressing exercise? Should we use a different method of gathering data, such as a Kinect camera detecting a positive movement? Edit: we are now planning to measure how much your phone moves while you dance, using the phone’s motion sensor. We are calling it the ‘wiggle index’.
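For what it’s worth, here’s one plausible way a ‘wiggle index’ could be computed from accelerometer samples – my own guess at an implementation, not the team’s code: average how far each reading’s magnitude strays from gravity.

```python
import math

GRAVITY = 9.81  # m/s^2 - a stationary phone reads roughly this magnitude

def wiggle_index(samples):
    """samples: list of (x, y, z) accelerometer readings in m/s^2.
    Returns the mean deviation of acceleration magnitude from gravity:
    near zero for a phone lying still, larger the harder you dance."""
    if not samples:
        return 0.0
    deviations = [abs(math.sqrt(x * x + y * y + z * z) - GRAVITY)
                  for x, y, z in samples]
    return sum(deviations) / len(deviations)

print(wiggle_index([(0.1, 9.8, 0.2)]))                     # ~0: phone on a table
print(wiggle_index([(3.0, 12.0, 1.5), (6.1, 2.0, 4.4)]))   # dancing
```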
WHY I WANT TO DO THIS
The experiment plays with the age-old musical idea of tension and resolution, and whether there’s a universal point at which people desire resolution, or whether people are happy to have that point prescribed by musical creators. Here’s a great, simple example to follow tension and resolution: ‘Twinkle Twinkle Little Star’ – along with a physically accurate version to remind you of the tune.
So, creating tension: (first note) ‘Twinkle Twinkle Little Star’ (goes up from the home note = tension, but still in the ‘home’ key, so not too tense)
“how I wonder what you are” (small key change to introduce distance then back down to the first note: resolution).
Tension and resolution occur through rhythm, harmony and melody, and not just in short musical phrases (like Twinkle Twinkle) but also in longer forms: verse/chorus in pop music, or exposition/development/recapitulation in classical.
I’ve greatly simplified this explanation for brevity, but I do believe that the best composers are creating layers upon layers of tension and resolution in different ways – in my opinion the most wonderful music is skilled in moving us between these states both in expected and unexpected ways. The tingle!
The ORDbot Quantum 3D printer (Bart Dring, Wikipedia)
I visited the Restart Project‘s Restart Party at Camden Town Shed for a BBC Tech feature, and met some brilliant people who voluntarily fix whatever comes through the doors – be it broken printers, TVs, cameras or even stereo radio cassette players.
Talk that evening was of technology, the joy of fixing things & sci-fi. One of the electrical geniuses (genii?) was rather interested in 3D printing. And as I have read a lot of science fiction books, it seemed natural enough to recommend to Francis a reading list which I thought he’d find interesting.
It’s not the first time – a few years back I chaired the Science Museum’s FutureWorld event – people mentioned specific tech and I recommended books that complemented their area of interest. So I’m going to do the same thing – but online – I hope you like it.
Welcome to Science Fiction for the Tech Addict Part 1! I’m also going to add real life links too for those who want to know more about the technology itself.
PART 1 3D printing concepts: Science Fiction
Idoru – by William Gibson* : a cyberpunk espionage novel written in 1996, exploring what happens when the virtual world can mix with the real world. There are elements of 3D printing here, but not in the way you’d think.
Makers – by Cory Doctorow* : 3D printing takes a more central role in this 2010 novel about a bunch of entrepreneurs who create a ride. The ride appears to take on a life of its own as more people become aware of it and interact with it. The book hits on very interesting points about how widespread 3D printing might affect society.
Altered Carbon – by Richard Morgan* : written in 2002, this far-future, ultra-violent detective thriller has echoes of what society might be like if matter were entirely and completely replicable. Some might say it’s a bit of a stretch from 3D printing, but I reckon it’s a logical extension of the ability to create 3D objects.
*click ’em if you like ’em, these are Amazon Associate Links – if a link has a * by it, clicking might result in a very small payment – which therefore helps me have more time to write posts like this!
PART 2 3D printing in real life: Science Fact
Makerbot : all you need to start your own (pricy!) 3D printing workshop.
Shapeways : 3D printing service which also sells 3D patterns for other people to print out.
UK Hackspaces / Global Hackerspaces : Sociable member-run spaces where people tinker. Some have 3D printers to play with. I’ve visited both the London Hackspace and the Nottingham one so far – both are populated by wonderfully friendly people who have a lot of time for anyone who is interested in this sort of thing.
Of course there are so many more resources online as 3D printing becomes more and more commonplace.
If you have any requests for science fiction based on some of today’s tech ideas, let me know in the comments area and it would be a pleasure to dredge the old brain for something just right for you. Alternatively if you’ve read something that you think should go on the list, please tell me as I’m always looking for new books to read!
A mere 3 hours after my plane landed, I squeezed into 2012’s CES Unveiled exhibition: the show before the Show, as it were. In fact, the word ‘squeezed’ might be an understatement; the place was utterly rammed with jetlagged journos hungry for good food and good stories.
The CES Unveiled Zoo
This year’s theme focused more on household tech than on ‘bling’ toys. A cleverly designed flat plug which gives wall outlets USB charging prompted me to wonder why it hadn’t been done before. Another retailer a few metres away had a slightly different, cleverly designed flat plug which did the same thing.
If the Unveiled show was the precursor to CES 2012 proper, then niche tech – i.e. ‘doing one thing well’ – looks to be next on our consumer tech lust list. Take Qooq, for example – a recipe-centric tablet. (More info on Qooq from CNET)
Qooq Recipe Tablet with plates
It plays movies and music, like a lot of the other tablets on the market. But its makers have stuffed the tablet full of high-definition video of ‘gourmet chefs cooking stuff’, and made it more rugged, i.e. ‘kitchen-friendly’. $399 gets you around 3,000 chef-demonstrated recipes sitting atop a Linux-based OS. The tablet has been around since 2009 and has already sold over 15,000 units in its native France. Further recipes can be streamed from the internet (for a subscription, of course).
Bodymedia's FIT kit.
Heart-rate monitors, pedometers and other body-sensing kit have been around for a while. For a TV feature a few years back for the BBC, I wore a then-new device from Bodymedia that measured calorie burn rate. At the time, I had to download the data from my device manually every few days. But the 2012 reboot uses the owner’s smartphone to update results on the web in real time. And, like the designer USB wall plug, competitors aren’t far behind, with another company showing a similar device.
Zensorium's Tinke
Another sensor, Tinke, comes from Zensorium – plugged into an iPhone, it takes your pulse and measures oxygen saturation and respiration levels. Fitness console games have proved there is a market in this area – and the makers are keen to ‘upsell’ the lifestyle aspect of tech like this. Of course it tracks your progress, and gives you the option to compare your score with other users.
Bikn's Tag
Treehouse Labs, a wireless sensor company, showed Bikn (pronounced ‘Beacon’). Remember those old keyrings you had to whistle to find? The modern version uses an app and a custom-made iPhone case to trace tagged precious items to within 30 metres or so – a small but significant move toward the inevitable ‘Internet of Things’ that everyone keeps talking about.
What do I think these devices have in common?
Most of these devices focus on just one thing, and build on something else’s power. The USB charger uses existing wall sockets, the cooking tablet plugs into the net, and the body-sensing and tagging devices tap into the processing power of a smartphone. Each product stands a chance of success in the marketplace because it fulfils a specific need that our ‘do everything’ smartphones can’t quite manage yet. Specialist add-on gadgetry is emerging.
… you end up with around 4 minutes of enjoyable mayhem.
The wax cylinder is how audio was recorded before iPods, before MP3s, before CDs and even before vinyl.
What does the future hold for audiophiles? Do you think the new cloud-based music model will be the next big thing? I’d love your thoughts on the future of audio consumption…