“How do you fancy dubbing a 2hr 20min movie in a day?”
“These aren’t the droids you’re looking for.”
“These aren’t the droids I’m looking for.”
I think that’s how Chris Jones does it, you know. To be fair, I did sort of know what I was getting into – 50 films, all individually dubbed, had to be turned into a semi-cohesive sonic whole. So I wasn’t starting completely from scratch, I’m not that mad, and Chris and the editors had added some missing ambiences and spot effects already, the lovely people. Even so, I knew one day wasn’t going to cut it – you’ve used up a quarter of the day just watching the thing. I reckon it was about 4-5 days all in, which is still going it.
As we all know, sound is typically the last chance saloon. The scripts are finished, shooting complete, editing locked. Sound and music can come to the rescue. An emotive score might take centre stage for a while, handily papering over sonic cracks. Textured backgrounds can add the apocalypse to a simple shot of a character looking up at the sky.
Some of the short films were technically fabulous; others needed a little love where resources had been stretched. My job was four-fold: make all the levels match; make all the joins work; patch up any awkwardness; and try to bring some more creative sparkle so the whole thing feels like one big movie. In general, I had to work with whatever we had – getting re-dos wasn’t an option.
In some cases I had a separate music track so had a little more latitude to fix things, but more typically I was working with complete stereo mixes. In days gone by there was almost nothing you could do to unpick a whole mix, but now you can perform mini-miracles (sometimes). The couple on a deserted beach had a few off-camera cries from passers-by. Into iZotope RX9 I went, where you get a frequency-based picture of the waveform, and I was able to paint those out just like a graphics editor removes blemishes in Photoshop. Tools like RX and Waves Clarity Vx Pro use machine-learning algorithms and can even isolate the dialogue from a mix; with a bit of care I was able to boost the odd buried line of dialogue while the rest of the mix stayed intact. Unwanted location hums and roars could often be completely eliminated.
For one film, some of the sound effects in the mix came out of the left speaker only (editors – check the panning of your audio tracks!). I couldn’t centre it without making the centre-stage music mono. I did however have the music on its own, so, remembering my physics from school, I inverted the waveform of the music and laid it against the full mix. So the theory goes, the two copies of the music mathematically cancel each other out, revealing everything in the mix except the music. In practice it wasn’t perfect, but it was amazingly close – I could then fix the effects, and lay the music back over the top.
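For the curious, the physics of that trick is just sample-by-sample arithmetic. A minimal sketch, using toy sine waves standing in for the stems (a real session would of course load sample-aligned audio files, which is an assumption this whole trick depends on):

```python
import numpy as np

# Toy signals standing in for the stems.
sr = 48000
t = np.arange(sr) / sr
music = 0.5 * np.sin(2 * np.pi * 440 * t)       # the separate music track
effects = 0.3 * np.sin(2 * np.pi * 1000 * t)    # everything else in the mix
full_mix = music + effects                      # the delivered stereo mix

# Invert the music's polarity and sum it with the full mix:
# the two copies of the music cancel, leaving the rest.
recovered = full_mix + (-music)

# In this ideal, perfectly aligned case the residue is essentially
# identical to the effects-only signal.
residue = np.max(np.abs(recovered - effects))
```

In the real world it is never this clean – any difference in level, timing or processing between the delivered mix and the solo music track leaves audible residue, which is why the result was “amazingly close” rather than perfect.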
Clicks, pops and clunks all had to be found, bombed and sunk. Mostly trivial, but every one takes time. Prioritising is important – knowing what you can let go and what really needs attention, so that nothing distracting impinges on the viewing and listening experience.
In general, the single biggest mistake new filmmakers make with sound is not having a mic close enough to the actor. If there is no ADR (the process of replacing on-set dialogue in a studio afterwards), it can be gruesome. It is possible to clean up all kinds of background sounds, but a voice that sounds 50ft away in an echoey room is still a major challenge. Sometimes magic tools can help – a clean church-type reverb can be eliminated amazingly well, but a more typical smaller roomy sound is still beyond the ability of magic to fix. Another common problem is sharp cuts in the ambience from one shot to the next. Sometimes it’s possible to reduce the ambience and replace it with a clean sound effect, or (again) use RX to blur a jarring shot change so the ear doesn’t notice. Unrealistic phone, TV or radio speech was another common issue – it wasn’t always possible, but where I could I’d try to separate it from the rest of the mix and make it sound like it was really coming from a device in a room. Audio Ease Speakerphone 2 and Altiverb 7 are digital recreations of real spaces and even real loudspeakers, and they make the process fairly painless.
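A plugin like Speakerphone does far more than this, but the core of the “voice from a device” effect can be approximated with nothing more than a band-pass filter over the classic telephone band. A hypothetical sketch (the `telephonise` helper and its parameter values are my own illustration, not anything the plugins above actually expose):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def telephonise(audio, sr=48000, low=300.0, high=3400.0):
    """Band-limit full-range audio to roughly the telephone band,
    so speech sounds like it comes from a small speaker."""
    sos = butter(4, [low, high], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, audio)

# Toy example: white noise standing in for a recorded voice.
rng = np.random.default_rng(0)
speech = rng.standard_normal(48000)
phone = telephonise(speech)
```

Convolving the result with a small-room impulse response (which is essentially what Altiverb does) then puts that “device” back into the scene’s space.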
Another recurring issue was empty backgrounds. We needed to feel that it was the actual end of the world, so I added plenty from my sound effects library, bought over decades – the online subscription service Soundly filled in any gaps (you can pay by the month, and it’s a great resource for filmmakers). So on went car horns, jet planes, sirens, riots, car alarms, you name it. I can overthink this stuff – how many local councils have lovingly maintained WWII air-raid sirens? Are those helicopters trying to fly to the moon or something to get out of the way?
A nice side-effect of building up some parts is that there is a much greater sonic contrast between the different stories. For some it needs to be total chaos; for others the stillness is the thing. One of the best things about Impact50 is the sheer variety. We see all life across the world, and I wanted to accentuate this with the sound. Light and shade is very welcome across the two hours – there is nothing worse than a relentless aural assault. But even for the quiet moments I’d often add very subtle sounds – wind, insects and so on, or for interiors a very unobtrusive room tone. Hopefully no-one watching will even notice that sort of thing, but it will feel more real and involving.
Having the music mixed in with everything else really became a problem when one short film was intercut with others. All that lovely atmosphere and music frequently got spliced mid-glory with something totally different. With separate music tracks, the transitions were generally easier – you can lay the music across the edit to make a smoother join. But what to do when you only have a complete mix? Simple cross-fades sound cheap and artificial to me – a tool of last resort I nevertheless had to fall back on occasionally. A trick I used more times than I care to admit is to have a nice artsy reverb – I like Eventide’s Blackhole – and put the final music note before the edit into it. Sounds great – you don’t need to overdo it, but it just makes the end of a clip sound like it was supposed to end, which fading out tends not to. Once or twice it was fun to go the other way and accentuate a cut by building the sound design into it and then cutting to silence, or putting in a transition WHOOSH-BOOM or something to make it dramatic. Occasionally I added a little music of my own (yes, I’m also a composer and a screenwriter, I do weddings, Bar Mitzvahs…) to make an intercut sequence feel cohesive. Once again variety is key, trying not to do the same trick again and again.
I felt that the asteroid itself needed a very broadly consistent sound, so we recognise it as we hear it. Every filmmaker had put on their own sound design which I never replaced, but rather embellished a little with some stock effects and some I made myself using a weird synth called Skanner XT by Native Instruments. It has this very cool way of morphing every parameter simultaneously between presets that produces otherworldly sounds, so I sampled some of those.
Now my stereo mix is done, I’m shipping it off to a kind colleague, Tudor Davies, who specialises in Dolby Atmos and surround sound. It simply wasn’t possible to do a properly finessed 5.1 mix given that all the elements were mixed only in stereo, but he has the best tools in the business to magically up-mix from 2 speakers to 6 and hopefully make it sound great in the cinema.
In doing all of these things, I was conscious that I was playing with someone else’s mix that they had lovingly crafted. There simply wasn’t time, once the whole film had been edited, to get everyone’s approval and sign-off. I hope no-one hates me for the editorial choices I made in the heat of battle – I promise it was always to try and get the best mix with the surrounding clips, and for it to be the most involving for the audience.
Screenwriter and Audio Dude