The Roundtable took place in mid-November at the Dolby Theatre on Soho Square, with a panel featuring some of the UK’s most creative operational talent working across a variety of genres.
Over a lively two-and-a-half hours the panel explored the limitations of working from home and the value of client-attended sessions, the increasing use of tools to clean up dialogue and mixes, the desirability of automation, and how audio might be applied earlier within the production workflow.
What follows is just some of the panel’s wide-ranging commentary. Look out for a further article on televisual.com in early January reporting the panel’s views on the value of further education and the importance of learning Pro Tools. Thank you to Avid and Jigsaw24 Media for supporting this event.
The Roundtable Panel included:
content services engineer, Dolby Laboratories
head of audio, Directors Cut Films
head of audio, Picture Shop
re-recording mixer, Formosa Group
head of sound, Vaudeville Sound
audio presales consultant, Jigsaw24 Media
re-recording mixer, Molinare
head of audio, Clear Cut Pictures
owner, composer and re-recording mixer, Bleat
head of audio technology, Splice
managing director and re-recording mixer, HIJACK
MODERATOR: James Bennett
managing director, Televisual
How useful are audio clean-up plug-ins?
Davis: The difficult thing nowadays is we can do so much. Our customers realise how much we can rescue in post, so they’re doing away with sound recordists, but of course it’s taking us hours longer in prep and in the mix, and they’re not prepared to pay for it, or they kick up a fuss. But they are saving £400 a day on a sound recordist for every day of the shoot.
Newth: I haven’t had a great deal of experience of synthetic, but I definitely have with mic matching, and I particularly use RX 10. That’s come on leaps and bounds since RX 1.
Fry: There’s a certain amount of separation we can do with RX, but otherwise you end up just rebuilding the mix because it’s easier than trying to clean it all up.
Addis: I think you only have to look at the development of things like iZotope, from the initial release to where it is now in RX 10; it’s come on leaps and bounds, but it’s still not at a point where you can just let it do everything for you, and everything is totally context dependent.
The recent Beatles de-mixing and re-mastering is an interesting example. There were some articles in which Giles Martin talked about the challenges and the compromises. Whilst you can de-mix everything, he was quite candid in some of his interviews about the fact that if you listen to all of those isolated things in isolation, they don’t work brilliantly. They still have to exist as a cohesive whole because there is bleed between these elements. I think we recognise that challenge from the use of iZotope as well. If you want something which is just a clean, totally isolated element, you still can’t do that 100% accurately. There will still be aberrations and challenges, but you can bed it into the context of the wider mix and it sits well.
Norwell: I suppose there are applications. I think in terms of originality, no. I think it’s reductive. I think if you’re trying to create something original in a group, you need to hear and react to each other and it’s the ambience of the room together that creates this original thing.
Godwin: When we went into the first lockdown, I was about to start an eight-hour drama and I couldn’t get into the studio. I premixed and mixed from home and signed all of it off as 5.1 with clients on remote. Because it was new, the clients were much more accepting. They were just amazed that we were able to do it and get it delivered.
The expectations of remote working are now much higher. We’ve been doing it for a few years now and clients are less accepting of that hit or miss. They expect it to be similar to the experience that they used to have sitting in the studio and it just isn’t. Reviewing remotely on binaural headphones doesn’t compare to sitting in the theatre.
Fry: Tracklay can be done at home and a premix can probably be done at home without clients. There are certain jobs that will be done remotely. We’ve got a job which is signed off remotely because the client has said, ‘we love that system’.
I wouldn’t want a final mix at home. It’s just not very creative. You can do that remotely, but it’s incredibly clunky and just not very collaborative.
Newth: We did a series during lockdown. We mixed half of the first part, offloaded it with QuickTime. The client would review it and get it back and so on. We work to tight deadlines anyway, but you’re working to even tighter deadlines because you need to get that uploaded ahead of time.
Davis: There’s no spontaneity with it. If you’re working to a tick list and you’re thinking, ‘what do you mean by that?’, you can’t just turn to your client and ask.
Addis: I think if you’re talking about technology and ways of working, it invariably comes down to an efficiency increase. If you’re working on an eight-part series and you’re sat in the room, you might get loads of notes in the first ten minutes and then the clients sit back very quietly because you’ve already got steer for the whole series; remotely, you’re getting a page of edited notes instead.
Shannon: Having worked where you can’t have internet or phones with you, one of the big question marks around content is security: how do you keep things secure getting them between your home and wherever? And if someone’s sat there at home, how do we know that person isn’t taking content elsewhere?
The value of automation
Godwin: If thinking about an automated premix, which I would then present to the client, my worry is that during the premix, I’m learning what material I’m dealing with. If we’ve got a scene on a beach, I know that I’ve spent an hour trying to get rid of the waves and other ambient sound. The premix is a result of me learning the problems. When I present that to the client and he or she says, ‘what can we do about the beach?’, I already know the inherent problems. If I’m just presenting either an automated premix or someone else’s premix, I have none of that knowledge. I can’t have that conversation with them, I’d have to go and investigate.
Hatfield: There are uses for it. Game engines are using it. You type, ‘I want a cityscape, sirens, sky, wind, this high up…’ and it can then build that out of sound effects. And then you finesse it, ‘this is in France, so we’ll swap American for French sirens’. That can be a basis to start your sound design. The game developers are already creating that content and we’re delivering audio for people to do that.
However, I don’t see a use for it in mixing yet. But in the near future I think it’s going to be a really exciting tool for sound design.
Simpson: In terms of just building up an ambience and any limitations, why couldn’t you use that for a film?
Hatfield: You could create an automated mix and then think about implementing it, but I wouldn’t just let it do its thing and use it to make final decisions.
Norwell: It doesn’t know about storytelling.
Newth: I think the AI – as in plugins and sound design – can be fantastic for building atmosphere. You can suddenly go, hang on, I want something more rural sounding or a dirtier sounding urban atmos. Rather than going through your sound library, you can create a soundscape and mix somewhere in between. And that’s exciting because it’s quick and that’s the bottom line for us.
It’s all about deciding on your compromises; what you’re going to put at the forefront and what you’re going to pull back. This is where it’s important that we’re doing it, humans are doing it, because there are so many compromises to make.
Godwin: If you’ve dealt with a piece of dialogue, which is incredibly noisy – let’s say, it’s on the beach again – if the shot shows a huge wave crashing behind the actor, then you’re going to accept that bit of noise behind the dialogue. But if the actor’s stood somewhere, down near a cave or wherever, then you instinctively know that you’re going to have to clean that up a lot more. You’d accept that maybe the dialogue is going to deteriorate slightly because you need it that much clearer. I don’t see how AI would be able to make those decisions.
How can you speed up turnaround times?
Fry: There’s certainly pressure on turnaround times on the terrestrial channels. We’ve worked across continents in the past. When we used to do X Factor, that would be around the clock. We’d have people mixing them overseas and then we’d come in in the morning and pick them up.
Less though from the streamers. The over-runs and offline delays are the biggest issue for us in post.
Godwin: There’s always a finite amount of time required from being locked to being finished. You’ve got to have time to shoot the Foley, you’ve got to have time to shoot the ADR, you’ve got to have time to do the effects.
Fry: I can mix a show in an hour, just the guide, it depends what the clients want. If they want me to spend five days on it, then it’ll be very different from whoever spends an hour playing with the guide. I don’t think that’s ever changed.
Simpson: We’ve worked on virtual production stages, where it’s motion tracks and they’re shooting everyone in front of the LED screens just to get the lighting. The background has already been done.
I’m wondering if in a situation where the pre-vis has been built or when it’s a series where we’ve got tracks and we know that the offline editor is going to put in all this temp stuff that they’ve found on Freesound…
At HIJACK, we often get involved in workflow and camera shooting plans at a fairly early stage. You know how a DIT will create the LUTs. We don’t do the equivalent for sound, where everything is happening in the shoot. If we knew what the sound content there was going to be, could we create a good audio pre-vis or animatic or something done on reel? If we really planned heavily at the start, could we be doing track laying, knowing roughly what the parameters of the scene are? They know where the cut points are going to be. And when it comes to us, we’ve got less backtracking to try and get to where we want to be?
I’ve heard of this happening on big films where they’ve brought in a sound effects editor for
But I guess it would involve having a proper sound editor at the start. We are all doing re-conforms of re-conforms of shots. There’s a certain amount of flexibility we’ve got to re-cut. But does it make sense to have sound effects being cut right at the top?
Godwin: Audio was always affected by the picture. So, you look at the picture and then you decide what audio goes on it.
Miles: I love the idea of getting sound involved sooner because how many times have you been in a mix and you’ve thought, if only they had recorded this that I’m seeing happening on screen? We all know the experience of wild tracks. I do find it quite interesting, this idea. You’re planning your camera moves, you’re planning your VFX, the sound department, you could deliver audio assets for particular items. Your AI can look at your audio bank and go, okay, here’s a sound, here’s the shape of the movement, this is what we’re putting in. And it’s giving you a creative starting point and it’s helping the director think about the audio at the start of the process.
Addis: I think they are fundamentally different disciplines. With picture, your efficiency gains come through virtual production with the lighting and through pre-visualisation; everything is getting to the point where what goes into the camera lens is closer and closer to being the final deliverable. Audio is incredibly layered in comparison. You’ve got a huge number of decisions, and everything is contextual.
Shannon: Audio, in many ways, is always reactive. When you think about a door closure, it will always depend on what the picture is. You could say, here’s my previous mix and this is how the doors close, but you’ve still got to make the audio fit the picture in some way, shape or form.
Fry: I just don’t think the clients are in a head space where they want to be thinking about audio at that point. I just think they want to wait until later.
Norwell: But is there a compelling argument to make to a PPS at an early stage, to say, I can save you two days after the lock?
Fry: I don’t think it would save them time, but I do think the product would be better.
Hatfield: We find out when and where the shoot is and we’ll send one of our supervisors to see what’s being filmed and that will be on day one of a rig shoot. And then, we leave them alone. And it just costs us a day to send a guy in.
Simpson: I agree with everyone. Picture is everyone’s priority, and I agree about the layers; that makes complete sense to me. On set, the priorities are having a lighting camera and a live grade in, and then there’s no thought given to turning an extractor fan off. I am hearing, though, that on higher-end productions they want the dailies to look and sound like they should.
Addis: I can totally imagine that if you built in an auto generated soundtrack in an offline cutting tool, it will probably help but, again, a huge amount of world building that happens in post production is to fill in the context of what happens off camera. There’s a great quote from Bonnie Wild, about The Mandalorian, that you can add money to a shot through the soundtrack, because you don’t need to see the spaceship until the last second when it lands as four pixels in the distance. Or if you’ve got a cutaway, how do you know that the car is approaching from off screen? You can only automate to a degree.
Shannon: I think you still need time. We’ll do the pre-vis, that’s locked. Now we do the sound for it before we shoot. Will people give the time at that point?
Addis: Unless it’s automated, that’s invariably less efficient.
Godwin: Unless that audio that you use on those dailies then gets used in the final mix.
Newth: You still have to shoot ADR and things like that and I wonder how efficient all of that would be for dailies.
Shannon: You’d need to slot the audio in between those two points. You’d need to have your finished pre-vis and then do the audio to react to it, and say, here they are, and then you shoot it. As long as people allow you that time in the middle, it’s fine, but at the moment people don’t.
Davis: You’re either tracking effects for your dailies or you’re doing it when the pictures are fine or when you’ve got more control over what you’re being presented with.
Simpson: We might find it soulless as a concept, but if the producer is going to follow the pre-vis and says, ‘we’re going to shoot that and then it’s going to cut exactly like that, we’re only shooting the elements we need’, you could probably get an auto edit to just drop in the shots. You’ve got sound that fits to that pre-vis exactly. Yes, there are certain things that might not work – like the sound of the door didn’t work for whatever reason, assuming it was a real door – that can be fixed later, but as less and less of the content is shot in real places, this becomes more relevant.
ABOUT AVID
From big-budget movies and TV shows to independent films and documentaries, Pro Tools is used universally in audio post production. It offers highly efficient, high-end capabilities to meet the needs of any size studio or creator—from multiroom facilities to home. And it makes collaboration easy across the entire workflow with tightly integrated software, hardware, control surfaces, storage, and plugin compatibility, enabling sound creators and mixers to achieve their creative vision quickly and deliver more memorable sonic experiences that move audiences.
ABOUT JIGSAW24 MEDIA
Jigsaw24 Media is a specialist division of Jigsaw24 and provides services and technology solutions to the media and entertainment, education and corporate sectors. It’s the only UK-based company of its kind that has in-house system integration capabilities, and one of only two Avid Elite partners in both video and audio in the UK. With headquarters in Nottingham, an office and demo space at the heart of London’s post-production community, and a nationwide support team, Jigsaw24 Media provides local services on a national scale.
Formula 1: Drive to Survive, Box to Box Films for Netflix
Re-recording mixer / supervising sound editor (8 episodes, 2019 – 2021) Nick Fry at Picture Shop
My Life as a Rolling Stone, Mercury Studios for BBC
Audio mix by Ben Newth and Nick Ashe at Clear Cut Pictures
DNA Journey, Voltage TV for ITV
Dubbing mixer Kate Davis at Directors Cut Films
Jungle, Nothing Lost for Amazon Prime
Dubbing mix by Rich Simpson at HIJACK