You don’t have to be a particularly devoted follower of production technology to have been bombarded by talk of all things 4K of late. Here's why you should care...
It seems everybody is going on about 4K at the moment – but what exactly is it? Well, the basics are straightforward enough – it’s an image with a resolution of 4,096 x 2,160, which equates to 8.8 million pixels. This is over four times the 2 million pixels of 1,920 x 1,080 HD resolution.
This is, of course, just numbers, and means nothing if it doesn’t translate to a much better-looking image on screen. Fortunately, it does – 4K is a far more immersive experience than stereoscopic 3D and, while 4K acquisition makes perfect sense for a huge cinema screen, the difference in clarity is also hugely noticeable on a more modestly sized 4K TV screen.
Consumer TV sets
Back to some numbers again. Consumer 4K (or, rather, Ultra HD) TV sets have a resolution of 3,840 x 2,160, which is slightly lower than the image size captured by 4K cameras and equates to around 8.3 million pixels. This reduction in resolution makes very little difference to the overall impact of the image.
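These pixel counts are easy to verify with some back-of-envelope arithmetic; a quick sketch in Python:

```python
def megapixels(width, height):
    """Total pixel count of a frame, in millions."""
    return width * height / 1e6

dci_4k = megapixels(4096, 2160)    # cinema 4K
ultra_hd = megapixels(3840, 2160)  # consumer Ultra HD
full_hd = megapixels(1920, 1080)   # 1080p HD

print(round(dci_4k, 1))            # 8.8
print(round(ultra_hd, 1))          # 8.3
print(round(full_hd, 1))           # 2.1
print(round(dci_4k / full_hd, 1))  # 4.3 – "over four times" HD
```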
At IBC last year, Sony was amongst a number of manufacturers to showcase a 4K consumer TV set. Its forthcoming (and very expensive) £16.5k 84” consumer 4K TV – the XBR-84X900 (pictured above) – displayed a looped reel of eye-catching 4K footage that was enough to convince even the most sceptical viewer of the merits of such a high resolution TV display. There’s every reason to believe consumers will be equally enthusiastic, despite having only recently invested in HD screens and, even more recently, been prematurely sold the dream of a stereoscopic 3D future.
In order to ease the transition into the consumer market, the dry, techie-sounding moniker of 4K has been binned in favour of Ultra HD. It’s now a waiting game to see how long it takes for Ultra HD displays to come down to a sensible price. And then there’s the thorny issue of how to get 4K images into the home in the first place.
Currently, the reasons for investing in an Ultra HD set are less than compelling – it’s super-expensive and there’s almost nothing available to watch. No Ultra HD players exist and there’s no means of transmitting Ultra HD content to the home.
Why go Ultra HD?
With the barriers to equipping homes with Ultra HD prohibitively high, is there any point in TV producers bothering themselves with 4K at the moment?
In short, yes, particularly for any TV production with a reasonably lengthy shelf life. By shooting and posting in 4K now, you’ll future-proof your production for the next generation of screens and playback devices.
DP Nic Morris BSC, who has worked on a long list of top-end dramas, including Being Human, Spooks, Hustle and Robin Hood, is an early adopter of 4K and strongly recommends going down the 4K route when the production merits it. “That’s exactly what I’m doing at the moment,” he says. “We’re shooting in 4K and doing the post in 4K, then at the final delivery it gets down-ressed to 2K and HD. It’s all about future proofing.”
“The advice I give clients depends on the project – if it has a shelf life, my argument is you must shoot in 4K/Ultra HD. On other projects it’s just not relevant – if your revenue stream stops 18 months down the line there’s probably no point in going 4K.”
Whatever the production, though, there could still be a good reason to opt for a 4K camera, capturing in 4K/Ultra HD and delivering in HD, as there’s a tangible increase in quality when HD images are derived from 4K material.
“Back in the early days of HD, shooting in HD and down converting to SD looked much better than material acquired in SD. The same applies here – if you shoot 4K and down-res, you’re likely to notice an increase in quality,” says Morris.
Natural history producer/director Mark Linfield, who’s just finished shooting a number of docs for Disney, agrees: “The 4K resolution filters down to the HD images – you can see the texture, fur and detail you’re not used to seeing in HD.”
James Tonkin, director and founder of commercials and promos producer Hangman Studios, is another to aspire to capturing images at the best resolution possible: “I like the idea of acquiring at the greatest resolution you can, regardless of the resolution you’ll be delivering in. I’m obsessed with image fidelity and seeing as much information as possible. The more pixels you have the more you’re getting an image closer to real life.”
“The difference is still there in the image, and while I’m not anticipating any jobs next year that will require 4K deliverables, if you’re using a 4K camera, there’s no point in capturing anything less than the top resolution.”
Similarly, prominent cinematographer Geoff Boyle FBKS, who has a good deal of experience shooting with 4K cameras, says: “You would probably understand why I would shoot at 4K for a large screen, but why would I prefer to shoot at 4K for an HD finish? It’s really simple – reducing the image down from 4K to HD makes the HD image look a lot sharper, the noise level is reduced, and the overall look is just so much smoother and richer.”
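Boyle’s point about noise can be illustrated numerically: downscaling averages neighbouring pixels, and averaging uncorrelated noise reduces it. A toy sketch – a flat grey frame with synthetic sensor noise and a naive 2×2 box filter, not a production-quality scaler:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "4K-ish" flat grey frame with additive sensor noise.
frame = 0.5 + rng.normal(0.0, 0.05, size=(2160, 3840))

# Naive 2x2 box-filter downscale to HD size: each output pixel is the
# mean of four input pixels, so uncorrelated noise drops by half.
hd = frame.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

print(frame.std())  # ~0.05 – noise level of the original
print(hd.std())     # ~0.025 – roughly halved after downscaling
```

Real scalers use better filter kernels than a plain box average, but the principle is the same: more captured pixels per delivered pixel means less visible noise.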
A waiting game
While there’s widespread acknowledgement that 4K/Ultra HD is almost certainly going to be the next big TV revolution, it’s still uncertain how soon this revolution will take place. Consumers are currently being pushed, somewhat reluctantly, towards 3D screens, so introducing yet another new technology any time soon could really test their patience.
There’s also the small matter of working out how to transmit Ultra HD images to the home easily and affordably. Even so, Sony’s Head of AV Media Olivier Bovis believes it still won’t take long for 4K/Ultra HD to become established: “I think 4K will come much sooner than people expect,” he says. “Consumers can already buy a 4K projector or our 84” Bravia screen, and Sony has already collaborated with SES Astra to demonstrate 4K transmission to the home.”
“There’s a couple of bricks which still need to be fine-tuned and the implementation of a new generation codec for transmission is necessary to achieve 4K transmission at an affordable cost,” adds Bovis. “But we are getting really close now and very few things are missing to allow that. Once people start to discover the beauty of 4K interest in it will spread much quicker than anticipated.”
If the take-up of Ultra HD as a consumer format does turn out to be considerably quicker than might have been initially anticipated, there will be a clamour for content, from features to mainstream broadcast productions, that’s been shot and posted in 4K/Ultra HD. So, can we expect production companies to start shooting 4K en masse this year?
Perhaps. The cost of 4K cameras is already comparable to high-end HD models, so there’s only a small premium to pay to get hold of appropriate shooting kit. But it eats up storage space and usually requires an external recorder to store the images, which all adds to the cost. Added to this, there are only a limited number of post houses with the infrastructure in place to handle unwieldy 4K images. So there are cost implications and practical reasons why you may not be in a rush to shoot in 4K just yet.
“It means you can’t shoot and shoot because the size of the image creates much more data. You can only do about four to five takes,” says film director Ben Elia. “But that’s how films used to be made and it’s often the best way. It requires filmmakers to be more precise.”
“I really do hope 4K becomes the standard and the sooner the better,” says Tonkin. “It does put a lot more demand on post workflows but everything will scale accordingly. There are pretty big implications to the pipeline and infrastructure so it won’t happen overnight.”
These technical considerations aren’t going to stop a large number of productions going Ultra HD over the coming years, believes Morris: “In two to three years, virtually everything will be shot in 4K. Storage is becoming cheaper all the time so it’ll become increasingly sensible to originate in 4K/Ultra HD. The choice to shoot HD when you have a 4K camera will make less and less sense as the cost difference to originate at a higher level becomes negligible.”
“The stumbling block until very recently has been the lack of 4K monitors,” he adds. “They are now coming out [one is pictured above] and that’s a real breakthrough, enabling a 4K workflow all the way through, including monitoring. I’ve previously been using a very good 2K monitor to check images on location – it has a button you press to view a section of the 4K image on the 2K screen, but it’s all a bit clumsy.”
Well-respected US-based DP Jon Fauer ASC is also adamant the 4K takeover is imminent: “As an acquisition format, I think it’s inevitable,” he says. “I think it’s just a natural evolution and will certainly catch its stride very soon.”
Once it becomes more commonplace as an acquisition format, there’s almost inevitably going to be a repeat of the move from SD to HD, with actors concerned about how they will appear in the full glare of crystal clear, super high res images. “It’s got the same implications for the make up department that HD presented when it was new,” says Elia. “Make up has to be perfect and, in general, you have to be precise working with 4K in artificial light as it gives you so much detail.”
There are a growing number of 4K cameras available, after numerous models were announced at NAB and IBC last year. The size of the images created by these cameras produces a huge amount of 4K raw data, so an external 4K recorder box – such as the AJA Ki Pro Quad 4K, Convergent Design Gemini RAW or Codex Vault – is invariably part of the shooting package.
However, Sony has created a new codec – XAVC – which handles 4K images, compressing them to a size that can be stored internally on its soon-to-be-released PMW-F55 4K camera (pictured, above right).
Canon’s recently launched 4K model, the EOS-C500, has the same body as its popular C300 camera and outputs 4K raw uncompressed footage to an external recording device. As well as capturing 4K and 2K images, it can work as a full HD camera in the same way as the C300.
Red’s well-established Epic (pictured, above left) is another popular model for capturing 4K images – it even goes beyond 4K, being able to capture 5K images, and is well suited to shooting super slow-mo at high res.
There’s also an inexpensive 4K model available from JVC – the £5k GY-HMQ10 – which records 4K through four separate video streams that are combined using free software to create an editable 4K image.
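Assuming the four streams map onto the four quadrants of the Ultra HD frame – an illustrative simplification of what the JVC software actually does – reassembly is just tiling. A minimal sketch with flat dummy frames standing in for decoded video:

```python
import numpy as np

# Four dummy 1920x1080 RGB "streams", one per quadrant, filled with a
# marker value so we can see where each one lands in the final frame.
tl = np.full((1080, 1920, 3), 0, dtype=np.uint8)  # top-left
tr = np.full((1080, 1920, 3), 1, dtype=np.uint8)  # top-right
bl = np.full((1080, 1920, 3), 2, dtype=np.uint8)  # bottom-left
br = np.full((1080, 1920, 3), 3, dtype=np.uint8)  # bottom-right

# Stitch rows of quadrants, then stack the rows into one 3840x2160 frame.
top = np.concatenate([tl, tr], axis=1)
bottom = np.concatenate([bl, br], axis=1)
frame_4k = np.concatenate([top, bottom], axis=0)

print(frame_4k.shape)  # (2160, 3840, 3)
```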
Other 4K cameras include FOR-A’s super slow-mo FT-ONE, which is built around a CMOS sensor unique to FOR-A and uses memory cartridges that each store 75 seconds of 1,000fps footage. Panasonic, meanwhile, has created a concept 4K Varicam model and a new AVC Ultra 444 codec specifically for handling 4K footage. The camera is likely to see the light of day later this year.
MAIN PICTURE CAPTION: Grace’s Story, shot by Geoff Boyle FBKS in 4K on a Canon EOS-C500
30 pairs of GoPro HD Hero 2 cameras have been sent to the edge of space – 100,000 feet into the air – to capture the Northern Lights for Project Aether, a programme designed to inspire the next generation of scientists, engineers and explorers.
The cameras were sent into the air from Alaska after being strapped to 30 high-altitude balloon rigs. Specially modified planes equipped with skis were used to land on remote glaciers, while dogsleds, snowmobiles, snowshoes and helicopters were used to track and retrieve the balloons.
422 South has created a series of 70 visually striking animations visualising data such as the path taken by pizza delivery cyclists on a single New York evening for Lion TV's 4x60-minute series America Revealed.
It follows on from 422 South's similar animated visualisations work on Lion's Britain From Above, which TX'ed on the BBC last year. However, says 422 South's creative director Andy Davies, "Everything in America is so much bigger than in the UK – the geography, the distances, the sheer numbers involved."
"As a result, 422 South’s data team received millions of lines of data from diverse sources. The information was first translated into a suitable format for animation, then combined with satellite imagery to build sequences that reveal a unique view of American life."
The imagery created by 422 South includes the frantic trajectory of commuter traffic – air, road, rail and ferry – into Manhattan every morning, the path taken by a pizza delivery cyclist on a single New York evening, and the flow, swirls and eddies of the continent’s wind as recorded by hundreds of weather stations.
The animated visualisations were created using specially developed in-house software as well as Maya and Nuke.
America Revealed will be broadcast on PBS UK (through Sky and Virgin) from 20th June.
UK-based colourist Jason R Moffat on how he graded Nigerian comedy feature Phone Swap to "look beautiful and high quality and look in place with modern western cinema".
Phone Swap, a comedy feature, came to my studio late in December 2012 through director Kunle Afolayan, Nigeria’s rising star of the Nollywood film industry, who initially flew over for a meeting to discuss the project. The brief was seemingly simple: “It needs to look beautiful and high quality, and look in place with modern western cinema”. There were two main spaces in the film, the City and the Rural areas, each with its own feel. We did some test grades during this first session, primarily deciding what film profiles we’d be using on the ‘Red FilmLog’ footage. The director really wanted this film to challenge the generally very poor image quality which has come out of the Nigerian film industry over the last decade.
Were there any particular challenges you had to overcome?
The grade schedule on Phone Swap was quite intense: I had two weeks in two separate sessions while the director was in the UK, and a couple of short sessions remotely. We ingested the Red Raw footage into the grading system, DaVinci Resolve, produced a DPX final conform of the picture and got going. There was some quite beautiful production design on the key sets and some impressive crane and steadicam usage; however, the budget was tight, so uncontrolled public spaces were also used in the mix, and one of the challenges in the grade was to marry the set-pieces with the public spaces. The ability to use Parallel, Serial and Layer nodes all in one workspace was a great advantage on some particularly problematic scenes. Another challenge was a set of editorial changes after the first week of grading.
Can you briefly outline your workflow?
On this film I used a simple DPX-to-DPX collaborative workflow, which meant I was able to grade the same DPX files the VFX team were using. These shots were simply updated as the VFX were completed, without necessarily needing to be re-graded, which effectively eliminates the QuickTime gamma nightmares that can plague a colourist’s day where VFX are involved. Additionally, custom LUTs based on film stock helped give the film a more filmic colourimetry and made it easier to control shadow and highlight roll-off. Once the grade was done, we rendered DPX and QuickTime streams for mastering here in London and in Nigeria for the various versions of the film.
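Applying a LUT to log footage is, at its simplest, a table lookup on each code value. A minimal sketch in Python – the gamma-style curve below is invented for illustration and is not one of the film-stock LUTs Moffat used (which would typically be measured curves, often 3D):

```python
import numpy as np

# Toy 10-bit 1D LUT: a gamma-style lift standing in for a film-print curve.
code_values = np.arange(1024)
lut = (1023 * (code_values / 1023) ** (1 / 1.8)).astype(np.uint16)

# A tiny stand-in for a 10-bit log "frame"; grading with a 1D LUT
# is just one array lookup per pixel.
frame = np.random.default_rng(1).integers(0, 1024, size=(4, 4), dtype=np.uint16)
graded = lut[frame]
```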
What features on DaVinci Resolve did you find particularly helpful?
Being able to use multiple LUTs in one project is something I use a lot in my work, which enables me to mix Linear and LOG footage without any pre-processing. Also, the use of multiple tracks allowed us to preview variations on VFX passes as well as adding scanned 35mm grain to give some of the scenes more grit. The use of an alpha selection on the grain, which was overlaid on the shot footage, allowed me to control how much grain is present at any one time, all real time, with sound, which is very impressive. More and more it’s becoming standard to ‘have it all’ during a grade, which Resolve delivers: sound and graded picture in realtime.
Director: Kunle Afolayan
Director of Photography: Yinka Edward
Colourist: Jason R Moffat
London’s leading vfx houses Double Negative, Cinesite and MPC have spent the last two and a half years working on the vfx for Walt Disney’s recent release, the sci-fi 3d action adventure John Carter.
Directed by Andrew Stanton (Finding Nemo, Wall-E), John Carter is perhaps the biggest vfx-heavy feature film so far to have chosen London for its effects work. It features vfx on a similar scale to Avatar.
The film is set on an imagined version of Mars, with the action taking place in two ‘city states’ – the beautiful Helium, which has a large glass palace in the middle, and the mile-long rusty metal tanker Zodanga, which crawls slowly around the Mars landscape.
Cinesite’s key task was creating these cities and their extensive environments, which amounted to over 830 vfx shots. The company also handled the 2d to 3d conversion of the movie.
Meanwhile, Double Negative created and animated 12-foot tall barbarian creatures called Tharks, along with other creatures that inhabit the planet, and worked on over 1,900 shots for the film. MPC also handled a proportion of the wide-ranging vfx work.
A team of up to 310 people worked on the film at Cinesite, led by vfx supervisor Sue Rowe, who also attended the studio shoots – studio locations included Pinewood, Shepperton, Long Cross and an ex-Woolworths warehouse – and went on location for the duration of the shoot in Utah for three months last year.
“Zodanga, the bad guy’s city, was based on brutalist architecture, while the city of Helium is beautiful and elegant,” says Rowe. “[Concept designer] Ryan Church did lots of concept images for the city and environment, which gave us a really good starting point.”
“With Helium, to take it from concept to the build in cg, we needed to be true to the scale and materials the environment was built in, and we needed to put in a great level of detail. We had 300 people involved overall over about two and a half years. While we were shooting we were also busy preparing the environments.”
Part of Cinesite’s work involved a battle sequence with two intricately detailed airships: “We had to turn Ryan’s concept drawings for the airships into photo-real cg models – the glass and the cracked surface of the ship were probably the most challenging aspects,” says Rowe. “And the environment we shot in – in Utah – had a very fine red dust, so the ship needed this too.”
“The airships travel on light, so we gave them solar panelled wings, and worked on a shader that gives off different colours (gold to blue and purple) depending on how it hits the light; like the scales of a fish.”
For Double Negative, animation supervisor Steve Aplin says: “It was a huge undertaking for us, with many different characters, including runts (baby Tharks), full-size Tharks, Thoats (a creature with eight legs and a broad, flat tail) and Woola, the side-kick dog.”
“The principal race we were dealing with was the Tharks. We not only had hero action performances with the Tharks, but also shots where there were thousands of them on screen at any one time,” says Aplin. “For the background Tharks, we created 800 animation cycles, dropping them in and switching them out. The closer the Tharks were to the camera the more involved we were with the animation. For a mid-range character we would drop in the cycles by hand. And for the ones close to the camera we used motion capture to give them very detailed animated facial expressions.”
“We used a stereo camera to capture the facial details of the actors playing the Tharks, and tracked the left and right images and transposed them onto the cg Thark faces. The 3d mesh of the actors’ face gave a very natural feeling result,” says Aplin. “The difference between an animated feature and vfx is, in vfx there’s a live action character next to a cg character, so the cg character has to have the same level of fidelity in its face. So you have to capture very subtle movements.”
When it came to representing the Thoat characters on set, “We had to figure out a big contraption to replicate what they would look like,” explains Aplin. “We did test cycles, which gave us the measurements for the creatures, and then Chris Corbould of the special effects team created a vehicle with a skeleton on top of a saddle, which was given inputs for the creatures’ movements, derived from our animation. We got a pretty similar motion to what we were after using this.”
One of the biggest challenges for the production team on ITV1 drama Kidnap and Ransom was turning Cape Town into an authentic looking Kashmir. Here's how it was done.
The second series of ITV1 drama Kidnap and Ransom is set in Srinagar, Kashmir, and centres on the hijacking of a tourist bus that crashes into a busy Kashmir market square. All the colours, vibrancy and architectural details are exactly as you would associate with India, but, due to budget restrictions, the series was almost entirely filmed in Cape Town, South Africa.
It was down to the production team, and in particular the efforts of production designer Robert Van De Coolwyk, to turn Cape Town into Kashmir – in a limited timescale and with a limited budget.
“We did a lot of research on what the place should look like, and the key thing we had to do was concentrate on signage and colour and dressing things up,” says Van De Coolwyk. “You have to go with what you find and adapt it to suit. Indian buses are quite specific in their colour and decoration, so we gave the bus a blue exterior and things like headdress covers.”
The buildings in the Cape Town market square required less work than might be anticipated to make them appear like they were in Kashmir, as Van De Coolwyk explains: “India has a lot of English colonial architecture, which is the same in South Africa. We put shutters onto certain buildings and added lattice work, and put on colours to more closely match the architecture between India and South Africa.”
Van De Coolwyk had five weeks of prepping in South Africa prior to the shoot to try and get everything looking right. In this time, exec producer Rachel Gesua went over to do a technical recce and take the writer around the square to see how they could make it work.
“I was a little scared at the start about how we could make it work,” says Gesua. “The maths didn’t work out to shoot in India and turning Cape Town into India was going to be challenging as British audiences have a good idea of what India should look like.”
But once Van De Coolwyk began adapting Cape Town into Kashmir, Gesua’s mind was put at rest: “The impression of India relating to the colours and textures was spot on, and the lattice gates and fabric were exactly right,” she says. “Robert did a remarkable job; it’s incredibly authentic. In the end I never felt we were compromising very much. The whole attention to detail of the art department is really impressive – the shop and street signs are incredibly accurate.”
The square where the bulk of the action happens has a train station that continued to be in constant use throughout the 10 days of the shoot. So the production team had to be creative in positioning the bus in such a way that masked a lot of the background activity.
“We had to be clever in the way the bus was positioned and only shoot it in certain directions, avoiding shooting behind where the station was,” says Gesua. “Towards the east and west of the square, Robert created these fantastic big walls plastered with Indian posters and signage, and this obscured most of the station and pedestrian traffic.”
“We had to do some ADR to cover the loudspeaker announcements at the railway station,” adds Gesua. “The rest of the post production work has mostly just been dropping stuff onto TV screens, erasing Cape Town phone numbers and things like that.”
Beyond the major hurdle of getting everything to look like India, Gesua says this series of Kidnap and Ransom was always going to be a difficult production to get right: “It was a huge challenge to shoot something such as this. It was a very tight schedule, shooting 16 actors on a bus from their point-of-view, and including stunts, car chases and crashes would have been challenging wherever you were.”
And this is where being in South Africa finally worked to their advantage: “Shooting there was very easy. We had a great extras coordinator, which was another area that could have given the game away, and the costume and make up was spot on,” says Gesua. “Crews over there are very conscientious and quick – things take half a day in South Africa that would take three or four days in the UK. It’s easy to get permissions – it’s a lot more ‘can do’.”
2011 was a reasonable year for the UK’s post industry. Only one big player fell by the wayside – Pepper – and the Ascent brand was lost after its takeover by Deluxe, but on the whole it was business as usual. A lot of the focus has been on ensuring post houses can handle and offer expert support for new file-based workflows for the cameras that have taken over production over the last year.
However, with the economy on the verge of another collapse, production budgets continuing to recede and significant ongoing investment still required to keep facilities up to date and able to cope with the ever increasing image sizes being pumped out by the next generation of cameras, will things remain relatively rosy this year?
“2012 will be a year of consolidation for post houses – the rapid transition to tapeless workflows has begun to settle down and though these new formats have changed the dynamics of client and supplier relationships, they have ultimately led to creative benefits for both parties,” believes Rowan Bray, md, Prime Focus.
“Post houses have had to respond by quickly upskilling their teams, changing infrastructure, and developing new workflows to allow for increasingly complex acquisition choices,” she adds. “In 2012, the ongoing need for highly skilled workflow managers will be just as prevalent. By rights, budgets need to reflect all the changes that have occurred but it will continue to be a challenge to receive budgets that match the requirements of most programmes.”
Bray sees the ongoing investment necessitated by file-based workflows as not an entirely negative thing: “Facilities will continually be required to update their storage infrastructure and technical training to meet client needs. But this has, perhaps for the first time, created a new differentiator between the established post houses and the one-man bands.”
Meanwhile, Envy’s md Dave Cadle remains optimistic about the immediate future: “The post landscape is very different to two to three years ago, which is exciting for all of us. We are going to be looking for more space as the demand for more facilities is very high.”
A marker-less desktop motion capture system was used by UK-based filmmaker/animator Ian Chisholm to create his machinima (a film made using the graphics engines from video games) epic Clear Skies III.
Chisholm, who works in IT and describes himself as “just some ordinary Joe without any background or training in film”, learnt the skills required to make his films as he went along.
He used iPi Soft’s entry-level mo cap system to create character animations, which he then applied to the graphics engines of some well-known computer games.
iPi Soft's system accurately captures human motion data using inexpensive, off-the-shelf cameras and doesn’t require sensor suits or green screen stages.
''I started the Clear Skies series about six years ago,'' says Chisholm. ''I'd just started doing some basic video work when I discovered I could use blue screening to composite video footage together.”
“I'd always wanted to tell a full story, and by using the Eve Online graphics engine for exterior space and ship shots, and the Half Life 2 engine for interior sets and characters, I managed to achieve that.”
It took Chisholm over two years to make the first instalment of Clear Skies: “I learnt everything in the Half Life 2 development kit, wrote my first script, built the sets, and shot and created the film itself,” he says.
''I continually challenge myself technically and creatively, and Clear Skies III is the culmination of what I learned producing the previous two films,” he adds.
“Practically every line of dialogue and every movement was motion captured using iPi Soft. Not only was this fun, but it also raised the bar on the performances I could deliver using the Half Life 2 characters – I could add more personality and dimension to the characters, rather than be limited to the built-in gestures that come with the game.”
The mo cap system also made it possible for Chisholm to capture whole-body motion, walking around in a small area and interacting with items. It allowed him to create “action sequences and dramatic moments – gunfights, fistfights, character interaction – that wouldn’t have been remotely possible without it,” he believes.