Sponsored by Adobe Premiere Pro

With its latest application releases, Adobe Premiere Pro offers a unique suite of features and capabilities that sets it apart from any other non-linear editing application. Adobe has significantly updated Premiere Pro to enrich and empower the editor’s experience. Editors can work faster, complete a broader range of post-production tasks and achieve enhanced creative outcomes directly within the software.

Combined, the latest developments represent a watershed moment in the range of what can be achieved within a non-linear editing application, and the speed with which it can be executed, whether the editor is working in isolation or alongside other editors in a multi-seat context.

Adobe’s Senior Vice President, Digital Media, Ashley Still said: “The latest innovations across Premiere Pro, After Effects and Frame.io will empower video professionals to do their best work more quickly, efficiently and beautifully.”


Some of the high-profile shows and films recently edited on Adobe Premiere Pro

The Sum of the Parts

Before outlining how super-efficient Adobe Premiere Pro has become for core day-to-day dialogue editing, as well as the exceptional Enhance Speech feature and localisation work, here’s a quick run-down of just some of the latest upgrades:
• With Generative Extend, editors can extend clips on the timeline with up to two seconds of additional photorealistic frames and 10 seconds of audio to overcome common editing challenges and so better serve the overall narrative (in beta).
• Introduced at the Adobe MAX conference, the Firefly Video Model allows editors to work with commercially safe-to-use image-to-video and text-to-video generative AI content (also in beta).
• A fresh, more intuitive user interface design, including a Properties Panel that lets editors tailor the UI to the project at hand, so the most-used controls and shortcuts are immediately apparent and available.
• Faster performance, with hardware-optimised AVC and HEVC playback and 3x faster ProRes rendering when exporting files.
• More RAW formats and native file support for Sony, Arri, Red and Canon cameras without the need for labour-intensive third-party encoding.
• Log format support including Arri, Sony, Red, Canon, Panasonic, Fujifilm, Nikon, Leica, DJI and GoPro.
• A new Premiere Colour Management feature, currently in beta, which provides a simplified and elegant automated colour workflow with a choice of six presets for Rec. 709 (SDR) and seamless switching to HDR (both PQ and HLG) with tone mapping for even more creative control.
• Better After Effects colour and contrast integration with Premiere Pro thanks to Dynamic Link.
• A new built-in Crop effect with on-screen controls, enabling the editor to adjust the aspect ratios of multiple clips across a timeline with a single instruction.

Generative Extend and the Firefly Video Model have received a lot of positive press and widespread industry attention, not least for offering what many consider to be the first practical, commercially safe generative AI use cases within the market. You can read more about these developments in a recent post on Televisual here.

(https://www.televisual.com/news/explore-the-new-adobe-firefly-video-model-and-generative-extend/)


Adobe’s new Premiere Pro user interface in dark and light mode

Text-Based Editing

Since first showing its AI-powered Text-Based Editing feature for Premiere Pro in early 2023, Adobe has delivered a series of upgrades driven by a combination of user feedback and ‘assistive AI’ integration.

Many producers and directors, particularly those working in factual genres, prefer to create a paper edit from the recorded interview or dialogue. The chosen dialogue excerpts can be rearranged before being automatically synced with the video recording within Premiere Pro. Highlight the speech or interview text you want and Premiere Pro automatically creates a rough-cut edit, right down to switching between two or more camera files on a locked timeline.

Adobe followed this up with a filler word detection algorithm which automatically identifies and highlights “ums” and “ohs” (and other filler words you choose to omit) that you can then delete or finesse with a single click, even when using multi-channel audio files.

Anything up to 70% of a factual editor’s time can be taken up by dialogue editing. For most producers, directors and editors, dialogue editing is a necessary ‘process’ that is labour-intensive rather than craft. This elegant but simply executed automation frees up more time for considered alternative edits or for more creative work.

Adobe’s Text Based Editing in action, linking dialogue with video clips

Enhance Speech

This author first used what is now Enhance Speech within a Premiere Pro project in the summer of 2023. At the time it was an extension designed for cleaning up poorly recorded podcast audio. The interviewee had delivered some considered thoughts late of an evening, with pouring rain on canvas, voices off and even a bucket of bottles being emptied into a bin. To all intents and purposes, the interview was unusable. Audio enhancement was not new back then but, in this author’s previous attempts, it had not delivered usable results. The historical issue had been ‘over-processing’ that made voices sound unnatural, tinny or robotic, while certain background noises (like music or overlapping voices) were harder for algorithms to separate cleanly. In one pass with Adobe’s plug-in the voice was as clean as if it had been recorded in a professional environment and, critically to the ear of this author, the enhanced voice was entirely ‘authentic’: impossible to differentiate from other recorded interviews with the speaker, or from the voice within the noisy original audio.

Enhance Speech is now available within Premiere Pro to Enterprise customers (in beta) and includes the ability to batch process raw audio files, saving considerable time, while retaining and enhancing the timbre and warmth of the original voice, as if recorded within a controlled environment.

Automated Localisation

The past couple of years have seen new applications promising authentic-sounding synthetic dubbed voices and, separately, the ability to manipulate or animate lip movements in sync with the (translated) spoken word.

Adobe has brought these two elements together within the Firefly Video Model to create the just-announced Dubbing & Lip Sync API and UI.

Adobe’s Dubbing & Lip Sync API and UI (currently in beta for enterprise) is a game-changing tool that makes video content more immediately and practically available to a global audience by automating dubbing and lip synchronisation. Using generative AI, it translates dialogue into different languages while adjusting lip movements, tone, and cadence to closely match the original speaker, creating a realistic synchronisation that maintains the speaker’s voice and style.

This feature is available through Adobe’s Firefly suite and streamlines the process of producing high-quality, localised video content without the need for extensive manual dubbing adjustments. Studios and content creators can now reach new audiences at a fraction of the cost, saving the time spent commissioning localisation work from specialist providers.

Adobe’s dubbing and lip-sync technology aligns with its responsible AI principles to prevent misuse and to prioritise authenticity in media translation. (Please see the previous Adobe post here.)

(https://www.televisual.com/news/explore-the-new-adobe-firefly-video-model-and-generative-extend/)

Pippa Considine
