
Blogs & comment

Future TV tech predictions from the 50s and 60s

Here's a link to an interesting little blog that looks at some predictions from 1957 through to the mid-60s about how technology might evolve over the next few decades.

It includes a few TV-related predictions, including the 'electronic home library' (pictured below), centred on the ability to record TV programmes to watch back if you were out when they aired, along with 3D TV and other wonderfully futuristic developments that might eventually become reality.



There's also the prediction, from 1957, that one day we'd be able to make face-to-face phone calls, including a handy picture (see below) to show what it might look like.



It's interesting to see how the likes of Skype, Sky+ boxes and smart TVs, which we now take for granted, were envisaged 40-50 years before they became commonplace.

Click here for the full article.

Posted 24 June 2014 by Jake Bickerton

How to do live subtitling

As you might imagine, subtitling live shows and live news channels isn’t without its challenges. To get it right requires a dedicated, professional, skilled approach. To find out how it’s done, what the typical pitfalls of live subtitling are and how to resolve them, check out this informative article from Red Bee Media’s IT Manager, Access Services, Hewson Maxwell.

 

Translating what’s said on live television into readable text – it sounds simple enough. But take a moment to consider the challenges of providing live subtitling across multiple channels, with breaking news, regional broadcasts and over-running sporting contests. Then add to that mix the understandably high standards of timeliness and accuracy expected by viewers, lobby groups and Ofcom alike, and it starts to seem rather more complex.


We, and the industry as a whole, are always looking at ways to address the challenges we face with live subtitling through ongoing investment in innovation and technology.

 

Volume of work

A huge challenge of live subtitling comes from its sheer scale: news channels broadcast live 24 hours a day, there are over 150 hours of live output daily, and a workforce of subtitlers is spread across many continents, with many working from home.

 

Ensuring accuracy

Most truly live subtitling is generated using voice recognition software, most commonly Dragon NaturallySpeaking. There are huge benefits to this approach. When coupled with the re-speaking method, whereby a subtitler repeats everything spoken on screen in a clear and level voice, adding punctuation and colour commands, the recognition and output are broadly excellent.

Furthermore, good speech engines are easy to use, so it is relatively easy to recruit and train people to subtitle well.  
 

However, historically, there have been some downsides. The best voice recognition engines avoid releasing text until they receive enough context to be sure of what is being said. This can lead to a significant gap in output on air, followed by the instantaneous release of a large chunk of text. Slowing this glut of text down to a readable speed, as most software does, leads to de-synchronisation of subtitles and video, and the need to edit or omit sections to catch up.
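To see why this trade-off hurts, here is a toy back-of-the-envelope sketch (not anything from Red Bee's actual systems): if the engine holds back a 30-word chunk and subtitles are then paced at an assumed readable 180 words per minute (3 words per second), the last word of the chunk airs well behind the speech it transcribes.

```python
# Toy illustration of the de-synchronisation problem described above.
# All figures are assumptions for illustration, not real system values.

READING_RATE_WPS = 3  # assumed readable pace: 180 wpm = 3 words/second

def display_lag(chunk_words: int, recognition_delay_s: float) -> float:
    """Seconds between the original speech and the chunk's last word on screen.

    recognition_delay_s: time the engine held the text back waiting for context.
    chunk_words: size of the glut released all at once, then paced for reading.
    """
    return recognition_delay_s + chunk_words / READING_RATE_WPS

print(display_lag(30, 2.0))  # -> 12.0 seconds behind the audio
```

Hence the need, noted above, to edit or omit sections to catch up: the lag only grows unless some of the backlog is dropped.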


Regardless of the amount of context, voice recognition software will always struggle to understand some of the time, particularly with unusual terminology or names – the kind of vocabulary commonplace in live news. Traditional subtitling software uses housestyles to automatically correct common typographical errors, replacing “empire state-building” with “Empire State Building”, as well as the program’s comprehension errors, such as turning “Nicholas so cosy” into “Nicolas Sarkozy”.
 

Subtitlers also define macros for alternate forms of sound-alike words, so they can tell viewers whether it is chilly out, or they’re making a chilli, or the sports event is happening in Chile.  But macros and housestyles are set up prior to going on air, and are therefore of little use when a new name emerges in a breaking news story.
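At its core, a housestyle is just a lookup table of known mis-recognitions applied to recognised text before it goes to air. A minimal sketch of the idea, using the examples from the article (the `HOUSESTYLE` table and `apply_housestyle` function are hypothetical names, not Red Bee's actual software):

```python
import re

# Hypothetical housestyle table: known mis-recognitions -> corrections.
HOUSESTYLE = {
    "empire state-building": "Empire State Building",
    "nicholas so cosy": "Nicolas Sarkozy",
}

def apply_housestyle(text: str) -> str:
    """Replace known recognition errors before the subtitle goes to air."""
    for wrong, right in HOUSESTYLE.items():
        # Case-insensitive match so the rule fires however the engine cased it.
        text = re.sub(re.escape(wrong), right, text, flags=re.IGNORECASE)
    return text

print(apply_housestyle("pm met nicholas so cosy"))  # -> pm met Nicolas Sarkozy
```

This also makes the breaking-news limitation concrete: a name that isn't already a key in the table passes through uncorrected, which is why rules set up before air are of little use when a new name emerges mid-broadcast.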

 

Innovation in subtitling

Over the last three years or so, we’ve been looking at how we can improve the quality of live subtitling. We believe technology and innovation will be key drivers in achieving this goal, and as such we have been investing heavily in building a bespoke platform and software that we believe will help address some of the challenges listed above. Unique features, such as the ability to integrate with broadcaster schedules and a re-speaking interface designed to be the fastest on the market, are just some of the improvements we’ve been focused on.
 

Live subtitling will continue to be a complicated, imperfect and expensive process for some time to come, but with our new software, Subito, we believe we’ll be able to deliver greater quality to the audience, and begin to add extra utility and lower costs for broadcasters.  We know there will always be more that can be done, and we remain committed to both integrating current technological advancements and driving the next set forward.  We will continue to do so until we reach a day when live subtitles are barely distinguishable from prepared ones.

 

Posted 20 June 2014 by Jake Bickerton

Televisual Media UK Ltd 23 Golden Square, London, W1F 9JP
©2009 - 2017 Televisual. All rights reserved