Researchers at Microsoft have developed a technique for converting first-person videos, such as those captured with GoPros or cyclists' helmet cams, into smooth timelapse footage.
The Microsoft Hyperlapse software, which the software giant plans to release as a downloadable app, analyses video content before increasing the speed and adding new frames to smooth out camera jumps.
The app could be useful to producers looking for a quick way to create smooth timelapse footage from first-person video.
An algorithm eliminates the erratic camera shake that tends to be present in footage from always-on cameras such as those made by GoPro, which are increasingly popular in TV production.
The miniature cameras are very simple to use but can suffer from camera shake and changing lighting conditions.
Traditional stabilisation methods and simple frame sub-sampling don't work well with first-person videos, because the shakiness is exacerbated when the footage is sped up.
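To see why, consider a rough numerical sketch (not taken from Microsoft's research): it models camera shake as a roughly 2Hz head bob, with the frequency and amplitude invented for illustration, and compares the frame-to-frame movement of the original footage with that of a naive 10x sub-sample played back at the same frame rate.

```python
# Minimal illustration of why naive sub-sampling looks shakier: a low-frequency
# head bob moves only slightly between adjacent frames, but keeping every 10th
# frame turns it into a near full-amplitude lurch between consecutive output
# frames, still played back at the original frame rate.
import numpy as np

fps = 30.0
t = np.arange(int(fps * 60)) / fps           # one minute of footage at 30 fps
bob = 0.5 * np.sin(2 * np.pi * 2.0 * t)      # ~2 Hz head bob, 0.5-unit amplitude

def mean_step(path):
    """Mean displacement between consecutive displayed frames."""
    return np.abs(np.diff(path)).mean()

print("shake per displayed frame, original footage :", round(mean_step(bob), 3))
print("shake per displayed frame, every 10th frame :", round(mean_step(bob[::10]), 3))
# Both versions play at 30 fps, so the larger per-frame shake of the
# sub-sampled version appears as a far more violent-looking image.
```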
Many camera manufacturers, such as Sony with SteadyShot, build image stabilisation into their camcorders, and professional editing systems also offer stabilisation tools.
The Microsoft Research team worked on a system that reconstructs the journey, plots a new, virtual camera path, and renders the output video from the input footage.
The research team, which includes Johannes Kopf, Michael Cohen, and Richard Szeliski, said: "There are three key parts to the process. The first is scene reconstruction, which involves developing a 3D model of an environment based on the captured frames.
"Once the model has been built, the software will create an optimised path for the camera.
"Finally, the image is rendered at ten times the original speed using stitching and blending of selected frames from the original footage.
"As the prevalence of first-person video grows, we expect to see a greater demand for creating informative summaries from the typically long video captures.
"Our hyperlapse work is just one step forward. As better semantic understanding of the scene becomes available, either through improved recognition algorithms or through user input, we hope to incorporate such information, both to adjust the speed along the smoothed path, the camera orientation, or perhaps to simply jump over uninformative sections of the input."
David Wood