pixels and pictures
Exploring the digital imaging chain from sensors to brains
Scooped by Philippe J DEWOST

Nvidia’s AI creates amazing slow motion video by ‘hallucinating’ missing frames

Nvidia’s researchers developed an AI that converts standard videos into incredibly smooth slow motion.

The broad strokes: Capturing high-quality slow-motion footage normally requires specialty equipment, plenty of storage, and a camera set to the proper mode before you start shooting.

Slow-motion video is typically shot at around 240 frames per second (fps), that is, 240 individual images for every second of video. The more frames you capture per second, the smoother the footage looks when slowed down: footage shot at 240 fps and played back at 30 fps yields fluid 8x slow motion.


The impact: Anyone who has ever wished they could convert part of a regular video into a fluid slow motion clip can appreciate this.

If you’ve captured your footage in, for example, the standard smartphone video format (30 fps), slowing it down will produce something choppy and hard to watch: played at one-eighth speed, 30 fps footage offers fewer than four unique frames per second.

Nvidia’s AI estimates what the missing frames would look like and creates new ones to fill the gaps. It can take any two sequential frames and hallucinate an arbitrary number of intermediate frames to connect them, keeping the motion between them coherent.
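Nvidia’s system uses a trained neural network to estimate motion and occlusions end to end. As a rough illustration of the underlying idea only, here is a minimal sketch using classical optical flow instead (OpenCV’s Farneback estimator). The function name and parameters are illustrative, not Nvidia’s code, and the approach ignores occlusions, which is exactly where the learned method shines.

```python
import cv2
import numpy as np

def interpolate_frames(frame_a, frame_b, num_intermediate=7):
    """Synthesize frames between two video frames via optical flow.

    With num_intermediate=7, each 30 fps frame pair is split into
    eight intervals, i.e. 240 fps worth of frames.
    """
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # Dense optical flow: estimated per-pixel motion from frame_a to frame_b.
    flow = cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None, pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    frames = []
    for i in range(1, num_intermediate + 1):
        t = i / (num_intermediate + 1)  # fractional position between frames
        # Backward-warp frame_a along a fraction t of the motion. This crude
        # warp has no notion of occluded pixels, so newly revealed background
        # will smear; Nvidia's network learns to fill such regions plausibly.
        map_x = (grid_x - t * flow[..., 0]).astype(np.float32)
        map_y = (grid_y - t * flow[..., 1]).astype(np.float32)
        frames.append(cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR))
    return frames
```

Nvidia’s contribution is, in effect, replacing the hand-tuned flow estimate and warp above with a network trained on real high-frame-rate video, which is what lets it hallucinate convincing frames even around occlusions and fast motion.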

Philippe J DEWOST's insight:

AI is slowing down, and it is not what you think.

Scooped by Philippe J DEWOST

New algorithm lets photographers change the depth of images virtually

Researchers have unveiled a new photography technique called computational zoom that lets photographers manipulate the composition of their images after they've been taken and create what are described as "physically unattainable" photos. The researchers, from the University of California, Santa Barbara and tech company Nvidia, detailed their findings in a paper, as spotted by DPReview.


To achieve computational zoom, photographers take a stack of photos at the same focal length, edging the camera slightly closer to the subject between shots. The computational zoom system then produces a 3D rendering of the scene with multiple views based on the photo stack. All of that information is then “used to synthesize multi-perspective images which have novel compositions through a user interface”, meaning photographers can manipulate and change a photo’s composition in the software in real time.


The researchers say the multi-perspective camera model can generate compositions that are not physically attainable, extending a photographer’s control over factors such as the relative size of objects at different depths and the picture’s overall sense of depth. So the final image isn’t technically one photo, but an amalgamation of many. The team hopes to make the technology available to photographers in the form of software plug-ins, reports DPReview.
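A quick way to see why a stack shot at one focal length from several distances contains genuinely different compositions: under a simple pinhole model, an object’s image size scales as focal length times object size divided by distance, so moving closer enlarges nearby subjects much faster than the background. The scene numbers below are made up for illustration; this is a back-of-the-envelope sketch, not the researchers’ algorithm.

```python
# Pinhole-projection sketch (illustrative only): projected size on the
# sensor = focal_length * object_size / distance, all at a fixed focal length.

FOCAL_LENGTH_MM = 35.0  # held constant across the whole capture stack

def projected_size_mm(object_size_m, distance_m):
    # f [mm] * size [m] / distance [m] gives image size in mm on the sensor.
    return FOCAL_LENGTH_MM * object_size_m / distance_m

# Hypothetical scene: a 1.8 m person with a 10 m building 30 m behind them.
for camera_to_person_m in (8.0, 4.0, 2.0):  # camera edging closer
    person = projected_size_mm(1.8, camera_to_person_m)
    building = projected_size_mm(10.0, camera_to_person_m + 30.0)
    print(f"{camera_to_person_m:4.1f} m away: person {person:5.1f} mm, "
          f"building {building:5.1f} mm, person/building {person / building:.2f}")
```

Running this, the person grows from roughly 7.9 mm to 31.5 mm on the sensor while the building barely changes, so the person-to-building size ratio more than triples. Computational zoom, in effect, lets the photographer combine the near-camera rendering of the subject with the far-camera rendering of the background, a composition no single physical viewpoint produces.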

Philippe J DEWOST's insight:

Will software become more successful than light-field cameras?
