Nvidia DLSS 3 promises higher frame rates by creating new frames – Update 2

Edit 3: I have a response from Nvidia:

DLSS 3 can work with native frames or DLSS 2 Super Resolution frames; the user has the choice there.

But it is not interpolation; it is AI frame generation. A future frame is generated. For this, the data from the Optical Flow Accelerator are immensely important.

So it seems my first assumption was correct: they (possibly) use DLSS to render a frame and follow it up with a ‘predicted’ future frame. I’ll dive into the podcast in a minute, but I’ll adjust the article (again) asap.

Cc: @Osiummaster, @M1tch, @Alfredo and @A Lurker
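
To make that concrete, below is a minimal sketch of what ‘generate a future frame from optical flow’ could look like. To be clear, this is my own toy illustration, not Nvidia’s algorithm: the real thing runs a neural network on hardware optical-flow data, and every name here is made up.

```python
import numpy as np

def extrapolate_frame(frame: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Forward-warp `frame` (H x W x 3) one step along `flow` (H x W x 2).

    `flow` holds per-pixel motion in pixels per frame, the kind of data
    an optical flow accelerator would provide. Pixels are splatted to
    their predicted positions; holes simply keep the old frame here,
    where a real implementation would fill them in with an AI model.
    """
    h, w = frame.shape[:2]
    predicted = frame.copy()  # crude hole filling
    ys, xs = np.mgrid[0:h, 0:w]
    # Predicted integer pixel positions, one frame into the future
    new_x = np.clip((xs + flow[..., 0]).round().astype(int), 0, w - 1)
    new_y = np.clip((ys + flow[..., 1]).round().astype(int), 0, h - 1)
    predicted[new_y, new_x] = frame[ys, xs]  # splat pixels forward
    return predicted
```

The important property: only frames that already exist are needed, so the predicted frame can be shown right away instead of waiting for the next real one.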

Edit 2: I’m still looking into it, and now I’m starting to doubt again. In my opinion, this image suggests that DLSS 3.0 renders a frame as shown below, and then ‘predicts’ a new, generated frame.

However, Nvidia also says that DLSS 3.0 generates ‘intermediate frames’, which could instead mean that DLSS looks at the movement between two frames and ‘pastes’ an extra frame in between. That looks more like ‘normal’ interpolation (as the article first suggested), and it is also how Digital Foundry explains it in a hands-on. So I suspect the latter is the real story after all. I’ll adjust the article, note it in an update, and check with Nvidia first thing tomorrow (just in case).

CC: @M1tch, @Alfredo and @A Lurker
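
For contrast, here is a toy sketch of the interpolation reading: the extra frame sits between two frames that both have to exist already. Again these are my own names, and a naive 50/50 blend stands in where a real interpolator would warp along motion.

```python
import numpy as np

def interpolate_frame(prev_frame: np.ndarray, next_frame: np.ndarray) -> np.ndarray:
    """Return a halfway frame between two already-rendered frames.

    The key difference with extrapolation: the *next* frame must exist
    before the in-between frame can be built, so it can only be shown
    after a delay. That is the extra latency of interpolation.
    """
    mid = 0.5 * prev_frame.astype(np.float32) + 0.5 * next_frame.astype(np.float32)
    return mid.astype(prev_frame.dtype)
```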

No, they use the current frame and a previous frame (one that has already been displayed), mainly to track the movement of objects. At least, that’s how DLSS 2.0 does it. Nvidia itself says this new 3.0 version builds on the principles of DLSS 2.0, so I suspect that aspect is unchanged.

Edit to clarify that a bit further:

In its current form, an RTX GPU renders a frame almost completely, up to the ‘post-processing’ step. That latest frame then goes through the DLSS 2.0 ‘autoencoder’, which upscales it by combining details from that frame with details from previous frames. It also uses so-called motion vectors and information about the depth of the frame.
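
As a rough illustration of that combine-with-previous-frames step, here is a minimal sketch assuming a simple reproject-and-blend scheme at a fixed 2x scale. The real autoencoder is a trained neural network; `history`, `motion_vectors` and `blend` are names I made up.

```python
import numpy as np

def temporal_upscale(lowres: np.ndarray, history: np.ndarray,
                     motion_vectors: np.ndarray, blend: float = 0.1) -> np.ndarray:
    """Combine the new low-res frame with reprojected history.

    lowres:          (H/2, W/2, 3) freshly rendered frame
    history:         (H, W, 3) previous upscaled output
    motion_vectors:  (H, W, 2) per-pixel motion at output resolution
    """
    h, w = history.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Where did each output pixel come from in the previous frame?
    src_x = np.clip((xs - motion_vectors[..., 0]).round().astype(int), 0, w - 1)
    src_y = np.clip((ys - motion_vectors[..., 1]).round().astype(int), 0, h - 1)
    reprojected = history[src_y, src_x]
    # Naive 2x upscale of the fresh frame (DLSS uses jittered samples + a network)
    upscaled = np.repeat(np.repeat(lowres, 2, axis=0), 2, axis=1)
    # Blend a little fresh detail into the accumulated history each frame
    out = blend * upscaled + (1.0 - blend) * reprojected
    return out.astype(history.dtype)
```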

After that, the frame goes through the normal post-processing steps (where things like bloom, motion blur, etc. are added). All of that does add some extra latency, but it is nothing compared to rendering a completely new frame.
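
Putting the order together in a toy pipeline, with every pass reduced to a stub purely to show where the upscaling step sits:

```python
import numpy as np

def rasterize_low_res(scene):      # stand-in for the real renderer
    return np.zeros((540, 960, 3), dtype=np.uint8)

def dlss_upscale(frame):           # stand-in for the autoencoder step
    return np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)

def bloom(frame):                  # post-processing stubs: these run
    return frame                   # *after* upscaling, once, at the

def motion_blur(frame):            # full output resolution
    return frame

def render_frame(scene=None):
    return motion_blur(bloom(dlss_upscale(rasterize_low_res(scene))))
```

Here render_frame() turns a 540p render into a 1080p output frame; the only point is the order of the passes.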

I once wrote an (in my humble opinion) interesting Plus background article about this if you want more details, but the above is the TL;DR :)

[Comment edited by AverageNL on 21 September 2022 10:58]
