I can only hope that RTX will look better in the coming years and that the performance impact will shrink.
I also hope DLSS either gets better or dies.
EDIT: https://www.youtube.com/watch?v=-mEP5k_-zso
When I look at this, I prefer RTX off most of the time.
Its sole purpose is to get better. It's basically an AI chewing away at code all day in massive farm set-ups.
It's even in the name itself. Deep Learning Super Sampling
RTX is new tech, and we need to give DLSS time to smooth things out, not wish for it to die 😋
DXR is a DirectX API for ray-traced rendering. RTX is a specific implementation of that API that utilises the dedicated hardware cores of the 20xx cards. Nothing stops other methods of implementing the same API: your CPU, GPU, AMD hardware, cloud-based processing power, whatever. The game should not know the difference, but of course there will be different performance characteristics depending on how these calculations are implemented.
If I understand it correctly :D
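To illustrate the "one API, many implementations" idea above, here's a conceptual sketch. This is not real DXR code; the class and method names are made up for illustration only.

```python
# Conceptual sketch (NOT real DXR/D3D12 code): the game renders against
# one interface, and different backends can implement it.
from abc import ABC, abstractmethod

class RaytracingBackend(ABC):
    """Stand-in for a DXR-style API the game talks to."""
    @abstractmethod
    def trace(self, ray_count: int) -> str: ...

class HardwareRTBackend(RaytracingBackend):
    def trace(self, ray_count: int) -> str:
        return f"traced {ray_count} rays on dedicated RT cores"

class ShaderCoreBackend(RaytracingBackend):
    def trace(self, ray_count: int) -> str:
        # Same interface, slower path: generic compute on a pre-RTX GPU.
        return f"traced {ray_count} rays on generic shader cores (slower)"

def render_frame(backend: RaytracingBackend) -> str:
    # The game only sees the interface; it can't tell which backend runs.
    return backend.trace(1_000_000)
```

The point being: as long as a backend fulfils the API contract, the game code doesn't change, only the frame time does.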
There's no such thing as "hardware RTX". It is all software based; Nvidia just offers hardware tailored specifically for these calculations, so it runs faster. But any sufficiently powerful generic CPU or GPU can handle them too. Performance will differ, but that depends on what the calculations are.
Even "hardware" cannot trace every single ray in the scene (that's impossible, since there can be infinite rays, each with an infinite number of bounces), so it depends heavily on approximation and denoising. As I understand it, generic hardware running the same software will simply need to rely more heavily on approximation and denoising, that's all. Considering that most shadows are softened anyway, this may be good enough for shadow-only or global illumination applications (as in Metro, Tomb Raider), and reflections depend on geometry, so tracing can be limited to just the relevant triangles rather than the whole scene. It is possible to simplify these tasks further and still achieve a "good enough" result.
Dedicated specialized hardware is undeniably better, but it's not the only way.
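The capping-and-denoising idea above can be sketched in a toy form. This is purely illustrative; the attenuation factor, sample counts, and bounce limit are made-up numbers, not anyone's real renderer.

```python
# Toy sketch of why real-time ray tracing must approximate: cap the
# samples per pixel and the bounce depth, then smooth (denoise) the
# noisy result. All constants are illustrative, not from a real engine.
import random

MAX_BOUNCES = 3        # infinite bounces are impossible, so we cap them
SAMPLES_PER_PIXEL = 8  # few samples -> noisy but real-time-fast

def trace_ray(depth: int = 0) -> float:
    """Fake radiance estimate: each bounce attenuates and adds noise."""
    if depth >= MAX_BOUNCES:
        return 0.0
    hit_light = random.uniform(0.0, 1.0)  # stand-in for hitting a light
    return 0.5 * hit_light + 0.5 * trace_ray(depth + 1)

def render_pixel() -> float:
    # Average a handful of samples instead of "all possible rays".
    return sum(trace_ray() for _ in range(SAMPLES_PER_PIXEL)) / SAMPLES_PER_PIXEL

def denoise(pixels: list[float]) -> list[float]:
    # Crude denoiser: average each pixel with its immediate neighbours.
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

noisy = [render_pixel() for _ in range(16)]
smooth = denoise(noisy)
```

Fewer samples or bounces means more noise, which the denoiser has to hide; that trade-off is the same whether the rays run on dedicated RT cores or generic ones.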
Thank you for the detailed explanation. I'm aware ray tracing existed back on the Amiga too (not real time, of course), so I know it's nothing new or hardcoded; I was just trying to keep it simple and make the "dedicated chip vs driver supported" distinction. But yeah, this will help people understand it much more easily.
GPUs in general are good at parallelized matrix calculations, and tensors are a generalisation of matrices. So I wonder whether Intel/AMD and others won't pick up the trend and make next-gen CPUs deal better with matrices and tensors. That would help with a lot of machine learning / AI tasks, and wouldn't be specific to graphics only — but it would help with graphics too.
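For anyone wondering why matrices matter here: a matrix multiply is just many independent dot products, which is exactly the kind of work that parallelizes well. A pure-Python toy (real hardware would obviously run these in parallel, not in a loop):

```python
# Each output cell of a matrix multiply is independent of every other
# cell -- on a GPU, every one of these dot products could run on its
# own thread. That independence is what makes the workload parallel.

def matmul(a: list[list[float]], b: list[list[float]]) -> list[list[float]]:
    rows, inner, cols = len(a), len(b), len(b[0])
    return [
        [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]

# A "tensor" op such as a batched matmul is the same pattern one level
# up: many independent matrix multiplies.
def batched_matmul(batch_a, batch_b):
    return [matmul(a, b) for a, b in zip(batch_a, batch_b)]
```

Dedicated tensor hardware essentially bakes this pattern into silicon, which is why it speeds up both ML workloads and things like DLSS.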
Tempted to call that a fake subject; as far as I understand it, it was a known fact that AMD, having better compute performance, would be in about the same league whenever DXR became available, while not actively advertising a proprietary solution.
That is, unless you took at face value the cantankerous propaganda spewed by that CEO thing at nVidia.
So apparently this is a new thing. Crytek runs an "almost as good as hardware RTX" illumination demo on AMD, then nVidia reveals that the 10x0 series will get DXR, which is in-driver support for more or less the same thing: some sort of real-time illumination running on existing cores... without dedicated RTX cores... not as good, but okayish...
Any thoughts?
Some sources:
https://www.youtube.com/watch?v=bfyBtGXU41I
https://www.pcgamer.com/nvidia-pascal-gpus-ray-tracing-drivers-in-april/