no
>made up frames
>input lag
tranny technology
don't worry, everyone will get tranny tech soon enough
Those pictures look exactly the same
Perhaps because interpolation between what are essentially 2 almost identical still images of the character standing still and looking into the distance is trivial?
That's the point: same quality at 3x the performance
rude
Boy I sure do love me some buzzword wars.
NVIDIA are the new apple
then I kneel
AMD has better support for Nvidia cards than Nvidia has for Nvidia cards
Do amdfags actually believe this? Try using an older AMD card and see if it's supported. Nvidia has insanely good driver coverage, even on Linux, as long as you aren't a FOSS fanatic and use a stable distro without monthly kernel updates.
There's a reason most non-consumer GPU farms use Nvidia, and it's not just because of CUDA.
They have been since the 2010s. Leather Jacket Daddy wants to be the second coming of Steve Jobs.
I sleep
>my gpu is not supported
nah
>there's people playing their games with upscaling, interpolation and high input lag
We truly are in the end times.
> AI upscaling looks better than native rendering
> lower input lag than native even with frame gen because GPU is rendering at a lower resolution
anti-AI fags get the rope
>false
>false
I don't want what you're selling. inb4 cherry picked examples
> lower input lag
> DLSS
How many leather jackets do you own?
the absolute state of brainlets, if you render at a lower resolution, you get more frames, and therefore lower input lag
Anon, the fake frames are INTERPOLATED, not extrapolated.
I was referring to DLSS upscaling, which reduces latency.
DLSS frame gen (interpolation) obviously increases latency, but if you use it along with DLSS upscaling, you get a comparable or even lower input latency compared to native
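Back-of-envelope sketch below, purely illustrative: every frame-time number is an assumption I made up, not a measurement, and real input latency includes CPU, queueing and display time on top of this.

# all numbers are assumptions for illustration, not benchmarks
native_ms   = 16.7                    # ~60 fps rendering at native resolution
upscaled_ms = 10.0                    # assumed faster render at lower internal res + DLSS upscale

# frame generation holds the newest real frame while the interpolated one is shown,
# so it adds roughly one (upscaled) frame of extra delay
framegen_ms = upscaled_ms + upscaled_ms

print(f"native render time:    ~{native_ms} ms")    # ~16.7 ms
print(f"upscaling only:        ~{upscaled_ms} ms")  # ~10 ms, lower than native
print(f"upscaling + frame gen: ~{framegen_ms} ms")  # ~20 ms, back near native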
just wait for the next feature. AI input generation, guessing what input you would make and feeding it in to reduce input latency
> AI input generation
> leads to negative input latency
> time travel created inside GPU creates a singularity that destroys planet earth
t-thanks Nvidia
zoomertoddlers would love this since it's not like they can make any conscious decisions anyway and would rather the "game" play itself for them
False. I tried DLSS and it's a blurry mess with frames that look phony af. I can tell when an image is upscaled, especially because the text looks horrible. Most games that use DLSS or FSR even upscale the UI, which is absurd. The ones that only use it for texture resolution are a bit better, but they still look worse than native.
4K DLSS Quality is objectively better than native 4K, this is a scientific fact that was tested many times
Except no one plays at 4k......
I only play games on my 55 inch 4K 120 Hz micro-LED Samsung TV
congrats you're the perfect candidate for dlss
> native 4K
Do any modern games even do native? I thought they all had mandatory TAA or some other kind of Vaseline filter.
every game has the option to render natively, not sure what you're talking about
Your proof is static images. Try actually playing a game, once there is motion native is clearly better. You are a fucking retard.
>once there is motion native is clearly better
you're thinking of FSR. DLSS 2+ looks great even in motion
You clearly never tried it at 4K Quality, or you think FSR looks the same as DLSS, when it isn't anywhere close to DLSS output. DLSS is great; FSR is a blurry mess similar to TAA.
Wrong, 4K DLSS quality looks better than native because of shitty temporal AA techniques. You're the fucking retard.
oh maw gaedwd ees that a bermintide reference??!?!
how does nvidia manage to get their fangirls to worship them when they launch tech that essentially screws them over?
By repeating the word AI over and over.
sorry I'm too poor to understand this, explain?
Basically, instead of making more powerful GPUs, nvidia has started to upscale frames from low resolution, create interpolated frames from low framerates and even use fake (AI guesstimated) rays in raytracing.
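If it helps, here's a toy sketch of what an "interpolated frame" is. The real thing uses motion vectors/optical flow and a neural net; this naive 50/50 blend just shows that the in-between frame is synthesized rather than rendered, and the arrays are random stand-ins for actual game frames.

import numpy as np

frame_n  = np.random.rand(1440, 2560, 3).astype(np.float32)  # rendered frame N
frame_n1 = np.random.rand(1440, 2560, 3).astype(np.float32)  # rendered frame N+1

# the "fake" frame displayed between them: synthesized, never rendered by the game
fake_frame = 0.5 * (frame_n + frame_n1)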
don't forget to mention AMD is doing the exact same shit, but at a lower quality
Which is sad. But you don't want to lose the market of retards.
Still, I'll give kudos to AMD even if only for HIP and ROCm.
indeed but they're just following nvidia since it's selling
also AMD's can be used on any card afaik
>can be used on any afaik
only because it looks like dogshit
being a 4090 owner I have no need to apologize for a poorfag solution I will never use
>play starfield
>runs like shit, even worse because dlss3 is a thing now, so devs can ship the most unoptimized garbage and you just run the cope setting
Starfield doesn't officially support DLSS tho, modders had to step in
Eh, I got it for free as part of my overpriced GPU.
Gonna play it in a couple of months when the bugfixes from modders are out.
Still, I'm not very interested when there's no lizard pussy in this game
>Runs fine on Radeon
Nvidia didn't allocate the resources to make Starfield run well. Nvidia is an AI company now, gamers will just have to buy AMD and Intel.
FSR 3
I only have a 3080 so DLSS3 can go fuck itself.
AMD will give you fsr3 so you're chilling
fsr is garbage just like any other amd technology
>we can't make chips fast enough to render stuff smoothly so we'll cheat and upscale instead
sad
Name one good game that makes use of it
starfield
Thanks for the horrifying npcs in 4k nvidia
>trannyfield
>good game
>30 years of GPU innovation
>fake frames
Like it or not, AI is the future. Why the fuck would you keep bruteforcing shading every single fucking pixel more times per second when AI is becoming advanced enough to just imagine the in-between frames? When it can imagine an upscaled image of superior quality?
Wrong, it is just Nvidia's way of financing their ML-tier hardware. They are using gaymers as the dumping ground for yield and design rejects.
>t. AMDrone
>denying reality
The 2070 Super has 2x the transistors of a 1080 despite the exact same core count and bus width; they could be doubling performance every generation, but instead they just add more useless AI cores
They're betting on AI/ML being the future, I tend to agree with them. Have you even been paying attention to the progress in ML these past years?
It's called a video card; they save money on professional cards by reusing dies.
That's great for ML, but guess what? It's gay as fuck that I have to spend $1500 on a 4090 for good Stable Diffusion performance. If they made a die that was only tensor cores, it could have half the power consumption and half the size, or be the same size and at least twice as fast. They're literally fleecing both markets at once
>Nvidiot who's too stupid to see the truth that's right in front of them
Nvidia knows that graphics in general is a mature market with no growth, only inevitable decline, as demand destruction from iGPUs makes discrete SKUs redundant for more and more customers.
If your engineers can make ML/datacenter ASICs do graphics on the same silicon and keep up with the competition despite this setup, why not do it if it dramatically cuts down on R&D costs and time?
I'm sorry you're all too retarded to stop giving money to nvidia
>input lag
no, and kys
I'm sorry. I only use nvidia bullshit because AMD sucks for ML
Looks like shit, and I haven't owned an AMD/ATI card since the 4890.
>Looks like shit
[citation required]
Source: EVERYONE THAT HAS A PAIR OF FUCKING EYEBALLS
then point me to the DLSS artifacts, you can use this video for reference:
>dude find me the artifacts in this artifacted VP9 compressed video
not the sharpest cookie here huh
so you're saying your imaginary DLSS 3 artifacts are so small and insignificant they're completely erased with video compression?
I love DLSS now!
2560 × 1440 × 8 × 3 × 120 ≈ 10.6 Gbps (9.88 Gibit/s)
Even with the most efficient lossless video codecs, you'd need a bitrate of ~450-500 Mbps.
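Spelled out (the 9.88 figure is the same product expressed in binary units):

# raw bitrate for 1440p, 8-bit RGB, 120 fps capture
width, height, fps = 2560, 1440, 120
bits_per_pixel = 3 * 8                        # three channels, 8 bits each
raw = width * height * bits_per_pixel * fps   # 10,616,832,000 bits/s
print(raw / 1e9)    # ~10.6 Gbps (decimal)
print(raw / 2**30)  # ~9.89 Gibit/s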
>500 megabits
ironic when GPUs can process terabytes worth of data per second
Are you retarded or what? I'm talking about the captured video.
>Posts compressed video as proof
lulz
I'm sorry for shidding in your shoes and for calling you not a real woman, DLSS3. You are the naggerkike of all naggerisraelite. I'm sorry for not being racist, misogynist and meat eating ENOUGH, I will do better from now on. Praise Odin. Death to Zog. Yeet trannies.
I jump back and forth from AMD to NVIDIA depending on the best bang for buck when i pull the trigger, fanboying is wasteful
If I had an RTX 40 series GPU I would go try it right now, but I don't, so I can't form an opinion on how it looks or how tolerable the input latency is beyond watching videos of it in use
>making an excuse for upscaling
Yes, you do need to apologize.
I kneel.
remember when nvidia made graphics chips? those were pretty good.
I can't wait for dlss4 to be rtx5000 exclusive and in barely 10 games I don't play
DLSS 3.5 with ray tracing actually improves visual quality.
It gets rid of the denoiser and uses the tensor cores instead.
The result is higher detail GI not achievable with any other technique.
The level of AMDdrone cope must be insufferable at this point.
Why are all the colors so wrong compared to the reference image?
The tube with the panels on top is a light source and the colored highlights are a result.
DLSS off is not a reference image but a biased real-time global illumination engine, so-called """path tracing""" (it's not), with a denoiser running on CUDA cores.
This engine is not capable of producing a reference image, but the one on the right would be closest if there were one. So if you were to render this in an actual offline path tracer, the DLSS 3.5 image would be more similar to it than anything else.
srsly learn the basics of light transport methods before posting.
> reference image is not a reference image
Okay.
> it would have been the same as DLSS 3.5
How about you present an actual properly rendered and traced image and then start talking about how DLSS compares? Because by now a properly traced one looks nothing like the guessworked upscale by nVidia.
Learn what a reference image means in rendering, retard.
Learn the difference between biased and unbiased rendering.
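Short version, in math (standard Monte Carlo rendering, nothing Nvidia-specific):

An unbiased path tracer estimates the reflected radiance
$L = \int_\Omega f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i$
with the estimator
$\hat{L}_N = \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(x, \omega_k, \omega_o)\, L_i(x, \omega_k)\, (\omega_k \cdot n)}{p(\omega_k)}$, which satisfies $\mathbb{E}[\hat{L}_N] = L$,
so adding samples converges to the true image; that is what "reference" means here. A biased estimator (clamping, irradiance caching, any denoiser) has $\mathbb{E}[\hat{L}] \neq L$ in general, trading systematic error for far less noise per sample, which is the entire point of a real-time GI engine.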
> no "reference"
> but it would totally look like our guesstimate slop
Sure thing, rabbi.
You could have googled what unbiased rendering means, but you didn't.
Oh well, I tried.
> you can google
Lol, should I "do my own research" so you won't "waste time"?
Point is, there is no "reference" according to you, so any bullshit about how it would totally look like Nvidia's slop, because it's just so good, even Jensen said so, is nothing more than shilling.
I don't think you know what a denoiser is.
It's not a reference, it's the old raytracing without scaling - the image itself is just supposed to show the evolution of the DLSS feature set, not how the new integrated denoiser improves quality.
> supposed to show the evolution
> not a reference
Why don't they show it in comparison to a properly rendered reference frame then?
Because the purpose of this slide was feature set and performance; other slides show quality improvements (using the same raytracing model but with the old/new denoiser).
> the purpose
To show that it's supposedly better, since it's advertising material. I'd like to see how it looks compared to a properly rendered reference image, so I can see which one is actually closer to the ground truth.
There are comparisons on this page; only 2 are what you are looking for (the car headlights and reflections):
https://www.nvidia.com/en-au/geforce/news/nvidia-dlss-3-5-ray-reconstruction/
>I'd like to see how it looks compared to a properly rendered reference image, so I can see which one is actually closer to the ground truth.
It's a denoiser, it doesn't change the look, only the quality of the reconstruction.
> no comparison of the same image with 3.5
What a coincidence.
There are 2, with the old denoiser showing significant artifacting.
Maybe I'm retarded, but can you copy it here?
I see a pic of headlights with reference and shitty denoiser, but not with DLSS 3.5
Keep scrolling down (or ctrl-f for "In the following scene from Cyberpunk 2077, the inaccurate headlight illumination"). But yes, you can use that small snippet if you want to compare to an actual reference.
That one compares to a different DLSS version and not to the reference.
It's all very misleading.
It's comparing to CDPR's denoiser.
If you just don't believe in their denoiser tech, you can read the papers it's based on:
https://research.nvidia.com/publication/2021-07_rearchitecting-spatiotemporal-resampling-production
https://research.nvidia.com/publication/2022-07_generalized-resampled-importance-sampling-foundations-restir
https://research.nvidia.com/publication/2023-03_joint-neural-denoising-surfaces-and-volumes
> if you don't believe then
They could simply show the proper reference, old tech and new tech side by side. But they only show reference <-> old and new <-> old (on different scenes, of course, so no direct comparison can be made).
Smells extremely fishy.
Because it's an ad for an unreleased game, not a paper, and improvements over the old method are obvious.
> because it's an ad
No shit. How convenient that there is no clear comparison. But obviously, the newest is much better than real, just trust us, guys. No, we won't show it, stop being antisemitic.
They don't claim it's better than fully resolving - I have to ask again, do you know what a denoiser is?
> They don't claim
Yet retards in this thread do and say that it's way closer to the "true" version without showing any such version in comparisons.
It is closer to the true version, because the old version has significant artifacting and their denoiser has much less artifacting. You don't need many comparisons when the flaws are so great and well known.
> it's closer to the truth
> you don't need many comparison
I'd be okay with a couple. Hell, even a single one that compares "truth", old and new directly and clearly on the same scene. Preferably the one they like to tout with the pink coloration.
Also, bonus points if they can prove that the true image was not used as part of the training set.
Only if it gives me a solid 75fps on high/ultra and I can't tell the difference between native and DLSS Quality at 1080p.
Fuck no
turn off vegetation and set shadows to mid and turn off DLSS.
>turn off DLSS.
No fucking shit sherlock
Stop using DLSS frame generation.
>game developers will get lazier with optimising their products and rely on gpu tech to make them playable
>nvidia will continue to kneecap gens of cards by not allowing the new dlss versions to be on previous cards, ensuring the consumer has to pay for yet another inflated-cost product
Gamers may be more stupid than cryptards
>game developers will get lazier with optimising their products and rely on gpu tech to make them playable
DLSS and FSR are the solutions to that, not the cause of it.
The actual cause is the increased reliance on third-party assets.
Game assets used to be made in house, which meant studios could re-use textures and shaders.
Today studios rely on importing assets from a variety of sources, all of which come with their own textures and custom shaders.
That means higher VRAM requirements and a less efficient rendering pipeline due to the large number of shaders.
Most shaders are shared, textures were rarely ever reused.
Depends on how far you go back. The trend started in the 2010s; before that there definitely was texture re-use.
Furthermore, shader re-use can happen at a low level with high-level representations being different. But this still requires a bunch of context switching, which is more expensive on a GPU than on a CPU.
>Depends on how far you go back. The trend started in the 2010s; before that there definitely was texture re-use.
Textures rarely were, and it was dependent on artists not copying textures outside of the engine (there are duplicate textures in Quake, for example). This system is no different now.
>Furthermore, shader re-use can happen at a low level with high-level representations being different. But this still requires a bunch of context switching, which is more expensive on a GPU than on a CPU.
That's because shader graphs were given to artists, who will create slightly different and incompatible shaders, but this is internal to companies (e.g. most skins in Fortnite have unique shaders), not to external content libraries (such as Megascans), which will share shaders or have shaders simple enough to be converted with a script.
>We could do the math and come up with the right answer but instead we will just have an AI take a guess and we'll assume that it's right.
You are mistaken, and sorely misunderstand why denoisers are necessary, if you think a proper solution is available.
Here is Intel's very similar solution, which does compare to a reference, though that reference is also noisy:
Why do they always insist on using RNG for rays? Half the problem with the 'noisy' image is that the same pixel is constantly changing despite the character and world being still.
>entire thread of poorfags coping
Honestly I doubt I would ever use it because I have a 4090, but I'm very impressed. If you haven't tried it, your opinion is kind of irrelevant.
Cyberpunk with path-tracing is actually impressive - looks amazing while running at 120fps. With this newest 3.5 update I don't notice any artifacts anymore, it's pretty much flawless. Yes, CP2077 is shit, but it's a gorgeous tech demo.
It sucks that the proprietary software really is superior, and if you want to boycott Nvidia for its practices and monopoly, go ahead. That doesn't change the fact that frame generation, at least Nvidia's, is really impressive. AI just makes sense to save on resources and get better-looking games.
Re DLSS "ray reconstruction" denoiser. DLSS now does raytrace denoising and upscaling in one bigass kernel, rather than separate steps. This is guaranteed to improve quality if implemented correctly, due to less information loss, if you were going to use upscaling anyway.
https://videocardz.com/newz/nintendo-switch-2-allegedly-ran-matrix-awakens-ue5-demo-powered-by-nvidia-dlss-during-private-gamescom-showcase
NVIDIA DLSS WON
THANK YOU NVIDIA DLSS