Because it was wasteful as fuck, not to mention a stupid idea in the first place that only existed so Nvidia could eat more of your bank account for a pointless, contrived reason
It was always wasteful to anyone with a bit of value sense
>gpu corpos wanted to sell you as many gpus as a board can hold
>then suddenly for no reason at all it stopped "being worth it" and started being "a waste of money" in a market where people pay out the asshole for 5% gains
>later
>HEY GUYS LOOK AT THIS STUFF WE OPERATE WITH 300 GPUS RUNNING AT THE SAME TIME SO COOL
suddenly that narrative about keeping AI out of the peasants' hands is a little more plausible, thanks for spelling it out and making me realize that, anon
SLI was dogshit and barely worked. Motherboards generally don't even support PCIe x16 in more than one slot, so you're limited there as well. This isn't a schizo conspiracy.
a fucking 4090 of all cards isn't limited by PCIe 2 despite being PCIe 4, GPUs capable of SLI/xfire literally don't need more than an x4 slot's worth of bandwidth normally, two gen2 x8 slots were more than enough back then
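Rough numbers for anyone who wants to sanity-check that bandwidth claim (standard per-lane figures, not measurements from this thread): PCIe 2.0 moves about 500 MB/s per lane after 8b/10b encoding, so a gen2 x8 slot is roughly 4 GB/s. PCIe 3.0 is about 1 GB/s per lane (x4 ≈ 4 GB/s) and PCIe 4.0 about 2 GB/s per lane (x16 ≈ 32 GB/s). So a gen2 x8 slot sits in the same ballpark as a gen3 x4 link, which is the kind of margin the post above is talking about.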
Is he right, anon friends?
Microstutter
spbp. And special driver profiles required to run it for supported games.
So games were no more playable than on a single GPU despite the higher FPS counter, which in that case was only useful for e-peen.
because it barely worked
just like running a game across dual-socket CPUs will kill your 1% lows, having multiple GPUs play a game will just make it a stuttery mess.
>running a game over dual socket CPUs will kill your 1% lows,
Works fine on my machine, maybe that was true before QPI links were a thing
It's still 100% a thing, even more so since no multi-socket-capable CPU has good single-core performance, which also heavily impacts 1% lows.
No matter how you slice it, a multi-socket system simply can't match the gaming performance the highest-end single-socket CPU will get you. To say otherwise shows a fundamental misunderstanding of what your CPU is actually doing and how it's doing it.
>No matter how you slice it, a multi-socket system simply can't match the gaming performance the highest-end single-socket CPU will get you
Depends on your resolution.
IDK why people think CPU gayming performance is such a critical thing when you need to bench at 1080p to see differences.
For people who play at 4K/1440p or with anything short of a top-end GPU, the CPU hardly matters.
>Works fine on my machine
possibly the OS pins the game to a specific NUMA node, unless you play a NUMA-aware game, which is impossible since game devs don't even know what that is.
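If anyone wants to see what that kind of pinning actually looks like, here's a minimal sketch using libnuma on Linux (assumes libnuma is installed and linked with -lnuma; node 0 is just an example, a real launcher or the scheduler would pick the node for you):

// minimal sketch: keep the current thread (e.g. a game's main thread) on one NUMA node
// assumes Linux + libnuma; compile with: g++ pin.cpp -lnuma
#include <numa.h>
#include <cstdio>

int main() {
    if (numa_available() < 0) {      // kernel or library reports no NUMA support
        std::printf("no NUMA support on this system\n");
        return 1;
    }
    const int node = 0;              // example target node, not a recommendation
    numa_run_on_node(node);          // restrict this thread to that node's CPUs
    numa_set_preferred(node);        // prefer memory allocations from the same node
    std::printf("pinned to node %d (highest node: %d)\n", node, numa_max_node());
    return 0;
}

Keeping both the threads and their allocations on one node is the whole point; it's the cross-node memory traffic that tanks the 1% lows.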
>I don't understand what CPU topology, cache coherency and thread scheduling are: the post
Mediocre average frame rate for games and compatibility issues.
On the professional side, it cuts into margins for GPGPU compute.
>On the professional side, it cuts into margins for GPGPU compute.
Wrong, the professional side is where multi-GPU rendering makes sense and the only place where it still exists.
Because no consumers were willing to spend an extra $2000 (or more) to get 10% more performance?
Now imagine all those GPUs but with 12VHPWR connectors
How many nuclear reactors would you need for that?
nvidia and amd weren't interested in honing the drivers
and game engine devs couldn't be bothered to fix their games, let alone code for multistructured IAS
4090 takes the same space as 4 gpus with sli
it's a 3.5 slot card actually.
I had 1080s in SLI, those were the days
>4090 takes the same space as 4 gpus with sli
Also same power consumption and same bank account rape.
A pain in the ass for game devs, mostly. Like the asynchronous GPU compute programming for AMD that only works in Doom. Game engine devs are less and less skilled nowadays.
Because it's the most wasteful thing to pay for.
It's like paying for DLSS. You get some games that support it, most of which you probably won't play. Some enhance your framerate but degrade your visual fidelity, with flickering or garbage 1% lows, etc.
Stepping aside from the comparison, it wastes so much energy for the little advantage it offers (most scaling wasn't even 1:1).
It was the pinnacle of PC gaming consumerism.
I miss having Voodoo 2s anons
Crossfire is better
Because SLI only made sense when the single fastest GPU on the market couldn't meet your expectations. I had GTX 1080s in SLI and it was great, I was gaming at 4K in 2016, way ahead of the curve.
SLI died because it was shit.
is that gigachad
because nvlink took over
nvidia managed to kill that off on anything that isn't a datacenter card too. quite the stupid decision, even if your wallet is bottomless an H100 only has a single GPC capable of graphics workloads. multi-gpu is a meme when it comes to games nowadays but is still useful for non-realtime rendering
programming games to use it was a bitch. Even DirectX 12, which was supposed to make taking advantage of multiple GPUs easy, didn't help.
currently it makes sense only in VR, since there's almost no penalty when you render a different image per eye to split the load across two GPUs, but nobody is pushing for it
Until the perspective for one eye is easier than the other, renders faster, and you end up with different framerates in each eye and end up throwing up
What is the modern equivalent to SLI in terms of being grossly exorbitant? Having a quad-SLI setup was some kind of big dick shit, even if it was functionally pretty shitty.
Buying a 4090 today is more expensive than SLI setups back when they were relevant.
There is basically no real equivalent.
Like the other Anon says the closest thing to an equivalent would be a 4090 and an expensive one at that.
imagine the power consumption
GPUs got so powerful that there is no point anymore. Anything above a 2080 is masturbation.
>the chain is complete
>it'll heat itself once it reaches criticality
#1 - it is multi-GPU rendering, not SLI/CF
#2 - All of the common methods of doing multi-GPU rendering have their own set of strengths and weaknesses, but ultimately none of them made much sense outside of e-penis scores
#3 - the new focus on 1% low framerates as the desirable metric utterly destroyed multi-card rendering
Same reason there won't be a new x99 chipset. They don't want to make consumer hardware that could possibly be used by enterprise customers. They want to keep the use cases and price points far apart.
For the AMD camp, it's the reason why there won't be another Threadripper. You have to get a full-blown workstation board.
With W790, HEDT ain't all bad tbh
Boards are expensive, but so are regular desktop boards.
HEDT simply went back to its roots. Cheap HEDT is what is really dead. If you want more DIMM slots, PCIe lanes, etc., you have to pay the workstation/server platform tax.
It's not really expensive tho
$879 for all the PCIe 5 slots you could want, dual 10Gb, Thunderbolt and your full 8 SATA ports is really a bargain when you consider what you get on the desktop side.
I'm not even sure of a desktop board that gives you 2x PCIe Gen 5 slots, except maybe an Aorus Xtreme, but that's a more expensive board.
CPUs are the only thing that's really a premium on the platform. Of course, nothing's stopping you from buying a W3-2423
You're forced to get half-locked CPUs that are binned for efficiency rather than raw performance, which is a repellent for min-max gayming-fag types.
Granted, such types aren't interested in having an abundance of I/O connectivity and memory capacity.
You can get unlocked versions too
It's possibly for the better that the CPUs aren't the best single-thread performers.
That could change if W790 ever gets HBM
Hello, thanks for the question
DX12 introduced mGPU, which is API-level "SLI/CF". This was supposed to have the ability to do heterogeneous GPU combinations, like your iGPU+dGPU or 2 dGPUs from different companies. Basically SLI but better. Nvidia and AMD saw the writing on the wall and dropped their multi-GPU support. Problem is, mGPU, and especially heterogeneous mGPU, is very difficult to implement, much more difficult than multithreading games or ray tracing. Game devs are big gay lazy babies and so there's exactly 1 game that ever got mGPU: Ashes of the Singularity. It's a terribly boring game that is only useful for benchmarking, came out like 10 years ago, and the devs autistically add every new CPU and GPU feature to it. It's also used to benchmark CPUs because it scales very well with cores, one of the only games that actually does. Other than that, there are several boring compute projects that use mGPU to calculate stupid junk, and as far as I can tell most are actually abandoned student projects.
Tl;Dr: SLI died for vaporware
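For anyone curious what "explicit mGPU" even looks like at the API level, here's a minimal, hypothetical sketch of just the starting point: enumerate every adapter with DXGI and create an independent D3D12 device on each. This isn't how any shipped game did it; it only shows that everything past this point (splitting the frame, cross-adapter copies, synchronization) is left entirely to the application, which is exactly the work devs never did:

// sketch only: enumerate adapters and create one D3D12 device per GPU.
// link with d3d12.lib and dxgi.lib; all actual multi-GPU scheduling is up to the app.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    std::vector<ComPtr<ID3D12Device>> devices;   // one device per physical adapter
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) continue;   // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            std::wprintf(L"adapter %u: %ls, %zu MiB VRAM\n", i, desc.Description,
                         static_cast<size_t>(desc.DedicatedVideoMemory >> 20));
            devices.push_back(device);
        }
    }
    // "heterogeneous mGPU" means the app now has to split work across `devices`
    // and copy results between them itself; no driver profile does it for you.
    return 0;
}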
SLI should've gotten full NVLink capabilities, with VRAM pooling and other stuff
Which would have made absolutely no difference. NVLink only makes sense for general compute, which is why it became limited to those SKUs
It became limited to those SKUs because it gives Nvidia a convenient reason to charge more. Let's not kid ourselves that they couldn't have added that functionality to consumer GPUs even if there's no reason for most people to use it.
I'm pretty sure that even over NVLink, SLI ran in master-slave mode, so Nvidia definitely could've improved it
The problems with multi-GPU rendering are software, not hardware. NVLink and the SLI/CF PCIe fingers were little more than placebos for gayming.
bloat
doesn't give as good a margin as people buying $1000 GPUs
everyone saying it doesn't work never used it; support was great and in every game back around 2012. The only thing is most games had to run in fullscreen mode instead of borderless, otherwise it literally was great
>2 cards 1.5x the speed of one card
>3 cards 1.8x the speed of one card
>4 cards 1.3x the speed of one card
It's not really cost effective compared to just buying a faster card. The reason it was a thing in the first place was because there weren't any faster cards to buy. Now we have meme cards that cost as much as a car and draw all the power you can get from a single outlet.
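Putting the scaling numbers from the post above against a naive cost calculation (purely illustrative, taking those figures as given and assuming each extra card costs the same as the first):
>2 cards: 2.0x the money for 1.5x the frames, so ~1.33x the cost per frame
>3 cards: 3.0x the money for 1.8x the frames, so ~1.67x the cost per frame
>4 cards: 4.0x the money for 1.3x the frames, so ~3.1x the cost per frame
Which is the long way of saying the same thing: past two cards it was never about value.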
the MOST cost effective is not doing any of this gaming baloney in the very first place
>buy flagship card at launch for $500
>buy second card a year later on holiday sale for $350
>spend next two years in comfort with more performance than any single gpu rig
>????????
>nvidia makes less profit
it was good while it lasted even if it didn't have perfect scaling
>draw all the power you can get from a single outlet
One day we'll all suffer for burger power voltlets.
the problem was that it just didn't scale well
some games you might get 60% more performance, some games you might get almost no benefit at all
also keep in mind VRAM is mirrored rather than pooled, so if you, say, got 2x 2GB cards, you still only have 2GB of VRAM, because the contents are duplicated: each card needs access to the same data since they're rendering the same scene
if you absolutely needed more performance than what the biggest card could offer, then it was the only way forward, but it was a hard sell
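On the VRAM mirroring point, the same thing is easy to see from the compute side. A minimal sketch with the CUDA runtime API (assumes the CUDA toolkit, build with nvcc; purely illustrative) can only report each card's memory separately, because separate per-device pools are all you ever get; anything "combined" only happens when software explicitly splits its data across devices, which AFR-style gaming couldn't do, hence the mirroring:

// sketch: query each GPU's memory separately with the CUDA runtime API.
// there is no combined pool; anything "shared" is really duplicated or
// explicitly partitioned by the application (same reason SLI mirrored VRAM).
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("no CUDA devices found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);                       // switch the runtime to this GPU
        size_t free_b = 0, total_b = 0;
        cudaMemGetInfo(&free_b, &total_b);        // per-device numbers only
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, dev);
        std::printf("GPU %d (%s): %zu MiB total, %zu MiB free\n",
                    dev, prop.name, total_b >> 20, free_b >> 20);
    }
    return 0;
}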
This is still the best way to get VRAM for AI, ya? I think 2x 3090 is the meta for 48GB of VRAM.
If VRAM is all you need, 24GB Quadro M6000s are cheap 2-slot blowers; you can get 4 of them on one board for 96GB of VRAM at $380-$440 per card
>buy 2 aliexpress 16gb rx580
>suddenly you have 32gb of VRAM for $230
>Maxwell
oof, they're on life support right now, you can do the same with cheaper Tesla cards though
multi-GPU setups are still used, it just turns out you don't need SLI for it
t. video renderer with 2 render cards and a 3rd for display during render
They just mean it's pointless for gaming; for literally anything else you use a GPU for, multiple cards are still good
yeah, but multi-GPU rendering is pretty bad because it can't be asynchronous unless you're building a frame buffer (which is not good for gaming for multiple reasons).
plus CrossFire and SLI needed a lot of additional programming since they weren't easy to implement. honestly gaming has barely changed apart from 4K textures.
You can only do so much to sync a single-threaded app between multiple devices.
Like parallel versus serial
it should be revived as a multi-VM setup
you don't need SLI for a multiseat environment
fine gay but i meant something else
well what did you mean?
cause it was never good to begin with.