Why did SLI die?

  1. 3 weeks ago
    Anonymous

    Because it was wasteful as fuck, not to mention being a stupid idea in the first place that only existed so Nvidia could eat more of your bank account for a pointless, yet contrived reason

    • 3 weeks ago
      Anonymous

      It was always wasteful to anyone with a bit of value sense
      >gpu corpos wanted to sell you as many gpus as a board can hold
      >then suddenly for no reason at all it stopped "being worth it" and started being "a waste of money" in a market where people pay out the asshole for 5% gains
      >later
      >HEY GUYS LOOK AT THIS STUFF WE OPERATE WITH 300 GPUS RUNNING AT THE SAME TIME SO COOL
      suddenly that narrative about keeping AI out of the peasants' hands is a little more plausible, thanks for saying what you did anon, it made me realize that

      • 3 weeks ago
        Anonymous

        SLI was dogshit and barely worked. Motherboards generally don't even give full PCIe x16 to more than one slot, so you're limited there as well. This isn't a schizo conspiracy.

        • 3 weeks ago
          Anonymous

          a fucking 4090 of all cards isn't limited by PCIe 2.0 despite being a PCIe 4.0 card, GPUs capable of SLI/xfire literally don't need more than an x4 slot's worth of bandwidth normally, two gen2 x8 slots were more than enough back then

      • 3 weeks ago
        Anonymous

        Is he right, anon friends?

  2. 3 weeks ago
    Anonymous

    Microstutter

    • 3 weeks ago
      Anonymous

      spbp. And it needed special driver profiles just to run in supported games.
      So games felt no smoother than on a single GPU despite the higher FPS counter, only useful in that case for e-peen.

  3. 3 weeks ago
    Anonymous

    because it barely worked
    just like running a game across dual-socket CPUs will kill your 1% lows, having multiple GPUs running a game will just make it a stuttery mess.

    • 3 weeks ago
      Anonymous

      >running a game over dual socket CPUs will kill your 1% lows,
      Works fine on my machine, maybe that was true before QPI links were a thing

      • 3 weeks ago
        Anonymous

        It's still 100% a thing, even more so since no multi-socket-capable CPU has good single-core performance, which also heavily impacts 1% lows.

        No matter how you slice it, a multi-socket system simply can't give you the same gaming performance the highest-end single-socket CPU will. To say otherwise shows a fundamental misunderstanding of what your CPU is actually doing and how it's doing it.

        • 3 weeks ago
          Anonymous

          >No matter how you slice it, a multi-socket system simply can't give the same gaming performance the highest end single socket CPU will get you
          Depends on your resolution.
          IDK why people think CPU gayming performance is such a critical thing when you need to bench at 1080p to see differences.
          For people who play at 4K/1440p, or who have anything short of a top-end GPU, the CPU hardly matters.

      • 3 weeks ago
        Anonymous

        >Works fine on my machine
        possibly the OS pins the game to a specific NUMA node, unless you play a NUMA-aware game, which is impossible since game devs don't even know what that is.
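
        For the curious, a minimal sketch of what that pinning looks like on Linux, using Python's os.sched_setaffinity. The "cores 0-15 = node 0" layout is just an assumption for illustration (check yours with lscpu or numactl --hardware), and full pinning would also need a memory policy such as numactl --membind.

        ```python
        import os

        # Assumed layout: cores 0-15 live on NUMA node 0 (check with `lscpu` / `numactl --hardware`).
        NODE0_CORES = set(range(16))

        # Pin the current process (pid 0 = this process) to node 0's cores so its
        # threads stop bouncing across the QPI/UPI link to the other socket.
        os.sched_setaffinity(0, NODE0_CORES)

        # Note: this only constrains CPU scheduling; memory placement would need
        # `numactl --cpunodebind=0 --membind=0 <game>` or an explicit mempolicy.
        print("allowed cores:", sorted(os.sched_getaffinity(0)))
        ```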

    • 3 weeks ago
      Anonymous

      >I don't understand what CPU topology, cache coherency and thread scheduling are: the post

  4. 3 weeks ago
    Anonymous

    Mediocre average frame rate for games and compatibility issues.
    On the professional side, it cuts into margins for GPGPU compute.

    • 3 weeks ago
      Anonymous

      >On the professional side, it cuts into margins for GPGPU compute.
      Wrong, professional is where multi-GPU rendering makes sense and the only place where it still exists.

  5. 3 weeks ago
    Anonymous

    Because no consumers were willing to spend an extra $2000 (or more) to get 10% more performance?

  6. 3 weeks ago
    Anonymous

    Now imagine all those GPUs but with 12VHPWR connectors

    • 3 weeks ago
      Anonymous

      How many nuclear reactors would you need for that?

  7. 3 weeks ago
    Anonymous

    nvidia and amd weren't interested in honing the drivers
    and game engine devs couldn't be bothered to fix their games, let alone to code for multistructured IAS

  8. 3 weeks ago
    Anonymous

    4090 takes the same space as 4 gpus with sli

    • 3 weeks ago
      Anonymous

      it's a 3.5-slot card actually.
      I had 1080s in SLI, those were the days

    • 3 weeks ago
      Anonymous

      >4090 takes the same space as 4 gpus with sli
      Also same power consumption and same bank account rape.

  9. 3 weeks ago
    Anonymous

    hterwsjmnytr

  10. 3 weeks ago
    Anonymous

    A pain in the ass for game devs mostly. Like the asynchronous GPU core programming for AMD, which only works in DOOM. Game engine devs are less and less skilled nowadays.

  11. 3 weeks ago
    Anonymous

    Because it is the most wasteful thing to pay for.
    It's like paying for DLSS. You get some games that support it, most of which you probably won't play. Some enhance your framerate but degrade your visual fidelity, with flickering or garbage 1% lows, etc.
    Stepping aside from the comparison, it wastes so much energy for the little advantage it offers (most scaling wasn't even 1:1).
    It was the pinnacle of PC gaming consumerism.

  12. 3 weeks ago
    Anonymous

    I miss having Voodoo 2s anons

  13. 3 weeks ago
    Anonymous

    Crossfire is better

  14. 3 weeks ago
    Anonymous

    Because SLI only made sense when the single fastest GPU on the market couldn't meet your expectations. I had GTX 1080s in SLI and it was great, I was gaming at 4K in 2016, way ahead of the curve.

  15. 3 weeks ago
    Anonymous

    SLI died because it was shit.

  16. 3 weeks ago
    Anonymous
    • 3 weeks ago
      Anonymous

      is that gigachad

  17. 3 weeks ago
    Anonymous

    because nvlink took over

    • 3 weeks ago
      Anonymous

      nvidia managed to kill that off on anything that isn't a datacenter card too. quite the stupid decision, even if your wallet is bottomless an H100 only has a single GPC capable of graphics workloads. multi-gpu is a meme when it comes to games nowadays but is still useful for non-realtime rendering

  18. 3 weeks ago
    Anonymous

    programming games to use it was a bitch. Even DirectX 12, which was supposed to make taking advantage of multiple GPUs easy, didn't help.

  19. 3 weeks ago
    Anonymous

    currently it only makes sense in VR, since there's almost no penalty when you render a different image for each eye to split the load across two GPUs, but nobody is pushing for it (rough sketch after the reply below)

    • 3 weeks ago
      Anonymous

      Until the perspective in one eye is easier than the other, renders faster, and you end up with different framerates in each eye and end up throwing up
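
    A rough toy sketch of that per-eye split, and of the sync point that keeps both eyes in lockstep (the mismatched-framerate problem the reply above describes). This assumes PyTorch with two CUDA devices standing in for a real renderer; render_eye is a placeholder workload, not an actual render pass.

    ```python
    import torch

    left_dev, right_dev = torch.device("cuda:0"), torch.device("cuda:1")

    def render_eye(scene, device):
        # Placeholder for a real per-eye render pass: just heavy math on that GPU.
        s = scene.to(device, non_blocking=True)
        return s @ s.T

    scene = torch.randn(4096, 4096, pin_memory=True)

    left = render_eye(scene, left_dev)    # CUDA launches are asynchronous,
    right = render_eye(scene, right_dev)  # so both GPUs work on their eye at once.

    # Wait for BOTH eyes before presenting the frame; skip this and the easier
    # perspective finishes first, i.e. the different-framerate-per-eye problem.
    torch.cuda.synchronize(left_dev)
    torch.cuda.synchronize(right_dev)
    ```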

  20. 3 weeks ago
    Anonymous

    What is the modern equivalent to SLI in terms of being grossly exorbitant? Having a quad-SLI setup was some kind of big dick shit, even if it was functionally pretty shitty.

    • 3 weeks ago
      Anonymous

      Buying a 4090 today is more expensive than SLI setups back when they were relevant.

    • 3 weeks ago
      Anonymous

      There is basically no real equivalent.
      Like the other Anon says, the closest thing to an equivalent would be a 4090, and an expensive one at that.

  21. 3 weeks ago
    Anonymous

    imagine the power consumption

  22. 3 weeks ago
    Anonymous

    GPUs got so powerful that there is no point anymore. Anything above a 2080 is masturbation.

  23. 3 weeks ago
    Anonymous

    >the chain is complete
    >itll heat itself once it reaches criticality

  24. 3 weeks ago
    Anonymous

    #1 - it is multi-GPU rendering, not SLI/CF
    #2 - All of the common methods of doing multi-GPU rendering have their own set of strengths and weaknesses, but ultimately none of them made much sense outside of epenis scores
    #3 - the new focus on 1% lows / frametime consistency as the metric that matters utterly destroyed multi-card rendering

  25. 3 weeks ago
    Anonymous

    Same reason there won't be a new x99 chipset. They don't want to make consumer hardware that could possibly be used by enterprise customers. They want to keep the use cases and price points far apart.

    • 3 weeks ago
      Anonymous

      For the AMD camp, it's the reason why there won't be another Threadripper. You have to get a full-blown workstation board.

    • 3 weeks ago
      Anonymous

      >For the AMD camp, it's the reason why there won't be another Threadripper. You have to get a full-blown workstation board.

      With W790, HEDT ain't all bad tbh
      Boards are expensive, but so are regular desktop boards.

      • 3 weeks ago
        Anonymous

        HEDT simply went back to its roots. Cheap HEDT is what is really dead. If you want more DIMM slots, PCIe lanes, etc., you have to pay the workstation/server platform tax.

        • 3 weeks ago
          Anonymous

          It's not really expensive tho
          $879 for all the PCIe 5 slots you could want, dual 10Gb, Thunderbolt and your full 8 SATA ports is really a bargain when you consider what you get on the desktop side.
          I can't even think of a desktop board that gives you 2x PCIe gen 5 slots except maybe an Aorus Xtreme, and that's a more expensive board.

          CPUs are the only thing that's really a premium on the platform. Of course, nothing stopping you from buying a W3-2423

          • 3 weeks ago
            Anonymous

            You are forced to get half-locked CPUs that are binned for efficiency rather than raw performance, which is a repellent for min-max gayming-fag types.
            Granted, such types aren't interested in having an abundance of I/O connectivity and memory capacity.

            • 3 weeks ago
              Anonymous

              You can get unlocked versions too
              It is possibly for the better that the CPUs aren't the best single-thread performers.
              That could change if W790 ever gets HBM

  26. 3 weeks ago
    Anonymous

    Hello, thanks for the question

    DX12 introduced mGPU, which is API-level "SLI/CF". This was supposed to be able to do heterogeneous GPU combinations, like your iGPU+dGPU or two dGPUs from different companies. Basically SLI but better, and Nvidia and AMD saw the writing on the wall and dropped their own multi-GPU support. Problem is, mGPU, and especially heterogeneous mGPU, is very difficult to implement, much more difficult than multithreading games or ray tracing.

    Game devs are big gay lazy babies and so there's exactly 1 game that ever got mGPU: Ashes of the Singularity. It's a terribly boring game that is only useful for benchmarking, came out like 10 years ago, and the devs autistically add every new CPU and GPU feature to it. It's also used to benchmark CPUs because it scales very well with cores, one of the only games that actually does. Other than that, there are several boring compute projects that use mGPU to calculate stupid junk, and as far as I can tell most are actually abandoned student projects.

    Tl;Dr: SLI died for vaporware

  27. 3 weeks ago
    Anonymous

    SLI should've gotten full NvLink capabilities with pooling the vram and other stuff

    • 3 weeks ago
      Anonymous

      Which would have made absolutely no difference. NVLink only makes sense for general compute, which is why it became limited to those SKUs

      • 3 weeks ago
        Anonymous

        It became limited to those SKUs because it gives Nvidia a convenient reason to charge more. Let's not kid ourselves that they couldn't have added that functionality to consumer GPUs even if there's no reason for most people to use it.

      • 3 weeks ago
        Anonymous

        I'm pretty sure that even over Nvlink, SLI ran in master-slave, so Nvidia definitely could've improved it

        • 3 weeks ago
          Anonymous

          The problems with multi-GPU rendering are software, not hardware. NVLink and SLI/CF PCIe fingers were little more than placebos for gayming.

  28. 3 weeks ago
    Anonymous

    bloat

  29. 3 weeks ago
    Anonymous

    doesn't give as good of a margin as people buying $1000 GPUs

    everyone saying it doesn't work never used it, support was great and in basically every game back around 2012, the only thing is most games had to run in fullscreen mode instead of borderless, otherwise it was literally great

  30. 3 weeks ago
    Anonymous

    >2 cards 1.5x the speed of one card
    >3 cards 1.8x the speed of one card
    >4 cards 1.3x the speed of one card
    It's not really cost effective compared to just buying a faster card (quick perf-per-dollar math after the replies below). The reason it was a thing in the first place was because there weren't any faster cards to buy. Now we have meme cards that cost as much as a car and draw all the power you can get from a single outlet.

    • 3 weeks ago
      Anonymous

      the MOST cost effective is not doing any of this gaming baloney in the very first place

    • 3 weeks ago
      Anonymous

      >buy flagship card at launch for $500
      >buy second card a year later on holiday sale for $350
      >spend next two years in comfort with more performance than any single gpu rig
      >????????
      >nvidia makes less profit
      it was good while it lasted even if it didn't have perfect scaling

    • 3 weeks ago
      Anonymous

      >draw all the power you can get from a single outlet
      One day we'll all suffer for burger power voltlets.
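
    To put a number on "not cost effective": quick back-of-the-envelope math using the scaling figures quoted in the post above and an assumed flat $500 per card (the price is purely illustrative).

    ```python
    # Scaling factors quoted above (relative to one card) and an assumed $500 per card.
    CARD_PRICE = 500
    scaling = {1: 1.0, 2: 1.5, 3: 1.8, 4: 1.3}

    for cards, speedup in scaling.items():
        cost = cards * CARD_PRICE
        print(f"{cards} card(s): {speedup:.1f}x for ${cost} "
              f"-> {speedup / cost * 1000:.2f}x per $1000")

    # 1 card buys 2.00x per $1000 spent, 4 cards only 0.65x per $1000:
    # every extra card buys less performance per dollar.
    ```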

  31. 3 weeks ago
    Anonymous

    the problem was that it just didn't scale well
    some games you might get 60% more performance, some games you might get almost no benefit at all
    also keep in mind VRAM is mirrored, not added together, so if you, say, got 2x 2GB cards, you still only have 2GB of VRAM, because the contents are duplicated on both cards since each one needs access to the same data while rendering the same scene
    if you absolutely needed more performance than what the biggest card could offer, then it was the only way forward, but it was a hard sell

  32. 3 weeks ago
    Anonymous

    This is still the best way to get VRAM for AI, yeah? I think 2x 3090 is the meta for 48GB of VRAM (sketch of how the split actually works at the end of this sub-thread).

    • 3 weeks ago
      Anonymous

      If VRAM is all you need, 24GB Quadro M6000s are cheap 2-slot blowers, you can get 4 of them on one board for 96GB of VRAM at $380-$440 per card

      • 3 weeks ago
        Anonymous

        >buy 2 aliexpress 16gb rx580
        >suddenly you have 32gb of VRAM for $230

        >Maxwell
        oof, they're on life support right now, you can do the same with cheaper Tesla cards though
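
    As mentioned above, a minimal sketch of what "stacking VRAM" across cards actually means in practice: you don't get one pooled 48GB or 96GB device, you place different layers on different cards and shuttle activations between them. PyTorch shown here; the layer sizes and the simple two-way split are assumptions for illustration only.

    ```python
    import torch
    import torch.nn as nn

    class SplitModel(nn.Module):
        """Toy model-parallel split: first half on cuda:0, second half on cuda:1."""
        def __init__(self):
            super().__init__()
            self.first = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("cuda:0")
            self.second = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU()).to("cuda:1")

        def forward(self, x):
            x = self.first(x.to("cuda:0"))
            # Activations hop over PCIe here -- the part NVLink/SLI bridges were
            # supposed to make cheap, and why this is slower than one big card.
            return self.second(x.to("cuda:1"))

    model = SplitModel()
    out = model(torch.randn(8, 4096))
    print(out.shape, out.device)  # torch.Size([8, 4096]) cuda:1
    ```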

  33. 3 weeks ago
    Anonymous

    multi GPU setups are still used, it just turns out you don't need SLI for it
    t. video renderer with 2 render cards and a 3rd for display during render

    • 3 weeks ago
      Anonymous

      They just mean it's pointless for gaming; for literally anything else you use a GPU for, multiple cards are still good

      • 3 weeks ago
        Anonymous

        yeah, but multi-GPU rendering is pretty bad because it can't be asynchronous unless you're building a frame buffer (which is not good for gaming for multiple reasons).
        plus CrossFire and SLI needed a lot of additional programming since they weren't easy to implement. honestly gaming has barely changed apart from 4k textures.

  34. 3 weeks ago
    Anonymous

    You can only do so much to sync a single threaded app between multiple devices.
    Like parallel versus serial

  35. 3 weeks ago
    Anonymous

    it should be revived as multi VM setup

    • 3 weeks ago
      Anonymous

      you don't need sli for a multiseat environment

      • 3 weeks ago
        Anonymous

        fine gay but i meant something else

        • 3 weeks ago
          Anonymous

          well what did you mean?

  36. 3 weeks ago
    Anonymous

    cause it was never good to begin with.
