When an 8 year old card has more bandwidth

It's honestly embarrassing. The R9 390 came out 8 years ago and it has more memory bandwidth, the same amount of VRAM, and a much smaller MSRP than a card released in 2023. How in the fuck did this happen? Seriously, it's embarrassing.
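
For reference, peak memory bandwidth is just bus width times per-pin data rate. A quick sketch of the numbers, assuming the 2023 card in question is the RTX 4060 Ti (as the replies suggest):

```python
# Peak theoretical memory bandwidth: bus width (bits) / 8 * per-pin data rate (Gbps).
def bandwidth_gb_s(bus_bits: int, pin_rate_gbps: float) -> float:
    return bus_bits / 8 * pin_rate_gbps

# R9 390 (2015): 512-bit bus, 6 Gbps effective GDDR5
print(bandwidth_gb_s(512, 6.0))   # 384.0 GB/s
# RTX 4060 Ti (2023): 128-bit bus, 18 Gbps GDDR6
print(bandwidth_gb_s(128, 18.0))  # 288.0 GB/s
```

Both cards have 8GB of VRAM; the R9 390 launched at $329 against the 4060 Ti's $399.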

  1. 2 weeks ago
    Anonymous

    it's only going to get worse as companies resort to increasingly sneaky tricks and machine learning gimmicks to avoid admitting that we've hit a wall and further progress is not really possible from here (at least not at reasonable levels of power consumption and heat output for a home device)

    • 2 weeks ago
      Anonymous

      DLSS and FSR are fucking ruining GPUs.

      • 2 weeks ago
        Anonymous

        these technologies are objectively a good thing. if you can get better performance with the same image quality, why wouldn't you?

        • 2 weeks ago
          Anonymous

          Yeah, except devs realized that they can be lazy because of them.

      • 2 weeks ago
        Anonymous

        No, amd not being able to provide competition is killing gpus

    • 2 weeks ago
      Anonymous

      It doesn't cost them anything to slap more VRAM on their cards though.
      (obviously there's a cost, but they're a large enough company to eat it, were it not for the shareholders who would be upset about them taking a pro-consumer stance).

      It's embarrassing to see texture bugs on these cards.

      • 2 weeks ago
        Anonymous

        They're cucking VRAM as a form of planned obsolescence for gamers, and because things like AI and machine learning use a lot of VRAM and they want businesses to pay extra for business-directed cards

        • 2 weeks ago
          Anonymous

          They're going to lose out to Intel then for anyone that doesn't need CUDA.
          To paraphrase Steve from GN:
          >Nvidia has handed Intel their best marketing yet. They don't have to do anything but sit there and the cards will sell themselves

          • 2 weeks ago
            Anonymous

            All I can hope for is that software starts moving away from JUDA toward more open alternatives

            • 2 weeks ago
              Anonymous

              unlikely, devs can afford nvidia gpus and are not interested in dealing with the buggy mess that are intel and amd drivers.

              • 2 weeks ago
                Anonymous

                >buggy mess that are intel and amd drivers.
                They're not buggy.
                Devs aren't interested because Nvidia already has that vendor lock-in.
                New software often supports AMD and Nvidia. It's older software that was often written with CUDA only in mind due to lack of alternatives that's problematic.

                Think about it for a moment. If you're a developer that's already heavily invested in CUDA, why on earth would you re-write your software for AMD or Intel?

              • 2 weeks ago
                Anonymous

                how are they not buggy? rocm/hip is an absolute mess that barely works on consumer gpus and doesn't even support windows. intel drivers can't even compile many vulkan shaders without shitting themselves and they refuse to fix it because it doesn't affect "big games". in comparison, building software with cuda is a pleasure, mostly everything works as expected, integrates perfectly with the rest of your c++ code, and if something isn't supported it throws a clear error instead of crashing or silently corrupting your data. opencl is an absolute joke that requires tons of boilerplate code and different patches and exceptions for each manufacturer.

              • 2 weeks ago
                Anonymous

                >rocm/hip is an absolute mess that barely works on consumer gpus and doesn't even support windows.
                Developers don't run Windows and data centres don't run it either. Windows is irrelevant.
                ROCm also works on most of the GPUs that matter now, even if they took their sweet time to do so.

              • 2 weeks ago
                Anonymous

                >Developers don't run Windows
                Say that again.

              • 2 weeks ago
                Anonymous

                Read what you're actually posting instead of parroting marketing BS
                >Professional use: 48.82%
                More developers use Linux and macOS (combined) than Windows.

            • 2 weeks ago
              Anonymous

              What is the point of even caring. There are no new good games. Everything is pozzed. Games peaked in like 2008. You can run anything you want on a 1060 easily. And with video editing and photo editing, sure, but then just buy a 3060 for $300 and you can do whatever hobbyist shit you want. Everything is gay these days thanks to ESG infecting everything.

      • 2 weeks ago
        Anonymous

        >It doesn't cost them anything to slap more VRAM on their cards though.
        Dumbest thing posted on LULZ all day.

        No, they can't just add more VRAM. Cost is only one small roadblock. The modules only come in relatively small capacities and they run hot. So there's not enough room on the board and it would produce too much heat. Moving the chips all over hell's half acre is not an option because of the sensitive electrical tolerances of all the parts involved.

        • 2 weeks ago
          Anonymous

          >The modules only come in relatively small capacities and they run hot
          So build a cooler that's not shit or liquid cool the whole thing.

          • 2 weeks ago
            Anonymous

            >liquid cooling a consumer GPU
            >liquid cooling anything other than a mainframe
            LULZ is a disease

            • 2 weeks ago
              Anonymous

              It helps a lot with heat dissipation. If we're really at the point where it's impossible to build a GPU of that class with more than 8GB of VRAM because the modules run too hot it's time to insist on adding copper backplates to everything along with an AIO.

              • 2 weeks ago
                Anonymous

                >still trying to argue in favor of home computer users buying three foot long 900W tumors with a cooling system that ruins the entire machine if/when it fails
                What if Advanced Mental Diseases and Nvidiot spent a year optimizing their trash?

              • 2 weeks ago
                Anonymous

                AIOs can be very small. Did you know there are liquid cooled laptops in existence now?
                The tech has come a long way.
                As for waiting a year, maybe Nvidiot should have thought of that before releasing a half-baked product nobody is going to buy.

              • 2 weeks ago
                Anonymous

                >Did you know there are liquid cooled laptops in existence now?
                Yes, and it's still retarded.

                I'm likely going to upgrade my workstation to an A2000 specifically so that I don't have to hear it 24/7 and see its skid marks on my electric bills. They released this card for people like me who refuse to buy LULZtard cards that pop the fuses downstairs.

          • 2 weeks ago
            Anonymous

            Was Vega not enough of a warning for you?

    • 2 weeks ago
      Anonymous

      >avoid admitting that we've hit a wall and further progress is not really possible from here
      Nvidia's jump from 8nm Samsung to 4N TSMC is the biggest generational IPC increase they've had since Pascal, and Pascal was a massive increase in IPC. Look at the 4090 vs the 3090 as a point of comparison. Nvidia is just selling shit GPUs because people will buy them. The 4080 12G was fucking berated, unlaunched, people were about to burn Jensen's house down, but now that it's a 4070 Ti for $100 cheaper, it's actually managing to move some stock. Nvidia is playing the patient game. Early buyers get absolutely fucked on pricing, buying an xx50 Ti-class card for the price of what used to be an xx80; Nvidia gets incredible margins without losing market share, will probably slash prices come Black Friday/Christmas, and people will buy because they've been starved for new cards.

      • 2 weeks ago
        Anonymous

        >buying a 50ti card for the price of what used to be a xx80
        THERE HAS NEVER BEEN AN 80-CLASS CARD FOR $399

        • 2 weeks ago
          Anonymous

          The 980 was $550. The 4060 Ti 16G will be $500. And if you go back even further, the 9800 GTX was released at $350, around $500 adjusted for inflation, so not only are you wrong, you're wrong by about 50 bucks
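
          As a rough sanity check on that inflation figure (a sketch; the CPI averages are approximate):

          ```python
          # Rough CPI adjustment of the 9800 GTX's $350 launch price into 2023 dollars.
          # The CPI values are approximate US annual averages (assumption).
          cpi_2008, cpi_2023 = 215.3, 304.7
          msrp_2008 = 350.0
          print(round(msrp_2008 * cpi_2023 / cpi_2008))  # ~495
          ```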

    • 2 weeks ago
      Anonymous

      Nobody has done commercial adiabatic processors yet. Since those can reduce the power dissipation to single-digit electronvolts per switch, they should be able to do thousand-layer GAA processes, making up for the drop in necessary clock rate.
      Given that, expect equivalent FLOPS without fans at some point.
      That is, if they ever decide to implement it.

      • 2 weeks ago
        Anonymous

        >equivalent flops without fans
        That would be nice. Let's ditch these chunky hot bricks.

  2. 2 weeks ago
    Anonymous

    yes, more bandwidth, but a different architecture back then

    they can still use a 512-bit bus in 2023, but they are greedy companies ripping people off. they got so rich they don't care about the customer anymore... money can make people do bad things, even companies like AMD, Nvidia, Intel, etc.

  3. 2 weeks ago
    Anonymous

    >same bus width as the 4060ti

    • 2 weeks ago
      Anonymous

      I mean, the GTX 960 also has the same bus width, and because of that it sometimes lost to the 760 (though now, because Maxwell kept getting drivers, a 960 is about as good as a 780, while a 750 Ti matches a 760).

      However, the difference was that the 960 was $199 instead of $399. And as such, the 960 was a very popular card.

      • 2 weeks ago
        Anonymous

        >$200 card with bargain bin memory bus
        I don't love it, but it was ok
        >$300 card with same memory bus
        Insulting.

    • 2 weeks ago
      Anonymous

      That was a very good graphics card though; it's usually faster than the GTX 760 nowadays. That's how much better Maxwell was compared to Kepler.
      Nvidia doesn't release graphics cards like that any more, or like the GTX 1050 Ti before it became overpriced.

    • 2 weeks ago
      Anonymous

      See

      >memory is running at 6gbps
      >bandwidth is higher
      When will you retards realize the bottleneck is the vram itself? If the vram is running at a lower data speed than the bus then it makes no difference. Get it through your retarded fucking skulls.

  4. 2 weeks ago
    Anonymous

    The spot price of GDDR6 is $3.4/GB. An A100 has $272 worth of VRAM.
    They know exactly what they're doing, and the goycattle are willingly accepting it.
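
    The arithmetic behind that $272 figure, taking the claimed spot price at face value (though, as the reply below points out, the A100 actually uses HBM rather than GDDR6):

    ```python
    # $272 = claimed GDDR6 spot price ($/GB) times the A100's 80 GB of VRAM.
    # Note: the A100 actually uses HBM2e, which costs far more per GB.
    spot_price_usd_per_gb = 3.4
    a100_vram_gb = 80
    print(spot_price_usd_per_gb * a100_vram_gb)  # 272.0
    ```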

    • 2 weeks ago
      Anonymous

      A100 uses extremely expensive HBM you fucking retard.

      • 2 weeks ago
        Anonymous

        It can't be that expensive if AMD were able to cram it in their goyslop cards for several years and still remain somewhat price competitive.

        • 2 weeks ago
          Anonymous

          Vega was a disaster that received multiple price cuts just to attempt to stay halfway competitive, not exactly a shining example of a mainstream GPU.

  5. 2 weeks ago
    Anonymous

    supporting wider buses costs expensive die area, and the card is not memory bottlenecked, so what is the fucking problem?
    why is LULZ so tech illiterate?
    the one card that could actually use a wider bus is the 4090 (and the 4080 a bit), but the rest are just fine

    • 2 weeks ago
      Anonymous

      Except the 4060 Ti is losing to the 3060 Ti at higher resolutions. This card is DOA.

  6. 2 weeks ago
    Anonymous
    • 2 weeks ago
      Anonymous

      The Radeon VII is based for proving that VRAM amount, bus width, and memory bandwidth all don't matter in games.

      • 2 weeks ago
        Anonymous

        Brainlet. It matters until you hit another bottleneck.

        • 2 weeks ago
          Anonymous

          So why doesn't the Radeon VII, with its 4096-bit bus, 1 TB/s bandwidth, and 16GB of VRAM, beat the 3080, 3080 Ti, 4070, or 4070 Ti?
          Could it be that the memory specs only become the bottleneck far later than the GPU core does?
          Nah, must have been the memory all along.

          • 2 weeks ago
            Anonymous

            This is what I'm talking about. A 128-bit bus is naggerlicious nowadays, and the 4060/4060 Ti suffer from it. You'd think that they'd give the 4060 Ti a 256-bit bus, but Jensen is too much of an israelite to do that. They're exploiting FOMO up the ass.

            • 2 weeks ago
              Anonymous

              It's not naggerlicious if it beats the 4096-bit, 1TB/s bandwidth, 16GB Radeon VII.

              • 2 weeks ago
                Anonymous

                to be fair, from the few people that did benchmarks of it this year, it seems to be close to a 3070 at 2K and close to a 3080 at 4K in like 2 games (BFV and CoD)

      • 2 weeks ago
        Anonymous

        It doesn't matter until you run out. Which is currently happening on both the 4060 and the 7600.

  7. 2 weeks ago
    Anonymous

    HBM is expensive and unreliable; the chips often desolder themselves or die.

  8. 2 weeks ago
    Anonymous

    Stagnation is about to hit the decade mark.

    • 2 weeks ago
      Anonymous

      The AI craze is going to boost Nvidia a lot. They'll just ride that wave and keep pumping out marginally better cards.

  9. 2 weeks ago
    Anonymous

    none of nvidia's cards are actually optimized for machine learning. i predict they have started working on a card for AI, because they could legit make exponential leaps in performance for that use case in a very small time frame

    • 2 weeks ago
      Anonymous

      Nvidia cards which are optimized for machine learning can't be used in your Joe Shmoe PC.

    • 2 weeks ago
      Anonymous

      The Nvidia Tesla lineup is built for machine learning.
      Nvidia literally developed an AI self driving racing league to market Tesla cards.

  10. 2 weeks ago
    Anonymous

    I just want AMD to do a 75W card with at least 8 lanes. They do like a 50W single slot card, and the next card up is 100W+, but they both run on only 4 lanes, which seems retarded, especially on older PCIe spec.

    • 2 weeks ago
      Anonymous

      Why would you make that a card when a theoretical 120W APU can easily be handled by a B650 board?

      • 2 weeks ago
        Anonymous

        sub 120W cards exist, which means people want them. i personally like having wattages spread out across multiple chips, with potential for passive cooling. i also don't want to be stuck with x8 lanes in case i want to repurpose or upgrade in the future. i also need the additional i/o which the motherboard itself doesn't usually provide.

        • 2 weeks ago
          Anonymous

          >sub 120W cards exist, which means people want them
          The only "people" who wanted them were businesses which need something cheap to run multiple monitors and those who wanted an HTPC with something more powerful than Intel HD Graphics. Now that AMD is back on the top of the consumer CPU market, APUs are making a comeback. Creating an entirely new chip and designing a card around it when you can just shove in the same number of CUs in the CPU just makes no sense, from both a monetary and packaging standpoint.

          • 2 weeks ago
            Anonymous

            Whatever, dude. I've given you my reasons and I'm not the only one with similar needs. If AMD continues to gimp the lanes on their low end cards in the future, I'll most likely just keep buying from Nvidia - no biggie.

            • 2 weeks ago
              Anonymous

              NTA but I agree. There's certainly a market for 75W cards beyond simply being "more powerful than Intel HD Graphics".
              Unfortunately you're mostly limited to used Quadro/Tesla cards.

            • 2 weeks ago
              Anonymous

              And what the fuck are you going to buy from Nvidia? The 4060 Ti is 160 watts and is an x8 GPU. The 4060 non-Ti will also be an x8 card (and that card is probably not going to be a sub-120W GPU). And the 4050 will certainly be an x8 card, and that is probably going to be the only sub-120W GPU from Nvidia.

              Unless you go Quadro, you won't find a 16-lane, sub-120W card from Nvidia. Unless you buy an ancient 1650, which performs worse than an RX 6400 even in PCIe 3.0 systems.

            • 2 weeks ago
              Anonymous

              Like seriously, WTF are you going to buy from Nvidia? The 1800 euro Nvidia RTX 4000 SFF Ada? A used, mined RTX A2000? An Nvidia T1000? A GTX 1650? GTX 1630? GTX 1050 Ti?

              Because those are your options.

              • 2 weeks ago
                Anonymous

                >used mined RTX A2000
                Highly doubt many mined on A2000. They would be paying 3090 prices for 1660 hash rate.

              • 2 weeks ago
                Anonymous

                Oh, forgive me for including the 1630. That one is an x8 card. So your consumer options are the GTX 1050 Ti or the 1650, or the various Quadro options I mentioned.

            • 2 weeks ago
              Anonymous

              >implying Nvidia cares about people like you
              Lmao keep waiting nagger. I'm sure the next GTX 1630 or GT 1030 DDR4 will be great. AMD and Intel are the only ones who care about this segment and neither of them will waste resources for an EXTREMELY niche market that you're a part of. APUs and high density packaging is where the industry is headed, like it or not.

              • 2 weeks ago
                Anonymous

                I'll just buy from whoever offers a card closest to my requirements whenever that time comes. I don't see why this upsets you so much.

                >This is what integrated graphics is for. There is no purpose for a 1030 or 1630 or whatever to even exist.

                My 1650 exists, works fine, and is better than my iGPU.

              • 2 weeks ago
                Anonymous

                >I'll just buy from whoever offers a card closest to my requirements whenever that time comes. I don't see why this upsets you so much.
                You're going to be waiting until the death of the universe. The 1050 Ti has been the best LP card for the past 7 years, and neither Nvidia nor AMD care enough about this segment to change that fact. The fact that your 1650 is only 10% faster than AMD's laptop 780M should be telling enough; expecting multibillion dollar companies to conform to your desires is a very backwards mindset when they've already figured out a much better solution.

              • 2 weeks ago
                Anonymous

                >multibillion dollar companies conforming to your desires
                funny how you make this kind of shit up just so you can rage about it

              • 2 weeks ago
                Anonymous

                >funny how you make this kind of shit up just so you can rage about it
                It's literally your mindset tho. You'd rather believe in a false narrative and wait until your bones rot instead of seeing the state of the industry. As I said, a Radeon 780M, a laptop GPU that alongside its 8c/16t CPU draws no more than 28W, is close in performance to your desktop card. No fucking manufacturer will ever bother to make a GT 730/GT 1030/GTX 1650-style LP card again; 90% of the cost would be PCB/VRAM/ports/cooler/packaging, when just having those 8 or 12 CUs in the CPU adds a minuscule increase in TDP and cuts costs threefold. It's been the trend for a while now, and stuff like chiplets for the low end will kill all LP cards. Not even mentioning the fact that both AMD and Nvidia are shifting to higher-margin products in general. Just do yourself a favor and spend time researching the topic before calling people who know better than you delusional.

              • 2 weeks ago
                Anonymous

                holy shit, false narrative this, companies conforming that... meds, now, homosexual. i don't care about any of this anywhere near as much as you do. i didn't say you were delusional, but by god, you're trying real hard to prove it.

              • 2 weeks ago
                Anonymous

                Or you know, maybe you're just ignorant? You ever thought about that?

              • 2 weeks ago
                Anonymous

                like i said, i just don't particularly care. but feel free to keep trying to put some negative twist on it, seeing as I'm the one living in your head rent free.

            • 2 weeks ago
              Anonymous

              This is what integrated graphics is for. There is no purpose for a 1030 or 1630 or whatever to even exist.

  11. 2 weeks ago
    Anonymous

    No wonder my GPU is still enough.

  12. 2 weeks ago
    Anonymous

    gcn/cdna was, and still is, a compute monster, anon

  13. 2 weeks ago
    Anonymous

    >memory is running at 6gbps
    >bandwidth is higher
    When will you retards realize the bottleneck is the vram itself? If the vram is running at a lower data speed than the bus then it makes no difference. Get it through your retarded fucking skulls.

  14. 2 weeks ago
    Anonymous

    Nvidia decided they will israelite the customer as much as possible and AMD followed suit because they realized it will make them more money than trying to actually compete with Nvidia. And Intel graphics cards turned out to be a joke.

  15. 2 weeks ago
    Anonymous

    What a fucking joke. I'm going to be using my 1080ti until the sun obliterates Earth.

  16. 2 weeks ago
    Anonymous

    >comparing current MSRPs to MSRPs from yesteryear
    What is inflation? Your money is simply worth less today. Sad but true.

  17. 2 weeks ago
    Anonymous

    Looking at the 512bit bus on my old R9 Fury really hits home just how clown world the GPU market has become.

    • 2 weeks ago
      Anonymous

      I take it back, it's 4096 bit

      • 2 weeks ago
        Anonymous

        The 290/390 was 512 bit, which was already nuts.

  18. 2 weeks ago
    Anonymous

    >more means better
    Modern bus width is not the same, retard. Modern GPUs can compress memory traffic a lot more than this piece of crap could, so the same width goes a lot further than it did on old cards. Fucking retard.

  19. 2 weeks ago
    Anonymous

    the last GPU that supported analog output came out 8 years ago
