Wtf is the point of the RTX 4060ti???

If it had the same performance as the RTX 3070ti, with 16gb, for $400, it would easily be a good buy.

But it is around the same performance as the RTX 3060ti, for the same price. Why?

  1. 5 days ago
    Anonymous

    Because Nvidiots will buy it anyway. Jensen won.

  2. 5 days ago
    Anonymous

    Because they know nobody is buying their shit cards this generation so it's basically a throwaway card. When the 5060ti comes out they can say "LOOK GUYZ IT MAKES 20% MOAR EFF PEE ESSES" and everyone's gonna lose their shit and completely ignore the fact that it's only a 20% gain over two generations.

  3. 5 days ago
    Anonymous

    To upsell their overpriced better models, dummy.

    Uneducated people wanting to get into gaming will buy it as a budget option.

    Educated gamers won't touch this shit but are forced to buy overpriced higher tier garbage if they want to upgrade.

  4. 5 days ago
    Anonymous

    3D work and AI, not everything is about muh gayms. The 4060 not being offered with 12GB should be a crime though, since the 3060 had it.

    • 5 days ago
      Anonymous

      Because time has shown that people will happily pay $400 for that level of performance. It's also cheaper for them to manufacture, with its smaller memory bus and lower VRAM, and they can just upsell shit like AI performance, upscaling, hardware encoding/decoding, and frame generation to argue it's actually much better value than the previous gen.

      Too bad, pay the extra $100 if you want more than 8GB VRAM.

      • 5 days ago
        Anonymous

        Nah, I use the 3060 12GB for my work and the jump from 12GB to 16GB is not as worth it as the jump from the 6GB 1060 I had before. The lower rendering times would be nice, but not worth it since I already have a decent turnaround time. But it's a missed opportunity.

    • 5 days ago
      Anonymous

      >3D work and AI
      Not with that VRAM lol

      • 5 days ago
        Anonymous

        >16GB
        >Not enough
        I'll admit idk much about AI, but I do freelance animation on a 12GB 3060 and I've yet to have it go OOM; my biggest project used about 10GB while rendering.

  5. 5 days ago
    Anonymous

    It's a moneycow for nvidia
    >make a 4050 call it a 4060ti
    >upcharge by 100%

    • 5 days ago
      Anonymous

      >comparing cuda cores across different architectures
      The absolute state of this board

      • 5 days ago
        Anonymous

        The memory bus and relative transistor count within the generation, you dumb cunt.

        >Lol nobody is buying nvidia for gaming. They are buying for AI, and vram is king in AI.
        It'll probably still be a bottleneck. Card is just plain bad. Nvidia have gone nuts calling this a 106 die.

        • 5 days ago
          Anonymous

          You can't possibly be this deluded, surely

          have fun getting out of memory errors

      • 5 days ago
        Anonymous

        They're the same for ampere and ada tbh

      • 5 days ago
        Anonymous

        >t. Enjoyer of leaded water

        • 5 days ago
          Anonymous

          >sweetener
          >0 calories
          >this is supposed to be a bad thing
          ?

  6. 5 days ago
    Anonymous

    it uses 40% less power than the 3060 ti

    • 5 days ago
      Anonymous

      40W, retard

  7. 5 days ago
    Anonymous

    Because Nvidia thinks their customers are cashed up retards. And they are correct in this belief.

  8. 5 days ago
    Anonymous

    People that buy those kinds of GPUs either upgrade every generation or every 2 generations. They don't look up benchmarks or really care.

  9. 5 days ago
    Anonymous

    Nvidia won, stock went to the moon. People and corps will absolutely keep buying their cards for AI as opposed to amd garbage.

    • 5 days ago
      Anonymous

      >People and corps will absolutely keep buying their cards for AI
      Corps yes, people no. Since they are intentionally crippling vram in the consumer cards specifically to _prevent_ them being used for AI.

  10. 5 days ago
    Anonymous

    4060 ti is a better buy than 4070 because it has 16gb vram
    simple as

    • 5 days ago
      Anonymous

      Maybe not. Still has that limited memory bus. More memory won't fix that.
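
      Quick back-of-envelope if you want to see why the bus matters. The numbers below are the commonly listed specs for both cards, so treat them as approximate:

      ```python
      # Rough peak VRAM bandwidth = (bus width in bits / 8) * effective data rate in GT/s.
      # Spec-sheet numbers, not measured values.
      def vram_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
          return bus_width_bits / 8 * data_rate_gtps

      cards = {
          "RTX 3060 Ti (256-bit, 14 Gbps GDDR6)": (256, 14.0),
          "RTX 4060 Ti (128-bit, 18 Gbps GDDR6)": (128, 18.0),
      }
      for name, (bus, rate) in cards.items():
          print(f"{name}: ~{vram_bandwidth_gbs(bus, rate):.0f} GB/s")
      # 3060 Ti: ~448 GB/s vs 4060 Ti: ~288 GB/s -- adding VRAM doesn't change this.
      ```

      The big L2 cache on Ada is supposed to hide some of that in games, but it won't help workloads that stream the whole working set every pass.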

      • 5 days ago
        Anonymous

        Lol nobody is buying nvidia for gaming. They are buying for AI, and vram is king in AI.

        • 5 days ago
          Anonymous

          You can't possibly be this deluded, surely

    • 5 days ago
      Anonymous

      On the other hand the wafer costs double from ampere to ada, but then the dies are just one component in the total cost.
      This is getting complicated but i do consneed that picrel isn't the complete picture.

      >Its a moneycow for nvidia
      >make a 4050 call it a 4060ti
      >upcharge by 100%

    • 5 days ago
      Anonymous

      You can't just expect to keep the entire game in VRAM without any data transfers over the PCIe bus.
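
      And the link those transfers go over is narrow. Rough comparison below, assuming the 4060 Ti's PCIe 4.0 x8 interface and the listed ~288 GB/s VRAM bandwidth (theoretical peaks, real throughput is lower):

      ```python
      # Why spilling out of VRAM and going over PCIe hurts so much.
      # Approximate theoretical peaks only.
      PCIE4_GBS_PER_LANE = 2.0           # ~2 GB/s per PCIe 4.0 lane
      pcie_x8 = 8 * PCIE4_GBS_PER_LANE   # the 4060 Ti only wires up 8 lanes
      vram = 288.0                       # GB/s, 128-bit 18 Gbps GDDR6

      print(f"PCIe 4.0 x8: ~{pcie_x8:.0f} GB/s")
      print(f"VRAM:        ~{vram:.0f} GB/s ({vram / pcie_x8:.0f}x faster)")
      ```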

      • 5 days ago
        Anonymous

        gamers don't buy nvidia
        it's for ai, retard

        • 5 days ago
          Anonymous

          Not even AI can save gamers from 100GB games.
          It's time to put limitations on game size, so that devs can't simply stream music and FMVs easily.

          • 5 days ago
            Anonymous

            wasn't the whole point of directstorage to save us from this hell?

            • 5 days ago
              Anonymous

              We're talking like limitations of no more than 128MB by the way.

            • 5 days ago
              Anonymous

              ya if only it didn't debut on a massive flop of a game like forspoken

          • 5 days ago
            Anonymous

            >Not even AI can save gamers from 100GB games
            Uh try again sweaty

  11. 5 days ago
    Anonymous

    Because the same wafer could be used to make H100s that sell for 40k each, and they know that they will sell every single one that they make. The real question is why NVIDIA still bother with consumer crap when they are making so much money selling datacenter GPUs for AI.

    • 5 days ago
      Anonymous

      The *60 and above have some overlap with people who buy Quadros, like small 3D animation studios and freelancers. I wonder why they still bother with the *30 and *50 though; integrated graphics have gotten good enough that the jump from them to even a *50 Ti is not really worth it, and the old trick of buying an office PC and bolting a *50 on is no longer viable, since the machines offices are getting rid of now are either thin clients or can take something beefier.

  12. 5 days ago
    Anonymous

    because they can
    intlel and ayymd aren't going to do anything about it

    • 5 days ago
      Anonymous

        amd is just as guilty in this case

      • 5 days ago
        Anonymous

        yeah that's what i'm saying, their offer is just as shit

  13. 5 days ago
    Anonymous

    israelites man they just love mocking us

  14. 5 days ago
    Anonymous

    this and the 7600XT are pointless releases and i don't know what either company was thinking with them

  15. 5 days ago
    Anonymous

    will the 16gb version be decent for AI stuff?
    Like inference and training?
    I already have a 4080 in my main work/gaming rig but I have a linux box I want to test as an AI service server.
    Was wondering if the 4060ti would be worth it as a cheap cuda gpu with 16gb of vram.

    • 5 days ago
      Anonymous

      maybe, the memory bus might be a problem, especially for language models

      • 5 days ago
        Anonymous

        how much worse will it be than a 12gb 4070ti? because i don't want to spend that much on my spare rig, which currently runs a 1660 6gb

        • 5 days ago
          Anonymous

          can't say with certainty, but I know LLMs have traditionally been memory bandwidth bottlenecked
          the literature usually talks about raw memory bandwidth rather than the bus width itself, but the same limitation applies: you're working with huge chunks of data that are almost always different, so not many cache hits.
          it can cache the intermediate vectors, but you still have to read all of the weights into the alus, and it can't cache all of that. maybe there's some clever optimizations with the increased cache, idk. The actual compute performance is also kinda shit, so maybe it won't bottleneck it
          in short, I would expect it to be much worse
          might not be as bad for something like SD
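
          if you want a feel for the numbers: the usual back-of-envelope is that single-batch inference has to stream every weight once per token, so memory bandwidth puts a hard ceiling on tokens/sec. rough sketch, using the listed bandwidth specs and ignoring cache and compute entirely (real throughput lands well below this):

          ```python
          # Crude roofline: single-batch LLM inference reads all weights once per token,
          # so tokens/sec <= memory bandwidth / model size in bytes.
          # Bandwidths are spec-sheet numbers; the 7B 8-bit model is just an example.
          def token_ceiling(bandwidth_gbs: float, params_b: float, bytes_per_param: float) -> float:
              return bandwidth_gbs / (params_b * bytes_per_param)

          gpus = {"RTX 4060 Ti 16GB": 288.0, "RTX 4070 Ti": 504.0, "RTX 3060 12GB": 360.0}
          for name, bw in gpus.items():
              # 7B model quantized to 8-bit = ~7 GB of weights
              print(f"{name}: ~{token_ceiling(bw, 7, 1):.0f} tok/s ceiling")
          # ~41 vs ~72 vs ~51 tok/s -- bandwidth, not VRAM size, sets the ceiling,
          # as long as the model fits at all.
          ```

          so the 4070ti's fatter bus should keep it ahead for LLMs even with less VRAM, as long as the model fits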

          • 5 days ago
            Anonymous

            >maybe, the memory bus might be a problem, especially for language models
            thanks

            >I'll admit idk much about AI but I do freelance animation on a 12GB 3060 and I've yet to have it go OOM, my biggest project used about 10GB rendering
            I'll wait for benchmarks for the 4060 Ti 16gb to come out and then compare it to the 3060 12gb.

            Just going by AI gpu benchmark sites, ram really doesn't do much for learning; it does improve the score but not by much.

  16. 5 days ago
    Anonymous

    at this point we all have to realize there isn't a point to any of these graphics cards. the 3090ti will be the apex of graphics cards for a longggggg time to come, and in the meantime we have to accept a few uncomfortable truths:

    >there are video cards for every price range/resolution/fps imaginable
    >the only reason these companies keep putting these out is so they can stay relevant
    >we are all hooked on a pc building addiction (whether we act on it or not) and so we keep playing into the hype
    >for what most people play, most video cards have more than enough VRAM

  17. 5 days ago
    Anonymous

    They thought people would fall for the frame generation thing; turns out no one is interested in the soap opera/motion smoothing/fake frames garbage.

  18. 5 days ago
    Anonymous

    They're just lubing your assholes for the 5000 series cards. There's only gonna be one 5060ti 12GB SKU that costs $549 and it will barely match a 3090 (with DLSS4 enabled). Everybody will clap and buy it.
