If it had the same performance as the RTX 3070 Ti with 16GB for $400, it would easily be a good buy.
But it's around the same performance as the RTX 3060 Ti, for the same price. Why?
Because Nvidiots will buy it anyway. Jensen won.
Because they know nobody is buying their shit cards this generation so it's basically a throwaway card. When the 5060ti comes out they can say "LOOK GUYZ IT MAKES 20% MOAR EFF PEE ESSES" and everyone's gonna lose their shit and completely ignore the fact that it's only a 20% gain over two generations
To upsell their overpriced better models, dummy.
Uneducated people wanting to get into gaming will buy it as a budget option.
Educated gamers won't touch this shit but are forced to buy overpriced higher tier garbage if they want to upgrade.
3D work and AI, not everything is about muh gayms. The 4060 not being offered with 12GB should be a crime though, since the 3060 had it
Because time has shown that people will happily pay $400 for that level of performance. It's also cheaper to manufacture for them with its smaller memory bus and lower VRAM, and they can just upsell shit like AI performance, upscaling, hardware encoding/decoding, and frame generation to argue it's actually much better value than the previous gen.
Too bad, pay the extra $100 if you want more than 8GB VRAM.
Nah, I use the 3060 12GB for my work and the jump from 12GB to 16GB isn't as worth it as the jump from the 6GB 1060 I had before. The lower rendering times would be nice, but not worth it since I already have a decent turnaround time. But it's a missed opportunity
>3D work and AI
Not with that VRAM lol
>16GB
>Not enough
I'll admit idk much about AI but I do freelance animation on a 12GB 3060 and I've yet to have it go OOM, my biggest project used about 10GB rendering
It's a cash cow for Nvidia
>make a 4050 call it a 4060ti
>upcharge by 100%
>comparing cuda cores across different architectures
The absolute state of this board
The memory bus and relative transistor count within the generation, you dumb cunt.
It'd probably still be a bottleneck. The card is just plain bad. Nvidia have gone nuts calling this a 106 die.
have fun getting out-of-memory errors
They're the same for Ampere and Ada tbh
>t. Enjoyer of leaded water
>sweetener
>0 calories
>this is supposed to be a bad thing
?
it uses 40% less power than the 3060 Ti
40W, retard
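For what it's worth, going by the board power figures usually quoted (200W for the 3060 Ti, 160W for the 4060 Ti; treat both as assumptions), the saving works out like this:
[code]
# Quick sanity check, assuming ~200 W board power for the 3060 Ti
# and ~160 W for the 4060 Ti (both figures assumed, not measured).
old_tgp_w = 200
new_tgp_w = 160
savings_w = old_tgp_w - new_tgp_w              # 40 W
savings_pct = savings_w / old_tgp_w * 100      # 20 %
print(f"{savings_w} W lower, i.e. {savings_pct:.0f}% less board power, not 40%")
[/code]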
Because Nvidia thinks their customers are cashed up retards. And they are correct in this belief.
People who buy that kind of GPU either upgrade every generation or every two generations. They don't really look up benchmarks or care.
Nvidia won, the stock went to the moon. People and corps will absolutely keep buying their cards for AI as opposed to AMD garbage.
>People and corps will absolutely keep buying their cards for AI
Corps yes, people no. Since they are intentionally crippling vram in the consumer cards specifically to _prevent_ them being used for AI.
4060 ti is a better buy than 4070 because it has 16gb vram
simple as
Maybe not. It still has that limited memory bus; more memory won't fix that.
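Rough numbers, assuming the commonly quoted configs (256-bit @ 14 Gbps GDDR6 on the 3060 Ti vs 128-bit @ 18 Gbps on the 4060 Ti):
[code]
# Peak VRAM bandwidth = (bus width in bytes) * per-pin data rate.
# Specs assumed: 3060 Ti = 256-bit @ 14 Gbps, 4060 Ti = 128-bit @ 18 Gbps GDDR6.
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(f"3060 Ti: {bandwidth_gb_s(256, 14):.0f} GB/s")   # ~448 GB/s
print(f"4060 Ti: {bandwidth_gb_s(128, 18):.0f} GB/s")   # ~288 GB/s
# ~36% less raw bandwidth; the bigger L2 cache is supposed to cover the gap,
# which works better for games than for big working sets.
[/code]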
Lol nobody is buying nvidia for gaming. They are buying for AI, and vram is king in AI.
You can't possibly be this deluded, surely
On the other hand, wafer costs roughly doubled from Ampere to Ada, but then the dies are just one component of the total cost.
This is getting complicated but I do consneed that picrel isn't the complete picture
You can't just expect to keep the entire game in VRAM without any data transfers to/from the PCIe bus
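For a rough sense of why spilling out of VRAM is so painful (assuming the 4060 Ti's PCIe 4.0 x8 link and its ~288 GB/s of local VRAM bandwidth, both taken as given here):
[code]
# Compare the PCIe link to local VRAM bandwidth.
# Assumptions: PCIe 4.0 ~1.97 GB/s usable per lane, x8 link, VRAM ~288 GB/s.
pcie4_per_lane = 1.97
link_gb_s = pcie4_per_lane * 8          # ~16 GB/s each direction over x8
vram_gb_s = 288
print(f"PCIe x8: ~{link_gb_s:.0f} GB/s vs VRAM: {vram_gb_s} GB/s "
      f"(~{vram_gb_s / link_gb_s:.0f}x slower once assets have to stream in)")
[/code]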
gamers don't buy nvidia
it's for ai retard
Not even AI can save gamers from 100GB games.
It's time to put limitations on game size, so that devs can't simply stream music and FMVs easily.
wasn't the whole point of directstorage to save us from this hell?
We're talking like limitations of no more than 128MB by the way.
ya if only it didn't debut on a massive flop of a game like Forspoken
>Not even AI can save gamers from 100GB games
Uh try again sweaty
Because the same wafer could be used to make H100s that sell for 40k each, and they know that they will sell every single one that they make. The real question is why NVIDIA still bother with consumer crap when they are making so much money selling datacenter GPUs for AI.
The *60 and above have some overlap with people who buy Quadros, like small 3D animation studios and freelancers. I wonder why they still bother with the *30 and *50 though; integrated graphics have gotten good enough that the jump from them to even a *50 Ti isn't really worth it, and the old trick of buying an office PC and bolting a *50 onto it is no longer viable, since the machines offices are getting rid of now are either thin clients or can take something beefier
because they can
intlel and ayymd aren't going to do anything about it
amd is just as guilty in this case
yeah that's what i'm saying, their offer is just as shit
israelites man they just love mocking us
this and the 7600XT are pointless releases and I don't know what either company was thinking with them
will the 16gb version be decent for AI stuff?
Like inference and training?
I already have a 4080 in my main work/gaming rig but I have a linux box I want to test as an AI service server.
Was wondering if 4060ti would be worth it for having a cheap cuda core gpu that has 16gb of ram.
maybe, the memory bus might be a problem, especially for language models
how much worse will it be than a 12gb 4070ti? I don't want to spend that much on my spare rig, which currently runs a 1660 6gb
can't say with certainty, but I know LLMs have traditionally been memory bandwidth bottlenecked
the literature usually talks about intra-memory bandwidth rather than bus width to the gpu itself, but the same limitations apply: you're working with huge chunks of data that are almost always different, so not many cache hits.
it can cache the intermediate vectors, but you still have to read all of the weights into the ALUs, and it can't cache all of that. maybe there are some clever optimizations with the increased cache, idk. The actual compute performance is also kinda shit, so maybe the bus won't be what bottlenecks it
in short, I would expect it to be much worse
might not be as bad for something like SD
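To put crude numbers on it (bandwidth figures assumed from the quoted specs: ~360 GB/s for the 3060 12GB, ~288 GB/s for the 4060 Ti 16GB, ~504 GB/s for the 4070 Ti; the model size is purely illustrative):
[code]
# Back-of-envelope for bandwidth-bound LLM inference: every generated token has
# to read all the weights once, so tokens/s is roughly bandwidth / model size.
# Bandwidths assumed: 3060 12GB ~360, 4060 Ti 16GB ~288, 4070 Ti ~504 GB/s.
model_size_gb = 13   # e.g. a ~13 GB quantized model, purely illustrative
for name, bw in [("RTX 3060 12GB", 360), ("RTX 4060 Ti 16GB", 288), ("RTX 4070 Ti", 504)]:
    print(f"{name}: ~{bw / model_size_gb:.0f} tokens/s upper bound")
# The 16GB card fits bigger models, but per token it's no faster on paper than
# the old 3060 12GB, and well behind the 4070 Ti.
[/code]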
thanks
I'll wait for benchmarks of the 4060 Ti 16GB to come out and then compare it to the 3060 12GB.
Just going by AI GPU benchmark sites, VRAM really doesn't do much for training; it does improve the score, but not by much.
at this point we all have to realize there isn't a point to any of these graphics cards. the 3090ti will be the apex of graphics cards for a longggggg time to come, and until then we have to realize a few uncomfortable truths
>there are video cards for every price range/resolution/fps imaginable
>the only reason these companies keep putting these out is so they can stay relevant
>we are all hooked on a pc building addiction (whether we act on it or not) and so we keep playing into the hype
>for what most people play, most video cards have more than enough VRAM
They thought people would fall for the frame generation thing, turns out no one is interested in the soap opera/motion smoothing/fake frames garbage.
They're just lubing your assholes for the 5000 series cards. There's only gonna be one 5060ti 12GB SKU that costs $549 and it will barely match a 3090 (with DLSS4 enabled). Everybody will clap and buy it.