Which card is the superior card for Stable Diffusion?
Redpill me
Falling into your wing while paragliding is called 'gift wrapping' and turns you into a dirt torpedo pic.twitter.com/oQFKsVISkI
— Mental Videos (@MentalVids) March 15, 2023
anything gpu does cpu does better
kys and sage disguised LULZermin
i literally cant think of anything useful a cpu does better than a gpu, maybe except compiling, but at almost everything that involves heavy number calculation a gpu will beat a cpu.
that is because you are an illiterate moron
Am I in LULZ? Since when has this board become this tech-illiterate?
Your GPU gets crippled by a branch. A CPU is a decision machine.
A single thread is MUCH faster on CPU.
the 1080 ti allows you to generate more images at once, but is slower; i think you need at least 12 gb to train your own models. the 3070 ti will be able to generate images very quickly, but you can only do so in smaller batches.
this is an asspull
the 1000 series has crippled fp16, it's not meant for ai in any regard. the 2000 and 3000 series have dedicated tensor cores that accelerate ai way hard, so the heavy math never has to hit the regular cuda cores
my 3050 laptop is only about 40% slower than my 6900xt in stable diffusion, despite the amd card having 4x more vram, 4x more bandwidth, and 8x more fp16 tflops. tensor cores go hard
if only nvidia would sell a card with nothing but tensors
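not a benchmark, but if anyone wants to see where the RTX speedup actually comes from: it's mostly running the model in fp16, which the tensor cores chew through while Pascal chokes on it. rough diffusers sketch, the model name is just an example, swap in whatever you actually use:

import torch
from diffusers import StableDiffusionPipeline

# fp16 weights + fp16 matmuls: on Turing/Ampere this hits the tensor cores,
# on Pascal the same path runs at a fraction of fp32 rate ("crippled fp16")
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example model, not a recommendation
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a test prompt", num_inference_steps=20).images[0]
image.save("out.png")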
1080 Ti was a great card
Still a good card if 60 fps is all you need in newer single player titles.
It can easily handle any popular multiplayer game.
RTX 3060 12GB
is this actually true?
no, the slowest 3070ti benched is still faster than the fastest 3060, vram doesnt mean shit
sure its faster, but its also cheaper and more vram gives more versatility
vram will mean shit when you get into advanced gen 2 AI shit
yes it's a good poorfag AI card. here's the list
>poorfag tier
used 3060
>best bang-for-buck tier
used 3090
>comfy tier
4090
>chad tier
A100
>gigachad tier
H100
If you’re an AI fag like me then yeah the extra vram is nice
The extra vram gives you larger batch sizes or higher resolutions
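if you want to see what the extra vram actually buys you, the knobs are literally just these two arguments. rough sketch with diffusers, example model again:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # example model
).to("cuda")

# more vram = you can crank these higher before it OOMs;
# a bigger batch doesn't make any single image faster, it just does more per call
images = pipe(
    "a test prompt",
    num_images_per_prompt=4,   # batch size
    height=768,                # resolution
    width=768,
).images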
The 3070. Even though it has less VRAM, RTX cards are MUCH faster than the old GTX ones. Like, the old RTX 2060 is TWICE AS FAST as the 1080 Ti in SD iirc.
And 8GB is still enough. I have a card with 12GB and the extra is rarely useful, usually I need just 7GB.
But it is nice when the extra vram does come in handy.
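you can measure what you actually use instead of guessing, torch tracks the peak allocation. sketch, same example model as above:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # example model
).to("cuda")

torch.cuda.reset_peak_memory_stats()
pipe("a test prompt", height=512, width=512, num_images_per_prompt=1)
# bump height/width or num_images_per_prompt and rerun to see how the peak scales
print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 2**30:.1f} GiB")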
This anon knows what's what. The RTX 3060 is god-tier value for an AI card, that's what I ended up buying. Pretty cheap used these days, bought mine last december.
Protip: Get any 3-fan design, don't look at 2-fans.
I was running stable diffusion on a 4 gigabyte RX480 not too long ago!
why is
calling me retarded for recommending amd?
Probably a windows user
Just don't care for Chinese products.
>I was running stable diffusion on a 4 gigabyte RX480 not too long ago!
how bad was it?
Slow and stuck to low resolutions, 512x512 max without upscaling. This RTX3060 is like 20 times faster.
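for anyone still stuck on a 4-6GB card, diffusers has memory-saving toggles that trade speed for not OOMing. rough sketch, example model:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # example model
)
pipe.enable_attention_slicing()        # compute attention in chunks, much smaller peak
pipe.enable_sequential_cpu_offload()   # keep weights in system RAM, stream to GPU as needed (needs accelerate)
# note: no .to("cuda") here, the offload hook manages device placement itself
image = pipe("a test prompt", height=512, width=512).images[0]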
>Get any 3-fan design, don't look at 2-fans.
Why the fuck does the 3-fan design cost almost 25% more in my country??? Is that shit a different card?? It's not just a fan?
rx6600
>amd
retard
Don't listen to the Pascal morons, cards with Tensor cores outperform it significantly
>that massive increase on the rx 7900
what is going on there? does it just ship with more vram or did they add dedicated tensor cores to it or what?
yes, the 7900 has tensor units
although general performance improved quite a bit too
Software hasn't caught up with RDNA2. Same with Arc.
Tom's hardware benchmark is a joke.
It's not an apples-to-apples comparison.
They used the native CUDA version for Nvidia GPUs,
ONNX for pre-7000 AMD GPUs, and SHARK for the RX 7900XT/XTX (SHARK is at least 50% slower than the native ROCm implementation)
>RX 7900XT/XTX (SHARK is at least 50% slower than the native ROCm implementation)
>RX 7900XT/XTX
>native ROCm implementation
>RX 7900 series not on the list
I am surprised by the A770's performance
the one with more VRAM, usually.
I have a 3070 ti btw, 8gb fucking SUCKS
and A FUCKING 4080 ONLY HAS LIKE 16GB OR SOME SHIT, TOTAL SCAM.
I am seriously going to buy an AMD shit-heap just to get 24GB VRAM.
https://www.nvidia.com/en-us/data-center/tensor-cores/
If you don't have Tensor cores, you can kill yourself just like Pascal cucks
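easy way to check whether your card even has them: tensor cores showed up with compute capability 7.0 (Volta), Turing is 7.5, Ampere is 8.x, Pascal tops out at 6.x and has none. quick torch check:

import torch

# tensor cores arrived with compute capability 7.0 (Volta); Pascal is 6.x
major, minor = torch.cuda.get_device_capability(0)
name = torch.cuda.get_device_name(0)
print(f"{name}: sm_{major}{minor}",
      "- has tensor cores" if major >= 7 else "- no tensor cores, enjoy fp32")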
>Not even recommending just renting GPUs
You will eat ze bugs
Running two 3060s together is better than a 3070, for fuck's sake anon.
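two cards don't magically pool their vram for SD inference, but you can run one pipeline per card and roughly double throughput. sketch of what that looks like, example model again:

import torch
from concurrent.futures import ThreadPoolExecutor
from diffusers import StableDiffusionPipeline

def make_pipe(device):
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # example model
    )
    return pipe.to(device)

# one independent pipeline per GPU, each holding its own copy of the weights
pipes = [make_pipe(f"cuda:{i}") for i in range(torch.cuda.device_count())]

def run(pipe, prompt):
    return pipe(prompt, num_images_per_prompt=2).images

# throughput scales because the cards work in parallel; a single image gets no faster
with ThreadPoolExecutor(max_workers=len(pipes)) as pool:
    results = list(pool.map(run, pipes, ["a test prompt"] * len(pipes)))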
>just rent it my fellow antisemites on LULZ dot org
https://docs.nvidia.com/cuda/ada-tuning-guide/index.html
The NVIDIA Ada GPU architecture includes new Ada Fourth Generation Tensor Cores featuring the Hopper FP8 Transformer Engine.
STAY MAD, homosexual
hi Klaus Schwab
Schlaus Kwab
IT'S OVER
PASCAL CUCKS CONFIRMED ON SUICIDE WATCH
I was expecting a lot worse for a card that came out 6 years ago for half the price. Truly was the last decent card.
both of them do not support crt displays
so neither
the Maxwell Titan X has more VRAM than both of those and works with a CRT so i think it is the clear winner
I'm running my GDM-F520 using that RTX3060 I posted about earlier in the thread.
Literally all you need is a 2€ adapter from Aliexpress.
The ONLY reasons to get older cards for CRT use are interlace and 10/12-bit color (only useful if you have a colorimeter).
do u still use pulseaudio and pure alsa
none. they're both shit
for stable diffusion the 3070ti, for ai in general the 12gb 3060 or the 3090
https://twitter.com/never_released/status/1634976093009760256
THE ABSOLUTE STATE OF AYYMDEAD
>chinese cpus
>ever
oh no no no no no no. look at this dude.