>why yes I know better than all the hundreds of thousands of researchers, scientists, and engineers of the biggest companies out there. How could you tell??
>NOOOOOO YOU CAN'T DECREASE THE HEAT BY 50%, you will lose up to 15% of performance
actually undervolting alone doesn't decrease performance, you can get power usage even lower by dropping the clock at the same time, but it's not required
undervolting without changing the clocks is like reverse overclocking, rather than pushing the clock up and holding the voltage back, you're pushing the voltage down and holding the clock back
that is, getting the best bang for buck either way, optimum speed vs. optimum efficiency
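quick illustration of the physics: dynamic power scales roughly with C·V²·f, so dropping voltage at a fixed clock cuts power quadratically while performance stays put. toy sketch (voltages are made-up but Ampere-realistic, not measurements from any real card):
```python
# Toy CMOS dynamic power model: P ~ C * V^2 * f.
# Illustrative only; real cards add static leakage on top of this.

def dynamic_power(c: float, volts: float, freq_mhz: float) -> float:
    """Relative dynamic power at a given voltage and clock."""
    return c * volts**2 * freq_mhz

C = 1.0  # arbitrary constant, cancels out in the ratio below
stock = dynamic_power(C, 1.075, 1860)        # stock-ish voltage, stock clock
undervolted = dynamic_power(C, 0.900, 1860)  # same clock, lower voltage

print(f"undervolted power at the same clock: {undervolted / stock:.0%} of stock")
# -> ~70%: same frequency (i.e. same performance), ~30% less dynamic power
```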
>actually undervolting alone doesn't decrease performance,
If it's all-around better with no downside, then why isn't it done from the factory? Let me guess, you know better than all the hundreds of thousands of engineers of the biggest companies out there?
Engineers have to coexist with the marketing team anon, and "we're 5% faster than our rivals" sells more than "we're 5% more power efficient than our rivals"
Because factories make standardised shit. Nvidia wants all their GPUs of the same line to work the same way. Trimming the control bits to get maximum performance for minimum power takes a lot of testing time that they don't want to spend at the factory, and would result in them knowingly selling some customers an inferior product which sounds like a legal landmine.
So what they do is tune the cards so that every one of them is guaranteed to work as specified, despite the fact that some of them are capable of working much better
can you imagine if a company actually optimised each individual card? like think about that, the same model could vary in performance and power usage by like 10% or something, how would that even work? people with poorly binned chips would be pissed
when people buy a product, they expect it to work the same as anyone else who bought the same product, that's why binning even exists
sure it'd be neat if each individual unit was tested, optimised, and sold at custom, per-unit prices based on their actual individual quality, but that's not a thing and likely never will be, it's just too much additional infrastructure for something very few people are even aware of
Every card is optimized though. They make sure to stress test them heavily so that there are no oopsies during heavy loads or shit games like Tarkov or New World on release, which could literally burn some GPUs if NVIDIA/AMD didn't set proper limits because the games had no limitations. They might not get 100% optimal tuning but you can rest easy knowing that it's about as good as it gets, and no amount of autistic tinkering would give remarkable improvements without a major risk involved.
Are you retarded
Are you? Why are you talking about things you don't understand? Every single card that is, say, the ASUS Edition of GTX XXXX has the same optimization done to it and one test that applies to each card. There's no magic involved. They just stress test it and make sure it's OC'd high enough that it survives the stress tests every time. Show me a single benchmark where any sort of tinkering gives an actual major performance boost in multiple games over multiple tests repeated. It's all in 2-5% range and with obvious drawbacks of stability loss included.
Every card with the same SKU has the same BIOS flashed to it, imbecile
If you reduce the voltage every card will boost higher making it faster for free (given it's power or temp limited)
There is absolutely 0 risk of damage by undervolting. Worst case scenario: the pc crashes oh no I didn't save my word document
Refer to ->
>Show me a single benchmark where any sort of tinkering gives an actual major performance boost in multiple games over multiple tests repeated. It's all in 2-5% range and with obvious drawbacks of stability loss included.
Not interested in schizobabble of someone talking out of their ass. Show some proof, I'm not your daddy nigga.
No. You're gay.
I'm not interested in debating ghosts and fairy tales. Show some benchmarks or I'll just treat this as a finished conversation.
My undervolted 3080 scores higher 3DMark bench because it's operating more power efficiently.
It's hitting higher, or at least the same, core boost frequencies at a lower voltage.
We're talking by at least 100mV.
Going below 100mV seems pretty tricky without giving up boost anyway; at least from my tries on 1660 SU and 4060.
Different chips manufactured by different foundries (TSMC vs Samsung).
Apples to oranges comparison.
My chip can hold stock boost frequencies (1845~1860MHz) at just a bit over 850mV.
Stock settings would draw at least 1V for the same frequencies.
It can reach 1935MHz @ 900mV, but it's cooling and/or power limited at that boost frequency.
Probably going to get one of the new Super cards next year because this 3080 was supposed to be a backup.
How much you selling your 3080 for?
I can't be fucked dealing with shady buyers, I'm just going to send it back to Newegg to trade-in (sub $350 value) when the Supers launch.
Mine's a reference tier 3080 12GB anyway.
Pity, I've bought from people here. If you change your mind here's a burner email I use [email protected].
Refer to:
>Show me a single benchmark where any sort of tinkering gives an actual major performance boost in multiple games over multiple tests repeated. It's all in 2-5% range and with obvious drawbacks of stability loss included.
Again, I am not interested in your theorycrafting. I want to see what your "optimizations" do in practice. I follow minmaxing nerds like Gamers Nexus, Hardware Unboxed, Digital Foundry on YT and benchmarking sites like TechPowerUp or TechSpot, and practically no one can ever show substantial improvements from any safe underclocking/undervolting/overclocking. At best you might see some tiny improvements to 0.1% lows at the cost of losing 10-15 fps on average. Which no sane person would want.
I had to clean install my PC a few weeks ago, I'm going to DL 3DMark and find some of my saved benches.
>At best you might see some tiny improvements to 0.1% lows at the cost of losing 10-15 fps on average
Why would you lie on the interwebz like that?
You keep yapping and yapping and showing no proof. Peculiar!
You showed no proof for your own retarded claim.
The burden of proof is not on me lol. AMD/NVIDIA agree that tinkering with overclocking/underclocking/undervolting is not a good idea and they warn you in their own software not to do it. You are the schizos who think they know better than the people making the product. And I offered you sources in the form of Gamers Nexus, Hardware Unboxed, Digital Foundry, TechPowerUp, TechSpot, which have never shown substantial improvements from any such techniques.
>AMD/NVIDIA agree that tinkering with overclocking/underclocking/undervolting is not a good idea and they warn you in their own software not to do it.
of course they do, it's to ward off people who don't know what they're doing so they don't damage their card and try to do a warranty claim or get negative reviews
if they recommended this stuff you know normal people would just crank that voltage up and kill their card, and be justified in returning it because it was recommended, it isn't hard to understand why there's a big ass warning going anywhere near it
nobody is trying to claim that you can double your GPU's speed or something, the potential gains may indeed be minimal most of the time, companies aren't stupid, if the chip quality varies more then they will just make more product lines to make the most of it
but you'll never know just how good your chip is without testing it yourself, and for many people, getting a bit more out of the same product is worth the time
These days it's mostly undervolting that really has the big gains. My GPU that was rated for 115W and actually drew 130W in some reviews delivers slightly more performance at just 98W, for example.
>they warn you in their own software not to do it
While also providing the software and marketing their stuff as overclocking friendly, makes you think.
None of the sources you mentioned (at least websites, who cares about yt) suggest stuff like >tiny improvements to 0.1% lows at the cost of losing 10-15 fps on average
3080 guy back with a fresh Time Spy Extreme bench.
Stock GPU settings Graphics Score: 9158, max core voltage: 1.075V
Undervolted GPU Settings Graphics Score: 9603, max core voltage 0.900V
No vram overclock was conducted in any of the above settings.
~4.9% improvement while LOWERING the core voltage.
Very nice considering TechPowerUp only manages a 5.7% performance improvement when overclocking both the GPU and vram of a better-binned 3080 12GB (Strix).
https://www.techpowerup.com/review/asus-geforce-rtx-3080-12-gb-strix-oc/39.html
If you're getting shit undervolted performance, it's a user error.
>one source from your ass with a single picture as evidence
so basically
>no evidence
>no way to cross reference
>no replication
proves fucking zero
You're really bored aren't you? Try to entertain yourself with netflix or smt and stop shitting up this thread
dude i am not even paying attention to you i'm being so half assed about this fucking bait and you took it headlong
i don't even know what to say, you were being so earnest and everything and i just could not give a single fuck
have fun with your tinker toys or whatever
>i was just pretending this whole time
thanks for the laugh
>not knowing that tinkerers love posting their shit anyway, whether someone asked or not
Good job, giving the dude an excuse.
There are at least three different 3080 owners in this thread and my bench results are in line with their cards' tuned vs stock performance.
Post your benchmarks to prove otherwise on your own system.
Liar! Post a tutorial, or stfu with your cum slurping lies.
https://github.com/LunarPSD/NvidiaOverclocking/blob/main/Nvidia%20Overclocking.md#undervolting
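That guide is the Afterburner curve-editor method on Windows. If you're on Linux there's no curve editor, but you can script the adjacent knobs (clock lock + power limit) with NVML. Untested sketch, function names as exposed by the nvidia-ml-py package, needs root; the clock/power targets are just example numbers other anons posted, tune for your own card:
```python
# Scriptable Linux-side knobs via NVML. This is NOT a true voltage-curve
# offset (that part is the Afterburner step in the guide above); it caps
# clocks/power so the card sits lower on its stock V/F curve.
# Requires "pip install nvidia-ml-py" and root privileges.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

# Lock the core clock range; 1860 MHz is the 3080 example from this thread.
pynvml.nvmlDeviceSetGpuLockedClocks(gpu, 210, 1860)

# Optionally also cap board power (milliwatts), clamped to what the vbios allows.
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
pynvml.nvmlDeviceSetPowerManagementLimit(gpu, min(max_mw, 260_000))  # 260 W

pynvml.nvmlShutdown()
```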
VRAM doesn't have an effect on core voltage, does it? Seems weird people touch it so rarely in the reviews, even though most RTX cards can easily take +200/300.
Most 3080s and 3090s use mediocre thermal pads; the silicone-based material will break down and cause leakage.
This is most commonly observed in heavily mined-up video cards.
Even with vram voltage set constant, higher memory frequencies require more current as well.
You can observe the difference using HWInfo or other monitoring apps (or script it yourself, see the sketch below).
This is more of an issue for reference tier cards with very strict board power limits.
Higher tier cards like the Strix has a higher maximum power draw limit.
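If you'd rather log it than eyeball HWiNFO, a quick NVML loop does the job (rough sketch via nvidia-ml-py, NVIDIA only; it reports total board power, not the per-rail vram current HWiNFO shows):
```python
# Rough HWiNFO-style readout loop via NVML.
# Requires "pip install nvidia-ml-py"; NVIDIA cards only.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

for _ in range(30):  # ~30 one-second samples
    power_w = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000  # NVML reports mW
    core = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
    vram = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_MEM)
    temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
    print(f"{power_w:6.1f} W | core {core:4d} MHz | vram {vram:4d} MHz | {temp:2d} C")
    time.sleep(1)

pynvml.nvmlShutdown()
```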
Jesus shit, at the cost of GPUs now, you'd think they could spare a few cents for that. Though it seems hard to tell how it works in combination with lowering the overall temp from undervolting … assuming the cooling itself isn't retarded. Mining is kinda an extreme example.
Video card manufacturers don't generate a lot of profit from selling their cards, especially at NVIDIA's mandated reference MSRP.
NVIDIA charges a high premium for their BOM (Bill of Material), which includes the GPU and VRAM.
AIBs made a killing during the pandemic shortages, some are reported to have made almost ten years' worth of profit in a single fiscal year.
That's why EVGA tapped out of the game, no point being NVIDIA's bitch to pick up pennies.
Leave it to Huang to outkike any kike.
does this also apply to AyyMD cards?
AMD's "Made by AMD" reference cards aren't as premium built as NVIDIA's own Founders Edition cards, there's more demand for AIB designed Radeon cards.
igor's Lab claims the situation isn't much better for Radeon's AIBs, he has AIB contacts for both NVIDIA & Radeon.
Founders Edition cards are commonly misunderstood to be reference design cards; they're actually overspecced compared to NVIDIA's own reference designs.
But these better Founders Edition cards are sold at reference tier MSRP.
That pissed off AIBs like EVGA because they used to make a lot of money off of selling reference-tier MSRP cards, but NVIDIA's FE cards cannibalized those sales.
>igor's Lab claims the situation isn't much better for Radeon's AIBs
but why don't Radeon AIBs just raise prices then? because those amd reference cards are ass...
Both AMD and NVIDIA require their partners to sell at least one reference-tier model at MSRP for each GPU SKU.
That's why MSI has a reference tier Ventus for every NVIDIA card and why PowerColor has a Fighter card for AMD.
ASUS doesn't get to make an expensive & profitable Strix or Matrix card if they don't sell reference-tier Dual or TUF cards.
i'll do you one better and link you the wikipedia article on what i'm actually talking about
https://en.wikipedia.org/wiki/Product_binning#Semiconductor_manufacturing
put simply, companies don't usually set out to make low end parts. low end parts are usually simply failed high end parts, which have been rebadged/programmed to operate with lower specifications, this is why we even have 10 different kinds of the same class of chip
think about it, why the fuck would a mid end card and high end card differ in cost when the manufacturing process and materials needed barely if at all differ? it makes no sense, the true reason this is a thing is because making high end chips is /hard/, not all the chips will come out perfect, so to increase yields, they take some of the less perfect ones and sell them as lower end models
but there's only so many different models they sell, and every chip is a bit different, so depending on how lucky you are, you can end up with something like a mid end chip that can be overclocked to nearly the same speed as a high end model, just because it couldn't pass the requirements to be a high end model
i don't think you understand the topic of my post
Because not every CPU is the same. You can have two CPUs of the exact same model coming from the same factory, and one will crash when you undervolt by 40mV while the other one will be fine with -120mV.
The manufacturer chooses the voltage so that even the most gimped processor, that lost the silicon lottery, will still work reliably.
not every chip fails at the same point. factory settings leave margin for error because it's better for the chip to work reliably out of the box and to have a consistent performance standard that all chips which make it to end users are guaranteed to meet, even if it means leaving some relatively minor performance and efficiency gains on the table.
Idk about "minor", if I knew I could get 4060 Ti down to 125-130W at same performance, I might've gotten that instead of 4060 that went to 100. Feels like manufacturers are ignoring a market niche by not offering some pre-tested cards that run closer to the limit, since most factory OCs are a joke.
>buy a really fuck off expensive gpu designed to run at high temperatures specifically caused by the performance it achieves
>gimp it on purpose because NUMBER GO DOWN = GOOD
you could've just bought a less powerful device and achieved the same result. no one buys a fucking 1500watt motor and then operates it at only half the power. they buy a 750 watt motor instead.
That's not how any of this works.
YES IT IS YOU FUCKIN RETARD
No, because you can reduce consumption without reducing speed. That's like having your motor be just as fast but waste less fuel and be more efficient; sure, you are no longer running it at 1500 watts, but because of losses its consumption was 1500 and its output 700.
Now its consumption is 900 and the output is still 700.
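run the analogy's numbers yourself:
```python
# The motor analogy in numbers: same useful output, less input power.
output_w = 700
input_before_w, input_after_w = 1500, 900

print(f"efficiency before: {output_w / input_before_w:.0%}")          # 47%
print(f"efficiency after:  {output_w / input_after_w:.0%}")           # 78%
print(f"input power saved: {1 - input_after_w / input_before_w:.0%}") # 40%
```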
literally doesn't happen prove otherwise
That's literally what happens, undervolting does not imply overclocking.
In fact you may gain performance in some cases because the cpu doesn't heat up as much.
Each chip is different and so they can't afford to perfectly calibrate each one of them for peak efficiency.
Undervolting does not imply reducing the clock rate*.
You can look it up, it is known that undervolting can improve performance.
Adjusting sliders and points to set a more power-efficient & stable V/F curve is too much for most of these mouthbreathers.
>source: i made it up
prove otherwise
gpus and cpus are power limited or temp limited. if you reduce the voltage you alleviate both which leads to higher boost clocks. if you can't grasp this simple concept you may want to consider simpler hobbies like chewing gum or instagram
the power and voltage aren't what makes your card fast, the clock is. higher clocks need more power but stock clocks are often stable at lower power too so you don't lose speed while maintaining stability
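to put it in numbers, same toy P ∝ C·V²·f model as earlier in the thread, flipped around: hold the power budget fixed and see what clock each voltage allows (made-up figures, and real chips hit a stability floor long before the model's ceiling):
```python
# At a fixed board power limit, the sustainable clock is f = P / (C * V^2).
# Toy model with invented 3080-ish numbers, purely to show the direction.

def sustainable_clock(power_limit_w: float, c: float, volts: float) -> float:
    """Clock (MHz) the power budget allows at a given voltage."""
    return power_limit_w / (c * volts**2)

P_LIMIT = 320.0                  # watts, reference-ish board limit
C = P_LIMIT / (1.075**2 * 1710)  # calibrated so stock 1.075 V sustains 1710 MHz

print(f"at 1.075 V: {sustainable_clock(P_LIMIT, C, 1.075):.0f} MHz sustained")
print(f"at 0.900 V: {sustainable_clock(P_LIMIT, C, 0.900):.0f} MHz sustained")
# Lower voltage frees power budget for more clock -- until the silicon can
# no longer hold a given clock stable at that voltage.
```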
>no one buys a fucking 1500watt motor and then operates it at only half the power. they buy a 750 watt motor instead.
Yeah they do. It's for different reasons but they do.
>gimp it on purpose because NUMBER GO DOWN = GOOD
This but unironically. If you're too much of a troglodyte to understand that the last 5% of performance from a GPU (and nowadays from CPUs too) comes at ridiculous energy and long-term reliability costs, because whoever doesn't push them far beyond the optimum of their power curves loses the dick-measuring contest of the year, that's your problem.
it depends on the card though. e.g. here's my 6650XT.
There's a balance between efficiency and reliability. You don't run your motor at 100% if you can avoid it.
Or decrease the heat and power consumption by 30% AND lose no performance
>lose performance
more like gain around 5% due to longer turbo duration, while consuming a third less power.
Yes, as a matter of fact, I do.
These edgy jokes don't work on LULZ retard. Only on reddit/youtube do they land because it's actually against the norms there. But we know you're too much of a coward for that. Time to stop spamming my board kiddo
That's the CEO of Reddit right there.
Go home boomer, you have no power here.
each individual chip varies a bit, what they do is test the properties of each, then separate them by quality into groups (this process is called binning), then each group is assigned to a product group (i.e. low end, mid end, high end)
even within each group the chips still vary, the defaults are set up such that they can be safely applied to all chips within a certain bin, basically the defaults are closer to a "worst case", so you can expect most chips to be able to be pushed at least a bit from their defaults, some more than others
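toy simulation of exactly that (made-up distribution and cutoffs, just to show why bin defaults leave headroom):
```python
# Binning sketch: chips vary, bins are cut by thresholds, and each bin's
# default clock is set to what its WORST chip can do. All numbers invented.
import random

random.seed(42)
chips = [random.gauss(1900, 60) for _ in range(10_000)]  # max stable MHz per chip

bin_cutoffs = {"high end": 1950, "mid end": 1850, "low end": 0}  # MHz to qualify

for name, cutoff in bin_cutoffs.items():
    binned = [f for f in chips if f >= cutoff]
    chips = [f for f in chips if f < cutoff]        # leftovers fall to the next bin
    default = cutoff if cutoff else min(binned)     # default clock = bin worst case
    headroom = sum(binned) / len(binned) - default  # average untapped MHz
    print(f"{name:8}: {len(binned):5} chips, default {default:.0f} MHz, "
          f"avg headroom +{headroom:.0f} MHz")
```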
These
>ooooooooh imm im im im gonna im gonna im gona fucking OPTIMIZEEEEEEEE aaaaaaaaah fuck im optimizing IM FUCKING OPTIMIZING ALL OVER THE PLACE UHHHHHHHHHHHH OPTIMIZEEEEEEEEEEEEEEEEEEEEEEEEED
youtuber subhumans need to get lined up and shot. literal placebo and lies in 99% of the cases. funniest shit ever when Linus Tech Tips made a video comparing a fully bloated windows install, with a bunch of malware pozz installed and running in the background, to an "optimized debloated" install, and showed how it gives identical performance in every game.
>literal placebo and lies in 99% of the cases
underclocking being one of the cases where it's not a joke
you've watched so much garbage that you aren't willing to entertain the things that are real
undervolting*
i mean underclocking is a thing as well, but pretty pointless without undervolting at the same time
Can you undervolt/overclock?
if you reduce the voltage the card will overclock itself
Basically the current thing is working with the curve, so it uses fewer volts to reach a certain clock. Simply due to producing less heat, it tends to clock higher and stay there longer.
If this was true a very widely known tutorial would be available and most people would jump on the bandwagon.
undervolting and underclocking are very niche and depend on the gpu, in the majority of cases it might give either no performance gain or a performance loss, on some specific cards and editions of cards it might be a good idea, but at best it'll give a tiny performance increase and nothing major worth mentioning. Even if you autistically le tweaked and le optimized everything you still wouldn't see more than a 5% fps increase.
Debloating Windows is a meme indeed, but optimizing clocks and voltage is a real thing that can easily give you 10-15% more performance and/or that much less power draw.
What's the issue with it? Aren't all the tools basically the same anyway?
>What's the issue with it? Aren't all the tools basically the same anyway?
None. That anon is a schizo
MSI Afterburner is Russian-maintained bloatware with security holes and it's WINDOWS ONLY. Fuck off with that garbage.
MPT is a better tool for AMD GPUs. Don't care for Nvidia retards.
Wtf is MPT? Link please?
There is just one russoid dev, whom MSI stopped paying anyway; and yeah it's kinda bloated, but the security holes are just a meme due to how simple it is.
Also what's the point of even overclocking on Loonix with their lack of NV drivers?
The NVidia Linux drivers are shit, but not non-existent
Linus Cuck Tips lies in every video, he has an entire Kiwi Farms thread with receipts.
Is this even a technology board
When I undervolted my 3080 there was a noticeable decrease in noise and heat output and I have no stability issues
>GUYS TODAY WE'RE GONNA UNDERVOLT
>FIRST UR GONNA NEED MSI AFTERBURNER...
instant dislike and close tab
the current power use is dictated by the marketing and sales departments, not the technical departments; gpus are operating way past the point of diminishing returns in a very inefficient way.
So the reality is that cards are heavily overclocked by default, which wastes electricity and demands oversized coolers
User error
I am about to shit my pants.
>undervolt my 3080
>drops from 330w to 250-260w
>temps from 75-80c to 60-65c
>performance boosted by 5% because it can hold a steady 1900mhz and not bounce from 1700-1900 like at stock
I undervolted my Steam Deck yesterday. Literally just changed a number in the BIOS. It now runs cooler and quieter with no perceivable performance degradation. I guess that makes me a fool, then.
Vegas were insane with undervolts. Stock: ~220W @ 1300 MHz
UV: 160W @ 1600 MHz
>t. never heard of planned obsolescence
OP is just fishing for advice how to undervolt/overclock his shit, isn't he?
best way to get information how to do something on the internet is to claim it's impossible
It's definitely impossible and makes no sense. Besides, how would I even go about achieving such a feat, step by step? Tech-insanity, I tell you. Ngmi
>some guy on the internet says his RTX whatever runs perfectly stable with -120mV
odds of him having spent any time testing stability: 1%
odds of him complaining about random game or driver crashes and blaming it on everything but his voltages: 99%
Graphics card stability testing has been streamlined thanks to OCCT.
It's got Furmark-like standard 3D torture test and an Unreal Engine-based adaptive frequency & load test, both with error-checking features.
If your card passes both of those tests, it's stable in real work or gaming loads.
Quit living under a rock, none of this shit is groundbreaking.
You're living in the easiest time to tweak your PC.
>'lol he relies on OCCT 3D stress tests'
OCCT's reliable enough to be commercially licensed by Intel, AMD, EVGA, Origin PC, Microcenter, and actual AAA game devs.
>'lol his 3DMark stress test isn't rated to be 100% stable'
Read the dev reply in link below
https://steamcommunity.com/app/223850/discussions/0/3048356660228319014/
>I'd say most devices out there land in 95-98% range and result better than 98% is fairly rare.
His fault for not installing Watch Dogs. There is no way an unstable GPU would survive 5 benchmark runs there.
>undervolting
amd tinker tranny garbage.
>buy a 3080
>runs at 95 degrees and thermal throttles in game menus
>undervolt it
>temperature drops to 80
>card doesn't thermal throttle
>can even overclock it a bit
seriously why wouldn't you want to make your hardware work better?
There's absolutely no way it can be done. If there was, a step by step method would have been proffered already. These guys are full of shit. Placebo shit.
>Just because there are a lot of them, they must be perfect
You must think people who tweak or avoid Windows are equally dumb, yes?
I had a 6800 XT brand new before I quit gayming for good. Undervolting it got 5-10% performance bump and reduced operating temperatures. Both facts are objectively measurable.
Lies.
If you can reduce temperatures, the GPU can sustain higher clocks for longer. My understanding though is that turbo boost follows its own voltage scaling anyhow.
Ah yes all those talented scientists working at NVIDIA and EVGA.
It's retarded to worry about saving $10 in electricity a year but it's also retarded to think these big companies don't do dumb shit in the name of THE BIG MARKETING NUMBERS like increasing power draw by 40% for 5% more performance
>but it's also retarded to think these big companies don't do dumb shit in the name of THE BIG MARKETING NUMBERS like increasing power draw by 40% for 5% more performance
Could also be a hedge against ageing.
>be retarded
>think you're special and got a special golden fabbed hardware
>decide to play stupid games with voltages that are set based on averages that guarantee stability
Honestly don't understand this autism. Just because your shit appears to work doesn't mean it actually does either.
AMD CPUs and GPUs can't sustain max boost clocks without undervolting. You don't actually overclock modern hardware, you just create a better environment for it to do its own boosting through better cooling or undervolting. Undervolting now gives the same +10-15% performance increase that overclocking would give.
Post a tutorial you shit eating homosexual.
I've been undervolting since the 1080 Ti. At the moment I can operate my 3080 Ti at 80-100W less without losing performance. At €0.40/kWh in Europe, that's noticeable.