Is there such a thing as overworking a graphics card...

Is there such a thing as overworking a graphics card? Would a graphics card last a lot longer if you played every game on low settings instead of high? I play The Witcher 2 on all high, but I could probably go higher. I just don't want my card to give out on me, since I cannot afford another one.


>Is there such a thing as overworking a graphics card?

no, literally never been thought of, patent it OP

In normal everyday use, no. The closest you get are OCed cryptocoin mining GPUs that have been under 24/7 load in a shitty, stuffy room for years on end.

I am not talking about overclocking. I am asking: can it be worked to its full potential, or will that make it give out sooner?

No, you won't wear it out.

If you overclock your GPU severely you'll "overwork" it in a sense: by running it very hot you accelerate the ageing of its electronic components, which can ultimately result in a malfunction.

On stock clocks your GPU will probably not malfunction under heavy loads (caused by high graphics settings) during its usable lifetime.

Relieving the GPU of work by lowering settings or underclocking it will not result in a noticeably longer life.

Why do you give a fuck?

It has a finite lifetime, and you are just going to use all the time it has before the fucker breaks.

Not that it will, if you make sure the fans are still spinning and replace them if they aren't.

No, not directly. Heat and very high voltage are damaging to lifespan, but modern NVIDIA cards have the voltage locked down tightly enough for that to not really be an issue, so all you need is proper cooling.

Graphics cards last a long time even when miners use them; we're talking years. If you're just gaming, it will last at least 5-6 years before the VRMs or something like that fry themselves.

No, the lifespan mainly depends on heat. A card running at max capacity will produce more heat, so make sure to buy a proper custom card; shitty ones die earlier. Otherwise you can also undervolt to cut heat. Very nice on the AMD 200/300 series.

The less it heats, the more it serves

This. Even heavily overvolted and overclocked cards live longer when they are watercooled. They often live forever. Shitty blower and reference cards often die after a few years.

No, there isn't, unless you overclock it. But I would like to know about the risks of overworking an HDD, especially if it's seated next to a GPU or another component that tends to heat up quite a bit (maybe about 70°C).

1. Running it too hot all the time will degrade the chip
2. Tons of cooling/reheating cycles causes thermal expansion/contraction that contributes to physical wear on solder and connections after extensive periods of time

Don't run your GPU hot, and don't run it hot then cold, then hot, then cold, and it'll last you decades.

Don't give a flying fuck and it'll last you only a single decade.

I don't know why you'd want to keep a GPU for an entire decade. They're usually completely outdated within 2-3 years.

No.
I always buy reference cards because they last longer than the dual fan cards that keep your gpu "cooler" (TDP is TDP, whether the core is hot or the air gets taken away the same amount of energy is used)

My 1080 Ti runs at 84°C 12 hours a day, but it's warrantied for 3 years. I imagine it could handle 84°C 24/7 for 3 years and still work; otherwise NVIDIA wouldn't warranty it for that long.

No. Cooler cards save energy. Not a joke. I can post a test but I'm not sure if you understand German.

Explain to me how a 250 W card will expend less than 250 W if it's cooler.

That makes no sense. However, I do speak German so link it.

tomshardware.de/raijintek-morpheus-vga-cooler-hawaii-review,testberichte-241525-7.html

Nice. It's not a joke; the power saving comes from temperature-dependent leakage current (electromigration is a separate ageing effect). Also, an OC is more stable at lower temperatures. My old 780 Ti struggled at 1306 MHz @ 80+°C (artifacts and crashes); I bought an AIO cooler for it, which ran it at around 60°C, and the OC has been totally stable since. Custom loop users will confirm this.

I don't know which article he's talking about, but I guess it's plausible, though I wouldn't expect a huge difference. Even if the GPU itself self-regulates and uses the same amount of power, the VRM/power-delivery stage can run at lower efficiency as temperature rises, so more power is drawn from the wall even though the GPU's own draw is unchanged.
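To put a rough number on that VRM argument: here's a quick sketch of how the same GPU power translates into different wall draw depending on converter efficiency. The efficiency figures are made up for illustration, not taken from the Tom's Hardware test.

```python
# Illustrative only: how a temperature-dependent VRM efficiency changes wall
# draw for the same power delivered to the GPU core. Efficiency numbers are
# assumptions, not measured values.

def wall_power(gpu_watts: float, vrm_efficiency: float) -> float:
    """Power drawn from the PSU for a given power delivered to the GPU."""
    return gpu_watts / vrm_efficiency

hot = wall_power(250.0, 0.85)    # VRMs running hot, lower efficiency
cool = wall_power(250.0, 0.92)   # same card, better cooled VRMs

print(round(hot - cool, 1))      # watts saved at the wall: 22.4
```

So the "250 W card" still gets its 250 W, but the cooled card pulls roughly 20 W less from the wall in this made-up scenario, which is in the ballpark of the ~10% saving claimed below.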

It's about 10%. You can expect more savings if you use liquid cooling.

As stated above a cooler card is more stable and capacitors will age slower on lower heat.

As a rough rule of thumb, above ~20°C each 10°C rise in temperature halves the life of silicon.
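That rule of thumb (a simplified Arrhenius-style model) is easy to sketch. The reference temperature and the function names are just illustrative assumptions:

```python
# "Life halves per 10 °C" rule of thumb, relative to a 20 °C reference.
# Simplified Arrhenius-style model; real component derating is more complex.

def relative_lifetime(temp_c: float, ref_c: float = 20.0) -> float:
    """Lifetime multiplier compared to running continuously at ref_c."""
    return 2.0 ** ((ref_c - temp_c) / 10.0)

# A chip held at 60 °C lasts 1/16th as long as one held at 20 °C
# under this model: 2 ** ((20 - 60) / 10) = 2 ** -4
print(relative_lifetime(60.0))   # 0.0625
print(relative_lifetime(20.0))   # 1.0
```

Taken literally, that's why a card that idles cool and only occasionally hits 80°C can still last many years: the halving only applies for the time actually spent at the higher temperature.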