Guys, I hear a lot about "u shouldn't use molex to pcie" or "its fine".

It seems to come down to 12V rails. I'm not an electrical guy, so could someone give a guideline on when it's OK to use an adapter? What should the PSU say in terms of rails? I don't get it.

I'd just get a PSU that can handle your GPU with no adapters needed. PSUs don't really cost that much. I see adapters as more of a temporary solution until you can upgrade the PSU.

If your PSU doesn't have the right connectors, then it's probably bad anyway.

Guys, it's a PowerEdge server, it doesn't have a standard PSU. It does have 875W though.

Don't use them. They're designed to work around a limitation that was deliberately imposed by your PSU manufacturer.

People usually try to use that with cheap no-name 230W PSUs; it's fine in your case.

OK, but I have other things like ProLiants and ThinkCentres that don't have as beefy a PSU. I want a guideline.

>Guys its a poweredge server,
Then serve something with it. It is not for gaming.
When will people understand that computers are not designed only for gaming...

What is live video game streaming

2x molex is fine. The issue emerges from single molex to 6pin pcie.

>implying

I have a t3500 from dell, same question.

Your best bet is finding technical reviews of the PSUs, anon.

you can always check your amp ratings. pcie uses the 12v rail exclusively.

There we go, I don't know how to read it though.

Do I just multiply 12V by the max current on the 12V rail and make sure it's more than the expected GPU power draw?

All molex will be on the same 12V rail, right?

All 12v components are on the same 12v rail.

Add up your 12V components and make sure they're below the rated max. Molex max current is 5A; PCIe is higher. I'd only bother doing this if all components add up to 80% or more of the max output in watts.
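
Here's that math as a quick Python sanity check. The amp rating and load numbers are made up, so plug in whatever your own PSU label and parts list say:
# rough 12V budget check -- example numbers, read them off your own PSU label
RAIL_12V_MAX_AMPS = 30          # printed on the PSU sticker (assumed value)
loads_watts = {
    "cpu": 95,                  # TDP from the spec sheet
    "gpu": 150,                 # board power of the card
    "drives_fans_misc": 30,     # rough estimate
}
total = sum(loads_watts.values())
rail_capacity = 12 * RAIL_12V_MAX_AMPS      # 360 W in this example
print(f"12V load: {total} W of {rail_capacity} W ({total / rail_capacity:.0%})")
if total > 0.8 * rail_capacity:
    print("over 80% of the rail -- do the per-connector math or get a better PSU")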

Throw that PSU in the garbage now.

Literal fire hazard.

The reason you shouldn't use them is that it allows you to overload shitty PSUs.
The reason it's fine is because you bought a good PSU, didn't you?

Not enough LEDs and whirling fans for you? How reliable do you suppose PSUs used in servers that form the backbone of the internet are hmm?

I use them because I have a modular PSU but somehow lost the PCIe wires. My 1080 is not on fire yet.

Fucking idiots.

Just solder the wires directly onto the 12V line; that's what I used to do before getting modular PSUs.

I have a dual molex to 8-pin connector on my 325W R9 280X
It hasn't exploded
I think it's all good

Gonna get a new PSU though, something really nice like one of the BeQuiet ones

Get a SeaSonic modular, can't go wrong with that.

Also use some good gauge wire, you can use one wire and split it at the connector.

Depends on how many 12V outputs your PSU has

>2x molex is fine. The issue emerges from single molex to 6pin pcie.

those PSUs are probably single rail
so one molex to pcie won't make a fucking difference over 2

>I'd just get a psu that can handle your gpu with no adapters needed.

A PSU with the correct wattage and amperage -can- handle a GPU even if it doesn't have the right connectors.

As long as the adapter is of high quality there should be no issue.

>It is not for gaming.

It's for whatever OP wants to use it for. Maybe it has a good CPU in it and OP paid very little for it?

It will either play games or it will not. The computer doesn't care if it is not being used for what it was designed to do, and neither should you.

What you want to pay attention to is not how many watts the PSU will supply, but how many amps are available through the PCI-E connector rail.

Servers and workstations often have high-wattage, multi-rail PSUs, but the amperage on each rail may be low. For instance, I have an HP Z800 with an 850-watt PSU, but it will only supply 18 amps over the two 6-pin PCIe connectors.

The GPU slot itself provides 75 watts of power, and 18 amps x 12 volts = 216 watts through the connectors, so I have a limit of 291 watts for the GPU. However, how much power a card pulls through the slot versus through the PCIe connectors varies from GPU to GPU, and it's not a well-documented area at all. So I cap my GPU power draw to only what the PCIe connectors supply. 216 watts is still plenty, considering that will power a GTX 1070. A GTX 1080 would be possible by using one of the other rails and a molex adapter, but I'm not really bothered enough to do it.
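
Making that arithmetic explicit (same Z800 numbers as above; the 75 W figure is the standard PCIe slot limit):
# power budget for the Z800 example
slot_watts = 75                     # a PCIe x16 slot supplies up to 75 W
pcie_rail_amps = 18                 # what this PSU allows on the 6-pin connectors
pcie_watts = 12 * pcie_rail_amps    # 216 W through the connectors
print("connector budget:", pcie_watts, "W")
print("total with the slot:", slot_watts + pcie_watts, "W")   # 291 W
# conservative approach from the post above: cap the card at the 216 W connector budget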

>I'm not a electrical guys

The problem is the 12V rails. Really old (pre-LGA775) power supplies had one 12V rail. With LGA775 and the Prescott P4s, CPU power consumption got so high that the CPU needed its own 12V rail to feed its voltage regulator. When GPUs started needing 12V as well, the same thing happened again. Needing a Molex to PCI-E adapter is a sign that your PSU is too old and won't have the necessary amperage to power a GPU. Ideally, a high-end power supply should have four 12V rails: one for the motherboard components (can be low amperage), one for the CPU voltage regulator, one for the GPU(s), and one for the hard drives, optical drives, extra case fans, etc.

TL;DR: if you need to use a Molex to PCI-E adapter, you need a newer power supply.
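
If you want to sanity-check a multi-rail unit like that, here's the per-rail version of the earlier budget math. The rail names and amp figures are placeholders; read the real ones off your PSU's label:
# per-rail 12V check for a multi-rail PSU -- all figures are placeholders
rails = {
    # rail name: (rated amps, watts of load hanging off it)
    "12V1 motherboard": (18, 60),
    "12V2 CPU VRM":     (18, 95),
    "12V3 PCIe":        (18, 150),
    "12V4 drives/fans": (18, 40),
}
for name, (amps, load) in rails.items():
    capacity = 12 * amps
    print(f"{name}: {load} W of {capacity} W", "OK" if load <= capacity else "OVERLOADED")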

I have a 300W Seasonic with two 12V rails and no PCIe connector. It can run an adapter fine, it just can't power a 1080 or similar, but it's 300W so that's obvious regardless.

newegg.com/Product/Product.aspx?Item=N82E16817151085

2x molex to 6/8-pin is fine. 1x molex to 6/8-pin is bad, because a single molex does not support the wattage.
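
For the numbers behind that, assuming the 5A-per-molex figure from earlier in the thread and the standard PCIe connector ratings:
# connector budgets at 12 V
molex_amps = 5                                  # per-connector figure quoted upthread
print("1x molex:", 12 * molex_amps, "W")        # 60 W
print("2x molex:", 2 * 12 * molex_amps, "W")    # 120 W
print("6-pin PCIe spec:", 75, "W")
print("8-pin PCIe spec:", 150, "W")
# a single molex falls short of even the 6-pin figure, which is why you double them up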