Who here AMD Processor + Nvidia Video Card?



I'd say a good 20% of us

Gaymers BTFO
>>>/pcbg/

I would be very interested in doing this as a combination mega-super-turbo server+data+gaming box that I used headless 80% of the time. Ryzen would be great for iterated simulations, and CUDA would save me personal labor. However, residential upload speeds and inconsistent dynamic address bullshit mean the entire idea isn't practical (for now). My area is getting fiber this autumn.

Reminder that ryzen can use ecc udimms.

Has nVidia fixed their fucking retarded drivers yet? Or do you STILL get lower performance with Ryzen?

Proofs? Cause I might get a Ryzen to go with my GPU, since it's a GTX 1000 series card

AFAIK this only happens in DX12 games and nobody runs DX12 versions on Nvidia anyway.

>dx12
Sounds about right, nobody uses that shit anyways

>want to run a GeForce card in a Windows VM using GPU passthrough
>the drivers detect the OS is running in a VM and refuse to install
>complain to nVidia, get told to buy a Quadro
>want to use a GeForce card and a Radeon card in the same system for different things
>the nVidia drivers detect there is a Radeon card in the system and disable PhysX
>only way to get PhysX back is to physically remove the Radeon card
>want to use GeForce Experience for easier driver installation and Shadowplay
>need to sign in with your email
NVIDIA: Never Again.

Ryzen 7 1700 + gtx970 reporting

I have an 1800x and a GTX 1080. It was a little bit rough in the beginning, but it's been smooth sailing for the past 2 months

6300 with a 950 wishing someone would kill me
>going 8350/1060 soon

1600x and GTX 1070 is my rig

>AMD Processor + Nvidia Video Card
What's the difference between this and being a cuck?

I wanted to go r5 1500x + gtx 1060, any issues with this combo?

8320 @ 4.4 + 970.

it's alright I guess, plays all the games I have great except h1z1.

eventually I'll go to r5 1600/X + 1070 or 1080. anything I should worry about going to r5 1600/x?

I play on 1080p but have a 144hz monitor. I just want most games to run at a minimum of 120fps on high.

During the TNT/3Dfx era, that was an ideal combo.
You either used AMD CPU+Nvidia GPU or Intel CPU+3Dfx GPU.
The Nvidia drivers used 3DNow! for T&L, while the 3Dfx drivers relied on the CPU's FPU, which was better on the Pentiums.

i5-2500 + HD 7870 + Realtek RTL8111E LAN

to

Ryzen 1600 + GTX 1060 + Intel I211AT LAN
AMD + Nvidia + Intel

who /comfy/ here

What's the equivalent 1080TI card for AMD?

I'll have it the other way around soon. (waiting for Vega) Kinda wishing I had waited to get Ryzen instead of this i7-4790K however. This will likely be my last Intel chip.

The Vega 64 will be.

Did they say the price?

Yeah, I've had different issues on my end. You use Nvidia with Linux, you have to kill 20% of your perf just to reliably fix screen tearing. Took them multiple years to publicly acknowledge that the issue was even there.

Fuck Nvidia and their stupid broken bullshit. Should've never tried switching teams.

Probably won't hit 1080 Ti perf, but it will likely get near 1080. Price should float around $500.

499 for air and 699 for water

me

Well fuck. I'll wait for a 1080 Ti equivalent. The price point should be lower if it's only near a 1080.

>tfw RADICAL COMPUTER CENTRALISM

>who here cpu bottleneck

Not me

Nah mate, I was on a Piledriver 8320e + Maxwell GTX 960 4GB, then went Skylake i5 6600K + Polaris RX 480 8GB, and I've realized I was using the wrong CPU and GPU brand all around. Ryzen seems a bit tempting, but unless my i5 stops doing well in the games I mostly play, 4 cores are fine for now. Once some AAA games start really using more cores I'll consider a Ryzen 1700. Until then I'm good.

So you have a Ryzen?

It doesn't surprise me you would come to that conclusion

The only reason AMD's behind is because Intel wouldn't stop paying companies not to buy AMD stuff. They've been pulling that for like 20-30 years now. That shit won't work for Intel anymore.

No

Sempron 3850 + GT 710
Gonna upgrade soon to A8 7600 + GTX 760

At the price those CPUs cost, you should be getting twice the frame rate of the Ryzen parts, not marginally better.

The 7700k is almost half the price of the 1800x you moron

Date of benchmarks?
bigger price =/= bigger gaming perf
Ryzen destroys as a workstation CPU, and after the updates it does just fine with games. Intel shills just like to hang on to old benchmark results and leave the dates out.

scared about inevitable driver gimp

>date of benchmarks
2 weeks old

youtube.com/watch?v=Rnf5QzoLjG4&t=132s

wew user

nevertheless, you're going to get cpu bottlenecked with that gpu

>old benchmarks
>2 weeks old
Time to move the goalpost somewhere else

damn, didn't know 133 is half of 157, nor that 197 is half of 211

You do realize the 1800x is a lot more expensive than the 7700k, right?

Also you must be new to technology as a whole if you think price scales linearly with performance

>I'll wait for a 1080ti equivalent
Navi is in 2019

Christ what the fuck. I'm going to save my shekels

kill yourself and take whoever was the retard that made those graphs with you

8350 and a 1080 ftw2 icx

They have something planned for 2018 that competes with Titan Xp. Notice how Vega stack extends all the way up

r7 1700x with 2 gtx970
>h-hey my internet penis is 10% bigger than y-yours

>i7-7700K @ 5GHz
How many died in the fire?

By 2018 Volta will be out and will blow anything AMD puts out to shreds, so long as it's based on the Vega arch

what ram is he using

1800x with a 760 here, waiting for vega™ to hit the shelves

miners leave my cards alone REEEEEEE

Got 4790k over a year ago.

Only kinda regret the gtx970.

Especially now that I'm switching to linux and want gpu passthrough.

I may keep the gtx970 for the host, and if I run a windows VM I'll passthrough an AMD card.

Though I'm seeing less and less need for Windows to game, really. Star Citizen will be ported to Linux when it's finished... and I can always just dual boot instead for gaming if I really want to, and avoid GPU PCIe passthrough...

In which case I can sell my 970 and get a Vega.

Ain't the 1080 also $499? So they're really matching there, not undercutting

And the water version is at 1080 Ti price... unless the water cooling allows a higher clock, meaning it reaches 1080 Ti performance, which would be big news if so... but even then they match Nvidia's price

Wait, why can't you do passthrough? Unless your motherboard doesn't support it, you should be all set. Devil's Canyon has that feature enabled.

>Star Citizen will be ported to Linux when its finished
Hahahah

Not that guy, but I don't care if it takes them another 10 years to finish it. It's already pretty beast and these guys obviously put a lot of attention to detail.

I just hear it's annoying with an Nvidia GPU, you have to jump through more hoops to trick it into thinking it's not in a VM, whereas with AMD you don't

;___; when its finished

Seriously though, I'm thinking they release the beta in 2 years. Been keeping track and really all this foundational work is the hardest. Once they transition to just making content (items, planets) and not frameworks (systems for persistence tracking, tools to build solar system), things should move quicker.

Oh yeah, now that you mention it, I do recall that stupid Nvidia bullshit when reading guides. It's shit like this that's got me ready to buy a Vega card despite knowing that it'll probably be buggier due to drivers.

Why are they making the Vega $500 if it matches the 1080? The 1080 often goes below $480, sometimes $450, maybe lower if you can snipe a deal.

They haven't even got a concept of how it all fits together. They have all these individual modules, but no idea of the system that ties them all together or the groundwork that will form the basis of that system.

They probably figure the cards will disappear due to miners so why price them much lower?

Shit's annoying because I would normally be OK with buying a used GPU, but now, knowing what kind of shit they may have been run through with mining... yeah, probably won't buy a used AMD

Miners. They don't have to undercut Nvidia because their stock will sell out regardless if it is good or not.

Actually, the 3.0 alpha release on Sept 8 will tie things together a lot more: they're expanding the missions, ship cargo will be persistent, there will be tracking of a ship's legal owner, etc. The universe is coming together nicely.

Squadron 42 is essentially its own stand alone single player game, and Arena Commander is basically an arcade mode.

Is that how you deal with evidence that conflicts with what you believe?

>>want to run a GeForce card in a Windows VM using GPU passthrough
>>the drivers detect the OS is running in a VM and refuse to install
it sucks but there is a simple fix for it if you are using a QEMU script to launch the VM.

# Basic CPU settings.(hyperv tweak + Nvidia workaround)
OPTS="$OPTS -cpu host,kvm=off,hv_spinlocks=0x1fff,hv_relaxed,hv_time,hv_vendor_id=Nvidia43FIX"

Quick rundown on qemu scripts..

Is that something that applies to VMware Player? VirtualBox?

Runs well? This is what I want... I'm on an 8350 and an R9 270 for now...

Not a precise technical explanation, so take it with a rock of salt.
KVM is the kernel-based virtual machine, which is already included in the Linux kernel; you just have to enable it.
QEMU is a hardware emulation layer you have to install; you use it to start up the virtual machine and set up all the devices you want to pass through: audio, video, network connection, etc.

In layman's terms it is like a configuration file for your VM: you write a script from a template and execute it to run the VM.
There are GUIs like virt-manager that facilitate this process, but you have less control since you tweak it with what is available in the GUI, and through the messy XML file if there are missing features in the GUI. QEMU is easier once you get the gist of things.

have a look at this guide, it explains the basics; it's what I use as a reference to set it up. The Arch wiki is more thorough but this is easier to grasp for a beginner.
>evonide.com/non-root-gpu-passthrough-setup/
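To make the rundown concrete, a minimal launch script built around that workaround might look something like this. The PCI addresses, memory size, and disk path are made up for illustration; check lspci for your GPU's actual address.

```shell
#!/bin/sh
# Sketch of a QEMU GPU-passthrough launch script (illustrative values).
OPTS=""
# Basic CPU settings (hyperv tweaks + the Nvidia "not a VM" workaround:
# kvm=off hides the KVM signature and the vendor id spoofs the hypervisor).
OPTS="$OPTS -cpu host,kvm=off,hv_spinlocks=0x1fff,hv_relaxed,hv_time,hv_vendor_id=Nvidia43FIX"
OPTS="$OPTS -enable-kvm -m 8G -smp cores=4"
# Pass the GPU and its HDMI audio function to the guest via VFIO.
OPTS="$OPTS -device vfio-pci,host=01:00.0 -device vfio-pci,host=01:00.1"
# Guest disk image (made-up path).
OPTS="$OPTS -drive file=/var/lib/libvirt/images/win10.img,format=raw"

# Echo the final command instead of executing it, so the sketch is safe
# to run as-is; drop the echo to actually boot the VM.
echo qemu-system-x86_64 $OPTS
```

You'd save that, make it executable, and run it to start the VM; everything about the machine lives in that one file.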

>Arch wiki
That's the main reason I'm considering Arch right now. I've only done very basic work with gcc and the command line in CentOS, trivial stuff I'm sure to most.

Was considering NixOS because I like the idea of easily being able to format and reinstall everything (documents offloaded to a nas).

It's probably still behind the 1800x in power consumption, but nice meme

me
never really had a problem since I don't game much and when I do I don't care about getting 200 fps

Looks like the 1800x uses about 3-4 watts more on stock settings, but it has twice as many cores.

Your point?

QEMU/KVM gets more performance than VMware/Virtualbox?

yes, pretty sure it has the best performance for GPU passthrough but you can only use it with Linux as the host OS.

Right, gotcha

One thing I cannot find is a comparison between: just running the VM as normal, no GPU PCIe passthrough, so both the host and guest use the same GPU; versus GPU passthrough performance. Is it leaps and bounds better? Because I imagine the host OS isn't demanding a lot from the GPU when it's not passed through.

Generally, that configuration will give a virtual GPU to the guest OS. It's effectively software rendering. If we could do it that way, passthrough wouldn't be considered nearly as frequently.

hardware emulation for 3D acceleration is pretty poor currently, probably due to the overhead in passing the display output from VM to host OS.
You get leaps and bounds more performance by passing the physical GPU to the VM since it uses the device directly.

I hope newer GPUs start including features that enable them to run one or multiple VMs in the host OS simultaneously on a single card in windowed/borderless-fullscreen mode. Just like emulating the video, except with close to the performance you'd get with GPU passthrough. It would be amazing

>One thing I cannot find is a comparison between: just running the VM as normal, no gpu pcie passthrough, so both the host and guest use the same gpu
on that topic, if vega rx supports SR-IOV, does QEMU/KVM support it or will it need some sort of special update?

I imagine with GPUs getting more RAM this should help part of the problem of using a single GPU for both host/guest?

Is it fairly simple to swap between a dedicated GPU for host Linux (no VM) and the iGPU for host Linux with the GPU passed to the VM?
Or is it annoying enough that most people get two high-power GPUs for host and guest so they don't have to always game on the VM?

Intel + Nvidia masterrace
>tfw your PC powers your gaymes but also ISRAEL
>tfw your PC is exterminating muslims at every active second

ha ha ha ... w

>Is it fairly simple to swap between a dedicated GPU for host Linux (no VM) and the iGPU for host Linux with the GPU passed to the VM?
>Or is it annoying enough that most people get two high-power GPUs for host and guest so they don't have to always game on the VM?
Funny you ask that, I am struggling with it.

I managed to get it to work but it is such a hassle, so far I have had limited success.
I have two solutions, one partially works, and the other definitely works but takes more time than writing a single command in terminal.

>method 1
I wrote 2 xorg config files, one for iGPU and another for the dGPU.
I wrote a command that switches between the two GPUs but it automatically logs you out of your session and logs you back in to load the relevant Xorg.conf file.

The xorg file for the dGPU uses Intel modesetting to set the iGPU as inactive. This method currently does not seem to work for Nvidia, it is really strange.
Running lspci -k shows that my GPU is actually using the nvidia driver.
However, running glxinfo | grep renderer shows that the dedicated GPU is using Gallium 0.4 on llvmpipe instead.
The GPU is displaying and everything, but I cannot run 3D games in Steam.

>method 2
I reboot, go into the BIOS, and disable VT-d and the iGPU.
I use a xorg.conf file I previously generated with the nvidia-xconfig command.
This method works; it's basically just running Linux with the dedicated GPU as the primary GPU right from boot.

I am frustrated that I cannot figure out how to get method one to work properly and use OpenGL 4.5 like method 2. I am sooooooo fucking close it's infuriating.
Method 1 is very quick: just open a terminal, type one command, and Bob's your uncle. It automagically logs out and back in with the GPU switched.

It might be an issue with modesetting, or Nvidia not supporting GPU offloading; it might work with an AMD card, haven't really tried.
Two powerful dGPUs is wasteful, power consumption and cost wise. It is like using a sledgehammer on a small nail.
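For what it's worth, the method 1 switch command can be sketched roughly like this. The config paths and the display-manager service name are assumptions, and the copy/restart are echoed rather than executed so the sketch is safe to try as-is.

```shell
#!/bin/sh
# Rough sketch of a "method 1" GPU switcher: swap in the xorg.conf for
# the requested GPU, then restart the display manager (that restart is
# what logs you out and back in). Paths and service name are assumed.
TARGET="${1:-igpu}"   # "igpu" or "dgpu"
CONF_DIR="/etc/X11"
case "$TARGET" in
    igpu) SRC="$CONF_DIR/xorg.conf.intel"  ;;
    dgpu) SRC="$CONF_DIR/xorg.conf.nvidia" ;;
    *)    echo "usage: $0 igpu|dgpu" >&2; exit 1 ;;
esac
# Echoed instead of executed so the sketch does nothing destructive.
echo "would run: cp $SRC $CONF_DIR/xorg.conf"
echo "would run: systemctl restart display-manager"
```

The two xorg.conf variants would be the ones described above: one generated for the dGPU, one that sets the iGPU via modesetting.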

I agree, and I can't afford another dGPU right now.

What's the "best way" people stream their games these days? Not OBS, but like Steam in-home streaming? So as to stream the game (or the whole OS?) back to the host OS. I figure this will have a performance hit, but not as much as not doing GPU passthrough.

Streaming from Steam is a pain. I managed to get it working at 720p on a GPD Win from my desktop reasonably well, but it takes a while to tweak and relies heavily on how solid your local network is.
Quality takes a hit.
Don't think it's worthwhile on a Linux desktop unless perhaps you are sitting next to the router connected via Ethernet. Even then quality will not be as good as running straight from the dGPU.

It might be possible to use PRIME to switch GPUs, I haven't really dug into it on the Arch wiki.
Tried it with the Nvidia GUI: the GPU switching works as intended, but I cannot unbind the Nvidia card and pass it through to vfio because it becomes the primary GPU.

It did work on a previous Xubuntu setup by some freak incident: I was able to swap between iGPU and dGPU using Nvidia's GUI and could still pass through when I had the iGPU as main.
So I know it IS possible, there must be something I am doing wrong.

...

Can't you stream to yourself the same way you can, for example, change the port for ssh from 22 to 2222 on a VM, then connect to the VM from the host on localhost:2222? Or something like that?

Does that not skip the router? Or am I retarded and it does use the router, it just uses your internal IP?

haven't really dabbled with it much so I am not sure if you can bypass your router. If you figure it out I'd like to hear from you.
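One data point, assuming the VM uses QEMU's user-mode networking: a hostfwd rule is NAT'd inside QEMU itself, so host-to-guest traffic on localhost never leaves the machine, let alone touches the router. A sketch of the relevant flags (the command is echoed, not run):

```shell
#!/bin/sh
# QEMU user-mode networking with a host port forward: connections to
# 127.0.0.1:2222 on the host are NAT'd by QEMU straight to port 22 in
# the guest -- no router involved.
OPTS=""
OPTS="$OPTS -netdev user,id=net0,hostfwd=tcp:127.0.0.1:2222-:22"
OPTS="$OPTS -device virtio-net-pci,netdev=net0"
# Echoed rather than executed so the sketch is safe to run standalone.
echo qemu-system-x86_64 $OPTS
# Then from the host: ssh -p 2222 user@127.0.0.1
```

Whether Steam streaming can be pointed at localhost the same way is a separate question, but plain port forwards like ssh definitely bypass the router with this setup.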

Will do. I'll make a thread when I play around. Just got a separate 850 Evo to put Linux on; want to leave my Windows install alone on its own drive so if I bork something on the Linux drive I'm not fucked

I use 750Ti along with 8150 because some programs need CUDA.

>tfw fx-4300 + gtx 660

Yeah, it's what I've done as well. Linux on an SSD and Windows on its own 3TB HDD.
If you are worried about borking Linux you can get a cheap-ass HDD; I have a 500GB 5400 RPM drive from some old crummy laptop that I use for scheduled backup snapshots with rsync.
What I do is keep the 500GB HDD unmounted and have it mount only when the automated backup script runs once per day.
That way, even if something nasty like rm -rf / is triggered, your backup HDD in theory should be untouched since it is not mounted.
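That mount-only-during-backup routine might be sketched like this; the device node, mountpoint, and source directory are placeholders, and the commands are echoed by default (set DRY_RUN=0 to run them for real, e.g. from cron).

```shell
#!/bin/sh
# Nightly backup sketch: mount the backup disk only for the duration of
# the rsync, so a stray rm -rf / can't reach it. Placeholder values.
BACKUP_DEV="/dev/sdc1"
BACKUP_MNT="/mnt/backup"
DRY_RUN="${DRY_RUN:-1}"   # 1 = just print the commands

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run mount "$BACKUP_DEV" "$BACKUP_MNT"
run rsync -a --delete /home/ "$BACKUP_MNT/home/"
run umount "$BACKUP_MNT"
```

The umount at the end is the whole point: the disk spends almost all its time detached from the filesystem tree.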

Well I figure while I play around it should be easy to reformat the whole thing and just restart. Still need to put in the 850 Evo, but worst case I should be able to easily reformat it from a USB boot or something.

I was considering NixOS, but I don't know if, when playing with VMs and PCIe passthrough, having Arch's bleeding edge is more important

>tfw you hate jews but also hate muslims and not sure who you wanna stick it to more

At least get a Ryzen 3.