Why does Sup Forums never learn?

Other urls found in this thread:

pugetsystems.com/labs/articles/Multiheaded-NVIDIA-Gaming-using-Ubuntu-14-04-KVM-585/
evonide.com/non-root-gpu-passthrough-setup/#GPU_passthrough_with_QEMU

>filename
>pic

>2 monitors
Gay

Relies on too many different factors to be feasible for most people. For example, it doesn't work on K-series i7s where it does work on the non-K series, at least for ones released a couple of years ago.

you seem to have your computing sorted the opposite way

WINE does the trick nowadays.

not gtav

you could do it on a single monitor, provided it has multiple inputs.

>two GPUs
>two monitors

I lucked out; good thing I went for a non-K i7 Haswell at the time instead of an unlocked i5.
The iGPU seemed useless at the time, but if it weren't for it and VT-d support my main desktop would still be running Windows bare metal.
Still considered fence sitting, but at least it is one step above installing Ubuntu in W10 or running a VM inside Windows.

I'm not a gaymer, I'm a techie, I don't need your shit and you on Sup Forums.

Was using a setup like this previously with an i5 and 390. Worked flawlessly until VAC got updated and started losing its shit. It won't let me play in VMs any more.

>CPU passthrough

wat?

Isn't that just called out-of-the-box virtualization?

your jack off station looks dope af

No thanks. Linux is dead.

>t. wintard ransomcucked

>I'm not a gaymer, I'm a techie
Then create or contribute to threads that cater to your interests.
>I don't need your shit and you on Sup Forums
You are a "techie", surely you know how to hide/filter/report the thread or whatever suits your fancy. Quit moaning.

The topic may be related to video games, but the discussion mostly revolves around the hardware and software involved rather than the games themselves.
Creating a thread like this on Sup Forums will just get the same response: fuck off to Sup Forums.
Until this topic is actively banned by the mods, deal with it.

>gpu passthrough
>literally requires you to recompile your kernel to set up
>guide never instructs you to update your make flags so it builds using all your cores so for any uninformed user it will take forever

never truly understood why most build tools don't use all your cores by default. I've gotten used to enabling multithreading for compiling, but it's still annoying.
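For anyone who hasn't done it, a minimal sketch of forcing parallel builds (assumes GNU make and coreutils `nproc`):

```shell
#!/bin/sh
# Use every available core for compile jobs instead of make's default of 1.
CORES="$(nproc)"                 # number of online CPUs
export MAKEFLAGS="-j${CORES}"    # picked up by any subsequent make invocation
echo "make will run with ${CORES} parallel jobs"
```

Putting the `export` in your shell profile makes it the default for every build instead of something you have to remember per project.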

It makes such a large difference too.

Some people have server processors with 20 or more cores that can't all be taken by one compile job, and the safest default is just one, so some intern doesn't bring the system down.

>>literally requires you to recompile your kernel to set up

Nope. Use this guide: pugetsystems.com/labs/articles/Multiheaded-NVIDIA-Gaming-using-Ubuntu-14-04-KVM-585/

Will this work with an nvidia/intel combo? obviously using the nvidia in the VM

Does it work with Ryzen?

Yep. There are some other resources as well, such as that huge archived thread on the Arch BBS. Remember to use kvm=off (if you're not using a Quadro card) to stop the NVIDIA drivers from bailing out when they detect a hypervisor.
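For anyone wondering what kvm=off looks like in practice, this is the relevant part of the QEMU command line; everything after the `-cpu` flag is a placeholder, not a complete working invocation:

```shell
# Hide the KVM hypervisor signature so the GeForce driver doesn't refuse
# to initialize inside the VM (not needed on Quadro cards).
qemu-system-x86_64 \
  -enable-kvm \
  -cpu host,kvm=off \
  ...   # rest of your usual VM options (memory, vfio-pci devices, etc.)
```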

Maybe, but probably not. AMD cpus were a lot more trouble to get working last time I checked, and Ryzen is fairly new.

Setup has changed since then; previously my setup had the iGPU running both monitors on the host OS and the dedicated GPU connected to a second input on the 4K display.
Now I have the dedicated GPU connected to both monitors and the iGPU connected to the portrait-mode one. The reason I did that is that I figured out a way to pass the dedicated GPU back and forth between my host OS and the VM.
One thing that bothered me with the previous setup is that my dedicated GPU's fans would keep running at full blast after I shut down the VM and would not stop unless I restarted the VM or bound it to a different VM.

The Vive itself requires an entire USB controller to be passed through, since QEMU currently has a USB device limit of 4. I use an Inateck PCIe USB controller for that purpose since it is more consistent and easier to carry over to future motherboard upgrades; the cool thing about it is that I have my front-panel USB 3.0 ports hooked to it, making hot-plugging devices into the VM a breeze.
It is pretty nice once you get it all set up; I can be posting on here while running applications fullscreen on Windows, similar to borderless fullscreen mode except it works on older titles that don't have that feature.
Not having your computer rendered completely unusable while it is updating is nice as well, since you can continue messing around in the host OS while the VM is being updated.
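If anyone wants to try the USB-controller trick, this is roughly what rebinding the controller to vfio-pci looks like through sysfs. The PCI address 0000:05:00.0 is a made-up example (find yours with lspci), and all of this needs root:

```shell
# Find your USB controller's PCI address first:
lspci -nn | grep -i usb
# Detach it from the host's xhci driver and hand it to vfio-pci:
echo 0000:05:00.0 > /sys/bus/pci/devices/0000:05:00.0/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/0000:05:00.0/driver_override
echo 0000:05:00.0 > /sys/bus/pci/drivers_probe
```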

There are plenty of guides out there but the one I followed was this
>evonide.com/non-root-gpu-passthrough-setup/#GPU_passthrough_with_QEMU
Getting the audio just right without any crackling was the fiddly part; I had to use the ALSA plugin with AC97 and disable HDMI audio on my GPU in the VM. Works flawlessly now.
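For reference, the audio bits look something like this. This is the old environment-variable interface (newer QEMU versions replaced it with -audiodev), so treat it as a sketch:

```shell
# Route guest audio through ALSA on the host, emulating an AC97 card.
export QEMU_AUDIO_DRV=alsa
OPTS="$OPTS -soundhw ac97"
```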

I am currently using an i7 4790 CPU, H97M-E motherboard, and GTX 980 GPU.
Intel should be easy for now, and recent AMD GPUs shouldn't have any issues either; I've read of people successfully passing through an R9 290 and up, probably even lower. Just check for AMD-V or VT-d support.
For recent hardware I would only be concerned about AMD Ryzen CPUs currently, mostly the memory mapping on current AM4 motherboards rather than the CPUs themselves. I have heard of people managing fine with the ACS kernel patch, but I'd rather wait until there is a motherboard with support for VFIO out of the box, without fucking with the kernel every time.

By the way, SLI/Crossfire is not supported, and you need two different GPUs (preferably from two different vendors, to be safe that no device IDs match, such as the HDMI audio for instance); otherwise you will have to apply the ACS patch to get it working, which I assume would be a pain in the butt to babysit every time your kernel gets updated, and I hear it can be a security risk.
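Before buying anything you can check whether the IOMMU actually came up and how devices are grouped; everything in the GPU's IOMMU group has to be passed through together. Standard sysfs locations, assuming you already boot with intel_iommu=on (or amd_iommu=on):

```shell
# Confirm the kernel brought the IOMMU up:
dmesg | grep -i -e DMAR -e IOMMU
# List each IOMMU group and the devices inside it:
for g in /sys/kernel/iommu_groups/*; do
  echo "Group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done
```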

Alright, same guy here. I'm about 9 minutes from home, and when I get there I plan to purge all my drives and install Arch on my SSD and get started setting up passthrough. That way I'm starting completely clean.

Enjoy your humongous CPU bottleneck.

The CPU performance penalty is minimal as long as your host OS is not using up too many resources while the VM is running.
Most games are GPU bound, so I don't really notice the performance penalty; likely up to 5 fps at worst depending on the game, which you won't be able to tell with the naked eye.
I have run multiple games and applications bare metal before and in a VM now at 4K 60Hz and haven't really noticed a performance drop; the gap might be more noticeable if you are running at higher refresh rates like 144Hz+, which is more taxing on the CPU.
For crying out loud, I am even messing around in VR at 2160x1200 90Hz just fine with games like Arizona Sunshine and Raw Data that are taxing as hell on the hardware.

2 hard 4 me
Maybe in 5 years it will be doable for normies like myself

bought new parts and found a guide to do this for my build. just gotta get a cheap gfx for the host. excited to lose the win partition.

There has been work on making it a point-and-click process through a GUI with virt-manager, which is what I used to get an idea before delving into scripts, because messing around with XML was a pain if you wanted to do something that was not supported through the GUI directly, like foiling NVIDIA's intentional "error code 43" cockblocking bullshit.

Still a long way to go to make it stupid simple automated install process.
For now you'd need to have enough time and patience to research required hardware and follow a few steps in a guide.
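The error 43 workaround the GUI can't do is a small hand edit to the libvirt domain XML (via virsh edit); roughly this, where the vendor_id value is an arbitrary string:

```xml
<features>
  <hyperv>
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <!-- hide KVM from the guest so the NVIDIA driver doesn't throw error 43 -->
    <hidden state='on'/>
  </kvm>
</features>
```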

>he doesnt have a dual cpu dual gpu rig
>running the same cpu and gpu in windows vm as in linux

cucks

good luck and godspeed user, you only really have to set it up once.
It looks daunting, but it is worth the effort and doesn't take long to set up again afterwards.
The audio emulation was the only part that I struggled with. On some motherboards you might have to update the BIOS for KVM to work; I had to do so myself, but it is a simple process these days with a USB memory stick. Don't upgrade the BIOS unless passthrough doesn't work, though, so you don't risk bricking your system.

this is what I use from a guide
# Basic CPU settings. (Hyper-V tweaks for better GPU performance; disable if it fucks up.)
OPTS="$OPTS -cpu host,kvm=off,hv_spinlocks=0x1fff,hv_relaxed,hv_time,hv_vendor_id=Nvidia43FIX"

Can someone explain why you'd want this over having two computers and a KVM switch?
I guess sharing data between the machines is a bit slower.

1 PC with an iGPU and dGPU is cheaper.
If I had access to two desktops I'd rather use the second PC as a 24/7 home server for scheduled remote backups, network drive for all mah neemus, and home streaming vidya/videos to gpd Win.

Fuck I want one now.

Yea, two computers seem to be the best solution.

I have something similar; managed to get the VM hooked into the host sound server, so I have perfect sound from both at the same time.

My PCI USB controller refuses to pass through for whatever reason

It is as alive as the mind of the user.

Is it ASSMedia by any chance?
I got a cheap-ass ASMedia USB 3.0 controller and it was absolutely horrid; it kept locking up, requiring a full bare-metal system reboot to be functional again after rebinding/unbinding several times.
It fucked up a lot with USB 2.0 devices, causing bluescreens in the Windows VM. I tried every single fucking driver I could find; it gave me absolute hell.
Also, this is stupid, but make sure you actually plugged your PCIe USB controller into the PSU; some tend to forget to do so, I know I did.

see the vfio iommu mailing lists