CPU for virtualization

What determines how many virtual machines a CPU can support? I'm trying to run multiple clients on the same hardware, and I'm not sure how to select the right CPU.

Infinite
It's only limited by processing power

How can you share cores? I thought each VM needed its own core(s) and RAM?

Thanks. Do you need to allocate processing power to each server at setup (like in the case of hardware), or does the entire system essentially share processing power?

Replace "server" with "client" and "hardware" with "storage"

You allocate

From what I remember running more than 3 VMs per core starts causing CPU scheduling issues.

No. Imagine a VM just like it's a process in your OS. 10 VMs can run on a single core. RAM needs to be allocated statically, though, at least in full virtualization. I think OpenVZ can even do that dynamically.
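
If you want to convince yourself that a KVM guest really is just another host process, here's a minimal sketch (assumes a Linux host with procfs; output obviously depends on what's running):

```python
# List QEMU/KVM guest processes on the host; each running VM shows up as one
# ordinary process that the host scheduler juggles like anything else.
from pathlib import Path

def qemu_processes():
    procs = []
    for pid_dir in Path("/proc").glob("[0-9]*"):
        try:
            comm = (pid_dir / "comm").read_text().strip()
        except OSError:
            continue  # process exited while we were scanning
        if comm.startswith("qemu"):
            procs.append((int(pid_dir.name), comm))
    return procs

print(qemu_processes())  # e.g. [(2412, 'qemu-system-x86'), ...]
```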

Bullshit

linux-kvm.org/page/Processor_support

As for how many VMs, each VM should have:
- Enough memory
- Enough IOPS (I/O operations per second)

Anything more than 1 VM will cause scheduling issues unless you are using something like SPARC, which can run 4 simultaneous threads per CPU core. x64 will run 1 thread per core, and each VM will be scheduling its own threads while the hypervisor/OS schedules the VM threads.

In short:

>10 VMs can run on a single core.
Lol you've obviously never actually done this. Or if you did it was 10 VMs idling and doing literally nothing.

Good luck getting 10 VMs to schedule appropriately when they're all doing work at the same time. Even if it's low-intensity work, the fact that you have 10 VMs trying to schedule at once will create a major bottleneck.

>Or if you did it was 10 VMs idling and doing literally nothing.
That is what 90% of VMs in the world are doing 90% of the time.
Yes in some cases this is not true, but it would depend on OP's use case, which they haven't shared in very much detail.

Are there special CPUs designed to handle multiple VMs without scheduling issues?

No don't be retarded

Are your VMs even going to need much CPU time or will they be sitting idle 90% of the time?

If they're just monitoring something or doing basic shit you probably don't need to worry.

You'll need to explain what you actually want to use them for if you want better answers

How is that being retarded? I'd guess Intel has designed a CPU to be used in virtualized servers that solves the most obvious problem. No, all of the VMs would be doing very intensive operations simultaneously.

It's retarded because if Intel could just magically make a better scheduler, they'd have done so. There would be no reason to have some offshoot product SPECIFICALLY with that. It's something that would be brought to every CPU in the product stack.

So, do you believe Google has a billion CPUs in its data centers to handle searches from every user? Clearly, they have some kind of algorithm that handles this issue.

If you've fully loaded 10 VMs per core then you're a failure at infrastructure planning. Fully loaded VMs (ones that will hold at 100% processor or close to it) should have dedicated cores. Not in the literal sense but in the sense that you have planned capacity for those cores to always be loaded.

They have custom hardware too which they've not released basically any public information on.

Custom software written specifically for that hardware and that particular function is hardly comparable to consumer products running generic VMs.

There is no limit beyond software, assuming you're ignoring load. A general rule of thumb is 4 vCPUs per core for heavier loads; 10 vCPUs per core is acceptable for light loads. As for hardware: any Xeon after the Core 2 architecture supports virtualization. Many desktop CPUs do as well.
AMD actually goes back further with virtualization; it has been supported since the Athlon 64. AMD has also supported PCIe passthrough far longer than Intel.
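
Back-of-the-envelope version of that rule of thumb, if it helps (the 16-core host is just an example value, not something from this thread):

```python
# Rough vCPU capacity estimate using the overcommit ratios mentioned above:
# ~4 vCPUs per physical core for heavy loads, ~10 for light loads.
def max_vcpus(physical_cores: int, overcommit_ratio: float) -> int:
    return int(physical_cores * overcommit_ratio)

cores = 16  # example host: a single 16-core Xeon
print("heavy loads:", max_vcpus(cores, 4))   # 64 vCPUs
print("light loads:", max_vcpus(cores, 10))  # 160 vCPUs
```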

You can have 20% load on the core, but if you've got 10 VMs all trying to schedule work at the same time you're going to hit a bottleneck. The actual CPU use is minimal; the scheduler just can't keep up.

>custom hardware
Custom built servers, not CPU architectures. They use the same Intel/AMD processors that are available to the public.

I'm not talking about consumer products, though; I'm talking about enterprise products for a server room or data center. I read somewhere that an Intel Xeon E7 can support 60 VMs at high performance.

What hypervisor are you running that this is an issue? I have never encountered this problem.

>Infinite
>limited

>do you believe Google has a billion CPUs in its data centers to handle searches from every user?
Yes, they do.

>Custom built servers, not CPU architectures.
anandtech.com/show/10340/googles-tensor-processing-unit-what-we-know

>They use the same Intel/AMD processors that are available to the public.

They don't, though. Intel customizes chips for each major cloud provider. No one uses AMD.

>Intel customizes chips for each major cloud provider
Proof.

Just for fun, here's some math.
Blade servers allow 2 CPUs per blade for a 16-blade configuration.
A blade chassis to hold these is 10U. You can fit 4 per rack.

4 × 2 × 16 × 1,000 (a reasonable number of racks for Google) = 128,000
That's CPUs, not cores. If you count cores, it comes to ~4,096,000
This is actually an older configuration, you now have 3U chassis that support 16 Xeon D blades.
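
Same arithmetic spelled out below; note the ~4M core figure only falls out if you assume 32 cores per CPU, which is an assumption to reproduce the number above, not anything Google has published:

```python
# Blade math from the post above.
chassis_per_rack = 4        # 10U chassis, 4 per rack
cpus_per_blade = 2
blades_per_chassis = 16
racks = 1000                # "reasonable number of racks"

cpus = chassis_per_rack * cpus_per_blade * blades_per_chassis * racks
print(cpus)                 # 128000 CPUs
print(cpus * 32)            # 4096000 cores, assuming 32 cores per CPU
```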

pcworld.com/article/2365240/intel-expands-custom-chip-work-for-big-cloud-providers.html
>Until a few years ago, all its customers got basically the same general purpose processors
>The rise of online giants like Google, Facebook, Amazon and eBay has changed that.

infoworld.com/article/3050369/cloud-computing/intels-new-22-core-server-chip-speeds-up-cloud-services.html
>Intel will customize the new Xeons for larger customers, Lane said.

In 2012 it was estimated Google had a little over 2.3M servers. Not sure what that would boil down to in pure core count, and I'm sure they've only gotten denser since 2012.

If they were all Intel, multiply by 20 (dual 10-core processors).

theregister.co.uk/2016/04/06/google_power9/

Some more info on Xeon-FPGA hydrid designs

nextplatform.com/2016/03/14/intel-marrying-fpga-beefy-broadwell-open-compute-future/

Still available to the public. You can buy IBM POWER9 servers.

See above. You've already been given proof that they use custom hardware.

Nice. This means non-giants can have custom chip features as well.

My point was that more exists than x86 Intel. We can't assume the servers are Intel-only.

You might want to look into Linux containers. They're lighter than VMs because they don't have their own separate kernel; it's a bit like virtualizing only the working environment/applications, if you want to think of it that way.

Yup, though obviously we'll see a shift to on-die FPGAs before we see broader availability. A few years out, though.

Intel bought up Altera a couple years ago, it'll be interesting to see what comes of it. With any luck we'll have completely redefinable processors in 20 years.

We need a smart FPGA, so it can reprogram itself based on workload.

I can't imagine leaving it up to the average consumer to try and make heads or tails of it.

And memory

Linux guests with the QEMU balloon driver can dynamically allocate RAM.
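
If you're managing the guests with libvirt, something along these lines resizes a running guest's memory through the balloon. Hedged sketch: the guest name "guest01" is made up, and the guest needs the virtio balloon driver loaded for this to actually do anything:

```python
# Resize a running KVM guest's memory via the balloon driver (values in KiB).
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("guest01")          # hypothetical guest name

new_size_kib = 2 * 1024 * 1024              # 2 GiB; must not exceed max memory
dom.setMemoryFlags(new_size_kib, libvirt.VIR_DOMAIN_AFFECT_LIVE)

conn.close()
```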

>We need a smart FPGA, so it can reprogram itself based on workload.
This will inevitably happen. Microsoft and Red Hat will develop kernel drivers to dynamically alter the configuration based on workload.

Depends on how many you plan to have with active tasks at one time rather than how many machines total you have. Any modern hypervisor will be very efficient at allocating processor time on its own, without any need for configuration. If you need VMs that will have requests consistently, you want more cores/threads over clock speed. If you have VMs that will have heavy workloads but mostly idle time, you want faster clock speed over cores. This is an oversimplified explanation; there are many things you may have to do to fully optimize your host configuration. I would recommend going through the IBM Knowledge Center on virtualization and the documentation for KVM.
ibm.com/support/knowledgecenter/linuxonibm/liaat/liaatkvm.htm
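
And if you do end up wanting "dedicated" cores for the busy guests, here's a rough sketch with the libvirt Python bindings; the guest name and the host core numbers are placeholders, not anything from the thread:

```python
# Pin each vCPU of a guest to its own host core so it isn't fighting the
# other VMs for scheduler time.
import libvirt

GUEST = "guest01"        # hypothetical guest name
HOST_CORES = [2, 3]      # host CPUs to dedicate (example values)

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName(GUEST)

ncpus = conn.getInfo()[2]  # number of online host CPUs, used to size the map
for vcpu, host_cpu in enumerate(HOST_CORES):
    cpumap = tuple(i == host_cpu for i in range(ncpus))
    dom.pinVcpu(vcpu, cpumap)  # vCPU <vcpu> -> host CPU <host_cpu>

conn.close()
```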

Yeah, it's going to happen; it's just a matter of hardware maturity and of software devs getting their hands on it.

I should add to this: if you are using any Intel Xeon in an MCC or HCC package, it will be much more difficult to configure an optimal solution for your host because of the physical design of the processor itself. Not all cores are equal, in particular on the Haswell-EP and later Xeons. Depending on a VM's memory usage, IO requests, or use of AVX instructions, a VM with a specific workload may vary in performance dramatically across separate cores on the same processor, and even vary between different states of overall processor utilization, even with dedicated cores per VM. If you really want to find out exactly what you need to do and which processor will be most effective, you have a lot of research to do.
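
If you want to see how "unequal" the cores on your particular package are before deciding where to pin things, dumping the sysfs topology is a decent starting point (Linux host assumed):

```python
# Dump the physical package / core / NUMA node layout of the host so you can
# keep a VM's vCPUs and memory on the same node.
from pathlib import Path

def cpu_topology():
    topo = {}
    for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
        cpu_id = int(cpu.name[3:])
        package = int((cpu / "topology/physical_package_id").read_text())
        core = int((cpu / "topology/core_id").read_text())
        node = next((int(n.name[4:]) for n in cpu.glob("node[0-9]*")), None)
        topo[cpu_id] = (package, core, node)
    return topo

for cpu_id, (package, core, node) in sorted(cpu_topology().items()):
    print(f"cpu{cpu_id}: package={package} core={core} numa_node={node}")
```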

>Or if you did it was 10 VMs idling and doing literally nothing.
That's one of the main selling points of virtualization.

If you have enough cores you can set latency sensitivity to high on ESXi for workloads which require it, ensuring they have dedicated cores and memory.

No one major is using Xeon-D, though.

>I've never done DCIM
>I don't know what it would take to support power densities like that

Wow, can't believe they're doing that.

I thought almost all CPUs since ~2006 had special virtualization passthrough or something. Direct CPU access within a virtual machine?

I think you mean VT-x?

>VT-x
Yeah, thanks.
en.wikipedia.org/wiki/X86_virtualization
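
For what it's worth, checking whether your CPU advertises VT-x or AMD-V is just a flags lookup on Linux; a minimal sketch:

```python
# Check /proc/cpuinfo for hardware virtualization flags:
# vmx = Intel VT-x, svm = AMD-V.
# (Note: the flag can be present yet the feature still disabled in firmware.)
def hw_virt_support(path="/proc/cpuinfo"):
    flags = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return {"VT-x (vmx)": "vmx" in flags, "AMD-V (svm)": "svm" in flags}

print(hw_virt_support())  # e.g. {'VT-x (vmx)': True, 'AMD-V (svm)': False}
```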