The idea of a PCI-E based CPU addon card sounds awesome.
Why isn't this the future?
Have the onboard CPU handle the OS, and the addon CPU everything else.
Because G4600 is enough.
It does exist.
But they are usually super niche or expensive.
They're also sometimes hybrids, like the Cell add-on card that was out for a period.
Of course, IBM being shit like IBM are, they fucked it and the entire future for Cell and Power in the process. GG, you dumb cunts.
>onboard CPU handle the OS
>addon CPU everything else
Y?
Because you can just scale regular cpus
Some boards even allow 4 at a time!
>XFX Ryzen 7 1700 OC CPU
yes pls
Woah!
I see no practical reason for PCI-E based CPUs. From a consumer's perspective, modern CPUs are pretty much capable of doing everything you throw at them.
>Have the onboard CPU handle the OS, and the addon CPU everything else.
No need for this, for less money we can have a single perfectly capable CPU
We have loadsa cores nowadays
How about an entire computer on a PCI-E board?
(Motherboard, CPU, GPU, RAM, everything).
Just plug a PC into a PC and be able to control a PC within a PC.
Sounds like you're joking but I'd love for something like that to replace VMs.
Then why have the mainboard there at all? Just complicates things and makes it more expensive
Hmm really making me think boys
Could we then plug another pci card into the pci card, seeing as it's a computer?
It exists, and in essence a gpu is a computer for your computer.
Didn't old Apple computers technically do this to run programs designed for older models?
That's what a graphics card is.
I recall there being these daughterboard computers back in the day
Back in the day = Pentium II era or thereabouts
We have virtualisation, multi-core cpus etc now
You can already do the same thing by plugging a smartphone into your PC.
Yeah i work at the rubbish dump and threw one out today lol
What if instead you put all components on a dedicated board? Skip the PCI-E part entirely!
WOW can i get your email i think this will sell hotcakes we can create a cryptocurrency for backing our new company
i mean, it's definitely possible. this is the mobo of the i7 surface pro 3 i use as my main computer. just about the right size, and includes everything but the ssd
i mean if the idea is to "swap parts quickly" or something ditch the fucking PCIe altogether and just build a barebones system with gorillion thunderbolt slots and connect everything to those
Wouldn't this take away bandwidth from the PCI-E bus? As in GPU might suffer.
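back-of-the-envelope, assuming PCIe 3.0 and dual-channel DDR4-2400 (rough numbers, not measured):

# rough arithmetic for why a CPU hanging off PCIe is bandwidth-starved vs one in a socket
pcie3_lane = 8e9 * 128 / 130 / 8      # 8 GT/s, 128b/130b encoding -> ~0.985 GB/s per lane per direction
pcie3_x16 = 16 * pcie3_lane           # ~15.8 GB/s
ddr4_2400_dual = 2400e6 * 8 * 2       # 8 bytes/transfer per channel, 2 channels -> ~38.4 GB/s
print(f"PCIe 3.0 x16: {pcie3_x16 / 1e9:.1f} GB/s vs dual-channel DDR4-2400: {ddr4_2400_dual / 1e9:.1f} GB/s")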
>another 15 year old thinks of something / finds something out
You mean like we did over 20 years ago already and still do to an extent?
zen3 + navi should do
I think it might be cool if you split it even finer. Like imagine a low-power CPU that's dedicated entirely to the scheduler and rules over all the remaining CPUs. I don't really see the point but it seems cool.
Add some separate processors for I/O and you've got yourself a 70s mainframe.
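In software it'd look roughly like this (just a toy sketch using multiprocessing to stand in for the hypothetical scheduler silicon, all names made up):

# toy sketch: one "scheduler" process that only hands out work,
# N worker processes that only execute. not real dedicated hardware.
import multiprocessing as mp

def worker(task_q, result_q):
    # workers just pull work and run it; None is the shutdown signal
    for func, arg in iter(task_q.get, None):
        result_q.put(func(arg))

def scheduler(tasks, n_workers=4):
    task_q, result_q = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(task_q, result_q)) for _ in range(n_workers)]
    for p in procs:
        p.start()
    for t in tasks:                     # the "scheduler core" does nothing but dispatch
        task_q.put(t)
    results = [result_q.get() for _ in tasks]
    for _ in procs:
        task_q.put(None)
    for p in procs:
        p.join()
    return results

def square(x):
    return x * x

if __name__ == "__main__":
    print(scheduler([(square, i) for i in range(8)]))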
>Like imagine a low-power CPU that's all for the scheduler which rules over all remaining CPUs
That's called Intel ME.
the return of the co-processor?
But Intel ME isn't usable by the OS as far as I know.
I mean, you basically described an advanced blade chassis.
I had an ASRock board that had a similar feature. I think it was some X2 or 64 board that let you add in a daughterboard for a future CPU socket.
Lane too slow.
well, not return, the classic co-processor is just integrated now.
on enterprise servers in the data center I worked in
there are servers that have hot-swappable CPUs. I mean you can pull one blade (I forgot the real term) that contains a CPU out of a server that has 4 of them and replace it
This would be great for shit like game consoles. I'd rather buy a pci card than a whole new fucking box
Some sort of parental board. A fatherboard perhaps?
I vote we call it the patriarchy board to trigger the sjw.
how about we put an entire computer on a fucking pendrive?
You need a backplane with SHB but that already exists
it already exists, both AMD and intel have multi CPU mobos, but they are for server processors like Opteron and Xeon.
Could i then run two pc's on a pc in SLI?
They did a hell of a job making that look straight out of the early 2000s
Trips of falsity.
A graphics card is not quite a conventional computer.
Non-trash architectures had coprocessors and external cache modules for ages. It's just that x86 is ghetto shit.
some amount of future proofing, some amount of just wanting to have a fuckload of less powerful cpus that still act like cpus do, for when a task calls for parallelism.
gpus are good at this, but are far too stupid to be universal. imagine if instead of designing for 1 powerful core, or 4 powerful cores, people instead defaulted to the powerful core when the task called for it and to an add-in card when that would serve the function better.
hell, the fucking thing could even have normal cpus, just recycle old server shit and put it on a card.
you're kind of dumb, but honestly, a pci card with something like a pentium 3 and a period-appropriate gpu for emulation/hardware VMing for max compatibility, or even older parts for older stuff, possibly an arm co-processor for running cellphone/tablet apps/programs in hardware.
an add-in card that is specifically meant to take the load off the main cpu, so when you are done you switch to it, using even less power
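something like this in software, assuming the "card" is just a process pool (threshold and names are made up):

# toy dispatcher: small/serial jobs run inline on the "big core",
# big embarrassingly-parallel batches get fanned out to a pool
# standing in for the hypothetical many-weak-core add-in card
from multiprocessing import Pool

def run(func, items, parallel_threshold=1000):
    if len(items) < parallel_threshold:
        # not worth shipping across the bus, just do it here
        return [func(x) for x in items]
    with Pool() as card:
        return card.map(func, items, chunksize=max(1, len(items) // 64))

def work(x):
    return x * x

if __name__ == "__main__":
    print(run(work, list(range(10))))          # stays on the main core
    print(len(run(work, list(range(50000)))))  # fans out to the pool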
Could you SLI a dozen of these?
No, but you can crossfire
That's basically just a blade server
Kinda what Sony did for the PS2 and PS3. The original fat PS2s had the PS1's hardware directly integrated into the mainboard for backwards compatibility. Same with the early PS3s having the PS2's Emotion Engine and, I believe, audio processor built in.
You already can do this, but you need 5-15 grand to buy an embedded card for this purpose.
You also won't get anywhere near the performance gain of just buying a better CPU, or buying a server system with multiple sockets. The multi-processor systems I've seen built this way are usually meant to run at extreme temperatures, which high-end/high-power server parts often cannot do.
Even with multiple sockets you are generally better off buying a better CPU up to a pretty high price point, because of caching, sharing of memory, sharing of IO and interrupt latency. Read about NUMA if you want to know why multi socket systems are really not that great for general consumer use.
In multi-CPU systems joined via PCIe the cores are very loosely coupled, and how they interact is extremely proprietary. If you went down this path you'd pretty much have to write all your code from scratch, since each card runs its own OS in the real-world systems I've seen. Sharing memory over PCIe would incur such high latency for memory accesses that it would generally not be worthwhile.
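if anyone wants to see the NUMA locality point for themselves, rough Linux-only sketch (standard sysfs paths and sched_setaffinity, but check your own box):

# Linux-only: list NUMA nodes, then pin this process to node 0's CPUs
# so execution and (first-touch) allocations stay on one node
import os, glob

def node_cpus(node_path):
    # parse a cpulist like "0-3" or "0-3,8-11"
    with open(os.path.join(node_path, "cpulist")) as f:
        cpus = set()
        for part in f.read().strip().split(","):
            if "-" in part:
                lo, hi = part.split("-")
                cpus.update(range(int(lo), int(hi) + 1))
            else:
                cpus.add(int(part))
        return cpus

nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"))
for n in nodes:
    print(n, sorted(node_cpus(n)))

if nodes:
    # crossing to another node's memory costs extra latency, so stay put
    os.sched_setaffinity(0, node_cpus(nodes[0]))
    print("pinned to:", sorted(os.sched_getaffinity(0)))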
Because GPUs are far superior for the kind of work the add on card would be doing.
Cuz quantum shit is expensive
Not talking about that kind of shit, tard face. This is for GPU-like CPUs that work over PCIe. They are not for normal server operations.
Backplane it is then
Amiga did it well
Because my onboard CPU can do everything already by itself without the need for a $1000 pci-e meme card.