GPU on FPGA

I want to implement a GPU with partial Vulkan support on an FPGA. I already know some Verilog.

Where do I begin?

That sounds cool, do you already have access to one?

I recommend installing this first

distrowatch.com/?newsid=09415

Is it just me or do most people overestimate Sup Forums? The majority is here just to call each other Pajeets or Jews.

This computer architecture book explains the Intel Core i7 and the ARM Cortex-A53, but also the NVIDIA Fermi GPU architecture:

amazon.com/Computer-Organization-Design-ARM-Architecture-ebook/dp/B01H1DCRRC/

Also learn how a graphics API can be implemented in software:

github.com/ssloy/tinyrenderer/wiki

first design a cpu from scratch. Something simple like an 8-bit micro. Have it execute instructions from a little ROM you build (a rough sketch of this first step is below the list).

bump up the complexity to 32 bits with more instructions.

talk to external ram.

make your cpu pipelined (~1 instruction per clock)

add instruction and data caches.

build some sort of multiplexed bus or xbar for multiple agents to talk to ram.

design your first shader core.

this will take you a few years.
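To make that first step concrete, here is roughly what such an 8-bit micro could look like in Verilog. Everything here (the module name, the instruction encoding, the three opcodes, the example program) is made up for illustration and untested; it's just the shape of a fetch/decode/execute loop running out of a little ROM, not a reference design.

module tiny_cpu (
    input  wire       clk,
    input  wire       rst,
    output reg  [7:0] acc        // accumulator, exposed for debug
);
    reg [3:0] pc;                // 16-entry program space

    // the "little ROM": contents below are just an example program
    reg [7:0] rom [0:15];
    initial begin
        rom[0] = {4'h1, 4'd5};   // LDI 5  -> acc = 5
        rom[1] = {4'h2, 4'd3};   // ADD 3  -> acc = 8
        rom[2] = {4'h3, 4'd2};   // JMP 2  -> spin here forever
    end

    // instruction format (invented): {4-bit opcode, 4-bit operand}
    wire [7:0] instr  = rom[pc];
    wire [3:0] opcode = instr[7:4];
    wire [3:0] opnd   = instr[3:0];

    always @(posedge clk) begin
        if (rst) begin
            pc  <= 4'd0;
            acc <= 8'd0;
        end else begin
            pc <= pc + 4'd1;                      // default: fall through
            case (opcode)
                4'h1: acc <= {4'd0, opnd};        // LDI: load immediate
                4'h2: acc <= acc + {4'd0, opnd};  // ADD: add immediate
                4'h3: pc  <= opnd;                // JMP: absolute jump
                default: ;                        // anything else: NOP
            endcase
        end
    end
endmodule

Once something like this runs in simulation, the rest of the list is about growing it: wider datapath, external RAM, then the pipeline and caches.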

This,
I don't know if OP is baiting and everybody else who posted here is retarded, or if this is just a delayed April autism day.

But as always, Sup Forums is a meme board with barely any actually knowledgeable people, so...

Wrong, there are many engineers and computer scientists like me here.

implement a quantum computer on FPGA

Don't try to do that. A GPU is already an ASIC that supports GPU APIs

If you want to use it with your computer, you have to interface it with PCI (debugging PCI requires actually probing it on the board with really expensive equipment), write drivers, implement Vulkan, and get an FPGA board (if you want one that isn't completely obsolete, it is really expensive).

And you better hope to god you don't run into an NDA spec or you are fucked.

oh yeah and I didn't even mention RAM. Good luck figuring out a DRAM controller.

>verilog

At least learn a non-meme HDL like VHDL.

gaah, I left out adding interrupts and exceptions to the early cpus.

do that after you go 32 bit, or after you go pipelined.

>Good luck figuring out a DRAM controller.

A DRAM controller is not that hard, particularly if you start with SDRAM. DDR and DDR2 are not much harder than that.

I've not done a DDR3 or above design; I can't speak to that yet.
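For the SDR SDRAM case, the core of the controller is a command state machine plus a refresh timer. The skeleton below is purely illustrative and untested: the module and signal names are invented, the power-up init sequence is omitted, a pending refresh that comes due while busy isn't queued, and the wait states for tRCD/tRP/CAS latency/tRFC are only marked with comments, since the real numbers come from your chip's datasheet. The command encodings are the standard JEDEC {cs_n, ras_n, cas_n, we_n} truth table.

// Skeleton of a single-bank SDR SDRAM command sequencer (illustrative only);
// addresses, data path and bank management are omitted.
module sdram_ctrl_sketch (
    input  wire clk,
    input  wire rst,
    input  wire req,                  // host wants one read or write
    input  wire req_we,               // 1 = write, 0 = read
    output reg  busy,
    output reg  cs_n, ras_n, cas_n, we_n
);
    // JEDEC SDR SDRAM commands as {cs_n, ras_n, cas_n, we_n}
    localparam CMD_NOP   = 4'b0111;
    localparam CMD_ACT   = 4'b0011;
    localparam CMD_READ  = 4'b0101;
    localparam CMD_WRITE = 4'b0100;
    localparam CMD_PRE   = 4'b0010;
    localparam CMD_REF   = 4'b0001;

    localparam S_IDLE = 3'd0, S_ACT = 3'd1, S_RW = 3'd2,
               S_PRE  = 3'd3, S_REF = 3'd4;

    reg [2:0]  state;
    // reload value assumes a 100 MHz clock and a ~7.8 us average refresh
    // interval; adjust for your part and clock
    reg [10:0] refresh_cnt;
    wire       refresh_due = (refresh_cnt == 0);

    always @(posedge clk) begin
        if (rst) begin
            state       <= S_IDLE;
            refresh_cnt <= 11'd780;
            busy        <= 1'b0;
            {cs_n, ras_n, cas_n, we_n} <= CMD_NOP;
        end else begin
            refresh_cnt <= refresh_due ? 11'd780 : refresh_cnt - 1;
            {cs_n, ras_n, cas_n, we_n} <= CMD_NOP;   // default each cycle
            case (state)
                S_IDLE: begin
                    busy <= 1'b0;
                    if (refresh_due) begin
                        {cs_n, ras_n, cas_n, we_n} <= CMD_REF;
                        state <= S_REF;  busy <= 1'b1;
                    end else if (req) begin
                        {cs_n, ras_n, cas_n, we_n} <= CMD_ACT;   // open the row
                        state <= S_ACT;  busy <= 1'b1;
                    end
                end
                S_ACT: begin                          // wait tRCD here in real life
                    {cs_n, ras_n, cas_n, we_n} <= req_we ? CMD_WRITE : CMD_READ;
                    state <= S_RW;
                end
                S_RW: begin                           // wait CAS latency / burst here
                    {cs_n, ras_n, cas_n, we_n} <= CMD_PRE;
                    state <= S_PRE;
                end
                S_PRE: state <= S_IDLE;               // wait tRP here
                S_REF: state <= S_IDLE;               // wait tRFC here
            endcase
        end
    end
endmodule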

I don't know anything about FPGAs, but speaking of FPGA GPUs, is it possible to implement analog video output entirely on an FPGA, or would you need external hardware for the DAC?

you need external hardware, but that can just be a resistor network.

fun fact: the analog voltage inputs on at least some monitors appear to be AC-coupled. Don't just apply some voltage and expect to get a color. When you are outside the drawn area of the screen, command 0V.
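To put numbers on it: the FPGA side is just a pixel counter that generates the sync pulses and drives a few bits per color into the resistor network, and the point above is the last bit, forcing those color bits to zero outside the active area. A made-up, untested 640x480@60 sketch (nominal 25.175 MHz pixel clock, 4 bits per channel):

module vga_sketch (
    input  wire       clk_25m,         // ~25 MHz pixel clock
    input  wire       rst,
    output wire       hsync, vsync,
    output wire [3:0] r, g, b          // to the external resistor-ladder DAC
);
    // standard 640x480@60 timing: active, front porch, sync, back porch
    localparam H_ACTIVE = 640, H_FP = 16, H_SYNC = 96, H_BP = 48, H_TOTAL = 800;
    localparam V_ACTIVE = 480, V_FP = 10, V_SYNC = 2,  V_BP = 33, V_TOTAL = 525;

    reg [9:0] hcnt, vcnt;

    always @(posedge clk_25m) begin
        if (rst) begin
            hcnt <= 0; vcnt <= 0;
        end else if (hcnt == H_TOTAL - 1) begin
            hcnt <= 0;
            vcnt <= (vcnt == V_TOTAL - 1) ? 10'd0 : vcnt + 1;
        end else begin
            hcnt <= hcnt + 1;
        end
    end

    // sync pulses are active low for this mode
    assign hsync = ~((hcnt >= H_ACTIVE + H_FP) && (hcnt < H_ACTIVE + H_FP + H_SYNC));
    assign vsync = ~((vcnt >= V_ACTIVE + V_FP) && (vcnt < V_ACTIVE + V_FP + V_SYNC));

    wire active = (hcnt < H_ACTIVE) && (vcnt < V_ACTIVE);

    // command 0V (all DAC bits low) during blanking; inside the active
    // area, put your real pixel data here (a solid test color for now)
    assign r = active ? 4'hF : 4'h0;
    assign g = active ? 4'h8 : 4'h0;
    assign b = active ? 4'h2 : 4'h0;
endmodule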

Since you want it so badly.
(you)

Don't forget, you /g/ fags, that GPUs themselves may be nearing obsolescence (though the next step in evolution is murky).

Shaders are tiny processors that exist to extremely efficiently exploit TLP. The GPU is a collection of thousands of these little guys tied together with support hardware.

If it is possible to dynamically synthesize the actual hardware that implements a shader program on the fly and dump it into an FPGA on-die within a CPU, that might be far more efficient than the best GPU shader, because you are no longer dealing with instructions or caches; the hardware just does the calculations. It would be like the original fixed-function display hardware, but generated on the fly per texture/polygon and dumped into the FPGA fabric through partial reconfiguration.

Could you get better performance/watt this way? I dunno, but if so kiss the GPU goodbye.

Who the fuck knows what this would look like in the end, either from the programming or the hardware side.
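For what it's worth, here is roughly what one of those "shader programs turned into hardware" might look like: a single made-up shader (diffuse term times a texel) written as a fixed, pipelined datapath instead of instructions. Everything about it (the fixed-point formats, the port names, the shader itself) is invented and untested; it's only meant to show the program becoming wires and multipliers rather than fetched instructions.

// Hypothetical "shader baked into hardware": a diffuse-lit texel in
// fixed point, fully pipelined, no instruction fetch or caches involved.
module baked_diffuse_shader (
    input  wire               clk,
    input  wire signed [15:0] nx, ny, nz,   // unit surface normal,  Q1.14
    input  wire signed [15:0] lx, ly, lz,   // unit light direction, Q1.14
    input  wire        [7:0]  texel,        // one channel of the texture sample
    output reg         [7:0]  shaded
);
    // stage 1: dot(N, L) in Q2.28 (unit-length inputs keep the sum in range)
    reg signed [31:0] ndotl;
    always @(posedge clk)
        ndotl <= nx*lx + ny*ly + nz*lz;

    // stage 2: clamp the negative half to zero, then scale the texel
    wire signed [31:0] lit = (ndotl < 0) ? 32'sd0 : ndotl;   // max(0, N.L)
    reg [22:0] prod;
    always @(posedge clk)
        prod <= lit[28:14] * texel;          // ~Q1.14 light term * 8-bit texel

    // stage 3: drop the fraction bits
    always @(posedge clk)
        shaded <= prod[21:14];
endmodule

Whether generating and partially reconfiguring blocks like this per texture/polygon could ever beat a real shader core on performance/watt is exactly the open question.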

GPUs aren't going anywhere until we get 1 THz graphene CPUs that can render games at 16K resolution in software without breaking a sweat.

Nobody uses VHDL nowadays except maybe the US DoD.

You don't seem to understand the paradigm shift I'm talking about.

This is beyond the old CPU vs GPU debate. There is potentially a 3rd path. It's why Intel purchased Altera.

It's not a paradigm shift. It's a delusion you pulled out of your ass based on a very shallow understanding of the field.

The old divide was that the east coast of the US and Europe used VHDL, while the west coast of the US used Verilog.

Verilog for ASICs and VHDL for FPGAs.

I personally prefer VHDL.

For OSS stuff Verilog got a boost because it was simpler and Icarus Verilog was around. With GHDL around and pretty good, who knows.

Verilog has seemingly been pulling a lot of VHDL ideas into it, and I've heard that some Verilog ideas have gone/are going the other direction.

I hope you are not basing your 'career' choices around your present 'understanding' of the direction of computation.

Posters here are rather diverse. There sure is a lot of garbage on this board but there are a few who know their stuff.

>Where do I begin?
By bootstrapping off an existing project:
opencores.org/project,gpu