NVIDIA BTFO

NVIDIA BTFO AND IS BANKRUPT

>MOAR GIGAHERTZ
>MOAR COARS
>MOAR RAM
>MOAR VRAM

you cant make this shit up

Another pcie ssd? pls b chiep dis tiem.

It's a GPU.

Well, paint me green and spank me like a disobedient avocado. I guess I misread it, sorry mate.

inb4 the entire thread gets filled with shitposts thinking these GPUs are meant for the consumer market.

This will probably be very good for those very specific industries that would actually benefit from having a much larger cache.

It doesn't affect you and it doesn't affect me, though. This won't be of any use to consumers, even if you could afford it. An application has to be specially developed to use the extra cache, and no program a regular consumer/prosumer uses will ever be written that way.

Still, this'll provide a good amount of profit per card for AMD given the industries they're aiming for.

Compared to
>LESS GIGAHERTZ
>LESS COARS
>LESS RAM
>LESS VRAM
Intel/AMD cucks fall for this every time.

Oh, and all the processors you use are ARM/Nvidia/Qualcomm?

>solid state
>has moving parts

It's a coprocessor.

Nope, it just has a way faster cache. If you've ever worked in the movies, a single roll of recording is around 64GB of data. People use Thunderbolt for that so they can access it faster. But a card like this would be even faster and less demanding on the CPU.

the idea is they stuck an M.2 SSD on a GPU and tied it into the PCI-E slots.

And it's absolutely useful for consumer applications. This will pretty much make texture size and streaming a non-issue. Partially resident textures are already becoming the norm, and now games will be able to have gigabytes of them without being bottlenecked by the CPU, RAM, or SATA interface. For fuck's sake, the demo card was streaming 8K video at 96 FPS.

A version with a 128GB SSD could EASILY be put in the high end and mid-high end within profit margins.

Finally I can play RAGE with megatextures on ultra nightmare mode

RAGE megatextures only sucked ass because they did it via a software hack.

GCN and Maxwell both support partially resident textures fully, Kepler sorta does.

Examples of games that use it?

Every Frostbite game after BF4
Doom

No, it's not a coprocessor.

>making a meme out of the natural progression of technology

Way I see it, this is fucking retarded. Does anyone remember all the Chinese GTX 980 cards on eBay with DDR3 VRAM and shit performance? Well, I'll let you in on a secret: NAND is slower than DDR3.

What's AMD's equivalent to CUDA?

Think of it as a really fat L4 cache that can also store things permanently. The GPUs still have GDDR5 on them.

Traditionally, L4 cache would either be RAM or would be faster than RAM. NAND on the video card is fucking stupid.

If you want to cache your textures, put 64GB of actual RAM in your PC - even including the overhead of the CPU and PCIe bus, it will still be probably 3 times as fast as NAND on the video card.
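Quick sketch of that claim. The bandwidth figures below are illustrative ballpark numbers I'm assuming (typical dual-channel DDR4, a PCIe 3.0 x4 NVMe drive, SATA III), not measurements from anything in this thread:

```python
# Rough time to read a 64 GB working set at different tiers.
# Bandwidth numbers are assumed ballpark figures, not benchmarks.

working_set_gb = 64
bandwidths_gb_s = {
    "DDR4 system RAM (dual channel)": 25.0,
    "NVMe SSD on the card":            3.0,
    "SATA III SSD":                    0.55,
}

for name, bw in bandwidths_gb_s.items():
    seconds = working_set_gb / bw
    print(f"{name}: ~{seconds:.0f} s to read {working_set_gb} GB")
```

On those assumed numbers RAM comes out closer to 8x the NVMe figure than 3x, so the "3 times as fast" claim is, if anything, conservative on raw bandwidth. The counter-argument later in the thread is about the CPU round-trip, not bandwidth.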

Which is why they were getting 96 FPS streaming an 8K movie vs 17 FPS off of RAM?

Guys, this has nothing to do with video games. It's designed for video rendering.

This system cuts out the time needed to contact the CPU, system RAM, and ultimately the hard disk by having an SSD right on the card. For games, which need a card to process a small amount of data in as little time as possible, it's useless, because a game won't use more than 12 GB of VRAM for another 3-5 years at least. Video rendering, on the other hand, processes as much data as it can from the project to create the final product, and will take advantage of anything it can use as a cache. So having 1TB of storage that the GPU doesn't have to ask three other components to access is immensely useful in that application.

Don't compare this to a 980. Compare it to a Quadro.

If that's what they said they were comparing, they're liars. You think NAND is faster than system RAM? You think a normal M.2 SSD connected to the motherboard is too slow for 8k video?

They did a live demo, mate. Shit's slow when it all has to be routed through the CPU first.

Notice how Intel and Nvidia wreck the shit out of AMD all the time, then.

>1060 wrecks the 480
>b-but 480 has MOAR VRAM!!!

>i5/i7 wreck the FX series
>BUT FX HAS MOAR COARS

>Nvidia will soon release a GPU to kill this piece of shit
>b-but RADEON PRO HAS MOAR VRAM!

not the other guy, but why is it slow? wouldn't 8k be roughly 400Mbps @ 96fps?

Would be pretty useful for scientific computing.

Too bad OpenCL is garbage and CUDA is Nvidia-only.

Someone made a thread with the 950 in the OP earlier. I sat down and made a medium-length post that I don't want to go to waste, so I'm posting it in here.

I have been watching this thread for a while, I guess I'll post about my experience with the 950.

I got it for my cheapo $400 rig (not including the 480GB SSD) with an i3-4170. It sits headless in my closet and streams games to a Macbook Pro through Gamestream and Steam in-home streaming. (I prefer the latter)

Overall it's a pretty good MMORPG/MOBA card. I play a ton of GW2 and LoL which it handles well. GW2 isn't maxed but it's on the higher range of settings and looks nice. LoL runs at ~200FPS if you really think more than 120 matters. It also plays Quake Live perfectly as you'd expect. The card hits its limits with Second Life, but Second Life runs like shit even if you have a 1080. Still, you can have the fancy lighting and effects turned on if that's your thing. Just don't expect to get more than 10 FPS at a crowded sim.

Finally, it functions as a MODO render slave occasionally. It does that pretty decently if you're not working with something insanely high-poly.

Was going to post exactly that.

>Professional rendering card
>Relevant to less than 0.1% of the population
Congrats to AMD, but this is not going to affect that many people.

>NVIDIA BTFO AND IS BANKRUPT
Yep. Going to lose that 80% marketshare within a few days at most.

For you

Nobody is getting BTFO you annoying fuck

>ignoring the ridiculously lucrative professional market that ATi/AMD have never been able to tap well due to no drivers
This card is perfect to strengthen the Radeon image in workstation products, after that they will tap into the HPC market

What is this even for?

Even if the on-card SSD is using NVMe, the maximum bandwidth will be around 3GB/s.

Compare this to a Titan X, which has a memory bandwidth of 480GB/s, and you can see how fucking ridiculous it would be for most memory-intensive applications.
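Putting the two figures quoted above side by side (3GB/s NVMe ceiling vs. 480GB/s Titan X memory bandwidth, both taken straight from the posts, not measured):

```python
# Ratio between the two bandwidth figures cited above.
nvme_gb_s = 3.0     # PCIe 3.0 x4 NVMe ceiling, as quoted
vram_gb_s = 480.0   # Titan X memory bandwidth, as quoted

print(f"VRAM is {vram_gb_s / nvme_gb_s:.0f}x the NVMe bandwidth")
# -> VRAM is 160x the NVMe bandwidth
```

So even a perfect on-card SSD is two orders of magnitude behind VRAM; it only makes sense as a storage tier behind the GDDR5, not a replacement for it.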

>What is this even for?

Not for gaming. It's for workstations, for people who create visuals for games, movies, etc.

The goal is to have the files you're editing directly on the card, with no need to go back over the PCIe bus to access them.