IT'S COMING !!!!!!

IT'S COMING !!!!!!
wccftech.com/amd-radeon-pro-wx-7100-workstation-card/

Other urls found in this thread:

twitter.com/RadeonPro/status/757782561313566720?s=08
youtube.com/watch?v=Lc6Jtupbtic
hardocp.com/article/2016/05/27/from_ati_to_amd_back_journey_in_futility/

Interesting, but not many Sup Forums users have any use for professional workstation cards, hence the heavy lean towards consumer gaming cards.

A fucking SSD/GPU hybrid

Nobody on Sup Forums is actually going to use this thing because Sup Forums is comprised entirely of Sup Forums kiddy gamer faggot crossboarders nowadays.

Cool idea, but like most things in the AMD space, they expect other people to write software to take advantage of it.
But on the enterprise and professional side, people usually do write their own software or use professional suites.

I only go to Sup Forums to lurk

twitter.com/RadeonPro/status/757782561313566720?s=08

I hope AMD releases something else like the Pro Duo on a single card

Why do Gaymen and Workstation have to be separate

PRO IN LOO

Muh dick

are those single slot cards?
those are either going to be really underpowered, or run super fucking hot.
actually, knowing AMD, it will probably be both.
that seems... cool?
why not just have a fast pcie/m.2 ssd and a normal gpu?

that fuckin 8k demo

I might actually get one for my workstation machine instead of buying a 1080 like I had planned.
Gonna wait for benchmarks though.

>Why do Gaymen and Workstation have to be separate
As time goes on, both AMD and Nvidia are getting better at everything. However, packing the top-of-the-line features of both realms (compute/workstation and gaming) into any single unit adds significantly to the cost: each variant is more expensive to design, and a more complex die yields a lower percentage of acceptable chips.

Now consider that 99% of people invest in a top-of-the-line card for EITHER gaming or work, not both. You can thus sell a high-end unit to those people at a slightly lower cost by tailoring it to leave out the other realm's needs. And suppose you didn't, but your competitor offered lower prices by specializing separate chips: you would get creamed in every review as being "too expensive for the performance tier" in both the workstation and gaming categories.

It's a natural result of the increasing sophistication in two worlds that have separate needs (e.g. FP16, FP64, patterns of shared memory use).
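
To put a rough number on the yield point above, here's a toy Poisson defect-density model. The defect density and die areas are made-up illustrative values, not anything AMD, Nvidia, or any foundry has published:

    # Toy yield model: fraction of defect-free dies = exp(-defect_density * die_area).
    # All numbers are assumptions for illustration, not real foundry data.
    import math

    defect_density = 0.1  # assumed defects per cm^2
    dies = {
        "gaming-only die": 2.3,     # cm^2, assumed
        "gaming+compute die": 3.5,  # cm^2, assumed larger, more complex die
    }
    for name, area_cm2 in dies.items():
        good = math.exp(-defect_density * area_cm2)
        print(f"{name}: ~{good:.0%} of dies defect-free")

The bigger combined die loses a noticeably larger share of each wafer, which is exactly the yield cost being described.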

Eliminate the extra buses, shorten wire/trace lengths, and increase responsiveness and throughput while decreasing (total) power.

It's a pretty cool concept to think about, but I can't say how practical or effective it actually is; AMD has to source a controller from somewhere else for starters.

You wanna know why 480 supply is lower than expected, took longer than expected, and has variability issues while drawing more power than we thought?

All the good binnings are going to contract deals and workstation cards.

>AMD has to source a controller from somewhere else for starters.
May not be too difficult considering their branded SSDs, so arranging some deal or another could be fairly simple.

>why not just have a fast pcie/m.2 ssd and a normal gpu?
These are designed for compute. You want the GPU to have exclusive control over use of said SSD as a cache. At the performance levels and use case this product targets, being on the same expansion card as the GPU's die and memory has huge benefits. Instead of shuffling data across a separate PCIe lane (and worse, waiting on the CPU to coordinate that shuffling), the GPU can coordinate that access more intelligently and more quickly on its own.
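
A rough sketch of why the extra hop hurts: purely illustrative numbers for a host-managed path (SSD elsewhere in the system, CPU coordinating the copy into the GPU) versus an on-card SSD feeding the GPU directly. None of these bandwidth or overhead figures come from AMD:

    # Compare a two-hop, host-coordinated path with a direct on-card path.
    # Bandwidths (GB/s) and per-hop coordination overheads (s) are assumptions.
    def transfer_time(size_gb, hops):
        # Each hop pays its copy time plus a fixed CPU/driver coordination cost.
        return sum(size_gb / bw + overhead for bw, overhead in hops)

    size_gb = 2.0  # e.g. a 2 GB chunk of footage

    host_path = [(3.0, 0.001),   # NVMe -> system RAM
                 (12.0, 0.001)]  # system RAM -> GPU over PCIe
    on_card   = [(3.0, 0.0001)]  # on-card SSD -> GPU memory, GPU-coordinated

    print(f"host-managed path: {transfer_time(size_gb, host_path) * 1000:.0f} ms")
    print(f"on-card SSD path:  {transfer_time(size_gb, on_card) * 1000:.0f} ms")

Even with generous assumptions for the host path, the on-card route skips an entire copy and most of the coordination cost.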

I made that comment before I watched the above linked video of the product working - gotta say, it's really impressive. Nvidia doesn't (yet) have an answer to this and it's a real benefit for professional editors.

>Nvidia doesn't (yet) have an answer to this
Unless they pull another GTX 480 "release", they won't have anything like this for a while; I don't think anyone expected something like this to come along.

No doubt we all expected massive amounts of HBM and HBM2 on workstation cards this year, but this is certainly something different. And very interesting.

Can we expect GPUs with NVDIMM slots in the future? Seems like a natural evolution from NAND.
Heck, even Micron's XPoint.

Just to add further: combine this with the technology implemented in the FirePro S7100X (hardware virtualization) and you could possibly get something really awesome in the pipeline.

Hardly useful in a professional environment. A UPS would be far more convenient.

I earn money using Rhino 3D in the shipbuilding industry and still I can't tell if this program relies on GPU or CPU more. We have a workstation with a 12-core Xeon CPU + Quadro K2000 and another machine with an i7 CPU and a GTX 580 GPU...

Both suck equally when I open a huge (2+ GB) drawing file.

Density is all the rage now: put as much shit as possible into every inch and cluster it.
CPUs with onboard/on-package memory aren't far off.
We're finally entering an age of integration, since we can no longer depend on pure performance increases every year from contemporary ICs.

>I can't tell if this program relies on GPU or CPU more.
Look at the specification sheets on their website; no doubt there's something there regarding hardware acceleration.

>CPUs with onboard/on-package memory aren't far off.
We're already there with GPUs, so why not.

I imagine CPUs with onboard memory would not be the primary source of system memory, but would be restricted to feeding the iGPU if anything.

No, the use is pretty obvious for latency-sensitive workloads and branchy code that can't fit in the primary caches.
It would act as an L4 with far lower latency than system memory.

>I imagine CPUs with onboard memory would not be the primary source of system memory, but would be restricted to feeding the iGPU if anything.
If you look at weird one-off SKUs like the i7-5775C, you'll notice it's already happening. What's a bit funny is that if you disable the iGPU and run your displays off a dGPU instead, the CPU gets a significant boost by leveraging the eDRAM as an L4 cache.

Keep a resource monitor open when you're opening those large files. Is your CPU slammed while your drives are not at throughput capacity? Are your drives maxing out while everything else jumps between low and high utilization?

The next question I have: does the machine use SSDs? If not, that's your problem right there - in the best case a 2 GB file takes 11-15 seconds just to read from a hard drive, and that's for a single contiguous file rather than a packaged project, ignoring any processing the system then has to do.
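
Back-of-the-envelope for that, assuming typical sustained sequential read speeds (round numbers, nothing measured):

    # Rough read time for a 2 GB contiguous file at assumed sequential throughputs.
    size_mb = 2048
    for device, mb_per_s in [("7200rpm HDD", 150), ("SATA SSD", 500), ("NVMe SSD", 2500)]:
        print(f"{device}: ~{size_mb / mb_per_s:.1f} s")  # the HDD lands around 13-14 s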

Not him but what I want is 4-8GB of on-package HBM or the likes so you could conceivably load full programs into a memory space as close to the processor as feasible.

HBM isn't gonna cut it for CPU jobs; its latency is still high compared to regular DDR.

Best would be SRAM, but fuck it's 7 times less dense than DRAM and way more expensive.
And I don't think those 1T or 2T SRAM cells ever worked out.

>Not him but what I want is 4-8GB of on-package HBM or the likes so you could conceivably load full programs into a memory space as close to the processor as feasible.

Why stop there? Given the current use cases for such a thing (i.e. servers and workstations), if you could sort out the latency, just have a custom design with enormous amounts of HBM doubling as system RAM.

>currytech

>if you could sort out the latency

Considering that AMD and SK Hynix managed to get HBM rolling to begin with, I imagine it wouldn't be too difficult to create a variant with notably lower latency.

In some ways this might be a good idea.

Most graphics cards are sitting on 16 PCIe lanes and can't even saturate them. Give four of those lanes to an onboard SSD and you have a ~3 GB/s SSD in a single PCIe slot without harming graphics performance.
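
That ~3 GB/s figure checks out if you assume PCIe 3.0 (8 GT/s per lane with 128b/130b encoding) and some protocol overhead on top; the 85% efficiency factor below is an assumption:

    # PCIe 3.0 throughput for a given lane count.
    def pcie3_gb_per_s(lanes, protocol_efficiency=0.85):
        raw = lanes * 8.0 * (128.0 / 130.0) / 8.0  # GB/s before packet overhead
        return raw, raw * protocol_efficiency

    raw, usable = pcie3_gb_per_s(4)
    print(f"x4: {raw:.2f} GB/s raw, ~{usable:.1f} GB/s usable")  # ~3.94 raw, ~3.3 usable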

Well, it might not be possible; the design could inherently make it not worth the effort to get latency low enough, as HBM targets use cases where capacity and bandwidth are a higher priority than latency.

Exactly.
For the longest time now, GPUs haven't even been able to max out PCIe 2.0, let alone 3.0. It will certainly be interesting to see how this carries over to compute workloads, as opposed to realtime video. While the demo was still an impressive display of this GPU's potential, I think we're all very excited to see how this handles something like Premiere Pro.

If we're talking compute, the GPUs need all the lanes they can get.
So I assume this SSD/GPU hybrid from AMD has a PLX chip.

Since when did GDC and SIGGRAPH become junkets?

>"Workstation" card
>Doesn't support CUDA

Into the loo it goes.

For workstations, yeah, I guess. They would see the benefit of being able to directly access large amounts of fast storage for compute, but I could definitely see applications in the desktop space too.

Hell, just stick an m.2 connector on the back of the card even.

>Hell, just stick an m.2 connector on the back of the card even.
Without knowing what kind of connection or interlink is being used here, I think we can assume that a typical m.2 socket would result in higher latency and lower transfer speeds. Would be damned nice to have a full spec sheet, but paper launches aren't AMD's thing, it seems.

Would it technically be possible just to route some of the PCIe lanes directly to the m.2 (controller)? Or would it need to go through the GPU (and thus the AMD driver) to be picked up by the system?

If I recall correctly, the main x16 PCIe slot's lanes talk directly to the CPU, so it might not be possible to have literally two separate devices on a single connector, even if it would be possible via the MCH.

Would probably require some driver black magic, but the question in the end is: would it result in better performance? Probably not, would be my best guess.

Pretty clever to color them blue, that will surely help keep heat under control

If it got direct PCIe lanes, then the performance should be identical to an m.2 drive mounted on the motherboard.
In terms of applications or benefits, people could add (full-speed) m.2 to boards that don't already have it (Haswell boards that aren't obsolete yet, lower-end mITX boards, etc.), or just add another m.2 connector.

So this was primarily made to free up the bus?

As well as CPU overhead.

And by reducing the load on the CPU, it lets the CPU do more actual work instead of just passing data from storage to the GPU, increasing performance as a whole.

True, the slight advantage in performance and perf/watt Nvidia has should easily be offset by reducing load on different parts of the chain.

It's probably colored that way so Sup Forums kids don't confuse it with the gamer line of gpus.

>are those single slot cards?
>those are either going to be really underpowered, or run super fucking hot.
>actually, knowing AMD, it will probably be both.

>single slot GTX 970

The workstations these cards will be going in have a ton of thermal sensors and fans

Single slot? These go in blade servers don't they?

Exactly. And this is a significant reduction in load too.

The 8K video in the demo is being processed at 4 GB/s; now imagine if the CPU had to pull all of that from HDDs or SSDs located elsewhere in the system. We're looking at a lot of reduced overhead right here.
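
For reference, uncompressed 8K at 30 fps lands right around that figure, assuming 4 bytes per pixel (the exact pixel format in the demo is an assumption here):

    # Uncompressed 8K UHD data rate at an assumed 4 bytes per pixel, 30 fps.
    width, height, bytes_per_pixel, fps = 7680, 4320, 4, 30
    gb_per_s = width * height * bytes_per_pixel * fps / 1e9
    print(f"~{gb_per_s:.1f} GB/s")  # ~4.0 GB/s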

Or 5 or 6 into a render station.

they are underclocked

Neat, finally something new in the professional market.
It was pretty sad when the largest innovations there over the last 3 years were new cache snoop filters from Intel.

These TDPs look weird for the two Polaris 10 chips.

Especially look at the FLOPS/watt ratios: just 1 TFLOPS less but half the TDP?

AMD hasn't released any specs, where are you getting this from?

currytech no doubt.

Nothing there adds up, and that's before you even take into account that AMD hasn't released the specs. They just make shit up as they go along and hope they don't get called out on it.

fun fact: a flamethrower doesn't actually burn anything except gas

They already pulled their Tesla/Quadro card and Titan card,
so probably not this year, but as the military guys say - if it's possible somewhere, it's possible anywhere.
But this kind of fuckery with firmware, CPU communication, PCIe lane configuration, and the controller is very specific, and the drivers come at a cost; AMD has obviously been developing this thing for a while.

Depends on how they measure TDP; it gets odd on some workstation and server cards because those cards aren't expected to provide all of their own cooling, unlike a consumer GPU.

Flamethrowers as weapons use gas to propel a burning liquid, kinda like napalm, that sticks to whatever it lands on.

Both machines have SSDs. Both have 32 GB of RAM.

What I'm trying to say is that the Xeon + Quadro workstation sucks exactly as much as the i7 + GTX 580 desktop PC in Rhino 3D.

Great product, terrible pricing.

Can you swap them to Xeon + GTX 580 and i7 + Quadro and compare again?

*school bell rings*
*crowd boos*
*looks into camera*

You're probably wondering how I got here.

You think the specs are currytech's invention?

That's not the point. The SSD is used by the GPU, not the host system.

So what's the ETA on Seeking Alpha having a post saying how this is a terrible idea and AMD is doomed?

Neat, but this is very dependent on SSD controller speeds and NAND; if those stagnate, AMD will be fucked.
The next gen of this product seems to scream PCIe 4.0 too.

SA has been in the proverbial shitter for the last few years.

That's because Seeking Alpha is an attempt at stock manipulation.

Pretty sure it won't be on P10 or P11... AMD will bring Tahiti chips, since they're already very mature for this job.

youtube.com/watch?v=Lc6Jtupbtic

for your education

He actually mentions in the video that they used 380 (psi?) in order to get napalm out.

I am being educated, but I'm not sure I am being proven wrong.

Have you not watched the fucking livestream?

hardocp.com/article/2016/05/27/from_ati_to_amd_back_journey_in_futility/

>Where the plot thickens is when you look at the Koduri’s unwavering ambition. Koduri’s ultimate goal is to separate the Radeon Technologies Group from its corporate parent at all costs with the delusion that RTG will be more competitive with NVIDIA and become a possible acquisition target for Koduri and his band of mutineers to cash in when it's sold. While Koduri is known to have a strong desire to do this by forging a new relationship with Apple on custom parts (no surprise there) for Macbooks, the real focus is on trying to become the GPU technology supplier of choice to none other than Intel. While this was speculated some time ago I can tell you with certainty that a deal is in the works with Intel and Koduri and his team of marauders working overtime to get the deal pulled into port ASAP. The Polaris 10/11 launch, and all of its problems, are set to become a future problem of Intel’s in what RTG believes will be a lucrative agreement that will allow Koduri and his men to slash the lines from Lisa Su and the rest of AMD.

>Intel uses blue colour

Pajeet not even trying to hide it anymore

Kyle was right all along

You never get tired, do you?

Still mad Kyle was right all along?

>In the simplest terms AMD has created a product that runs hotter and slower than its competition's new architecture by a potentially significant margin.

ITT: gaymers who think a workstation-grade card is the same as a consumer-level card

Are these intended at least partially for video editing? Any chance OpenCL support for more NLEs might expand?

Did YOU watch the livestream?

>BETTERED
It's blue now? Who knows.
oh wait

>they literally show a project in real time gain a boost from 17 Hz to 92 Hz with the card
>can't possibly be real
>why not just use ram/normal ssd?
Is Sup Forums populated by retards?

apologize