Hey Radeonfags, what's it like being completely excluded from the machine learning scene?
CNTK? Nvidia CUDA required.
Tensorflow? Same.
OpenAI? Same.
You are so irredeemably BTFO. GPUs aren't just for games anymore cuckfags. Stop living homosexually like it's the year 2000. Get with the times or just declare bankruptcy and let the Intel geniuses take over the dumpster diver GPU market.
Minibatch[ 1- 128]: loss = 0.582338 * 3200;
Minibatch[ 129- 256]: loss = 0.312839 * 3200;
Minibatch[ 257- 384]: loss = 0.291943 * 3200;
Minibatch[ 385- 512]: loss = 0.266468 * 3200;
Minibatch[ 513- 640]: loss = 0.249986 * 3200;
Minibatch[ 641- 768]: loss = 0.232742 * 3200;
Minibatch[ 769- 896]: loss = 0.230655 * 3200;
Minibatch[ 897-1024]: loss = 0.215014 * 3200;
Finished Epoch[1 of 300]: loss = 0.297748 * 25600 4.782s (5354.0 samples per second); error rate on an unseen minibatch 0.040000
Didn't realize Ryzen affected the Shill-Stations in India so badly, Intel must be putting you guys on overtime.
Andrew Gray
I'd like to go ahead and acknowledge that this is bait.
However, I might as well bite.
AMD's new Vega lineup was showcased at CES this year, and they demonstrated pre-optimization performance somewhere between a 1080 and a Titan X.
Secondly, Intel is running out of ideas. Each new generation is only a tiny performance increase over the last. I'm still sticking with them for now, but AMD is lined up to eat their lunch with their latest lineup. Of course, it's all speculation and nobody knows for certain what will happen, but AMD has a few more tricks up its sleeve.
Next, as far as the ML scene goes, you could always just use, you know, CPUs. You aren't required to use CUDA acceleration, at least for TensorFlow; not sure about the others.
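For what it's worth, here's a minimal sketch of what I mean, assuming the old TensorFlow 1.x-style API; pinning the ops to /cpu:0 means no CUDA device is needed at all:

import tensorflow as tf

# Build a small graph and force every op onto the CPU.
with tf.device('/cpu:0'):
    a = tf.random_normal([1024, 1024])
    b = tf.random_normal([1024, 1024])
    c = tf.matmul(a, b)

# log_device_placement prints where each op actually ran.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c)[0, 0])

Slow compared to a GPU, sure, but it runs.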
The only reason Nvidia and Intel are beating up AMD (and have been for nearly 10 years) is that when AMD actually was ahead, Intel and Nvidia both did massive licensing deals to prevent AMD from gaining any popularity.
Dominic Miller
>Next, as far as the ML scene, you could always just use, you know, CPUs.
So you acknowledge our superiority.
If CPUs were good enough we wouldn't use GPUs, but I don't care what you think because you're a shit smeared AMD neckbeard manbaby gaymer faggot cuck whose mommie bought him a Radeon to play child porn games with.
Kill yourself.
Kayden Clark
t. scared of vega
Bentley Harris
>bitch ass
Remember the Riva TNT?
Cameron Gomez
Kys yourself jewtel + nvidiot shill.
Jaxon Williams
I don't care about price you faggot cuck. Maybe when you get a real job you'll realize that cutting edge technologies are worth an extra 100 bucks and no hike in your fire insurance.
Jordan Young
>this triggers the poojeet gaymur baby
Owen Myers
>cutting edge
See the pic related again
Elijah Phillips
Didn't I see some shit that said new AMD cards will be able to run CUDA code?
Bro, AMD is rising and I am making bank investing in it. Come together and make money too, m8.
Accept it and it will hurt less.. ;>)
Juan Scott
I'm not OP, but the point was that all the ML frameworks are built around CUDA, not OpenCL, because CUDA is nicer to write in.
In other words, even if AMD's hardware is comparable, it still doesn't have the same software support. Of course it doesn't help that Nvidia has been going balls deep into ML applications for years, arguably more so than gaming.
Even if AMD starts supporting CUDA it will still lag behind Nvidia.
ML on a CPU will always be indefensibly slow. The advantage of using a GPU is that many of the operations lend themselves to parallelization, and GPUs can deal with large amounts of memory at once (provided said card has plenty of RAM and bandwidth).
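Rough sketch of the gap, assuming a PyTorch build with CUDA (purely illustrative, your hardware will vary); a big matrix multiply is exactly the kind of embarrassingly parallel op GPUs are built for:

import time
import torch

# Same 4096x4096 multiply on the CPU and, if available, the GPU.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.time()
c_cpu = torch.matmul(a, b)
print("CPU:", time.time() - t0, "s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()        # wait for the host-to-device copies
    t0 = time.time()
    c_gpu = torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()        # wait for the kernel before stopping the clock
    print("GPU:", time.time() - t0, "s")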
Kevin Fisher
explain why any of this matters to a home user.
To me this is like saying Apple is fucked because most servers run on windows and then posting "Hey MACFAGS, what's it like being completely excluded from the server scene?"
KYS
Ian Lee
I dunno, how did it feel to be left out of buttcoin mining?
Easton Sullivan
I don't know what any of that means.
Does it have 60 frames in Crysis?
Hudson Ward
>what's it like being completely excluded from the machine learning scene?
holy shit who cares?
AMD owns the console space, guess which space is more commercially relevant lol?
Michael Powell
ML is, actually.
Andrew Ramirez
commercially? More than cucksoles?
no
Colton Perry
>machine learning
It is what it ever was: a playground for PhDs creating research for the round storage device.
Grayson Martin
You actually think there are appreciable profit margins in consoles?
Hudson Williams
When will AMDfags ever (machine) learn? Kek
Levi Hughes
I don't think AMD's making all those processors for free
Grayson Rogers
It is. You just have no idea how widespread it is and how much money goes into it.
Ryder Sanders
more commercial money in a cutting edge developing theory than there is in consoles?
ProTip: no
Juan Harris
There are a lot of algorithms involved in machine learning. Even within just the subset of deep neural networks there are lots of different types and more constantly being developed/discovered. That said, there's a great wealth of research that is already well developed and being applied fucking everywhere.
In a timespan of a day you probably interact with more machine learning algorithms than you can count.
Michael Parker
>holy shit who cares?
Companies who want to automate nearly everything (for example). So, all of them.
Margins are extremely high for ML because it's so sought after and there are very few players in the market.
Nolan Davis
>most servers run on windows
Isaiah Stewart
If half of what you said is true then that's pretty pathetic, since CUDA is nonfree garbage that only works on Nvidia.
It would be pretty stupid to depend on such a low-quality company as Nvidia.
Wyatt Gomez
Also, good luck with FPGAs if all your garbage libs depend on CUDAids.
Jeremiah Collins
You're not wrong, but CUDA owns the market. Their proprietary Linux drivers work better than AMD's, and CUDA is nicer to work in than other comparable languages. Not to mention that Nvidia is doing a TON for machine learning while AMD is sitting around with its thumb up its ass. It's really not surprising that almost all frameworks out there are designed around Nvidia; it's like AMD isn't even trying.
FPGAs are slow.
Jayden Gonzalez
>Tensorflow? Same.
>what are TPUs
Kayden Harris
>what is Tesla
>what is Netflix
>what is Google
>what is Microsoft
>what is Apple
can go on forever tbhfäm
Jack Murphy
Swap "ML frameworks" for "DL frameworks" just to be safe
Juan Bennett
It matters only indirectly to a home user. Doesn't take away from the fact AMD is getting BTFO'd though
Tyler Murphy
It's one market out of many that AMD hasn't entered yet; AMD is in plenty of markets Nvidia has no relevance in as well.
Big deal.
Josiah Green
The way I think about it is:
It's a profitable market that AMD could potentially have had a foothold in, but they basically ceded all ground to Nvidia. So a huge wasted opportunity.
Elijah James
Yeah, I agree. But AMD is stretched too thin already; they should focus on markets where they know they can do something, since they don't have money to spend willy-nilly like Intel and Nvidia.
Thomas Gutierrez
Yet they're still trying, which is odd.
I guess it's one of those things you have to put resources into just so the competitor doesn't actually own the whole market, even if your own product sucks.
Kayden Russell
>AMD is in plenty of markets Nvidia has no relevance
Such as (excluding the CPU market of course)? Asking out of curiosity.
Ian Perez
Their entire CPU side, of course, most notably embedded, where they really shine, at least until Zen wins some server market back. Then there are miscellaneous markets like DRAM and SSDs; they're rebrands, but apparently still a source of profit. All GPU markets are covered by both of them; machine learning is the newest hip one and Nvidia got to it first. Before that it was whatever market the Radeon SSG was addressing, like professional graphics workloads that have to decompress CAD files of hundreds of gigabytes or larger. Nvidia still hasn't entered this market, and I don't even know if you can call it a market, but the SSG concept itself is a pretty unique product with no current competitor.
Isaac Jenkins
>SSG concept itself is a pretty unique product with no current competitor.
It indeed is. I hope OpenCL replaces CUDA; vendor-specific APIs should have been a thing of the past years ago.
Luis Bennett
Is it weird that these ML chips are called GPUs? And the non-graphical ones are called GPGPUs?
Like, even the TPU name makes more sense than GPU when applied to ML.
Jace Smith
GPGPU is pure numerical calculation though, ML is not. That's why GPGPUs are done with Teslas or the like with no video output.
Gavin Evans
AMD BTFO
Anthony Martinez
Who is this cuckbird
Jose Scott
pol's boyfriend
Christian Anderson
>GPGPU is pure numerical calculation though, ML is not.
Could you clarify? My understanding was that GPUs used in DL are already fairly "general-purpose", but it seems like GPGPUs go one step further?
Evan King
>tfw nvidia fags won't get freesync monitors for cheap
Easton Martin
I'm not going to read through the chain of posts, but GPU = Graphics Processing Unit and GPGPU = General-Purpose Graphics Processing Unit.
Modern GPUs are all GPGPUs, because you can run your own software on them (CUDA, OpenCL, etc.). It has nothing to do with video output.
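If anyone wants to see what "run your own software on them" looks like without CUDA, here's a minimal vendor-neutral sketch, assuming PyOpenCL and a working OpenCL driver (works on AMD or Nvidia cards alike):

import numpy as np
import pyopencl as cl

a = np.random.rand(1000000).astype(np.float32)
b = np.random.rand(1000000).astype(np.float32)

ctx = cl.create_some_context()      # grabs whatever OpenCL device is available
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel itself: one work-item per element, no graphics anywhere.
prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)

Same idea as CUDA, just not locked to one vendor.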
Brayden Cox
Sup, structural engineer here. If I wrote an algorithm to solve finite-element-method problems on a GPGPU, would it be faster or slower than using the CPU?
Lincoln Morales
Ah, that's where the confusion comes in, I guess. I know that CUDA and so on can already be run on regular GPUs. I just assumed that GPGPUs referred to entirely separate devices.
Kevin Flores
Not at all. In fact, the term "General Purpose GPU" is actually considered a bit archaic and has fallen out of mainstream use. I haven't seen it used in ACM or IEEE publications for at least 10 years.
Aaron Hall
Yeah, as far as I'm concerned, GPU nowadays means "Fast Matrix Multiplication (and other operations) box" rather than Graphics Processing Unit. At least in the context of non-gaming/media.
Zachary White
"cheap" is a great word to describe AMD
Leo King
Yes-yes, good goy, now buy Titan X.
Jason Martin
Yeah, same. I have a bit more of an abstract point of view than you, I guess, and see them as "moar layers" Deep Learning©®™ boxes.
Ryan Hernandez
relevant
Carter Campbell
>nvidia bla bla bla bla ml bla bla bla bla
Wait until Qualcomm starts to become a threat to Intel and Nvidia; maybe then you'll come crawling back to AMD.
If you can parallelize it, it should be much faster. Finite-element methods come down to big matrix operations as far as I know, and GPUs are good at those.
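To make that concrete, here's a toy sketch of where the time actually goes, assuming numpy/scipy (hypothetical 1D Poisson problem, -u'' = 1 with u(0)=u(1)=0): the finite-element part is just assembling a sparse matrix, and the expensive step is the linear solve, which is the kind of linear algebra GPU libraries accelerate.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 1000                     # number of interior nodes
h = 1.0 / (n + 1)            # element size

# Standard tridiagonal stiffness matrix for linear elements, plus load vector.
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h
f = np.full(n, h)

u = spsolve(K.tocsr(), f)    # this solve dominates runtime on big meshes
print(u[n // 2])             # ~0.125, matching the exact solution u(x) = x(1-x)/2

So the honest answer is: it depends on whether your solver's inner loop is big sparse/dense matrix work (GPU-friendly) or lots of branchy bookkeeping (CPU-friendly).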
Austin Hall
Don't forget Volta, which promises a 50-75% performance boost over Pascal.
Henry Jenkins
Cool 2018 Onii-chan.
Adam Torres
Bitch, Volta will be out before AMD even pre-paper-non-launches Vega; they are dragging that shit out slower than a sloth with concrete feet.
Caleb Rivera
You get a Pascal refresh this year and maybe a single Volta HPC SKU by the end of the year.