NVIDIA GeForce driver deployment in datacenters is forbidden now

>NVIDIA GeForce driver deployment in datacenters is forbidden now
nvidia.com/content/DriverDownload-March2009/licence.php?lang=us&type=GeForce
THANK YOU BASED NVIDIA. AMD BTFO & BANKRUPT HAHAHAHA

REDUCED MARKET WINS!

how the fuck is this a good thing

we need competition

Brand cheerleaders are too irreparably stupid to understand how the lack of a competitive market affects them as customers.

I'm sorry, but what are the implications of blocking datacenter usage beyond blockchain crypto?

Haha AMD BTFO and bankrupt now that they have no competition for bitcoin mining

Read the fucking link, moron, they still allow blockchain processes

I'm a total brainlet, but doesn't this mean Nvidia GeForce software can't be used in datacentres? How does this help Nvidia?

Nobody uses AMD in a professional environment because AMD is absolute garbage. Nvidia can get away with this because who else are they going to use? AYYMD? HAHAHAHA. They're literally going bankrupt and IT'S FUCKING GLORIOUS. Can't wait for that piece of shit company to go belly up. It'll be a great day and a massive celebration. No longer will we be tormented by that pile of shit.

>Own dataserver
>Install GeForce drivers
So, what now? Wait for the Nvidia SWAT team to kick my door in? Seriously, how the fuck can they stop anyone from doing it anyway?

All neural network training is done on GPUs.

How is NVIDIA even going to enforce the rule?

When the built-in telemetry sends info about it to Nvidia's servers, you can expect an invitation to court. But hey, at least it's not AMD with their pesky open-source driver and sane EULA.

if nvidia knows you exist, they will go after you. you can't conceal your hardware from them. also, they do have telemetry in their drivers.

Violating a EULA is not a crime, and even then, how would the GPU be able to determine that it is in a datacenter?

Does anyone even use Nvidia for parallel computation, given their dogshit price-to-performance ratio? That's not to mention they usually have lower raw GPU horsepower.

By pinging drivers on the same network. And you literally cannot prove that Huang's binary blobs aren't doing it.

I replaced my Nvidia with a Ryzen. Never looked back once!

So the number of computers on a local network is a limiting factor? That's a pretty funny way to do it, especially considering many datacenters isolate clusters of computers into their own networks.

nvidia does have telemetry in their drivers. pretty easy for them to scan your computer and report back what you have. if you have a single data center with, say, 100 1070s and they all report back to nvidia from the same location, it'd be pretty easy to figure out. if they can't collect the telemetry because the center is not connected to the net, they can still find out: by asking the data center about its hardware, from public information about that data center (if it's, say, a university, which has to disclose what it has), from a pissed-off whistleblower, etc.
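The detection scheme that post describes can be sketched as a toy aggregation. To be clear, this is purely hypothetical: Nvidia's actual telemetry format is not public, and the report fields and the 32-card threshold here are invented for illustration.

```python
from collections import defaultdict

def flag_datacenters(reports, threshold=32):
    """Group hypothetical GPU telemetry reports by reporting location and
    flag any site whose card count looks like a datacenter, not a home rig.

    `reports` is a list of (location, gpu_model) pairs — an invented schema.
    """
    counts = defaultdict(int)
    for location, model in reports:
        counts[location] += 1
    return {loc for loc, n in counts.items() if n >= threshold}

# 100 GTX 1070s phoning home from one address vs. one lone home user:
reports = [("203.0.113.7", "GTX 1070")] * 100 + [("198.51.100.2", "GTX 1060")]
print(flag_datacenters(reports))  # only the 100-card site is flagged
```

The point of the sketch is just that aggregation by location is trivial once the reports exist; the hard part (and the open question in the thread) is whether the driver actually sends such reports.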

I still don't see what the accusation in court would be.

Web tracker scripts can figure out it's you even if you IP-hop and clear browser data. You think that with hardware that expensive acting as a beacon, they couldn't do it?

It's just legal boilerplate to cover their ass

This has nothing at all to do with counting how many computers are on a local network.

>they can allegedly do one thing, so obviously they can do this entirely different thing

Thanks user!

When there is a possibility that people may fuck you over and there is no evidence to the contrary, you do not assume that they would not.

You absolutely do. You have to cross a street. But wait, a guy in a car can actually kill you as you're crossing the street! Better not make those assumptions and never cross any roads again.

>this thread
>Nvidia does shit
It's all AMD's fault

based nvidia blocking cryptominers

>No Datacenter Deployment. The SOFTWARE is not licensed for datacenter deployment, except that blockchain processing in a datacenter is permitted.
>except that blockchain processing in a datacenter is permitted.

You must be one of those idiots who jumps out of the bushes and jaywalks without looking. English doesn't have enough words to describe how much I despise you, and whenever I clip your feet I feel no remorse.

Vega 64 has a quarter of the Titan V's computational power at a fifth of its price, though. And literally all of the Chinese tech giants are switching to AMD hardware
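Taking that post's figures at face value (a quarter of the compute at a fifth of the price — claims from the thread, not verified benchmarks), the price-to-performance arithmetic works out like this:

```python
# Hypothetical ratios from the post, normalized so the Titan V is 1.0
# in both compute and price. These are the thread's claims, not data.
titan_perf, titan_price = 1.0, 1.0
vega_perf, vega_price = titan_perf / 4, titan_price / 5

titan_perf_per_dollar = titan_perf / titan_price
vega_perf_per_dollar = vega_perf / vega_price

# 0.25 / 0.2 = 1.25: on these numbers the Vega 64 delivers about 25%
# more compute per dollar despite far lower absolute throughput.
print(vega_perf_per_dollar / titan_perf_per_dollar)
```

Which is exactly why the thread immediately pivots to arguing about MSRP versus street price: the ratio flips if the cheaper card is the one being scalped.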

Oh wait, yes I do: my remorse is that coming back to crush your other foot would be an actual crime.

When an answer is not an answer at the same time.

>Sex is rape. But consensual sex is not rape.

Are you going by MSRP? Lmao

you don't really know much about technology, eh

they can't, unless they can explain to a judge how you managed to pay $3k for a card and still not be the owner of it

He is right, though: popular machine-learning computation libraries depend on CUDA.

You do realize data centers don't buy retail, right? They always pay MSRP, lmao

yes, that is about to change drastically considering the wins AMD has in China, at Google, and with Microsoft..

remember when the Instinct cards were launched? back in 2016 Lisa said that within two years they expected full support from the big companies. guess she knew better

fuck competition. you can't keep backing a disaster of a company like RTG when they keep shitting out trash products again and again. it would literally be more beneficial for the market if RTG just died and faded away and someone like Samsung or another billion-dollar company came and took their spot while actually making competitive products.

Well, if that is about to change, then right now it still is the way it is, which makes the original all-caps "haha" post correct.

>AMDumb!
>Nvidia BEST!

As the other user said, neural network work is done on GPUs. The idea behind this is to force those users onto specialized NN hardware that costs several thousand dollars per card.

By limiting warranty and possibly design NN APIs in future that block consumer GPUs.

ah yes, they're going to cut these data centres a sweet deal instead of having them sell out to other miners and retailers at nearly 200% of MSRP

>and possibly design NN APIs in future that block consumer GPUs.
fuck no please no

They'd have to cancel CUDA altogether, because CUDA is a general computing platform that other libraries use to do machine learning. I doubt that is a possibility.

this is literally the same shit they pulled back when they made chipsets,
trying to force people to use their cards with their chipsets..
didn't end well for them

And yet I don't see the companies raising the price, only scalper retailers.

p a t e n t s

Because they keep the MSRP low to look good on paper and in the press. Just like how Intel CPUs were """available""" earlier this year

>what is a buyout

Yes. Their GPUs aren't worse for DP compute, they're just more expensive for it. However, the software platform support is better — or rather, Nvidia shills it harder. If you're a company, you either get clunky, smelly OpenCL crap or shiny proprietary CUDA, polished to a mirror finish, with a $4000 Quadro or Tesla or whatever.

You're an underage idiot if you think some miner in his parents' basement or some fat gamer matters to the AIB partners or AMD. They get all of their profit from data centers, tech companies, and the like. That's why Intel gave a big fuck-you to CPU overclockers, why Nvidia didn't even care about the 3.5 GB scandal, and why AMD didn't care about the 560 debacle. None of it matters as long as they have their contracts with the large buyers. A guy buying five V64s at $1000 a piece is much less profitable than a company buying 100 V64s at retail price

Yeah, gamers don't matter.

Enjoy getting screwed by Nvidia when they can sell low-end GPUs at high-end prices, idiot.

if you have a datacenter, you don't buy your stuff from Micro Center but directly from Nvidia or close partners (it's quite hard to buy 1000 GPUs at retail).

Said partners will enforce nvidia policies.

>we need competition
>buys the leading brand only

>Vega 64 has 1/4th
Are you comparing Volta's meme tensor-core ML performance? Volta's raw compute power in generic compute tasks is barely better than Vega's.

Actually, Ngreedia is still making most of its profits from gaymers.