What went wrong?

"Compiler will fix it" was and always will be a red flag for any CPU uArchitecture ever.

They broke compatibility with x86, and AMD came out with a 64-bit microarchitecture that preserved compatibility.

They distanced themselves from the only product that made Intel who they are today, the 386. If they had stuck to their guns, as they still do today, they wouldn't have had a flop on their hands.

Actually Intel had some decent RISCs back then.
It's just the whole idea of EPIC was bad.

They forgot the letter t

beating x86 technology isn't that hard. beating the pc industry and business is very hard.

>beating x86 technology isn't that hard.
It is.
x86 got pretty flexible through the ages.

Fucking everything went wrong.
It was several years late.
x86 emulation was slow garbage.
The smart compilers on which it relied turned out to be not-so-smart.
Intel's aggressive market segmentation meant it was only used in enterprise big iron and out of reach of casual programmers - meaning very little adoption.
AMD64 came a year later and completely blew it apart.

It would have been an OK arch in a vacuum, but here it was competing with x86 at low-end, with POWER and SPARC at high-end, and with AMD64 everywhere.

the real question is why HP, DEC, and SGI all killed off their RISC platforms in expectation of Itanium taking over the world, before the first chip even rolled off the fab line

"Compiler will fix it" is actually happening with POWER though. This is mostly because most shit written today is written in high level languages, so there's way more room to optimize in.

Intel has some wonderful ways to persuade OEMs.

based jim keller and athlon 64

it was just bullshit to get them out of their x86 licensing obligations to AMD anyway, and they ended up fucking themselves over lel.

You can tell this is the case by looking at where they ended up with Poulson. EPIC was supposed to be static and in order. Poulson, though, broke up bundles into multiple constituent instructions. It even did some limited out-of-order execution. To achieve high performance, they had to pursue techniques opposite of the ideas behind Itanium.

One guy: Rick Belluzzo.
He worked at HP and persuaded them to abandon HP-UX and PA-RISC for Windows and Itanium.
Then he went to work at SGI, where he decided to kill off IRIX and MIPS and focus on Windows and Itanium.
And then he got a nice warm chair at Microsoft. Go figure.

en.wikipedia.org/wiki/Richard_Belluzzo

Because a Giant came to play in a Children's Playpen, so they left and found somewhere else to play until he got bored.

Intel has never had BAD RISCs. My point was that they tried to force a change on a market they created, which would have alienated and fractured it.

POWER is irrelevant. x86 and ARM are the only architectures that matter.

(maybe RISC-V in a few years, but I'm not hopeful)

>POWER is irrelevant.
You've never worked in enterprise, have you?

POWER targets a silly small niche.
Most deployments are 1S/2S x86 machines.

Do you live under a rock?

"silly small niche" such as SAP, Oracle, and lots of virtual machines on one host

Meme, claimed by SPARC, no one cares.

The enterprise where I work is x86-64 from top to bottom. Of course we're young and don't have much in the way of legacy systems to deal with, but sometimes we run into other businesses still using Solaris and on occasion IBM z-series mainframes running COBOL. (Ewww.)

Haven't seen POWER in the wild since Apple went to Intel a decade ago.

>claimed by SPARC

Endless hype and endless delays until they finally released a chip that was incredibly disappointing when compared to already disappointing 64-bit "high-end" architectures, most of which were getting matched or blown the fuck out in most desktop use cases by PeeCees that finally came of age. The x86 compatibility that Intel harped on was fucking dreadful and useless, and the systems built around it were mostly commodity ganoo shitnux trash that appalled SysV customers who were asked the same bullshit price for something that ran none of their software, had none of the features and was probably slower anyway.
It didn't entirely fail, though. Once they got their dick up on later designs it found a niche as a supercomputer chip, had a run in database servers as well.
That depends on what metric you're measuring. Nobody gives a fuck about a 5% performance gain on something that's already fast when it carries an extra $10,000 on the price tag and comes from some shitty vendor that's going to do everything they can to lock you into their platform until they die a righteous death and fuck you over.

Unless you're in a legacy shop that IBM has vendor-locked, at most you'll see a rack of those shitheaps hiding in the corner of a datacenter working on some specific task, usually databases. The only thing keeping them from going the way of SPARC is that big fucking IBM logo you and the rest of Sup Forums who think it's some upcoming x86 killer are jerking off to.
Zs aren't just about running legacy code nobody wants to fix, they encrypt the fuck out of everything, run thousands of Linux VMs on some of the fastest CPUs money can buy and do it for 20 years straight without a reboot. There's a reason pretty much every financial transaction goes through one of those mother fuckers.

VLIW requires a Smart Enough Compiler, which didn't, doesn't, and will not exist. The extra clever things (register rotations etc.) were just more stuff that compilers can't and won't emit. Even JIT runtimes with their infinite profiling couldn't, because most programs simply don't have enough instruction level parallelism to give -- even if the compiler disregards the instruction cache entirely and unrolls all loops four times over. Dynamic scheduling of conventional CISC instruction streams won out very quickly.
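To make the ILP argument concrete, here's a toy list scheduler packing a serial dependency chain into fixed-width bundles. Everything here is illustrative (made-up instruction names, 1-cycle latencies, no IA-64 bundle templates or stop bits), but it shows why a dependence-heavy stream leaves most VLIW slots as nops no matter how clever the compiler is:

```python
def schedule(instrs, deps, width=3):
    """Greedily pack instructions into fixed-width VLIW bundles.

    instrs: instruction names in program order.
    deps:   maps an instruction to the set of instructions it must
            wait for (each assumed to retire in an earlier bundle).
    """
    done, bundles = set(), []
    remaining = list(instrs)
    while remaining:
        bundle = []
        for i in list(remaining):
            # Ready only if all dependencies already retired.
            if deps.get(i, set()) <= done and len(bundle) < width:
                bundle.append(i)
                remaining.remove(i)
        if not bundle:
            raise ValueError("deadlock: cyclic dependencies")
        done.update(bundle)
        # Unfilled slots become explicit nops, wasting issue width.
        bundles.append(bundle + ["nop"] * (width - len(bundle)))
    return bundles

# A fully serial chain: each op depends on the previous one.
bundles = schedule(["a", "b", "c", "d"],
                   {"b": {"a"}, "c": {"b"}, "d": {"c"}})
useful = sum(op != "nop" for b in bundles for op in b)
total = len(bundles) * 3
print(bundles)         # four bundles, two nops each
print(useful / total)  # only 1/3 of the slots do real work
```

With independent instructions the same scheduler fills every slot; the hardware isn't the bottleneck, the program's parallelism is.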

Plus the chip was fuckhuge and so cost a fuckton to make, each. Its support chips weren't tiny either.

But it was sexy enough for Intel, HP, etc. to pour gadzillion buckaroos into it. Twice.

I can't imagine HP had a choice at that point since they had stagnated PA-RISC out the ass.

Itanium was bad, but Opteron being good made it only worse.
Opteron killed Itanium and raped some RISCs along the way, courtesy of HyperTransport.

We mostly use POWER for condensing many virtual machines under one hypervisor because it's more efficient than running a bunch of x86 boxes. POWER also has superior virtualization features as a consequence of being designed for enterprise usage. It's also a good upgrade path from dead-end Itanium and SPARC for things that require it.

Of course it's not a mainstream solution and we run many x86 boxes alongside, but it makes sense once you are big enough and your applications support the architecture.

So much salt. Did POWER touch you in inappropriate places when you were a child?

Only PowerPC is kinda relevant in embedded these days, although ARM is taking over its strongholds like automotive control units

POWER is pretty much dead

>Not liking Cobol
you must hate money a lot

Plausible, but SPARC was also stuck in static superscalar land at that time, and it took Oracle (nee Sun) a trip to hardware multithreading land to set them straight. Didn't hurt them none, except the Rock (was it?) stuff was scarcely better outside of carefully-constructed benchmarks. Mainly Rock ran well when it could execute 32 threads of the same program code due to front end restrictions, making it far less useful for e.g. shared hosting, databases, and the like.

It's funny that Intel managed to fuck the pooch even when its competitors weren't doing so hot. At that time IBM was still stuck in putting microcoded megainstructions into its POWER series, because that's how things are done in mainframe land, and diddling around with non-POSIX pointer formats so just about anything and everything had to be ported over by specialists.

Really I'm surprised that AMD was in any way able to go the conservative route, i.e. AMD64, DDR bus, and memory controller integration. Maybe that was the ambitious thing for 'em; wouldn't know.

Honestly, I just hate all of the retards on here who only heard of it a year ago and now won't shut the fuck up about it because it's le epic x86 killer from the good guy thinkpad company.

I'm not necessarily saying you're one of those tools; it's cool if you like it and it has its niche, but I don't think it's worth the price tag to a lot of places that are just as well served by commodity systems for 95% of what they're going to do with it. I think its significance is pretty overhyped nowadays.

>Really I'm surprised that AMD was in any way able to go the conservative route, i.e. AMD64, DDR bus, and memory controller integration.
They poached Alpha dudes.
Alpha dudes were *mostly* sane.

He wanted to kill VMS until the banks grabbed Intel and Microsoft by the balls and made them port it to Itanium. It was that or hostile takeover.

Itanium was HP's baby from the start, Intel just enabled them. Compaq shitcanned Alpha since they were already an Intel customer anyway. SGI was just fucking incompetent. They were already circling the drain before Itanium finished them off but this faggot loaded the gun and handed it to them.

>en.wikipedia.org/wiki/Richard_Belluzzo

What a fucker. What is it about HP that breeds sociopaths? *cough* Carly

Friendly reminder that Alpha died because Compaq (who bought DEC) believed in Itanium.

In hindsight, Wintel is the scourge of computing.
They managed to kill IRIX/MIPS, HP-UX/PA-RISC, and Tru64/Alpha.

That was before AMD64 though. Alpha guys did the first Athlons and its pipelined FPU, super fast at the time. Won AMD the gigahertz race, too. I'd say the memory controller stuff and AMD64 were all AMD, because the DEC people were naturalized by then.

I'm glad the Alpha died, by the way. Its instruction format was too wide, yet too low level to compete with bigger decoding hardware. As things often go, it was on the wrong side of the soft/hard split, just like Transmeta's stuff and AMD's later VLIW GPUs (like, 2005-2009 ish?).

Cobol is how mediocre programmers earn a little over their income bracket in return for a soul crushing job

Wait, sane technology discussion?
On Sup Forums?
>As things often go, it was on the wrong side of the soft/hard split, just like Transmeta's stuff and AMD's later VLIW GPUs (like, 2005-2009 ish?).
Is Denver VLIW?
Besides, VLIW GPUs weren't THAT bad for graphics.

It's not like RISC shitfests of late 80s-early 90s with fuckton of incompatible platforms with vendor lock-ins everywhere were good.

I never paid attention to AMD's GPU stuff properly. I do know they were fielding a VLIW design at a time when Nvidia had gone for single instructions executed over 16 lanes and 5 stages per GPU core. AMD (was it ATI still at that time?) was late to that party.

The problem was that the VLIW decoder had static assignment between instruction slot and functional unit, supposedly to run muls and adds concurrently; but again, the compiler (this was in the shader era already) wasn't smart enough to do better than a theoretically less efficient brute force solution. Especially considering semiconductor manufacturing. Again: hardware wins, software loses.
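A tiny model of that static slot assignment, assuming a hypothetical 2-slot word (one multiplier slot, one adder slot) and in-order packing — not AMD's actual VLIW5 layout. An add can never borrow the mul slot, so a run of same-kind ops wastes half the machine:

```python
def pack(ops):
    """Pack an in-order op stream into (mul, add) VLIW words.

    Each word holds at most one op per slot, and only of the
    matching kind -- static slot-to-unit assignment.
    """
    words = []
    i = 0
    while i < len(ops):
        word = {"mul": None, "add": None}
        # Fill the word while each consecutive op's slot is still free.
        while i < len(ops) and word[ops[i]] is None:
            word[ops[i]] = ops[i]
            i += 1
        words.append(word)
    return words

adds_only = pack(["add"] * 4)      # every word wastes its mul slot
mixed = pack(["mul", "add"] * 2)   # mul/add pairs fill both slots
print(len(adds_only), len(mixed))  # 4 words vs 2 words
```

Same op count, half the issue rate, purely because the slot assignment is fixed at decode rather than resolved by the hardware.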

16x5 was all ATi.
G80 was much more like modern GPUs, i.e. n warps executed over some number of ALUs grouped into functional unit (TPC).

'90s RISC platforms were nicely engineered and fun to read about and explore on the second hand market, but anyone who actively mourns their demise is underage, or a retarded hipster.
Wintel destroyed them on the desktop because they were shit that only survived the '90s on niche use cases and sleazy vendor lock-in tactics. Could they have done better without the Itanium delusion clouding their minds? Maybe, but in the end it wasn't Itanium's fault that customers didn't feel like paying $35,000+software licensing for an Octane2 based on 5-year-old rehashed R10K tech that was already slow when it was new and a graphics solution you could spitroast with a fucking GeForce 2. DEC couldn't market the Alpha for shit and Tru64 was irrelevant, and HP intended the Itanium to be PA-RISC's successor from the start and it ran HP-UX from the beginning.

They didn't break anything.
First, they went from out-of-order superscalar to VLIW.
Second, they introduced IA-64 with an inefficient x86 compatibility layer that couldn't run x86 code at acceptable speed.
Third, huge, expensive, power-hungry chips can't make it far.
Fourth, AMD released a better successor to x86 that fixed x86's worst issues while keeping the design simple enough for both ISAs to coexist efficiently.
Fifth, VLIW is a very strange arch, and those who say the "compiler" fixes the code are wrong. Software compiled for a VLIW needs to be recompiled for each later generation, or you suffer empty slots in every instruction bundle and lose a lot of performance.
An AMD64 CPU might lose 0-10%, maybe 20%, on unoptimized code. A 5-issue VLIW can lose 80% of its peak, and in most cases won't even reach 60%, while consuming a tremendous amount of power.
VLIW is an amateur attempt at having generic code plus backwards compatibility. The most important reason IA-64 died fast is AMD64; the reason it died at all is VLIW.
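The 80% figure above is just slot arithmetic. Assuming (illustratively) a fully serial dependency chain, a 5-issue VLIW gets one useful op per cycle:

```python
# Back-of-envelope: utilization of a 5-issue VLIW on serial code.
issue_width = 5
chain_ilp = 1  # a fully serial chain offers one ready op per cycle
utilization = chain_ilp / issue_width
print(f"{utilization:.0%} of slots used, {1 - utilization:.0%} wasted")
# -> 20% of slots used, 80% wasted
```

An out-of-order x86 wastes the same issue slots on such code, but it doesn't encode them into the binary, so the same executable keeps its density and its performance on the next generation of hardware.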