Is IA-64 the same thing as X86_64 without 32-bit support, or is it fundamentally different?

Also, maybe it failed in the past due to lack of support, but would a modern 64-bit only CPU be more efficient and powerful?

Other urls found in this thread:

ixbtlabs.com/articles2/cpu/insidespeccpu2000-part-i.html

No, it's fundamentally different in virtually every way.

Completely different.

IA-64 doesn't exist anymore, dumbass.

>Also, maybe it failed in the past due to lack of support, but would a modern 64-bit only CPU be more efficient and powerful?
No. The "64-bit only" thing is insignificant.

When Itanium first came out, people had the hardest time programming for it; compilers were extremely difficult to make.

Turns out smart compilers aren't so smart at all.

people and software developers were unprepared back then

>Also, maybe it failed in the past due to lack of support, but would a modern 64-bit only CPU be more efficient and powerful?
It could be.
Legacy x86 holds back the 64-bit side of x86_64; because of the reliance on x86 compatibility, it can't really be improved on.

On the other hand, on ARM with ARM64 you can get significant performance increases over the 32bit ARM.

Basically we are handicapped by the reliance on backwards compatibility with x86. That's pretty much why Itanium failed, though it may not have turned out to be much more efficient than x86_64 anyway.

Not only that, but the first few iterations really sucked. The only reason it could compete with x86 at all was that Intel stuffed it full of cache as damage control, which in turn made the chip huge, hot, and expensive.

It's not about compilers, the whole concept of EPIC is nuts. There's only so much you can statically schedule and the rest needs to be done by hardware - which they did later with Poulson (I think?) when someone at HP or Intel finally saw what a shitty corner they got themselves into.
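
To make that concrete, here's a rough C sketch of my own (nothing IA-64-specific, just the general idea): a linked-list walk is the textbook case where the compiler can't schedule around memory latency, because each load may hit cache or miss to DRAM, and only the hardware knows which at run time.

/* Illustrative sketch only. Each load of n->next may take a few cycles
 * (cache hit) or hundreds (miss to DRAM); a static scheduler can't know
 * which at build time, so it can't fill the stall with independent work.
 * An out-of-order core finds that slack dynamically. */
#include <stddef.h>

struct node {
    struct node *next;
    long value;
};

long sum_list(const struct node *n)
{
    long total = 0;
    while (n != NULL) {
        total += n->value;   /* depends on the load of n */
        n = n->next;         /* latency unknown until run time */
    }
    return total;
}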

Well, at least everything is moving to 64-bit for the most part.

Also, modern CPUs are far faster than something from 10, 15 years ago.

>Also, modern CPUs are far faster than something from 10, 15 years ago.

Not really, no. CPU development has been effectively halted for nearly this entire decade.

A 15-year-old CPU, sure, that thing will be smoked by anything today. But, say, a Core 2 Quad overclocked to 4 GHz is less likely to suffer from that. And if you have a Sandy Bridge then there is zero reason to upgrade even today.

A Core 2 Quad OCed to 4 GHz is smoked by everything today.

Sandy Bridge is sufficient for day to day stuff, though.

The biggest problem with Itanium was always the lack of backwards compatibility, and companies like Autodesk were not going to port their CAD software to a new CPU anytime soon.

Literally the only reason x86 and ARM are the only valid CPU options today is software devs.

>That's pretty much why Itanium failed
Wrong. It failed because it was a shitty architecture that couldn't compete with *anything* that was on the market at the time. It never even paid its development costs back.

The difficulty of compiling coffee for it was insane. Way too much of the work was shoved onto the compiler, leaving the CPU amazingly dumb where it should be doing prediction.

>or is it fundamentally different?
That one.

Coffee. Code. For once, autocorrect isn't too far off. From a certain point of view.

>And if you have a Sandy Bridge then there is zero reason to upgrade even today.
Only CPU-wise. Older platforms in general are obsolete as fuck.

>On the other hand, on ARM with ARM64 you can get significant performance increases over the 32bit ARM.
Yet it's still a pile of poop compared to x86.

Ironically, CPUs these days aren't even CISC or RISC anymore.

They are a hybrid abomination of a base CISC processor stuffed with RISC accelerators.

Which raises the question: are truly new CPUs actually CISC, or are they just hybrid RISC CPUs?

>stuffed with

Doesn't that alone make it CISC?

Technically, CISC is hardware-centric, meaning it works closer to the hardware than a RISC architecture, which depends on software for everything.
However, since most of the things that run on the accelerators are software, it raises the question of whether it doesn't fall under the category of hybrid RISC.

bump for interest

No they're not; ever since Sandy Bridge, CPUs have stagnated. The only thing better about them is muh more cores, and that's only if your software properly implements multi-threaded processing.

iGPUs on the other hand have come a long way. CPUs have not.

IA-64 was an abomination of an overpriced, underperforming vendor lock-in attempt by Intel to extend their monopoly into the UNIX RISC space and kill AMD. I will forever buy only AMD products for saving us from this hellish nightmare; fuck Intel in the eye.

It didn't fail because of its register size; it failed because it was just a piece of shit outside of HPC applications that was never able to live up to the near-decade of marketing hype that preceded it, made even worse by the hilarious degree of unpreparedness of its biggest proponents, who thought customers investing in high-end niche proprietary Unix workstations, servers and supercomputers were champing at the bit to pay the same five-figure rates for shitty commodity Linux boxes loaded with "legacy free" cancer that didn't accomplish anything but make them impossible to hook up to a fucking KVM switch.
Of course the shit wasn't going to be supported as a result, who in their right mind would go out of their way to maintain a port for such a shitty platform nobody liked?
AMD didn't save us from shit, Itanium was already comatose by 2003. All AMD64 did was hammer the last nail in the coffin for entry-level proprietary RISC systems that no longer had a valid excuse to exist beyond legacy applications and vendor lock-in.

Fuck off incuck.

>not trying to twist and manipulate everything in a way that makes AMD look good and virtuous makes you an Intel fanboy
God, I hate CPU console war cucks. How brainless do you have to be to think anything in that post was complimenting Intel? Stop white knighting for rich corporations that don't give a fuck about you.

IA-64 is shit from the ground up because Intel got cocky, which is why it's now dead

this
on top of the instructions having different binary representations, and on top of adding new instructions and dropping old ones, it was a different design philosophy: it demanded more from the programmer or compiler, and a lot of what it demanded was either harder or simply new and unfamiliar to actually use to full effectiveness
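
One concrete example of what "demanded more from the compiler" means in practice (my own hedged sketch, not IA-64 output): EPIC leans heavily on if-conversion, i.e. the compiler turning branches into predicated straight-line code rather than leaving them to hardware branch prediction.

/* Sketch of if-conversion, one of the things an EPIC compiler is
 * expected to do aggressively. Function names are mine, purely
 * illustrative. */
int clamp_branchy(int x, int hi)
{
    if (x > hi)        /* hardware has to predict this branch */
        x = hi;
    return x;
}

int clamp_branchless(int x, int hi)
{
    return (x > hi) ? hi : x;   /* easily compiled to a predicated/select op */
}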

I'd be open to RISC-V (and limit the botnet to software ONLY) if MS ported the full Windows 10 desktop to it (none of this Windows 10 "S" crud or w/e)

yeah, JVM would be a mess on IA-64

no you wouldn't
I bet my ass you would gawk at the price tag and lackluster native application compatibility just like people did in the '90s when NT ran on a number of major workstation platforms that actually had a tangible advantage over PCs other than being different for the sake of it.

Except NT pre-5.0 was barely usable garbage.

Then Microsoft finally made 2000 but decided to drop all non-x86 CPUs at the last minute. Fuck.

I don't know, I liked NT4 fine for what it was.

>for what it was
A pile of crap?
We're talking about basic functionality like Device Manager and Plug & Play here, user.

I didn't really expect much real plug & play at that time, to be honest.

This is emphatically not the case; Cavium ThunderX2 is better than similarly priced x86.

Early PnP was such a nightmare that I just configured everything manually instead.

sounds like you didn't use it much, honestly. I shitpost here from NT4 all the time and it's a pretty solid system once you get everything set up; it just won't run shitty old games and insecure DOS applications, and it lacks some quality-of-life extensions from Win9x like Device Manager that are nice to have but still easy to live without
>Then Microsoft finally made 2000 but decided to drop all non-x86 CPUs at the last minute. Fuck
mostly because all of the supported platforms were dead or on the brink of it by 2000, if you want to be real about it: ACE systems were old news, DEC was dead and buried, and RS/6000 workstations were pretty much over except for an occasional IntelliStation here and there, and all the software you'd want to run on them already ran on AIX anyway
from experience I don't really get the PnP meme with NT; for any situation where that mattered, competent drivers or software like CardWizard made it a non-issue, and I've never encountered any significant problems with setting up new devices on an NT system as long as I have the proper drivers on hand

You put NT4 on the internet? *why*?

because life without wikipedia and imageboards to fill the void in my soul is suffering

Itanium is nowhere near the same as x86. Shit sucks; I used to support some Itanics: slow, hot, and no support/compatibility.

>I don't know how to compile software

Itanium is a VLIW architecture; x86 is a kind of malformed goblinesque clusterfuck of RISC, CISC, and SIMD

>I install Windows on a Unix system to run free software
tripshits never cease to amaze me

IA-64 failed because it went after a dying market (big iron), which used to dominate the HPC scene prior to the late 1990s.

Clustering of comparably cheap hardware took over the HPC scene.

IA-64 lost its raison d'être. The architecture wasn't really suited to general-purpose computing. Its craptastic hardware-level x86 emulation was bolted on at the last minute when it became clear that Alpha/RISC were dead and Intel couldn't kill x86 outright.

completely different architecture

completely different instruction set (machine language and assembly language)

>would a modern 64-bit only CPU be more efficient and powerful?
No. It *may* be cheaper, but even that is doubtful.

Unless you're talking about ARM, perhaps. They made a lot of changes in AArch64 which help implementations a lot, like not being able to write arbitrarily to the PC, so that could perhaps simplify some structures in the CPU. More likely the differences are completely localized to the decoders, though.

No, the biggest problem with Itanium was that it was slow. If it had at least been faster than x86, it would have found uses in HPC if nothing else.

It won't run ANY games, moron. It's limited to DX3.

Speed was only really a problem with the first generation, later ones did enjoy a run in HPC and high-end servers for a few years before development interest fizzled out.

As with people, first impressions are everything.

Didn't the Itanium have hardware PA-RISC emulation built in? I seem to recall that was one of the conditions that HP required, in order for them to EOL their PA-RISC based 9000 series hardware, and switch to the Itanium based Integrity hardware.

not all games need DirectX, it'll run Quake and Half-Life fine and who cares about anything else? if you really want you can hack it up to DX5 or just dual-boot 9x on your system for when you're really itching to run shitty games on a shitty operating system

Itanium was intended to run PA-RISC natively from the very beginning.
It was HP's baby that cost them and Intel a gorillion shekels, only to get raped silly by the Opteron.

That part's not wrong at all, I'm just saying that Itanium did end up with some mild success in HPC and settled into pretty much the same role as the proprietary RISCs it ended up replacing, an overpriced high-end chip with great floating-point performance and shit integer performance that made it best suited for high-end servers and supercomputers.
HP-UX handled it in software

Where'd you see that? I've never heard of any Itanium system being able to execute PA-RISC code in hardware, at most they just shared common chipsets and supporting components.

>this is what poorfags want to believe

Slightly disingenuous, and only because SGI was committed to the arch because MIPS was effectively dead.

>but muh games need to go fast
kill yourself retard

I don't play games

Actually... it does. Well, in a sense. Intel did release an IA-64 processor this year called the Itanium 9700, but it's the last Itanium processor that will ever be produced.

SGI was a bunch of dumbshits putting all of their eggs into the Itanium basket and totally shafting MIPS and IRIX, but I don't see why it matters. They still sold their share of systems.

Maybe you could allege that it was just in the SGI name, but these systems had no ties to what built their customer base and SGI was nothing but a shell of itself pretending to be relevant while hemorrhaging customers left and right. And if you're going to downplay an architecture because it had a company backing it, then would you also not say the same for the likes of POWER, SPARC, Alpha, or practically any other high-end RISC of that time?

I'm saying that SGI could have filled a few racks full of Z80s and it would still have made the TOP500 because of NASA and co. throwing billions at it.

I've seen a lot of really fucking stupid shit posted on this Indonesian leaf raking forum, but this, this is something special.

No. IA64 is completely different.
Basically, IA-64 is a VLIW-type instruction set, which attempts to move optimisation onto the compiler rather than getting the CPU to do its own optimisations on the fly like current x86 CPUs do.
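
To illustrate the difference (my own rough C-level sketch, not actual IA-64 code): an EPIC/VLIW compiler has to expose the parallelism itself at build time, for example by unrolling with independent accumulators, whereas an out-of-order x86 core extracts roughly the same parallelism from the naive loop on the fly.

/* Naive form: one long dependency chain through `sum`; fine on an
 * out-of-order core, poor fodder for a purely static scheduler. */
double dot_naive(const double *a, const double *b, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

/* Statically scheduled form: four independent chains the compiler can
 * slot into wide issue groups. (Assumes n is a multiple of 4; note that
 * reassociating the adds can change the floating-point result slightly.
 * Sketch only.) */
double dot_unrolled(const double *a, const double *b, int n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    for (int i = 0; i < n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    return (s0 + s1) + (s2 + s3);
}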

Everything I've looked at shows the Itanium has pretty reasonable floating-point performance.

I was looking for an article I read forever ago putting it up against POWER and Xeon chips but couldn't find it, this looks interesting though:
ixbtlabs.com/articles2/cpu/insidespeccpu2000-part-i.html

what a load of shit

>later ones did enjoy a run in HPC
Only because they had somewhat good FP performance. Int performance was still abysmal, and you can't survive on FP alone.