When will there finally be a desktop alternative to this piece of shit again?

x64

This piece of shit has to make it onto mobiles

amd64

What's wrong with x86?

apparently intel is going to overhaul it and remove legacy shit to make it sanik

This thread looks like it came from an archive of a thread back in the year 2000.

What would be the point in keeping this pile of shit if you're gonna remove legacy support anyway?

No possibility of that happening. It'd kill compatibility which is about the only thing x86 has going for it.

Apple used PPC back then, and there were still tons of workstations with a variety of different archs.

Is there really anything better than x86?
>inb4 ARM
>inb4 POWER8

Yes, fragmentation of software is a great idea.

Isn't only stuff like firmware written in ASM nowadays?

ARM could legitimately be better if it weren't limited to being designed for mobile phones and meme cloud laptops.

>could be
How? Is there anything related to the architecture that makes it a better design?
Genuine question btw

Doesn't the Nvidia shield use arm as well as tegra x chips?
Correct me if I'm wrong.

They don't know. This board is packed full of tech illiterate retards that don't actually understand one ISA compared to another. X86 is just the most popular so all the little know nothing spastics crawl out of the woodwork and call it shitty.

Put $50 billion more R&D into ARM and you'd end up with a high-clocking/high-perf competitor to X86 whose serial integer performance would plateau in the same place we're at today. How fast we can feed instructions to the executing logic isn't being limited by the ISA.

It wouldn't kill compatibility if they use emulation, which is the rumor.

X86 has a pedigree that no other ISA has. It has been the largest focus in the computing world for the longest time; it's had more dollars and man-hours poured into it than anything else by an order of magnitude.
X86 dominates in HPC to such an extent because perf/watt is unmatched in all but a few niche cases.

ARM is an actual RISC design, as opposed to x86, which has been a RISC CPU emulating a normal x86 chip since something like the Pentium Pro.

that convoluted instruction set has actually turned out to be an advantage.

In the late 80s and 90s when everyone thought RISC was the future, a big part of the reason is because the smaller, simpler, more consistent instruction set meant you could spend fewer transistors on instruction decoding. Transistors were at a premium in chip designs then, there were never enough to go around. RISC made the size of code in memory somewhat larger, but this wasn't viewed as a big problem. Machines of the time ran the memory bus at the same speed as the processor. A multiplier of 1x, essentially.

Then CPUs started speeding up a lot more than memory ever could. Multipliers went from 1x to 2-3x, to 5-6x, to 10x, to beyond. Suddenly a compact instruction encoding was a big advantage because with memory slow compared to the CPU, fitting more instructions into X bytes helps keep the processor fed. It means that a given amount of cache can go farther. And transistors stopped being scarce as processes advanced from 800nm to 350nm to 130nm to 45nm. Complicated instruction decoding is no longer a big source of overhead in the chip design, it's a minor cost-of-doing-business expense.
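To put rough numbers on the code-density point, here's a back-of-the-envelope sketch in Python. The 3.5-byte average for x86 is an assumption (measured averages tend to land in the 3-4 byte range depending on workload); 4 bytes is the classic fixed-width RISC encoding.

```python
# Rough estimate: how many instructions fit in a fixed-size I-cache.
# Denser encoding = more instructions per cache byte = CPU stays fed
# longer before stalling on slow memory.
ICACHE_BYTES = 32 * 1024  # a typical L1 instruction cache

avg_bytes = {
    "fixed-width RISC": 4.0,       # classic 32-bit encoding
    "x86 (assumed avg)": 3.5,      # assumption; real averages vary by workload
}

for isa, size in avg_bytes.items():
    print(f"{isa}: ~{int(ICACHE_BYTES / size)} instructions in 32 KiB")
```

Same cache, roughly a thousand extra instructions resident just from the denser encoding; that's the whole "compact encoding became an advantage" argument in one division.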

Academics and purists continue to love "straight" RISC designs (as opposed to x86's CISC-on-the-outside, RISC-on-the-inside hybrid) for being elegant, but elegance takes a backseat to practicality. And x86 is here, it's cheap, and it's quite close in performance to what a more elegant architecture could do, so trying to get people to switch away from it will be like trying to switch people from Linux and BSD over to Plan 9. It's not better enough to overcome the costs of change.

compatibility. By the way, if ISA mattered for performance then intel would have switched to a superior ISA and maintained backwards compatibility, just like x86_64 is backwards compatible with x86.

>inb4 muh x86 decodes to risc internally
First, it doesn't, and second, that's basically the final argument that ISA doesn't mean shit. If you can translate a bad ISA to a good internal instruction set and reap the benefits, that puts a nail in the coffin: basically only the internal architecture is what matters.
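For what it's worth, the decode-to-micro-ops idea being argued about here can be sketched as a toy translator. The micro-op names and the split below are invented purely for illustration; real Intel/AMD internal encodings are undocumented.

```python
# Toy illustration of "CISC outside, RISC inside": a single x86-style
# instruction with a memory operand gets cracked into simple
# load / compute / store micro-ops before execution.

def crack(insn: str) -> list[str]:
    """Split an 'OP dst, src' style instruction into RISC-like micro-ops."""
    op, args = insn.split(maxsplit=1)
    dst, src = [a.strip() for a in args.split(",")]
    if dst.startswith("["):  # read-modify-write on memory: needs 3 micro-ops
        addr = dst.strip("[]")
        return [f"LOAD tmp, {addr}",
                f"{op} tmp, tmp, {src}",
                f"STORE {addr}, tmp"]
    return [f"{op} {dst}, {dst}, {src}"]  # register form is already simple

print(crack("ADD [rbx], rax"))
# → ['LOAD tmp, rbx', 'ADD tmp, tmp, rax', 'STORE rbx, tmp']
```

The interesting part is that software never sees the right-hand side: the dense CISC form is what sits in memory and cache, while the execution core only ever deals with the simple form.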

The reason why ARM is better in the mobile space is because they do more than just CPUs. They provide the whole package and let you customise it to an SoC that is perfectly suited to your particular application e.g. basically every smartphone uses a custom SoC for that particular model.

This integration allows deep sleep modes where the CPU core can shut down while the essential parts of the SoC keep running and listening for network activity, button presses and touchscreen input, and only wake up the CPU if necessary.

With Intel and AMD they are slowly going the SoC route but they are still generic off the shelf designs.

The things that make ARM great for mobile make it shit for desktops.
Desktop users want a generic extensible computer where they can freely choose the operating system and peripherals.
Mobile users want a fully integrated computer system for maximum efficiency and portability.

which reminds me, it'd be good if x86 displaced ARM in the mobile space for that reason, then we'd have a standardized platform. Imagine if it was as easy to install a custom ROM on your phone as it was to install Linux on a desktop. One generic install image and everything just werks, none of this shit about needing completely separate builds for every single device.

Why can't we have gigabytes of L1, L2 and L3 cache in 2017?

MIPS masterrace

Because you don't understand what SRAM is or what it's even used for.
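The arithmetic behind that answer is worth spelling out. Assuming the standard textbook 6-transistor SRAM cell and ignoring tag/control overhead:

```python
# Why gigabyte-scale L1 cache is impractical: SRAM is built from
# 6-transistor cells, one cell per bit (6T is the textbook design;
# tags, decoders and control logic are ignored here).
TRANSISTORS_PER_BIT = 6

one_gib_bits = 8 * 2**30
sram_transistors = one_gib_bits * TRANSISTORS_PER_BIT
print(f"1 GiB of 6T SRAM = {sram_transistors / 1e9:.1f} billion transistors")
```

That's over 51 billion transistors for a single gigabyte, an order of magnitude more than an entire 2017 desktop CPU, and that's before the latency penalty of a physically huge array — L1 is small precisely so it can be fast.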

kys

There are only two manufacturers of x86, one of which has been having lots of financial problems for the last decade or so. Both of them have implemented hardware backdoors.

The fact that you can scan a PC for its buses and peripherals in order to load the correct drivers isn't tied to its x86 architecture, though. In principle it could also be done on ARM, but nobody thinks that standardization would be worth the effort.

Did Intel's x86 Atom SoCs behave more like PCs in this regard?

We won't use fucking arm until these retards stop flooding the market with different architectures every month. It's already a pain in the ass to adapt android all the freaking time; we don't want that to happen in the pc market.


If arm everywhere becomes reality, I'll stop working as a developer. You have to be fucking mad if you like writing 1454545456453 different versions of the same programs because some chink thought it was a good idea to drastically change an instruction set without changing the version number, or to suppress things they don't understand.

Yes, x86 is not perfect; backward compatibility brings a shitload of weird things you can't explain without knowing how and why they did that. But it's freaking standardized, and it's for this very reason that even if arm is better or whatever it won't come to the pc market.

I'm not positive but I was under the impression that they did

So have ARM-powered phones, in the baseband controller.

It's going to fail

>le x86 and ARM are the only archs there are meme

RISC-V is probably going to end up dominating

Not even memeing, there's just so much potential for an open ISA

I hope so, but it seems like it will be a while. Do you know if there's anything for high performance use planned for RISC-V? The only chips made so far are those SiFive chips, and they're comparable to an Arduino I think.

>Do you know if there's anything for high performance use planned for RISC-V?
Not currently... :^)

There's also open-v and lowrisc

Trips of truth.

>The RISC-V authors aim to provide several CPU designs freely available under a BSD license.

GPL commies BTFO

it literally originated at Berkeley

Open-V is a microcontroller, and I'm pretty sure lowrisc isn't meant to be any high performance stuff, more like ARM I guess.

>cuck license
so other companies can take the design, tweak it to make it slightly incompatible, and then release it as something proprietary that only gets support through them?

Copyleft is a necessity in this day and age. Permissive just gets taken advantage of.

I'm just mentioning some real silicon risc-v

Why would someone use an incompatible RISC-V arch?

It's like someone making an incompatible x86-64 cpu to avoid intel and amd licensing

If IBM had reached the fucking 3 GHz barrier with their PPCs, Apple and Motorola would've never given up and switched to Intel processors. I bet if Apple machines used an x64/amd64/x86_64-style PPC processor, they'd be far ahead of good goy 1% jewtel processors

Control. Same way each ARM SoC maker puts their own tweaks into it so you can't just have a standard platform.

Not for a long time, if ever. CPUs are so fast and abstracted away that an ISA change brings too little to the table to justify throwing out years of software, documentation, compiler advancements and developer experience for a one-time net 5% performance increase where it actually matters at best.

Unlikely. It still costs money to make and design the chips, after all. OEMs big enough to design and build their chips will already hold significant sway over ARM. At best nothing would really change other than the one-time expense of licensing which is fuck all compared to the return you're going to get anyway.

It's a bloated botnet.

You'd still need to recompile everything for every different architecture.

They're the only ones that are actually used for anything :^)

>KEK LICENSE

>It's a bloated botnet
That's going to be the case no matter what, especially when you take into account that ARM is the only thing that somewhat stands a chance of usurping x86 in its traditional strongholds, that "reduced" instruction set is about as reduced as your mother's fat ass, and the increased modularity also increases the likelihood of botnet and locked-down chips.

Besides, the "bloat" of x86 is ultimately trivial and meaningless.

Fuck, all these Sup Forumstards defending x86.

>t. angry birds/toy microcontroller apologist
Stop sucking marketer cock from the '90s and face reality.

t. autist

If the PWRficient CPU made it to the market fast enough apple probably would have gone with that

>apparently intel is going to overhaul it and remove legacy shit to make it sanik
They did it with the Atoms since CloverTrail.
They disabled CSM, real mode and protected mode and replaced them with Connected Standby. Those Atoms have a 2W SDP, compared to a 6.5W SDP for an equivalent Atom-based Celeron with protected mode etc. enabled.
For example:
Asus E200HA with an Atom has 13 hours battery life
HP Stream 11 with a Celeron has 7-9 hours battery life with a similarly sized battery. Both have the same performance.
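A quick sanity check on those numbers: if the "similarly sized battery" is taken as roughly 38 Wh for both machines (an assumed figure, not from the post), the implied average draw lines up with the SDP gap.

```python
# Average power draw implied by runtime: watts = watt-hours / hours.
# The 38 Wh pack size is an assumption for both laptops.
BATTERY_WH = 38.0

for name, hours in [("Atom (E200HA)", 13), ("Celeron (Stream 11)", 8)]:
    print(f"{name}: ~{BATTERY_WH / hours:.2f} W average draw")
```

Roughly 2.9 W vs 4.75 W average draw, consistent with the 2 W vs 6.5 W SDP ordering, even though SDP isn't the same thing as measured average power.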

A fuckload of software still worked; it's just that 16-bit software will no longer run. The only 16-bit software I ran into was the SimCity 2000 installer and DOS software.

>Did Intel's x86 Atom SoCs behave more like PCs in this regard?
Yes. Drivers are absolute crap though in Linux, but installing another OS on UEFI is only slightly harder than a PC with a BIOS.