Why are we still using CPU architecture from the 70s?

Instruction set architecture*

The hardware underneath is all new.

Backwards compatibility

Because instant replacement is impossible. Also it's the bestest in non-numeric benchmarks.

We're not. x64 doesn't have a 16-bit mode.

You know I could bring up the combustion engine as a comparison, but that's just stupid.

>Backwards compatibility
You can just compile your software for a new architecture.
Debian manages to maintain ~20000 packages for half a dozen architectures just fine.

Why are we still using wheels from the -7,000s?

You can get 6502 variants the same as they always have been.

Reliability is the reason. The 6502 is thoroughly tested and is accepted in pacemakers. In applications like that, a lifetime guarantee takes on a whole new meaning.

The 6502 is used in numerous embedded systems where people don't know what's inside, like mice, key fobs, pacemakers, picture frames, etc. More than 200 million are made every year.

Also, old chips are well designed and small, so they require little silicon area and very little power.

>You can just
>Debian

Why are we still using binary from 1679?

>why do we still use wings from billion years back

If a non-profit organization like Debian or Arch can do it, surely billion dollar "enterprises" with thousands of employees who are definitely not cheap imports from India should be able to do it in their sleep :^)

Trinary now!

Embedded software/hardware is true engineering, so it values stability and reliability over everything else. That's why people still use "old" but good-enough hardware.

I'm actually sure Debian & friends couldn't do it by themselves.
They are barely held together by the actual developers' work of making their software somewhat portable, e.g. taking care of issues like big vs. little endian.
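For the endianness thing, the usual trick is to never just cast a byte buffer to an int and hope -- you assemble the value byte by byte. Rough sketch in C (nothing Debian-specific, just the general idea):

/* Read a 32-bit little-endian value from a byte buffer without
   caring whether the host CPU is big- or little-endian. */
#include <stdint.h>

uint32_t read_le32(const uint8_t *p)
{
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}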

Also, you clearly overestimate the capabilities of companies when it comes to fixing their shit and their willingness to cooperate on that issue. Someone post that MS Windows 10 chat log.

I think you can still get 8086/8088s as well. 80386/80486 variants are still in some places as well.

OP is really talking about the ISA, though, which is misleading because the parts of the ISA from the 70s make up only a tiny portion of it at this point. New instruction set extensions are constantly being added as the need arises; a lot of them were added in the 1990s and 2000s.
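You can see this for yourself: a CPU reports which extensions it supports via the CPUID instruction. Rough sketch in C (GCC/Clang on x86, using their <cpuid.h> helper; the feature-bit positions come from the Intel/AMD manuals):

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 1 reports the basic feature flags. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 not supported");
        return 1;
    }
    printf("SSE2   (introduced 2000): %s\n", (edx & (1u << 26)) ? "yes" : "no");
    printf("SSE4.2 (introduced 2008): %s\n", (ecx & (1u << 20)) ? "yes" : "no");
    printf("AVX    (introduced 2011): %s\n", (ecx & (1u << 28)) ? "yes" : "no");
    return 0;
}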

I don't have the Windows 10 chat log saved but this provides similar insights:

blog.zorinaq.com/?e=74

The gist is that change is often punished when it doesn't affect the next quarter's profit, like optimizing the kernel.

>blog.zorinaq.com/?e=74
/thread

There are rad hard 486s, but I don't think they lasted long.

Because it Werks pretty good at the moment.

Maybe in a few years we'll also have desktops running ARM.

>Why are we still using the same shape of wheel that we've been using for thousands of years?

For about the same reason why cars don't fly, retard.

How do I recompile photoshop?

Why not? x86 assembly is Turing complete; what else do you need?

We aren't though. Most wheels are made of aluminum and rubber, which is very different from 1,000 years ago.

because it just werks

>How do I recompile photoshop?
That's a problem indeed, a problem of non-free software only.

Literally every x86-compliant processor starts up in legacy 16-bit real mode.

So, your initial claim that backwards compatibility doesn't matter because you can recompile software is false.

You can recompile software, just not all software.
Software that cannot be recompiled is defective.

Mostly stability.

...

Wheels are nothing like they used to be though

Thanks, have a (You).

Why are we still using the same CPU architecture from the 80s?

And processors are any different? Compare a wheel from early history to a wheel from 40 years later in early history and you'll see basically the same shit. Compare a processor from the 70s to one from the 2010s and you'll see an incredible difference in performance.

Point is, if you got enough of them together, they could run what you're looking at.

>Why are we still using cpu architecture from the 70s?

Because Moore's Law rendered the efficiency advantage of RISC irrelevant during the processor wars of the 90's. There were many RISC chips that were faster than Intel's x86 offerings throughout the decade. But when processor power doubles every 18 months it doesn't matter as much as backwards compatibility with the massive Windows PC base.

You'll note that we do not use an instruction set architecture from the 70's in mobile. That's because there was no backwards compatibility to consider and efficiency was much more important.

If Moore's Law truly is dead then this equation may change and we may see RISC on the desktop again.

>There are still people on this planet that believe the RISC vs CISC marketing.

Because:
• It's currently the most energy efficient
• It currently has highest IPC
• It's backwards compatible with a fuckload of very useful proprietary software

The only reason we use ARM for phones and tablets is that they're cheaper.

top500.org/green500/lists/2016/11/
Man ARM is really rocking on that list.

I really want a MIPS desktop again.

x86 became the standard because of popularity. It had nothing special about it.

Wheels are backwards compatible.

...

>we
Speak for yourself, mine's from the 90's

No, I believe benchmarks and real application performance. And there was a reason all the high dollar workstations of the 90's used RISC.

>implying ARM makes processors relevant to that list
ARM cores are optimized for their primary use in the market today, which is mobile. Now that ARM is starting to enter the server space this will change. But it doesn't change overnight.

Should RISC make a comeback on the desktop I would be surprised if it was anything other than ARM simply due to the established market. x86 and ARM have huge leads in engineers and developers because that's what's out there.

And there's no guarantee it will. Backwards compatibility is still a huge point in Intel's favor when it comes to the desktop.

Modern x86 chips are only x86 in compatibility. At the lowest layer they're closer to being RISC than CISC.

Backwards compatibility doesn't really matter to the average user. I don't care what ISA comes up as long as I have a non x86 option. I just have a soft spot for MIPS.

We are just using multi-core Pentium IIIs and something ATI did on mobile 12 years ago.

The only NEW architectures are Zen, Mongoose, and something from Apple.

NetBurst and Bulldozer were new, lol.

just werks

>like mice
Wait
Does it mean my mouse can be as powerful as Apple ][?

I think flash drives might be more powerful than some old microcomputers.

>Why are we still using cpu architecture from the 70s?

What makes you think we're still using CPU architecture from the 70s?

Since then, pipelines have become much longer and more sophisticated, and they've introduced innovative new techniques -- such as superscalar execution, where multiple instructions are dispatched to parallel functional units in the same clock cycle. And is putting the GPU on the same die as the CPU not a major architectural enhancement?

If you're talking about the process node size, there's been several orders of magnitude change in the number of transistors per unit area, plus innovations to accommodate a massive increase in clock speeds, such as new materials design and new techniques to manage power, heat, and EMI.

Or are you talking about the fact that the latest Skylake chips still support all the instructions from the original 8086 chip? First of all, 64-bit compilers don't generate ANY of those old 8086 instructions anymore -- they all generate the new AMD64 instructions, which have a much more clean and modern design. Second of all, support for that original 8086 instruction set barely occupies 1% of the surface area of a Skylake chip. Are you whining that 1% of the CPU price is spent on continued support for 8086?

Or are you complaining about the von Neumann bottleneck? (Loosely, the fact that a single core works through one stream of instructions at a time.) What do you think all that emphasis on multiple cores and threads was all about? What we've learned now is that this is not actually a hardware limitation at all -- it's a software limitation, because 99% of all software we use has still not been re-designed to take advantage of all those extra cores and threads.

Sorry, dude, but if you look at the changes in CPUs from 1970 to 2016 and conclude it's the "same architecture" now, then you just haven't been paying attention.

Yes, much has changed. But we keep piling shit on to an ancient design instead of starting out with something modern.

Oh wow
Does it mean caring about performance doesn't matter anymore?

Hardware devs use microcontrollers because it's cheaper and easier to (re)program them than it is to (re)design a board with a bunch of discrete components. So basically all the logic in these devices is handled by another, smaller computer.

No, it just means it's probably a product, which is perfectly fine if you're not autistic and work for a living.

>we keep piling shit on to an ancient design instead of starting out with something modern

Ummm, I just said that 64-bit compilers do not generate ANY of the original 8086 instructions, and they use the whole new AMD64 ISA exclusively. That's a perfect example of how they have COMPLETELY THROWN OUT old design and replaced it with something entirely new.

What are you looking for here? A CPU that stores data as photons instead of electrons, or something like that? The fact is that every modern innovation you can think of is currently in the process of being studied as a possible replacement for the existing technology. If we're still using the existing technology, that's because it's still more cost-effective. Is that what you're complaining about? -- that we continue to use older technology because it's still the most cost-effective solution?

I'm looking for a CPU that isn't ultimately based on a design from the 70s. Yes, things have changed; yes, it's extremely different. But the bottom line is that modern x86 processors are a fuckload of extensions piled onto an ancient design.

SPARC
PowerPC

No shit

Here is a link that elaborates on what you just said.
cs.stanford.edu/people/eroberts/courses/soco/projects/risc/risccisc/

NetBurst was shitcanned. Core 2 was based on the Pentium III.

>So basically all the logic in these devices is handled by another smaller computer.
... and so on ad infinitum.

Why are we still using languages from hundreds of years ago? It's 2016; we've come further in every other way but language.

Itanium

Kind of a joke these days, but they are at least interesting. Tempted to get a second-hand one to dick with.

>Why are we still using spoons? They are like 5000+ years old technology. can't we use something new!?

It is a lot more powerful: the clock is upped from 1 MHz to 16 MHz, and the chip integrates a lot of peripherals, including a mouse camera tracker and USB communication.

sporks man

We're still using the same wheel architecture (round) just on new hardware

>Why are we still using vaginas from ancient times when we have all the feminine penis alternatives that today has to offer
Sometimes the old stuff just werks, user

>Second of all, support for that original 8086 instruction set barely occupies 1% of the surface area of a Skylake chip.
This is wrong.

From what I understand, there are basically 3 ways to make a complex circuit:
a. use lots of simple, generic, discrete components (takes up a lot of space)
b. create an ASIC, aka a custom chip (expensive and difficult to design and test, expensive to produce in small numbers, but the most flexible)
c. use a microcontroller supported by a few discrete components (much easier to design and test than an ASIC, cheaper in smaller numbers)

Microcontrollers are used everywhere. If you can make your product with a micro (as opposed to an ASIC), you'd use one, and this applies to most stuff -- rough sketch below.

-- oh, there are also FPGAs
They're expensive, and only really show up in special cases,
but they have advantages of both ASICs and microcontrollers.
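To make option c concrete: the "smaller computer" in your mouse or key fob just runs a firmware loop like this. Hedged sketch only -- GPIO_OUT is a made-up register address; real parts get theirs from the vendor's datasheet header.

#include <stdint.h>

/* Hypothetical memory-mapped GPIO output register and pin. */
#define GPIO_OUT (*(volatile uint32_t *)0x40001000u)
#define LED_PIN  (1u << 3)

int main(void)
{
    for (;;) {
        GPIO_OUT ^= LED_PIN;                  /* toggle an LED */
        for (volatile uint32_t i = 0; i < 100000; i++)
            ;                                 /* crude busy-wait delay */
    }
}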

>Ummm, I just said that 64-bit compilers do not generate ANY of the original 8086 instructions, and they use the whole new AMD64 ISA exclusively. That's a perfect example of how they have COMPLETELY THROWN OUT old design and replaced it with something entirely new.

AMD64 added instructions and registers as well as changing register and address sizing so that chips would be 64-bit. But it's still the x86 CISC ISA that has been built up over the years. Nothing was "completely thrown out." 64-bit compilers use the old instructions extensively. And if you know assembly for x86 writing AMD64 is mainly a matter of learning what's new.

en.wikipedia.org/wiki/X86_instruction_listings
en.wikipedia.org/wiki/X86-64
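To illustrate (hedged: exact output varies by compiler, version, and flags), here's roughly what gcc -O2 emits for a trivial function under the usual x86-64 ABI -- the mnemonics are the same old x86, just with the registers AMD64 added:

/* add.c -- typical gcc -O2 output for x86-64 shown in the comment;
   your compiler may emit something slightly different. */
long add(long a, long b)
{
    return a + b;
}

/* Typical output (Intel syntax):
       lea rax, [rdi+rsi]   ; lea goes back to the 8086; rax/rdi/rsi are AMD64 additions
       ret
*/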

A RISC instruction set would make some things easier to implement and/or would allow the task to run more smoothly (fewer stalls): decoding; dispatch; out of order execution; speculative branch prediction; cache prefetching; etc. If something is easier it takes less silicon. If it takes less silicon then you're either using less power or you can put the silicon to use for something else like another execution unit or more cache.

On the flip side, x86 instructions can consume less space in the cache and performance on modern processors is very much about optimizing cache hits.

There are many variables in processor performance so you can't just claim "muh RISC is always faster!" But over the years RISC has generally proven to be faster and/or more energy efficient *when someone has had the motivation and cash to compete with an Intel chip.* That last part is important because for years nobody in the RISC world has had reason to fight for the desktop.

If everything else was equal and you gave two design teams the cash to develop modern desktop class CPUs...one x86-64 and one RISC...the RISC design would win on performance. But when is everything else equal?
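On the cache-density point, a small hedged example (the assembly in the comments is typical -O2 output; exact sequences vary): a read-modify-write that x86 does in one instruction takes a load/add/store sequence on a load-store RISC.

int counter;

void bump(void)
{
    counter++;
}

/* Typical x86-64 -O2 output (one instruction touches memory directly):
       add DWORD PTR counter[rip], 1
       ret

   Typical AArch64 -O2 output (separate load, add, store):
       adrp x1, counter
       ldr  w0, [x1, #:lo12:counter]
       add  w0, w0, 1
       str  w0, [x1, #:lo12:counter]
       ret
*/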

T-the same reason nearly all of our electricity still comes from heating water into steam?

>Trinary
REEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Not on UEFI systems.

>it's a software limitation, because 99% of all software we use has still not been re-designed to take advantage of all those extra cores and threads.
Yes, software is generally written with the assumption that each bit of code executes after the preceding bit has executed. A lot of programs are really fucking hard to write without making that assumption, since dropping it would kind of fuck them up.
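Right, and "taking advantage of the cores" means restructuring the work so the pieces genuinely run at the same time. Minimal sketch with POSIX threads in C, assuming nothing beyond the standard pthread API (build with cc -O2 par_sum.c -lpthread):

#include <pthread.h>
#include <stdio.h>

#define N 1000000
static long data[N];

struct chunk { long start, end, sum; };

/* Each thread sums its own half of the array independently. */
static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    c->sum = 0;
    for (long i = c->start; i < c->end; i++)
        c->sum += data[i];
    return NULL;
}

int main(void)
{
    for (long i = 0; i < N; i++)
        data[i] = i;

    struct chunk lo = { 0, N / 2, 0 }, hi = { N / 2, N, 0 };
    pthread_t t1, t2;

    pthread_create(&t1, NULL, sum_chunk, &lo);   /* both halves run concurrently */
    pthread_create(&t2, NULL, sum_chunk, &hi);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("total = %ld\n", lo.sum + hi.sum);
    return 0;
}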

Works on my machine ;)

youtube.com/watch?v=A_GlGglbu1U

Got you covered ;)

computerhistory.org/atchm/adobe-photoshop-source-code/

CTRL / ALT / SCROLL / SYS RQ / PAUSE / BREAK / F1-F12 / ESC / TAB

1970s keys on 2016 keyboards.

mfw no one-button Cut/Copy/Paste keys

You would go far with a job at Apple.

for now, goy

That shit always annoyed the fuck out of me.
Where are the Cut, Copy, and Paste keys?
They are such fundamental features of pretty much every OS ever built from the 90s onwards.

Also, fucking INSERT. FUCK
I have NEVER in my entire life seen a good implementation of Overwrite. EVER.
Most worthless key ever. I have disabled it in every OS I've had. Well, recently, from XP onwards, I made it into a toggle key for other features, which I still rarely use since I put any custom hotkeys and scripts behind Capslock+[key]. Or Alt Gr. Some behind the Context key, since nothing fucking uses it either.
Delete is also semi-annoying. Shift+Backspace was fine. It STILL works. (try it)
I rarely ever use the Delete key. And I program my ass off. Backspace is almost always easier.
The Backspace name itself always sounded stupid to me, even with the explanation for it.
Having it named Delete would have saved the confusion: Delete deletes behind the caret, Shift+Delete deletes anything in front. And it would make sense for deleting files, or selections, and so on.

What those keys should have been instead was Back and Forward. (or Left / Right, see below)
Home / End next to Page Up.
Back / Forward next to Page Down.
Why the fuck are 2 shitty non-navigation keys in the damn navigation section?!
Another good thing the Back and Forward could have been used for, as mentioned in the brackets, was Left and Right for programs that support tabs, or pages, and so on.
Browsers had a spec for pagination like this. You could move Left and Right, as well as Up to the parent directory. These are not relative assignments, you assigned them in a tag I forgot. (link or meta, not sure again)
I can only remember Opera supporting this. Or at least the last one to.
Or you could have had a third row, which some keyboards do/did have, then put Left, Right, with Up under Page Down. So all the vertical traversal keys are in a line, and the left and right aligned keys in line too.

You know what key is really annoying? Num lock. How frequently do you actually intend to use that key? Isn't the only usage of it basically "oops for some reason my keypad wasn't enabled" and then you turn it on and leave it on forever?

It's been extended... a lot. Keeping the same architecture throughout has let applications keep running, though. When we transitioned from 16-bit to 32-bit, existing applications did not have to be rewritten for the new platform, while new applications were written for it. Similarly, when we went from 32-bit to 64-bit, applications could again run with no modifications. The 16-bit applications were basically broken at that point, but there were no recent 16-bit applications, and older ones could easily be run in emulation.

As long as there are no transition problems from one architecture to another, everything is fine.

Yes on UEFI systems.

Why are we still using computer architecture from some dead french guy from the 1940s?

Kill yourself.

The numpad keys have other functions.

I use it mainly for fun, but sometimes for useful reasons.
I wrote a bunch of numpad scripts for various purposes.
Numlock+/ opens a UI to select what mode I want.
Numlock is the toggle key for 2 states of most of those features.
Numlock off is the actual functionality, numlock on is the settings for them.
Scroll-lock off turns off the features entirely and returns to normal numpad operations. (may as well use the thing)

The one I probably use most is numpad mouse. (based loosely on the AutoHotkey numpad mouse, but changed heavily for my needs)
In my case, numlock on lets me change the speed of the mouse, acceleration, clamping, angle of movement, autoclick speed, and whether autoclick does full clicks or auto-repeats key-down events.
And in the case of speed, I have an even slower movement speed I use sometimes. (mainly for looking at large sections of code)
Lowest standard speed is 1 pixel. But that is every tick.
I added the ability to add a delay in milliseconds on top so you can slow it even further. I might combine both of these speeds at one point. Who knows.

Another I made for fun was an implementation of Thumbscript.
thumbscript.com/
It was fun making that.
I was thinking of extending on it with another page of commands in the numlock-on group. But I lost interest.

Only other legit use I ever had for the numpad was Excel and Garry's Mod years ago, when it actually had a community and wasn't shitty mini-games and memes.
I miss making Wiremod battle machines in space. (and using Forcefield mod to disable gravity so it is actual space battles)

I'm not sure if you've ever noticed, but the keys on the numpad do things when numlock is off.

I wish they kept the m68k lineage alive

They move the cursor around...like the other keys that do that.

When will we upgrade from binary to trinary?

>from one architecture to another

As explained above it's the same instruction set architecture with some extensions. There have been no real "transitions."

An example of a transition to a new instruction set architecture would be Apple's moves from 68K to PPC and then to x86.