RISC vs CISC

Let's have a civil discussion on which is better for which use case. I recently finished learning 8052 (CISC) assembly and am now learning AVR (RISC) assembly. I cannot find any advantage RISC has over CISC.
Now my programs are longer, and kinda slower (if not by much).

everything is RISC anyway.

Try learning x86 assembly, then you'll appreciate how much simpler ARM is.

Smaller, more specialized, cheaper chips.
Less power-hungry than their x86 counterparts.

And then there are the specifics of how you can get cheaper I/O on RISC.

It doesn't matter anymore; x86 won in performance-per-watt and it incorporates both CISC and RISC. Everyone go home now.

>Smaller, more specialized, cheaper chips.
Partly true.

>Less power-hungry than their x86 counterparts.
False.

Pipelining makes most RISC clock cycles more efficient than CISC's, since CISC doesn't even always complete an instruction per clock cycle. RISC SoCs are also available for less than $1 in bulk manufacturing tape reels.

But in the end you get what you pay for. Current implementations are fucking shit: horrible performance and energy efficiency. The only reason we use ARM in phones is how dirt cheap it is; otherwise we'd use Skylake Core M processors with a TDP-down of 3.5 W and a custom GPU, like they did with the Zenfone 2.

>But in the end you get what you pay for. Current implementations are fucking shit: horrible performance and energy efficiency. The only reason we use ARM in phones is how dirt cheap it is; otherwise we'd use Skylake Core M processors with a TDP-down of 3.5 W and a custom GPU, like they did with the Zenfone 2.
That fully depends on what you buy, where you buy it, and what you're doing. Allwinner != Rockchip

You don't need a core-i CPU to count crankshaft rotations, read passenger compartment temp PID data or blink a turn signal.

There are buffered and shielded ARM SoCs available for pocket change with no Intel competitor.

MIPS64 > *

The main disadvantage of CISC is that the variable instruction length forces large, multi-cycle decode stages. That lengthens the pipeline, which increases the already large branch-mispredict penalty you get from typical out-of-order cores.

I love RISC. I implemented a toy RISC ISA with a 2-wide superscalar pipeline in Verilog. Easily the most interesting / rewarding thing I've ever done.

ZISC is architecture master race though

RISC + parallelism

>he legitimately thinks ZISC is for general purpose processors

Obviously I know that ZISC is purely experimental; I took a class about designing general-purpose processors, you mong.

EPIC master race reporting in!

Modern x86 is actually RISC underneath, it just presents a CISC "interface".

This. Though in theory RISC chips should be better than x86, it didn't turn out that way. Current-gen ARM chips are dogshit and eons behind Intel's latest Xeon D x86 processors.

Fucking Brits, can't do anything right.

This is a false dichotomy these days.

x86_64 translates your instructions into micro-ops, which are basically RISC instructions for an ultra-secret Itanium processor.

ARMv8 is a CISC CPU. It can natively execute older 4-byte ARM (A32) instructions, 2- or 4-byte Thumb (T32) instructions, and fixed 4-byte native A64 instructions, plus the NEON SIMD extensions on top. Internally it's RISC micro-ops.

MIPS? Shut up, nobody is using it.

POWER8/POWER8+? Shut up, nobody uses it. Data centers only say they'll consider POWER to push Intel's prices lower. 15% more performance for 100% more power. LMAO

>ultra-secret Itanium processor.
Itanium is dead. They're getting one more generation and that's it. Even HP realizes this, and I feel bad for the HP-UX admins because they'll end up just like the VMS and Novell ones.

HP just sued Intel into making another generation of Itanium, so no. Also, I'm using it as a phrase to mean the internal RISC chip that executes the micro-ops.

consider suicide

DIE CISC UM

>Also, I'm using it as a phrase to mean the internal RISC chip that executes the micro-ops
Which is retarded, seeing how Itanium is a separate architecture Intel created to eventually replace x86. Try to remember that pretty much all of Intel's in-house ISA chips since the 4004 have done microcode translation, so using Itanium as a general moniker is stupid.

As for ARM being CISC: we're nowhere near the same level of "let's make writing assembly feel like writing a high-level language" as the 68k and various mainframe chips. While ARMv8 does support both AArch32 and AArch64, the fact that it can execute those instructions doesn't mean they come out any different after decoding.

If you think the main processor translates x86 instructions into ARC instructions, then you are a retard. Those ARC processors are used for completely different tasks like IPMI / remote administration.

>current year
>using anything other than your own implementation of your custom made CISC ISA

He's not talking about the ARC chips inside everything Intel has made since the Core 2 Duo; he's talking about the x86 ISA itself. Ever since the beginning, x86 has translated instructions into micro-instructions at runtime, allowing Intel to radically change the hardware that actually does the work without breaking compatibility.