Icekake confirmed successor to 8th gen and confirmed 10nm+

>The Ice Lake processor family is a successor to the 8th generation Intel® Core™ processor family. These processors utilize Intel’s industry-leading 10 nm+ process technology.

How can AMD even compete?

intel.com/content/www/us/en/design/products-and-solutions/processors-and-chipsets/ice-lake/overview.html


tbhq, for laptops it will be pretty good.

> inb4 edgy Ryzen shills
I know Ryzen is fucking amazing for desktop, but Intel makes amazing laptop processors and AMD won't compete until we see Raven Ridge benchmarks.

>amazing laptop processors
what did he mean by this

Coffeelake isn't even out yet and Intel's 10nm is an abortion of epyc proportions.


tl;dr Icelake is skylake with 3%+ more IPC and it will be launching in the same timeframe as Zen2, I wish it best of luck because it will fucking need it.

Coffee Lake ≠ Icelake

Icelake will focus on mobile.

You seem angry and confused, pajeet.

This is a white/jewish thread, not indian/chink.

Get out.

But Brian you told us Cannonlake focuses on mobile?

>literally copying a reddit title including typo

Cannonlake is mobile focused you fucking favela monkey.
Icelake is the next "architecture" aka tock like Skylake was.

Should've been Icekike

Just leaving this here.

Everyone knows that Intel's 10nm node is a disaster.
It's what, two years late?
Fab leadership my ass.

Intel's low power and laptop processors are pretty good

I hope you understand that laptop processors are simply desktop cores clocked lower with most of the uncore cut out?

>t. Rakeesh Zakari

Zen2:
> No multi-core advantage (like desktop Ryzen vs Desktop Intel)
> Even lower clocks due to mobile limitations (rumors stating a max of 3.3GHz on boost)
> Extremely cut-down Vega graphics probably won't even beat Intel Iris Plus (No HBM2, only 700 processing units and extreme thermal limitation)

wccftech.com/amd-ryzen-4-core-8-thread-raven-ridge-benchmarks/

Ryzen shines on servers and desktop, it will have mediocre performance on mobile, and will only compete with dual-core i5's and i7's.

Nothing will beat those 45W quad-core i7's with Iris Plus graphics for a while.

>currytech
>here is how a chip that isn't out yet will perform

HAHAHAHAHHAHAAHAHA
NOW THATS WHAT I CALL DENIAL

>How can AMD even compete?
With 7nm of course.

Raven Ridge has nothing to do with Zen2 you troglodyte.
Get a clue.

There won't even be 45W mobile Raven Ridges either, only up to 30W, because the 45W nonsense is paired with a discrete GPU and goes into Alienware-tier junk.

Raven Ridge is not Zen2, retard

Sorry, I meant Raven Ridge

>Cannon Lake
>Coffee Lake
>Ice Lake

What are all these Lakes and which one is which and which one even fucking exists? What the hell Intel?

Your premise is completely wrong regardless, what determines Raven Ridge's success is how it does at 15W.

And it will do well.

>No multi-core advantage (like desktop Ryzen vs Desktop Intel)
You realize most of the Intel mobile crap are overpriced dual cores, right?
> Even lower clocks due to mobile limitations (rumors stating a max of 3.3GHz on boost)
That is pretty damn good for mobile.
> Extremely cut-down Vega graphics probably won't even beat Intel Iris Plus
Intel Iris Plus is their absolute top-end shit that goes into $400+ laptop CPUs.

*lake - designed by israel team
*well - designed by USA team

I hope for a 4-6W fanless SKU personally.

You do realize that single-core performance is very close to Kaby Lake with much better efficiency, right? Those 4C/8T Ryzens will be competing with 4C/8T mobile i7's, which are expensive as fuck. Vega could have good efficiency at lower voltages/clocks for all we know. AMD loves to overvolt the shit out of their chips.

Raven Ridge will destroy Intel CPU's in performance and power.

Zen's 14nm is literally designed to undervolt and run on lower power, that's why it has trouble getting above 4GHz.

Don't lecture me, I know that. AMD's job is getting uncore power draw down, because uncore alone is pulling 20W on a Ryzen 7 and around 100W on an EPYC.

Well RR will have LESS uncore.
Zeppelin has a lot of it.

>abortion of epyc proportions
simply epic

>uncore
The what now?

Then that means the cores are just ultra efficient. Making an efficient uncore is easier (it's just a bunch of SerDes) than making efficient cores; AMD is on the right track.

Sup Forums - technology

>You realize most of the Intel mobile crap are overpriced dual cores, right?
Most of it? Yes. But not all. Most workstation laptops have quad-core i7's.
> That is pretty damn good for mobile.
True, but nowadays Intel mobile i7's are getting 4GHz, Raven Ridge can't compete against that, not without the core advantage.
> Intel Iris Plus is their absolute top-end shit that goes into $400+ laptop CPUs.
Yeah, but Intel HD 620 for instance is still a pretty decent and popular iGPU, Raven Ridge will probably be close to that on graphics.

The problem with RR is graphics. Ryzen was a success, but Vega was a massive failure. I bet my ass it won't scale down as well as those AMD slides suggest, and that's the reason we still haven't seen any preview from AMD.

My bet is: It will beat dual-core i7's by a little margin, having only 5% better graphics.

Pretty good for the average consumer, but not enough for workstations.

>marketing buzzwords are technology

Jargon you don't understand is not a buzzword, you stupid monkey.

You can literally slap a 1700 in ebin werkstation laptops.
And no Vega is still ages ahead of Intel iGPUs, despite being a bottleneckfest.

What? Even underclocked GCN2 is a match for whatever Intel has; Intel only came out on top when it used more silicon for the graphics than the entire AMD die, plus the advantage of 14nm and actually good cores.
The CPU side will be close, but the GPU will absolutely demolish Intel.

>Vega is still ages ahead of Intel iGPUs, despite being a bottleneckfest
[citation required]

>True, but nowadays Intel mobile i7's are getting 4GHz
Link one that doesn't throttle.

Uncore is LITERALLY something that Intel uses. In reality it's a northbridge inside the CPU (SoC design).

>GPU will absolutely demolish Intel
Vega is very problematic tho, without HBM it may even be a downgrade on performance.

I see you're absolutely clueless about this subject.

>very problematic
Why.
It's 700 something ALUs.

>Link one that doesn't throttle.
So what? Throttling isn't always bad, short boosts are very effective. AMD does literally the same with XFR.
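The "short boosts" point is the race-to-idle idea: sprinting at high power and then idling can cost less total energy than crawling along the whole time. A toy sketch with invented numbers (not measurements of any real chip):

```python
# Race-to-idle sketch: same amount of work done two ways.
# All power/time figures below are made up for illustration.
boost_power, boost_time = 25.0, 2.0    # W, s: sprint at boost clocks, then idle
idle_power, idle_time = 2.0, 6.0       # W, s: idle for the rest of the window
steady_power, steady_time = 10.0, 8.0  # W, s: same work done slowly at low clocks

# Energy = power x time, summed over each phase.
burst_energy = boost_power * boost_time + idle_power * idle_time
steady_energy = steady_power * steady_time

print(f"burst: {burst_energy:.0f} J, steady: {steady_energy:.0f} J")
```

With these numbers the burst strategy finishes at 62 J versus 80 J for the steady run, which is why short boosts followed by throttling aren't automatically a loss.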

Vega64 is literally slower than Fury X on the same clock speeds.

What clock speeds do you expect to get on RR GPU? Now remove HBM2.

>moving goalposts: the post

LMAO

Nope. Are you implying they will throttle down to speeds lower than 3.3GHz?

Also, are you implying Raven Ridge won't throttle as well?

So is Pascal compared to maxwell, doesn't stop it from going into mobile.

Clocks don't matter, efficiency is all that matters in mobile.
And efficiency is usually gained from low clocks.
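The low-clocks-for-efficiency claim follows from the classic dynamic-power approximation P ≈ C·V²·f: since voltage has to rise with frequency, power falls much faster than clock speed does. A toy sketch, with invented operating points rather than vendor data:

```python
# Dynamic-power approximation: P = C * V^2 * f.
# Capacitance, voltages, and clocks below are illustrative, not real specs.

def dynamic_power(capacitance, voltage, freq_ghz):
    """Classic dynamic-power estimate: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * freq_ghz

# Hypothetical points: 4.0 GHz needs 1.35 V, 3.0 GHz gets by on 1.05 V.
high = dynamic_power(10.0, 1.35, 4.0)
low = dynamic_power(10.0, 1.05, 3.0)

print(f"clock drop: {1 - 3.0 / 4.0:.0%}")
print(f"power drop: {1 - low / high:.0%}")
```

Dropping 25% of the clock cuts over half the dynamic power here, which is the whole game in mobile.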

> Ryzen at 5GHz
That kills Intel.

*ganges - designed by India team

...

> Vega
> Efficiency

Ryzen is very efficient, but Vega? Hell no.

Based IBM

You do understand that Vega10 is 4k ALUs?

The 7700HQ's base clock is 2.8GHz for a reason.

I'm saying that your claim of 4GHz is bullshit. I'd be surprised if any laptop can get even 3.5GHz out of Kaby Lake without throttling down after a few seconds.

Efficiency isn't a linear scale. We don't know how it will perform with low voltages and clocks and less CU's.

So you are retarded. They're not sticking a 4096-shader part with 1700MHz clocks into a fucking 15W APU, you dumb shitface; it's an 11CU part with sub-1000MHz clocks. For fuck's sake, there's already a 150W Vega Nano in the works and that's still 4096 shaders.


Get a clue, stop embarrassing yourself.

Why doesn't Intel just drop the 10nm+ bullshit and use 7nm as well? Everyone else will be using it (Apple, Samsung, Qualcomm, AMD, etc).

>why doesn't intel just drop their core business

>why doesn't Intel use someone else's fabs for its Core arch

Gee I wonder.

What makes you think a cut-down Vega will be more efficient than Intel Graphics? There's literally no source on that.

The only thing we know is that Vega was a disaster on desktop. With that, it's possible to assume that Vega will underperform on mobile as well.

> inb4 just wait

> AMD Ryzen is IBM Ryzen

Because Intel graphics are fucking garbage and there's no Nvidia there.
And because every other GCN version undervolted fine and ran efficiently at those power targets.

As said, there's already a 150W Nano in the works that will be some 10-15% slower than a 300W Vega, it's not rocket science, voltage/power curve is not linear.
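Taking the rumored Nano numbers at face value (they're unconfirmed thread rumors, not measurements), the non-linearity is easy to see in perf-per-watt terms:

```python
# Sanity-checking the rumor above: a 150 W part at roughly 87% of a
# 300 W part's performance ("10-15% slower") nearly doubles perf/W.
# All figures are the thread's rumors, not benchmark data.
full_power, full_perf = 300.0, 1.00
nano_power, nano_perf = 150.0, 0.87

full_eff = full_perf / full_power   # performance per watt, full card
nano_eff = nano_perf / nano_power   # performance per watt, Nano

print(f"perf/W ratio (Nano vs full): {nano_eff / full_eff:.2f}x")
```

Halving the power budget while keeping ~87% of the performance works out to about 1.74x the efficiency, which is exactly what a non-linear voltage/power curve buys you.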

>I don't know that Intel literally competes with other fabs BECAUSE THEY HAVE THEIR OWN FABS
It's literally how they've been ahead all this time, you fucking moron. Shut the fuck up if you don't know anything.

Intel worked really hard to get into this situation in the last several years.

Let me make this simple for you.

Even if AMD used GCN1 in Raven Ridge, it would still have better graphics than Intel.
Intel GPUs are such pieces of shit that they have trouble running Diablo 2 without stutters (an obvious lack of the legacy 3D driver hacks from the GMA days).

>possible to assume that Vega will underperform on mobile as well.
>possible
>assume

Even doing that it will still be better than intel dogshit integrated graphics

Maybe D2 had unoptimized code. HD530 can run D3 in max settings at 1080p 60fps.

>the last real good Intel CPU was Broadwell and Haswell

Hmmm? Really makes you think

I bet Icewell would shit on AMD so bad the courts would hit them with anti-trust and anti-monopoly shit, but gotta give shekels to Israel.

Broadwell was a fucking disaster, Haswell was great.

It's not Icewell, it's Icelake, meaning it's another Israel design.

Intelfags sure have some wild dreams, too bad they're usually fueled by their lack of knowledge.

If current Intel can't solve the huge-die issues and the thermal problems, then a theoretical Intel wouldn't be able to either. It's not that Intel is bad, it's that Ryzen is too good and they had to rush into territory they hadn't explored.

reading comprehension

delet.

Actually it wasn't; 14nm was a disaster, and everything from Skylake onward has been a disaster too.

After 14nm's lateness delayed Broadwell a few times, it was found to have BETTER performance than Skylake and was shelved for desktop.
Compare Broadwell-E to Skylake-X performance: Broadwell-E kills it, every time.

Intel is actually getting WORSE over time.

It seems like Raven Ridge will be using either Vega 10 or Vega 8 graphics, so 11CU and 8CU (what the previous iGPU used).

Also it seems Raven Ridge isn't a 1XXX part but a 2XXX, denoting a new generation. When it launches it'll be closely followed by Pinnacle Ridge.

Intel made a tradeoff with Skylake-X: mesh (well, a mesh of ring buses) over ring bus for higher 16+ core throughput, and L2 over L3 for AVX512 and large-data performance.

Skylake-X isn't worse, it's just different, and that difference negatively affects gaming performance, as gaming likes low-latency cores, memory and a shared L3 cache.

>tick
>tock
>just

Zen is no different, it trades latency and shared caches for modularity as workloads that actually need 20+ cores and are latency sensitive are rare, latency isn't scalable like throughput.

Personally I'm glad both Intel and AMD give gamers the middle finger. Why the fuck do they think these companies should center their architectures around their shitty needs?

It's worse in every metric, every benchmark...
It's even more expensive...

Skylake-X is shit for everyone.

Zen is still the future though; you can't go much further with clock speed on a few cores. Intel held back technology because of the easier production, which led developers to code for single-core performance. When 8+ cores become standard they will learn to optimize games for multicore, and then you don't need those things you mentioned.
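The multicore-optimization argument is basically Amdahl's law: speedup is capped by the serial fraction of the work, so engines only benefit from 8+ cores once most of the frame work is actually parallel. The parallel fractions below are illustrative, not measurements of any real engine:

```python
# Amdahl's law: S = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the workload and n the core count. Fractions are made up.

def speedup(parallel_fraction, cores):
    """Amdahl's-law speedup for a given parallel fraction and core count."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: 8 cores -> {speedup(p, 8):.2f}x")
```

A half-serial engine barely breaks 1.8x on 8 cores, while a 99%-parallel one approaches 7.5x, which is why "lazy devs" versus "properly threaded" is the whole argument above.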

SOPA

It's not, stop shitposting.

>22c vs 28c
Wow more cores for a parallel workload = more score, who fucking knew!

You're conveniently ignoring that these workloads have been tailored to Intel's previous cache hierarchy of 256KB L2 and 2MB L3 per core for the last 10 years. Both Skylake-X and Zen need optimizations and they'll get them; the gap will only widen, and it's already ahead at this early stage.

You're not smart, just stop.

Also I should point out that
53,052 / 22 = 2411.45 average per core score
60,693 / 28 = 2167.61 average per core score

Both CPUs have a 2.8GHz max all-core turbo.
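The per-core division in the post above checks out (the raw scores are the thread's own numbers; their benchmark source is unstated):

```python
# Reproducing the post's per-core arithmetic. Scores come from the
# thread itself; which benchmark produced them is not stated.
score_a, cores_a = 53052, 22
score_b, cores_b = 60693, 28

per_core_a = score_a / cores_a
per_core_b = score_b / cores_b

print(f"{per_core_a:.2f} average per-core score")  # 2411.45
print(f"{per_core_b:.2f} average per-core score")  # 2167.61
```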

The scores are worse, period.
>bbut it might be better some time in the indeterminate future!!!
I think AMD has a GPU to sell you...

Scaling isn't linear, you idiot. IPC is a dickwaving metric on forums; it only comes into play after efficiency, features, memory and performance. If it's faster at the same power, you achieved your goal.

And it's a fact that server workloads are tailored to uarches. Don't be retarded, this isn't the gaymen market where we still use MMX and SSE1.

>it's a fact that server workloads are tailored to uarches
in HPC maybe... clearly you've never worked enterprise.
Moreover, considering that statement, Skylake-X will be handily defeated by Epyc anyway, so it's still a shit arch.

Intel didn't expect Zen to be that good, that's their oversight, they were trying to outpace Broadwell-EP, and they'll suffer for it.

To be fair, AMD purposely sandbagged; if the IPC was 15% lower (initial estimates), Intel wouldn't have much to worry about.

I hope this buries them

>Personally I'm glad both Intel and AMD give gamers the middle finger
They're giving lazy devs the finger. A properly threaded game doesn't have significant issues with multithreading. Oh noes they have to think about engine architecture now

It takes two to tango. Enterprise can pay for good coders who can get around the mutex horror shows of C++; game studios don't care, so they're left waiting for better tools like C++17 and C++20.
Rust has a very good multithreaded design, but it's still too young.

underrated post

WAAAAAAAAAAAHHHHHHHH
INTEL ON SUICIDE WATCH
KRZANICH LITERALLY JUST JUMPED OUT OF A WINDOW
and he blamed trump on the way down, but we all know the truth here.

AMADA is at comfymus-maximus levels

hehe good old reliable wccftech setting the record straight as usual

this
fucking devs had issues with jaguar because MUH SERVAL PROCESSORS
then they had issues with Cell because MUH 8 CORES
now they have issues with zen because MUH NO 10 GHZ PENTIUM-X
learn to code monkeys or back to the slums

oh shit, I really should read a tad more slowly.
he just jumped out of some stupid council window, it was probably on the first floor.