Ryzen 8C/16T F4 3.6/4.0 Engineering Sample Over 95W At The Moment. 6C/12T Just About 65W

The title is self-explanatory.

Other urls found in this thread:

forums.anandtech.com/threads/new-zen-microarchitecture-details.2465645/page-158#post-38675220
semiaccurate.com/forums/showpost.php?p=279511&postcount=5188
sweclockers.com/forum/post/16600771
forums.anandtech.com/threads/zen-es-benchmark-from-french-hardware-magazine.2495505/page-25#post-38657740
anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/9

French here: "but they manage to pull off their 6C/12T in (barely) 65 W, which is in any case very respectable".

Duck pc hardware?
Forewing stabilizer pc hardware?
I don't get this humor.

It's not meant to elicit anything in particular. Most likely pairing two unrelated elements for comedic effect, with the added benefit of being easy to remember.

The French love their dadaism.

Nah, it's because of "Canard WC", which is a toilet cleaner.
Just a bad wordplay, which the French are even fonder of.

>I don't get this humor.

t. Mohamed Lahouaiej Bouhlel

This, and "canard" is also (somewhat old-fashioned) slang for a newspaper.
Regardless, CPC Hardware were also the ones that did the first Ryzen tests.

thanks for clarifying

forums.anandtech.com/threads/new-zen-microarchitecture-details.2465645/page-158#post-38675220

F4 OPN (B1 stepping) chips are still 95W TDP. CanardPC was saying that their A0 ES hit slightly over 95W at 3.6GHz/4GHz overclocked. They're saying how impressive it is that the chips reach those clocks at that power level.
Google Translate is just confusing people.

There are non SMT enabled chips coming as well.
Plenty of SKUs available at launch.

This is absolutely incredible. Considering selling my 6600K, but I need to see actual FPS numbers. I play at 144Hz, which is always CPU/memory frequency bound.

Can you use ddr4 4266 with the enthusiast chipsets? Shit gave me a 25fps boost in my main game

Unknown as of yet. One vendor said that DDR4-4000 works. All of MSI's boards say 2667MHz+, and they said that 3200MHz works with OC.

B1 stepping sounds like a base layer respin too. That could easily reduce power loss.

ANOTHER


MASSIVE


DISAPPOINTMENT

>Considering selling my 6600k


that would be a really dumb idea

Why?

AMD

>he thinks zen is for gaymen fags
just buy an apu and stfu faggot. the gpu market is such dogshit gaymen is the last thing anyone cares about currently.

Ohhh...

>même

What did they meme by this?

Hey.. Hey.
Is this thing actually more power efficient than Core?

It better fucking be because AMD does have a lot of good engineers and Intel has been sitting on its ass for 5 years.

It could be, depending on how you look at it. Efficiency isn't a flat metric, and due to various process characteristics, not all clock scaling takes place on the same curve.

In certain instructions at, say, 3.5GHz, Zen might equal or exceed Broadwell in efficiency. It might lose out when comparing other instructions, though.
It might be an out-of-the-park home run below 3GHz, depending on the scaling curve, too.
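
To put numbers on the "not on the same curve" point: dynamic power goes roughly as P ~ C * V^2 * f, and voltage has to climb with clocks. Minimal Python sketch below; the voltage/frequency pairs are made up for illustration, not measured Zen or Broadwell values.

[code]
# Rough illustration of why perf/W depends on where you sit on the V/f curve.
# Dynamic power ~ C * V^2 * f. The V/f points are hypothetical, not real chip data.

def dynamic_power(freq_ghz, volts, c=10.0):
    """Relative dynamic power for an arbitrary capacitance constant."""
    return c * volts ** 2 * freq_ghz

vf_points = [(3.0, 0.90), (3.5, 1.05), (4.0, 1.25)]  # made-up voltage/frequency pairs

for f, v in vf_points:
    p = dynamic_power(f, v)
    print(f"{f:.1f}GHz @ {v:.2f}V -> relative power {p:5.1f}, perf/W {f / p:.3f}")
[/code]

Perf/W at 3GHz and at 4GHz can differ a lot even on the same silicon, so "more efficient than Core" really depends on where each part sits on its own curve.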

Not if he wants to do something besides gayming with his machine.
Quad-cores are truly the cancer of our age.

We'll see when the 22+ core server chips show up; that's where you measure efficiency, and where it actually matters.

Kek, I knew it was a housefire, 8 cores at 95W made no sense at all, it's gonna be at least 130W realistically

Try again. See:

>snail eater
go home everyone, party's over

>believing AMD's lies

>trusting a snail eater sub human more than AMD

Seems you're too caught up in your own to see reason.

Now give me a good consumer mobo with consumer features that can support 2 of those 16T processors.

Many thanks, ami.

semiaccurate.com/forums/showpost.php?p=279511&postcount=5188
Text for those who don't want to give S|A a page hit:

From the SweClockers forum we have a hint that the IPC jump over the XV core is around ~55% (latest Zen ES).
Since XV is around 15% faster than PD as per Anandtech's Carrizo generational comparison article, it follows that Zen should be around 1.8x faster than PD, core vs core (no SMT). 1C/2T should be around 31% faster than 1M/2T, assuming the SMT gain on Zen is 25% and the CMT penalty on PD is 15%.

Since Skylake is around 60% faster than the XV core at the same clock, Zen should be within ~5% or so of Skylake, IPC-wise. Also it should be roughly on Broadwell's level or slightly below, since Broadwell is ~3% slower IPC-wise than Skylake.

Interesting year ahead.

Relevant links:
sweclockers.com/forum/post/16600771
forums.anandtech.com/threads/zen-es-benchmark-from-french-hardware-magazine.2495505/page-25#post-38657740
anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/9
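
Quick and dirty sanity check of the quoted chain, taking every number at face value (55% over XV, XV 15% over PD, 25% SMT gain, 15% CMT penalty, Skylake 60% over XV; all of these are the post's own estimates, nothing confirmed):

[code]
# Sanity check of the IPC chain quoted above; every input is the post's own estimate.
zen_over_xv = 1.55       # ~55% IPC over Excavator (latest Zen ES claim)
xv_over_pd  = 1.15       # Excavator ~15% over Piledriver (Carrizo article)
smt_gain    = 1.25       # assumed SMT uplift on Zen
cmt_scaling = 2 * 0.85   # 1M/2T throughput vs one PD core, with a ~15% CMT penalty
sky_over_xv = 1.60       # Skylake ~60% over Excavator at the same clock

zen_over_pd = zen_over_xv * xv_over_pd
print(f"Zen core vs PD core:   {zen_over_pd:.2f}x")          # ~1.78x, i.e. the quoted ~1.8x

zen_1c2t_vs_pd_1m2t = zen_over_pd * smt_gain / cmt_scaling
print(f"Zen 1C/2T vs PD 1M/2T: {zen_1c2t_vs_pd_1m2t:.2f}x")  # ~1.31x, i.e. ~31% faster

print(f"Zen vs Skylake, IPC:   {zen_over_xv / sky_over_xv:.2f}x")  # ~0.97x, within ~5%
[/code]

The math checks out internally; whether the inputs are right is another question.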

Zen+ only needs to bring about a 10% IPC uplift and marginally higher clocks at equal power, and AMD will be neck-and-neck with Intel.
The Shit Wrecker did it. It's unprecedented for a brand-new arch to have such a huge performance uplift in its first iteration.

Never doubt the Shit Wrecker. He's certified.

Well, he made the base design for the Apple A-series processors, and those fuckers have been shitting on everything in sight for years.

Makes you wonder what he'll achieve at Tesla.

One can only speculate. As far as I know, Tesla has no intention of designing their own ASICs, so I'm not exactly sure what Keller would be doing.

I suppose it's possible that they'll try to move away from off-the-shelf hardware to create a more specialized integrated system, but the financial investment would be immense.

>dadaism

I just got flashbacks from my debater roommate and his marxist textbooks and tendencies.

fucking hell

An MSI booth dude told some YouTuber (I think it was Paul's Hardware) that the boards were 4000MHz OC (can't remember what the chip was, but IIRC it was a 2666MHz one).

I don't think those OC memory ratings on the mobos ever presented a problem. I've never had a kit that reached the rated OC limit, but I've also never heard of anyone not being able to reach the stated upper thresholds; on premium boards that would definitely become newsworthy...

If not, then I'm fucked.
I got a 3600MHz Corsair LPX kit following a user's advice to get memory sooner rather than later on account of price hikes.

>non SMT enabled chips coming
just_jew_my_shit_up.gif

>AMD besting anything
listen to yourself

>It's unprecedented for a brand-new arch to have such a huge performance uplift in its first iteration.

no it isn't, he's done it twice before.

The title is shit... What is this thread about?

so it's basically shit?

They already fucked up by not releasing it after CES along with Vega. I was waiting for Zen to build a new PC, but I can't wait anymore. Already ordered Skylake and a GTX.

It literally says that zen is meme.

Good goy.

nice try poojeet
you're late on the market as usual

even if it's better, they fucked up by not releasing it after ces.

But I'm a slavshit. Neck yourself, newshit.

You want another Phenom with a bug in the final silicon?

I'd do the same, but I have 4GHz modules I hope come in time.

I want to upgrade. AMD, apparently, doesn't want my money.

Buy Intel then. Who gives a shit.

>you will never be a demi-god of computing architecture and engineering

I feel like a fucking loser when I see his accomplishments.

Same with Raja. AMD has quality engineers. They just need a half-decent marketing team.

That reminds me of when Nvidia's top marketing man almost left Nvidia for AMD
but was pulled back IN, and not a peep from him ever again. Leather jacket man dealt with him appropriately, in Chinese mafia tradition.

>so it's basically shit?
With this high of a jump in performance, you'd literally have to be an Intel shill to not congratulate AMD on their new design.

They closed the gap, in one generation, from miles apart to meters.

How far is Skylake from Broadwell? 7-9%?
Zen is 5% from Skylake.
So basically it's slightly faster than Broadwell but at the same time slightly slower single-core than Skylake, making it the best 8-core at the moment?

how is this bad?

>Who gives a shit
amd

but it's not faster and not as power efficient.

Skymeme is like 2% better compared to Haswell. It's Haswell that was better than Ivy by like 12%.

They don't, you're a single consumer.

all the facts are against your theory though

>implying
amd fucked up again, just accept it.

It may be faster, and it's already as power efficient as the current Intel lineup.

>facts
what facts, it's all just theory and rumours from controlled benchmarks.

Canard PC is the Charlie Hebdo of hardware.

(you).

>controlled
Meme.

Expecting it to be faster is absurd. Intel's had 8 years to refine and hone their architecture to an ultrafine point.

It is very, very close to being as power efficient, if not as power efficient: 8 cores at 95W at 3.6GHz/4.0GHz, when they were expecting 10W per core at 3.15GHz and 16W per core at 3.5GHz.

Instead, they're at ~12W per core at 3.6GHz, and clocks are only pushing higher the closer they get to release.
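
Just dividing the post's own numbers, nothing that isn't already quoted above:

[code]
# Per-core figures implied by the numbers in the post above.
tdp_watts = 95
cores = 8
print(f"{tdp_watts}W / {cores} cores = {tdp_watts / cores:.1f}W per core")  # ~11.9W at 3.6GHz

# The earlier per-core expectations quoted above, scaled to 8 cores for comparison:
for ghz, w_per_core in [(3.15, 10), (3.5, 16)]:
    print(f"expected {w_per_core}W/core at {ghz}GHz -> {w_per_core * cores}W for 8 cores")
[/code]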

>Can you use ddr4 4266 with the enthusiast chipsets? Shit gave me a 25fps boost in my main game
Bullshit.

Depends on the game.

Nope.

Very, very fast DDR4 can give you much lower latencies overall, which can give your CPU a small to decent performance increase.

Also, faster access to anything cached in system memory vs VRAM is always a performance boost.

It won't give you 25 FPS in any kind of video game.
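
On the latency point: the actual first-word latency is CAS cycles divided by the memory clock (half the data rate). The kits below are typical examples picked to show the scale of the difference, not anyone's actual setup.

[code]
# True CAS latency in nanoseconds = CL cycles / memory clock (data rate / 2).
# The kits listed are typical illustrative examples, not real configs from this thread.
def cas_ns(data_rate_mts, cl):
    mem_clock_mhz = data_rate_mts / 2
    return cl / mem_clock_mhz * 1000  # nanoseconds

for rate, cl in [(2400, 15), (3200, 16), (4266, 19)]:
    print(f"DDR4-{rate} CL{cl}: {cas_ns(rate, cl):.1f} ns")
[/code]

So going from a JEDEC 2400 kit to a 4266 kit buys you a few nanoseconds of latency plus a lot of bandwidth: real, but not a blanket 25 FPS.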

Still falling for the MORE CORES meme like in 2011. Some things never change

But Intel is a moar cores company now.

ARMA

Who the fuck still plays Arma?

Only on really high-end chips used for workstations. The consumer products are limited to 4C/8T.

Not for long. Coffee Lake is going for moar cores to be competitive.

Source.

Since Sandy Bridge, all Intel has done is sit on their dick and stop soldering CPUs.

AMD, in one fell swoop, apparently not only caught up IPC-wise, but also surpassed them in efficiency.

It may not clock as high as an i7, but to the enterprise that doesn't matter; what matters is power draw, and considering AMD looks to have slightly less power use than a 6900K while matching it... they got that.

Intel is about to be caught with their pants down jerking it into the TIM vat.

1) Don't compare yourself to the Shit Wrecker, we all come up wanting.

2) There is a difference between Keller and Raja.
Keller gets shit done, while Raja has the most fantastic ideas on paper you've ever seen, but the actual product is left wanting.

Keller put together a team who can help each other and improve; Raja has a team that doesn't think about drivers or software implementation.

I mean, GPUs are probably harder than CPUs to make, and make well. I mean, we have a standard called DirectX, but apparently every single fucking game needs to have work done on the GPU side to fucking work? The CPU side sure as fuck doesn't work this way. But still... seriously, even if Vega is amazing, the showing they're giving currently is not what we expected, not to mention how pissed we were when the countdown was to fucking five 3-minute YouTube videos.

In all honesty AMD should have stuck with Mantle and not fucking given it to Khronos, followed through with the beta test on their hardware, and then opened it up to everyone like was the original goal. At least more than one fucking game used Mantle, and every game that used it saw a performance uptick over DX11, not like this DX12 shit where it goes backwards, or Vulkan where there is only one game that really implemented it.

For fuck's sake, they drop the ball so fucking hard and often you'd think it was soap, in prison, and they were the biggest faggot that ever lived.

From Haswell to Broadwell was 1% at minimum to 4% at maximum.

From Broadwell to Skylake was negative, as in they lost 2.5% IPC, to positive 4% at best; we leave out the Dolphin benchmark as it somehow has Intel generations gaining nearly 40% IPC.

Skylake is not some magical 'so much better than Broadwell' jump that everyone seems to think it is.
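
Compounding those per-generation ranges (the poster's own numbers, not anything official) shows how small the Haswell-to-Skylake gap ends up being:

[code]
# Compounding the per-generation IPC ranges quoted above (the poster's own numbers).
haswell_to_broadwell = (1.01, 1.04)   # +1% to +4%
broadwell_to_skylake = (0.975, 1.04)  # -2.5% to +4%

low  = haswell_to_broadwell[0] * broadwell_to_skylake[0]
high = haswell_to_broadwell[1] * broadwell_to_skylake[1]
print(f"Haswell -> Skylake IPC: {(low - 1) * 100:+.1f}% to {(high - 1) * 100:+.1f}%")
# roughly -1.5% to +8%, i.e. low single digits in the typical case
[/code]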

It already clocks higher than 8-core i7.

Yes and no.
96% of Intel's 6900Ks hit 4.2GHz across all cores.

Stock, yeah, it's already better, but that's not saying it can necessarily hit higher clocks.

It is promising though, I'll give it that.

The countdown was to the arch preview, and we got a preview. It was misleading tho, and AMD needs to gas its marketing team. Again. Raja is a good boy tho.

Fallout 4.
That game is retarded in how it uses the CPU for rendering shadows, and FPS is HEAVILY determined by RAM.

He is ideas; what he fucking needs is someone to sit there and ask him "will this make the GPU faster now?"

Look at fucking GCN and tell me that the way it was wasn't a massive mistake. The 7000 line was good, I'll give them that, but for fuck's sake, it's taken them 5 fucking years and 4 iterations to get the shit to not be its own fucking bottleneck, and even that's assuming Vega doesn't again bottleneck itself somewhere.


>consumer
>multi socket

Used to not be that uncommon.

Well, he's proven himself to be multi-talented. He played a large role in creating the 64-bit adaptation of Intel's x86 instruction set when he worked at AMD.

It was actually hilarious timing, because the Pentium 4 was such a massive piece of shit with negative gains while AMD was rolling out AMD64, which was better in every single way.

So yeah, he can probably pull magic out of his ass however he sees fit.

But the 6600 is quite good, senpai, and that's coming from an AMD fanboy.

CPUs don't require specific optimizations in the same way GPUs do, because they all use the x86 instruction set. But that doesn't mean they shouldn't get more attention.

For example, one of the most appealing things about Mantle, and subsequently Vulkan, is better utilization of cores. We've had dual-core chips for ages and quad cores almost as long. Yet DX and OpenGL stayed the course with 80% of the load on core 1, only haphazardly slapping on "multi-core support" the same way API revisions have basically just had feature support and specs slapped on since XP.

And a lot of the focus since the early days has been shifting as much load to the GPU as possible, because CPUs are so shit at anything beyond basic vertices.

AMD DID put Mantle out for free, but devs didn't bite because they didn't want to alienate Nvidia, who flat out said they would never support it. And Nvidia happened to own like 80%+ of the market at the time.

Having the OpenGL guys pick it up is the best thing that ever could have happened. The PS4/Pro uses GCN and can't use DX12, so we may see more console ports with Vulkan support.

And don't harp on DX12 too much. The examples we've seen were shoddy attempts at forcing DX12 support into DX11 titles very late into development. Which was possible before, but DX12 is completely new and needs to be considered early on.

Then he would be really dumb if he didn't buy an i7, if he uses his CPU for work.

that's what amdcucks believe

Are you referring to the Fury? Because that's the only time you could say GCN "bottlenecked itself."

And to Fiji's credit, had it not been the biggest possible die on an end-of-life node well past the point of being worth sinking more money and time into, a hardware revision probably could've fixed the pipeline saturation issues.

GCN as an architecture has stayed relevant for a good 5 years now without any major overhauls to the underlying framework. That's pretty badass, especially with how terrible general optimization has been over the past decade or so thanks to lazy fucking console devs.