I haven't been keeping up with Ryzen since I think it's been a sinking ship for the past few months, but I remember hearing that their CPU performed as well as a 6900k on reveal.

I also remember reading that AMD actually did something to the 6900k to make it run slower; something was only running at half efficiency, I believe, is what was said. Was it the memory channels? Or did I read it wrong and they were saying the AMD chip only has 2 memory channels and still did that well, or did AMD stack the deck in their favor again?


The first demo AMD did had a Ryzen CPU competing against an i7 6900k both clocked at 3ghz.
Their recent public demo had a Ryzen CPU clocked at 3.4ghz competing against an i7 6900k at stock settings, and Ryzen still won in performance.

Dual vs quad channel has marginal to no performance benefits in the majority of workloads.
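quick napkin math on why, if anyone cares (DDR4-2400 is just an assumed speed, not anything from the demos):

# theoretical peak DDR4 bandwidth = channels * 8 bytes per transfer * MT/s
def peak_bw_gbs(channels, mts=2400):
    return channels * 8 * mts * 1e6 / 1e9

print(peak_bw_gbs(2))  # dual channel: 38.4 GB/s
print(peak_bw_gbs(4))  # quad channel: 76.8 GB/s
# quad doubles the peak on paper, but most desktop workloads never come
# close to saturating even the dual channel figure, so the headroom sits idle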

Okay thanks.

I'm not super smart with this kind of stuff, just wanted to make sure.

The blender demo numbers check out for the 6900k. It was on a publicly available version of blender to boot. I believe AMD also provided the sample files for the render test in the notes with the presentation.

The thing that most people were calling AMD out on during the presentation was the streaming section, where they pitted it against a 6700k or something along those lines, and basically the intel was stuttering to fuck in dota. Whether that actually happens depends on who you ask. Some streamers I speak to report some minor stutters on their 6700ks when CPU encoding, others report none.

AMD said that Zen would be a 40% uplift in IPC over Excavator on average, and they've delivered. Right now engineering samples place themselves at around Broadwell-E levels. As reviewers get their hands on later engineering samples they're reporting higher clock rates to boot. A0 samples sit at around 3.15/3.4, and with A3 samples we're seeing 3.6/3.9 from some sites.

Zen is slated for a Q1 release, so anytime before or during March.

>dual vs quad channel has marginal to no performance benefits
Same with HBM2

It entirely depends on the workload. Some workloads, especially the synthetic ones typical of benchmarks, actually do benefit quite strongly from higher memory bandwidth. In fact, contrary to outdated belief, very recent games have shown performance gains when going from low end to high end dual channel DDR4, which is already a lot of bandwidth for a traditionally memory-agnostic use case in the first place.

Memory bandwidth only does something if it's being used. An 8 lane highway does nothing when there are 2 drivers on the road.

>I also remember reading that AMD actually did something to the 6900k to make it run slower
nope. it ran at stock settings. you got memed by a shill.

in fact, the Ryzen chip itself was clocked lower (3.2ghz IIRC) and had no boost enabled, yet still won in the shown benchmarks, proving its superior IPC in those scenarios at least. we know that the final product will clock even higher than that.

>Was it the memory channels?
since Ryzen only supports dual-channel memory and X99 supports quad-channel, it's likely that the 6900K would perform better in a memory-intensive workload if it happens to be equipped with quad-channel memory. but to the vast majority of desktop users this is meaningless.

>or did AMD stack the deck in their favor again?
it's possible that they showed us best case scenario workloads, and that in most real-world scenarios the 6900K will handily beat it.

it's also possible that Ryzen can truly duke it out with Intel's ultra-expensive HEDT stuff, but that the 4c/8t SKUs will fail to clock as high as Kaby Lake i5s and i7s, making it worse in the market segments that actually matter. in that case, we would likely have another Bulldozer/Sandy Bridge situation (with AMD having moar weaker cores for the same price), except this time Ryzen would be more than good enough to play games without throttling GPUs, unlike Vishera.

So Zen is basically a 2-person village using an 8-lane highway, in cpu form?

HBM2 has an immediate benefit in being much more power efficient than GDDR5(X) though.

this. and given that amd cherrypicks their benches and lies every time, you can be sure now that the ryzen with the goody-goody 3.6ghz and turbo will compete for real with the 6900k

As someone who has written erotic slashfic featuring Lisa Su and yaoi featuring prince Raja, even this made me laugh.

Why is it that mentioning AMD is the only way for anybody to give a shit about desktop power draw?

Because the 290X uses 50w more than a 780ti in gaming workloads.

PAJEET GO
A
J
E
E
T

G
O

Because kids still read shit articles from 2011.

Blender and Handbrake are hardly cherrypicked, you retarded little kid.
CanardPC's ES review showed exactly what AMD's own demos did. They explicitly stated it, Zen has near Broadwell IPC.

>hurr le 10$ a year power bill
do I really need to mention heat/cooling (and noise), PSU requirements and laptop viability?

it doesn't matter as much as perf but it's still something that should improve.

Like I don't get it, so what if my nvidia GPU uses 300w? So what if my AMD CPU uses 220w? Nothing's overheating, and power is 8 cents per kWh (1000 watts running for 1 hour)
I don't care if a desktop component has low power draw or high power draw, I want performance

What about pricing? If it's competing with a $1000 chip, certainly they won't price it to compete with shit like the 7700k, right?

Are there going to be more CPUs released with the one previewed, or is this it?

they're going to price it lower. design-wise Ryzen is much cheaper to fab than Intel's X99 stuff.

They're not competing with the Intel core line because it would get demolished core vs core; intel's core line is specifically the line with the highest IPC.
AMD is going after what AMD does best: multicore, multithreaded powerhouses. That's why they're going after the 6900K, because it's a multicore monster with normal single thread performance and an absurd pricetag.
If AMD was releasing a quad core chip focused on single threaded tasks for $300 they would not be able to do it.
That 6900k comparison is based on the fact that even a 6700k would beat it in single thread

>Are there going to be more CPU's released with the one previewed, or is this it?

Honestly if you think AMD are going to launch their best performing product in years, and only release one version of it, then I hate to be the one to say it, but you're retarded mate.

Pricing is yet to be established. But some anons ran the numbers, and if AMD priced Zen along the same lines as Bulldozer, so around $300, then they'd be making roughly 400% profit on each chip. The 8c/16t parts are no doubt going to be priced toward the higher end though, with quad cores likely taking the mainstream seat.
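for reference, the napkin math behind that 400% figure (the per-chip cost is an assumed number, nobody outside AMD knows the real one):

unit_cost = 60.0    # $ per chip, assumed
price = 300.0       # Bulldozer-style launch price

profit_pct = (price - unit_cost) / unit_cost * 100
print(profit_pct)   # 400.0, so the anons' math implies roughly a $60 cost per chip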

anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/9
Skylake is less than 3% higher IPC than Broadwell.
Kaby Lake is only 1% different from Skylake.

All that mainstream quad core i5s and i7s have going for them is higher stock clock speeds.
Stop talking out of your ass, Sup Forumsirgin.

i'm so confused by all these shills, i'm outta here.

I guess it doesn't matter if you get intel or ryzen as long as it just werks

I mean, i'm a 6700k user and i think ryzen is just fine. we need cpu wars. we don't even care about kaby lake, it was such a letdown that even the shills had to try to justify its pathetic upgrade with "oh but there are other features" and "oh but you unlock 4k netflix" etc

and all this face saving is doing my head in.

HEEEEEELP!~

>Quad core zen will fail
>Multicore Zen is worth it
>Multicore Zen will fail
>Quad core zen is worth it

So should I be regretting getting a 7600k?

Tbh I'm not but should I?

I never said that Intel was getting better, just that they are better, especially for IPC
I mean if the IPC was actually equal then zen should've performed twice as well as the 6900k in blender

>I mean if the IPC was actually equal then zen should've performed twice as well as the 6900k in blender

They have the exact same core and thread count, you retarded Sup Forumsirgin. Stop posting here.

Intel hasn't improved for almost 10 years and they still will beat zen outside of multicore

CanardPC's game tests only had 1 title that used more than 4 threads.
While being clocked about 10% lower, their 3.15ghz A0 ES was about 10% behind the i7 6900k. That is neck and neck per clock performance with Broadwell.

You can't spin this any other way.
Consumer chips will come clocked up to 3.6ghz/4ghz turbo. They'll have plenty of OC headroom. That is competitive performance.
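sanity check on the per-clock claim, if you want it (the 6900k's ~3.5ghz effective all-core clock is my assumption, not CanardPC's raw data):

zen_clock, intel_clock = 3.15, 3.5   # ghz; 3.5 assumed for the 6900k under load
zen_score, intel_score = 0.90, 1.00  # zen ~10% behind overall, per the tests

print(zen_score / zen_clock)      # ~0.286 performance per ghz
print(intel_score / intel_clock)  # ~0.286 performance per ghz
# near identical per-clock throughput, i.e. roughly Broadwell-E IPC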

>Consumer chips will come clocked up to 3.6ghz/4ghz turbo.
Could* come clocked up to 3.6/4Ghz.

While signs are promising, nothing is set in stone.

do any of the currently announced AM4 boards have enough power headroom for serious overclocking?

I haven't tried that on a build in a long time, but I think I might with Ryzen.

>Quad core Zen will fail
>Multicore Zen will fail
>Intel will continue rereleasing 2600K's until the end of time

Judge for yourself.

wtf who cares?

my board has rgb faggot

>do any of the currently announced AM4 boards have enough power headroom for serious overclocking?

There have been plenty of boards shown with seemingly decent VRM based on phase count, but we don't have any idea what kind of mosfets they're using per phase.

Expect $700 on the 8c/16t part and like $50 less than the nearest clocked i7 for the 4c/8t part. There will be a 6c/12t part too.
Maybe they'll drop a 4c part with hyperthreading disabled to compete with i5s but I wouldn't count on it until production has gotten far enough to start binning.

These are like the most optimistic I can be with prices. Expecting AMD to cut prices in half is retarded as it hinders their profit potential. They only need to undercut intel just a bit. There's enough fanboys and enthusiasts out there to swallow up stock for months like the 480.
Intel will retaliate with their first real, genuine, not for a limited time only, money back guarantee price cut since phenom ii.
Amd will try to keep some wiggle room but we'll have to wait to see how it turns out.

Judging from what's visible it would appear no

>these 16 phases don't look good for overclocking, guys
How do people like you even live day to day?

Zen arch is literally going to BTFO intel out of two small sections of the enterprise market and take sales from their mid range consumer offerings. Zen is demonstrating vastly superior price/perf for the coprocessor/GPU-accelerated HPC market and in any situation where a large number of virtual machines are hosted.

All Ryzen chips are unlocked and early versions are getting 3.6/4.0Ghz clocks with a confirmed IPC very close to broadwell at a lower TDP. The 4 core version will be set at an attractive price/perf point compared to the locked i5 and i7 offerings (in particular the i5-7400 and i5-7500) and we can expect at the very least similar or better performance at that price range. The one segment Ryzen's 4 core offerings may not win over is people who want gaming performance regardless of price; the 7600k will likely not have competition there. Interestingly though, the new unlocked dual core i3-7350k, which intel "competitively" priced at 170 USD, will probably be made irrelevant on Ryzen's release.

My GPU has more phases than your computer

Too bad ASRock is shit

>16+2 Power Phase
>4+4 CPU power
It's down to the mosfets to see what overclockers can get out of these things.

Everybody agrees that the unlocked i3 was priced retardedly, especially considering it was an attempt to spice up their unchanging CPU lineup

>4+4 CPU power
>Implying this has any implications at all
I don't get it

> u.2 port

you have my attention, GB

Can't wait to SLI that with the gpus that people want

Are people able to make reasonable guesses yet about where Ryzen could clock for all cores with water?

>I also remember reading that AMD actually did something to the 6900k to make it run slower

they actually gimped their own chip by locking it at 3.4ghz.

>I also remember reading that AMD actually did something to the 6900k to make it run slower

nope, they actually made the Zen CPU run slower; capped it at 3.4GHz while the 6900k was allowed to turbo freely.

No one apart from CanardPC has talked about it, mainly because they're one of the only organizations with a chip that isn't under NDA.

They got a single core to 5ghz on air. They also got all 8 cores to 4ghz. They could have gone higher but their test board's VRM was lacking.
Still, a 3.15ghz base clock engineering sample hitting 4ghz isn't bad.

I'd bet that consumer chips should hit 4.4 to 4.5ghz without power consumption being crazy. Should be able to hit higher if you can keep them cool enough.

We've got limited information in regards to overclocking on the engineering samples. The boards are, in theory at the very least, quite capable, but we don't know much about the actual silicon itself. So it's hard to say. 400MHz would be expected at the very least with these boards and any decent cooler. If the silicon is solid, well. We don't know.

No.
Don't believe anybody who says they can.
All we know is that AMD made a huge step forward in IPC and single core performance, assuredly enough to match intel's "current" gen

CFX on Vega would probably start feeling PCIe bandwidth limitations given Summit Ridge's 24 free lanes from the CPU.
(and I don't see a PLX switch anywhere nor a board layout suggestive that it might be under the chipset shroud)

And Nvidia appears to be letting SLI wither on the vine, so I'm not sure you could convince me to buy 2 1080s or higher.

kaby lake is 0% faster than skylake.
but they're better overclockers.

>AMD CPU can't even run AMD GPU
I'm gonna laugh when Radeon starts paying nvidia for gsync instead of the other way around
[spoiler]it'll be a dark day

Luckily PCIe 4.0 will let all current gpus be fully fed by a PCIe x4 slot

Blasphemy

>CFX on Vega would probably start feeling PCIe bandwidth limitations
First of all, bull fucking shit.
A GTX 1080 has virtually no performance loss between PCI-E 3.0 X16 and X8.
GPUs are still not even remotely close to being limited by PCI-E bandwidth. The only issue they have here is latency.

The X370 AM4 chipset provides lanes for an additional 2 X16 3.0 slots regardless.

no it's not.

Even at 4k, an 8x slot is enough.

How do I plug my 1080 into my M2 slot I need to test this

> AMD monomaniacs unironically believe that AMD won't shoot itself in the foot again

They haven't got their own foundries. Get over it. As long as Intel have their own foundries, they will be winning.

Do you have any idea how much a foundry costs? 100s of billions of dollars. You have to beg others to make your chips for you at a premium.

Intel will always have the upper hand in that status quo.

>x370 alone can run 4 GPUs on the performance level of GTX 1080s
>the CPU can also run 2 with its available lanes
>FUD posters are trying to act like this is an issue

I believe that a couple of reviewers already have Ryzen chips on hand. The way they've been talking about Ryzen rumors always gives me the impression that they know exactly what's going on.

But i'll contradict myself by saying that it wouldn't make much sense for them to have review samples 2 months away from launch, since I think that the most solid dates are falling towards the end of february.

Multi GPU is dead and buried forever.
Remember all the hype for better multi GPU scaling being a huge part of the low level APIs? To such an extent that that stupid AotS bench allowed using a fury and a 980ti together for over 200% scaling? And then in beta 2 the scaling was closer to 150%, and when the game came out it had no multi gpu support whatsoever and fucking still doesn't?

well if it turbos to 4.0ghz then I think it's a safe bet to say 500mhz on top of that, especially on water.

CFX pushes frames across the PCIe bus, unlike SLI with its aging but barely functional dedicated bridge.

Individual UHD frames are 25MB each (or more for HDR), so you'd definitely be choking most of the bandwidth on 3.0 x4 interfaces, and on x8 interfaces you could expect to see more hiccups/MegaTexturing fail blobs than otherwise, but this would need actual testing.
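rough numbers for that, since nobody ever posts them (60fps and the per-lane rate are my assumptions, and real traffic adds textures and buffers on top):

frame_bytes = 3840 * 2160 * 3            # 24-bit UHD frame, ~24.9 MB (more for HDR)
fps = 60
transfer_gbs = frame_bytes * fps / 1e9   # ~1.5 GB/s just shipping finished frames

pcie3_lane_gbs = 0.985                   # usable GB/s per PCIe 3.0 lane
for lanes in (4, 8, 16):
    share = transfer_gbs / (pcie3_lane_gbs * lanes)
    print(lanes, round(share * 100), "% of the link")
# x4 ~38%, x8 ~19%, x16 ~9% eaten by frame copies alone at 60fps;
# double those at 120fps, before any texture or shadow map traffic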

The most likely scenario is for Nvidia to start paying for Freesync 2 certification.

They're unlocked, so if you want your chip to be 3.6/4 you can have your chip be 3.6/4. And they're coming with Hyper 212 equivalent stock heatsinks.

how much will they cost?

> As long as Intel have their own Foundries, they will be winning.

they're already losing to Qualcomm

Am I looking at this right? 16 stage VRM?

the cpu is a concern because there is only so much you can do to cool it effectively, and at 220 watts, you almost can't do air at all.

but that's beside the point; the VRM's capability dictates roughly how much you could overclock, because if a motherboard is made to handle ~200 watts and the cpu at stock is only 95 watts, you have a very large margin of oc headroom (napkin math below).

On the gpu side it's almost a non issue
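back to the cpu side, you can ballpark that headroom with the usual dynamic power rule, P scaling with f*V^2 (the stock voltage and OC targets below are made-up examples, not leaked specs):

stock_w, stock_ghz, stock_v = 95, 3.6, 1.20   # assumed stock figures
oc_ghz, oc_v = 4.4, 1.40                      # hypothetical overclock

# dynamic power scales roughly with frequency * voltage squared
oc_w = stock_w * (oc_ghz / stock_ghz) * (oc_v / stock_v) ** 2
print(round(oc_w))   # ~158w, still well inside a board built for ~200w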

16+2

I thought freesync was free and open and nvidia were just too big of salty cunts to support it? Did they takesies backsies after it became clear manufacturers preferred not being price fucked by nvidia as much as their fans seem to?

>3.0 X8 performance: 100%
>3.0 X4 performance: 97%
>2.0 X8 performance: 98%

The GPU isn't in need of all the bandwidth provided by PCI-E 3.0 X8 if performance scaling ends there, and barely increases from an X4 slot.
There wouldn't be any issues.

A top end enthusiast tier GPU in 2017 still isn't anywhere near requiring an X16 slot.

Multi-GPU is inherently awful for modern renderer pipelines, which tend to rely heavily on previous frames for half their effects.

TXAA, screen space reflections, etc. all need the last frame (if not also G-buffers) to work, which just increases frame delivery latency since each GPU has to wait on huge framebuffer transfers. And then you have shadow maps, which are only feasible at all due to caching when at all possible, which is just another set of things that have to be pushed back and forth non-predictably.
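to put numbers on why those transfers hurt (frame size and usable link rates are assumptions, and temporal effects can need G-buffers on top of the finished frame):

frame_mb = 24.9                                     # one 24-bit UHD frame
link_gbs = {"x4": 3.94, "x8": 7.88, "x16": 15.75}   # usable PCIe 3.0 rates

for name, gbs in link_gbs.items():
    ms = frame_mb / 1000 / gbs * 1000
    print(name, round(ms, 2), "ms per frame copy")
# x4 ~6.3ms, x8 ~3.2ms, x16 ~1.6ms, all against a 16.7ms budget at 60fps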

I'm talking about CrossFire X only, as in AMD's multi-GPU solution.

Individual Nvidia card tests aren't relevant to this argument.

AMD doesn't have foundries because they spun them off. Your logic is stupid.

Freesync is open and free, Freesync 2 will need certification.

But do I get performance?
Will Intel's 5W Kaby Lake mobile chip have good OC headroom?
Does a 980ti not have any OC headroom?

>100's of billions.
Intel's newest fab was actually 19 billion dollars. Enough to put any car factory to shame, but let's keep the hyperbole a bit more tame.

That's fucking cold holy shit.

How long before vulkan and HBM require licenses?
That'd be amazing. AMD has somehow been driving the entire market towards all this stuff; if they turned around and put up a toll booth they could fuck over so many companies and make massive profits. I mean it would be an awful thing and fuck us all over as consumers but it'd be like wolf of wall street shit.

>Intel will always have the upper hand in that status quo.

Yeah, you keep saying that big boy, Everything's gonna be alright.

fudzilla.com/news/processors/42586-intel-is-covering-qualcomm-s-success

an i7 for 200$ or less is still a fail, or are you thinking that they will price everything a few dollars short of the intel equivalent rather than where the chip should be priced?

amd has been floating the 8 core 16 thread costing the same as an i7 for a while now, but no one will listen to them.

Hell, during ces they basically confirmed they are price matching an i7 with the 8 core 16 thread, but to get two stories out of it they went 'we could price it at 500$, and that would be a solid move but not affect most people, or we could price it at 330$, and change the landscape for good, but it's not final yet'

going on the ryzen event, where they showed the 6700 v ryzen v 6900, they apparently already know the price point and are fucking with us for more press.

If they put it out at 500$, they lose the entirety of the consumer market, because why not just get intel when you know intel will perform well, why go with amd? intel's 6 cores have quad channel memory and a track record of being good if you need it. an amd 8 core would be hard for normal people to notice, and hard to get anyone to buy if a reliable intel in a similar performance bracket was right there.

Then you have the 6 and 4 core skus. these are aimed at the i5 and i3, areas where people don't care about multicore. but if amd went 500 at 8 cores, that would be ~350 at 6 cores and ~250 at 4... there is a good chance that intel overclocks more, and the i5 segment either doesn't care about 6 cores or only cares about single thread. same with the i3, but that is even more budget, so you will have people go there just for cost regardless of what amd does.

8, 6 and 4 cores being 350, 250 and 150 respectively makes the most sense to shake everything up and call intel into question, not a 500 350 250 price point.

>And Nvidia appears to be letting SLI wither on the vine
I switched from 290X CF to 1080 SLI last year when Pascal came out; while support is still lacking and far from perfect, it's much better than CF. I don't know what/if AMD changed since then, but after a solid 2 years of utter disappointment regarding CF I wouldn't bet on it being all too amazing.

Also yeah, SLI at 4K at least does feel the PCIe bandwidth limit at x8/x8 and to a lesser extent at x16/x8 PCIe 3.0. Some games with certain settings enabled (temporal AA generally) are potentially restricted even at x16/x16 PCIe 3.0, though there's no way to test since that's the fastest we've got. PLX switches do help since the bandwidth constraint is between the graphics cards themselves and not on the CPU side. I've gotten a solid extra 15% out of TW3 at 4K on x16/x16 on a PLX Z97 mobo vs. x8/x8.

I'm pretty sure CF suffers similarly, especially since they dumped the dedicated bridge. I remember AMD recommending people to turn off AA in TW3 if they wanted CF to work decently.

I'm sure people could've seen it was an exaggeration, I mean there are only like 30 hundreds-of-billions of dollars in the world, Intel is not that rich

PCIe literally doesn't work like that

What?

More likely AMD will keep backwards compatibility with freesync and nobody will use Freesync 2 so it will die on the vine

Then after a decade of Freesync being dominant AMD will release an open Freesync 3 and Sup Forums newfriends will ask why they skipped a number

7700k is 349 usd.
I wish AMD would go for that price point for the 8c/16t Ryzen chip, but i think that 450 is a way more realistic base price, up to 600 for the ceiling.

They should go against intel's mainstream core line, but I think that they'll match more closely to the extreme line, price wise. In between both.

The person who talked and said anything at all likely doesn't know exactly what the high level finance and marketing group are considering, though. The 8 core going for 500 has been rumored around for quite a while, and would still undercut the 6-core HEDT from intel. Even better odds since we've got AMD people specifically mentioning the price and documents showing up with the number too. On a price/perf basis the Zen 8c should look acceptable in comparison to the 6900k considering it's literally half the price. I would personally wager the final pricetag being very close to 500 rather than matching an i7/4c price tag. Besides, in the 200usd range AMD should be outperforming the locked i5/i7 and can disrupt the market that way.

There's probably going to be more than one sku for each core configuration, $350 for the 3.4ghz 8core which is the one they've been showing and getting press on. Black edition 3.6+ for 450-550.

Then the 6 cores, why not something like $200-250 for low to high clocked six cores.

Coming in last, a 4c/8t or even 6c/6t at the low end, $130-170.

Just think how many SKUs intel has in those segments as well. They've established price points which oems are used to paying already, if amd hits them with superior performance heads will turn, finally bringing hedt performance to the mainstream tiers.

yeah, but the 6900k is a pipe dream even for hardcore enthusiasts, who will recommend or buy for themselves the 6850k at ~600-650.

I think their deca core pricing was Intel saying "lulz, whatevs, get it if you want it, faggit" and testing the waters. They could bring that price point down on a whim while still keeping very high profit margins.

That's why I believe AMD will aim to undercut the 6850k. So my bet is on a price approaching 550... 529 MSRP, because marketing voodoo perception stuffs.

No independent reviews have been done, nor has the price of the CPU been announced.

>No independent reviews have been done
Factually incorrect.

where do you live? it costs more than that per month to have a pc on that does not even have a gaming gpu.

There are 720 hours in a month.
1kWh is 1000 watts running for one hour, and in Idaho that costs 8 cents.
To get to $10/month you'd need to be running it at 175w 24/7, every second of the day. Unless you have a specific situation where you're using more power than that (at which point you should already know the power draw of what you're getting), nobody reaches that usage.
Even with a more normal 8 hours a day of 100% loaded PC usage you would still need to be drawing around 520w for all 8 hours
That just does not happen consistently
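the actual math, if anyone wants to plug in their own rate:

rate = 0.08   # $/kWh, Idaho

def monthly_cost(watts, hours_per_day=24, days=30):
    return watts / 1000 * hours_per_day * days * rate

print(monthly_cost(175))       # ~$10.08, running flat out 24/7
print(monthly_cost(520, 8))    # ~$9.98, 8 hours a day at full load
print(monthly_cost(300, 8))    # ~$5.76, a hungry gpu gamed on daily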