Ryzen 5 Releases April 11th
>they cheap'd out on the L3
No one cares tho. poFags already have their Pentiums and i3s
>200 bucks more for an expensive cpu
>hahaha you pay too much, says I the intel shill
>(native American ha) Hahahahaha no one wants your cheap cpu what are you too poor for intel hahahahaha
>Same TDP for fewer cores/same frequency
These really are just failed R7's huh.
I bet that a 1400/1500X will beat a 1600/1600X purely because instructions will stay within a single CCX (~20 cycles vs ~190 cycles for cross-CCX communication). In theory they should clock higher too, but the TDP and default clocks are confusing.
They're either hungry/hot trash that can't clock for shit, or those default numbers are very modest and just designed to keep more expensive chips looking attractive. The 1400 should be a 35-40W chip. Also why are the 4/8 chips not Ryzen 3's??
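For scale, here's a quick Python back-of-envelope taking those 20/190 cycle figures at face value, at an assumed 3.5GHz clock (both numbers are this thread's guesses, not measurements):

```python
# What the rumored 20 vs 190 cycle figures mean in wall-clock time.
# Cycle counts and clock are the thread's numbers, not measured values.
def cycles_to_ns(cycles, ghz):
    """Convert a cycle count to nanoseconds at a given core clock."""
    return cycles / ghz

base_ghz = 3.5  # assumed all-core clock for illustration
intra_ccx = cycles_to_ns(20, base_ghz)   # ~5.7 ns
cross_ccx = cycles_to_ns(190, base_ghz)  # ~54.3 ns

print(f"intra-CCX: {intra_ccx:.1f} ns, cross-CCX: {cross_ccx:.1f} ns")
print(f"cross-CCX penalty: {cross_ccx / intra_ccx:.1f}x")
```

If those leaks hold, that's roughly a 9.5x penalty every time a cache line hops CCXs, which is why a single-CCX quad could punch above its weight.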
How? The 6/12's have 33% more L3 per core, and the 4/8's have the same L3 per core as the 8/16's
cause it's going to be the 3.5 meme, except it's 2 and .5
>the extra .5 will never appear and is just for show unless you run those extreme tests
The fuck? L3 is shared between all cores, less coarz = moar L3 per core.
R3's are 4c/4t clocked quite low if leaks are accurate
nope, L3 is one big pool, it doesn't need to be optimized for, it's just there
unless these 6c chips will be fucked in some way compared to a full r7, but I doubt it; L3 is not as complicated as cores to manufacture
So it's a clean +4MB available to all cores.
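Putting numbers on the per-core claim above — a sketch using the leaked cache sizes, assuming L3 really is one shared pool per chip as claimed:

```python
# L3 per core across the rumored lineup. Cache sizes (MB) are the
# leaked figures from this thread, not confirmed specs.
lineup = {
    "8c/16t": (8, 16),  # (cores, total L3 in MB)
    "6c/12t": (6, 16),
    "4c/8t":  (4, 8),
}

for name, (cores, l3) in lineup.items():
    print(f"{name}: {l3 / cores:.2f} MB L3 per core")
```

16/6 ≈ 2.67 MB vs 16/8 = 2.00 MB, i.e. the 6/12's really do get ~33% more L3 per core, and the 4/8's land at the same 2 MB per core as the 8/16's.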
R3's are most likely APUs, and under 35W, which is backed up by r7 tests, which run at very low power under 3.3GHz
What are they gonna do without any motherboards?
What do you mean m8?
Most AM4 mobos are out of stock.
oh, a meme :^)
more like april 1st, amirite?
that's not how TDP works.
Please explain to me how TDP isn't relevant when the maximum thermal output of an 8 core chip is rated the same as a 4 core of roughly the same average all-core frequency? If the two CCX units shared resources that couldn't be disabled along with the cores then it would make sense, but the chip is quite literally halved.
TDP does not specify thermal output. It specifies the cooler. If the 1800x needs a 95W cooler and a 1600x needs a quarter less, then it needs a ~71W cooler. Because there are no 71W coolers, they specify the same 95W cooler.
Intel's 6, 8 and 10 core processors all have the same TDP for the same reason.
>TDP does not specify thermal output.
not that user, but stay retarded
en.wikipedia.org
>The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component
The Ryzen 6/12 chip should have a 3.9GHz base and 4.2GHz boost, and the Ryzen 4/8 a 4.2GHz base and 4.5GHz boost, but we know AMD won't deliver because of shitty 14nm LPP.
>shitty
Neck yourself. It's the perfect process for server chips.
I will probably be getting one of those 6 core ones.
>can't clock above 4.1Ghz to keep up with Broadwell-E
>perfect
>cockspeeds matter in serverfarms
Consider suicide.
>he thinks that servers use overclocked desktop i7s
>server chip
>muh Hertz
fuck off, kid.
Then AMD should neck themselves for releasing a server chip that was heavily marketed for gaming.
...
This. This pissed me off to no end. Doing that just damaged the brand for no reason.
I have zero brand loyalty and I'm not shopping for a new machine right now. But watching this from outside just made no damn sense.
>Wikipedia
go be tech illiterate somewhere else.
Pic related both have 140W TDP.
4c/8t chip
>Of course it performs like shit! Its a server design. What did you expect? Desktop performance?
Why are you linking power draw in a conversation about TDP?
>calls others tech illiterate
...
Of course TDP specifies thermal output, traditionally.
But it makes sense if AMD don't use the term in a traditional sense; if it really does just relate to which AMD heatsink model it needs. That seems silly to me and it would sell some chips short. Silly AMD.
...Power draw may correlate with TDP, but you can't conflate the two or make absolute assumptions based thereon. You literally just showed that yourself. How is it bait and how is the user's wikidefinition inaccurate?
>Silly AMD
Intel does exactly the same thing.
Silly everyone.*
What the fuck? VIA master race
power and heat are the same thing.
TDP gives the maximum average power (Intel and AMD use slightly different definitions here) a cooler has to be able to dissipate in order for the chip to function normally.
TDP is specified in categories because cooler manufacturers only make 35W, 65W and 95W coolers, so any CPU that consumes between 65W and 95W gets a 95W TDP sticker on the box.
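A minimal sketch of that rounding-up-to-a-cooler-class idea — the class list and the example wattages here are illustrative, not official figures:

```python
# Round a chip's expected sustained power up to the nearest standard
# cooler class, per the "TDP specifies the cooler" argument above.
# Class list and example wattages are illustrative assumptions.
COOLER_CLASSES = [35, 65, 95, 140]

def tdp_sticker(sustained_watts):
    """Smallest standard cooler class that covers the chip's power."""
    for cls in COOLER_CLASSES:
        if sustained_watts <= cls:
            return cls
    raise ValueError("needs a bigger cooler than any standard class")

print(tdp_sticker(71))  # a ~71W chip gets the 95W sticker
print(tdp_sticker(60))  # a ~60W chip gets the 65W sticker
```

Which would explain a hypothetical ~71W 6-core and a ~90W 8-core both shipping with "95W" on the box.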
>AYYMD
>aka 20% worse performance for the same price!
Yeah, not falling for bulldozer 2.0.
>power and heat are the same thing
no, go be retarded somewhere else
neo-Sup Forums everyone
Well to be fair, heat can be measured in watts.
As I said, power and heat are very closely related when it comes to CPUs, but cannot be conflated; factors such as physical density/size and general transistor technology will affect the power-to-heat conversion
TDP is a horribly non-standard measurement, but you're a fucking imbecile if you think that the electrical power consumed by a computing system becomes anything but heat in milliseconds (i.e., DRAM refresh cycle time) or less.
Heat is energy and heat rate/flux is power, and (heads up, protip incoming:) if you don't actually let the heat out of a processor, you're gonna have a bad time.
>1800x doing worse than a 7500
You have to be fucking braindead to even consider buying ryzen.
The argument wasn't that power doesn't translate to heat in processors; it was whether it translates 1:1 when it comes to dissipation requirements. You can't gauge dissipation requirements from power draw any more accurately than you can gauge the reverse.
Sure, a processor may generate 90W of heat from 90W of power, but how is it distributed? What are the physical volume, surface area and density of the heat source? What's underneath it? What material is between the die and the heatspreader?
That's why TDP has to exist, as gay as it is. But that's not an excuse to omit nominal power draw figures.
FURTHERMORE there should be a simple calculation to scale the TDP depending on climate. Who fucking knows what ambient temperature they're basing their TDP on. The fucking north pole? Africa? Cunts
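The climate complaint can actually be put in numbers: a cooler is characterized by a thermal resistance in °C per watt, so ambient temperature eats directly into dissipation headroom. A sketch with made-up but plausible values for the die limit and cooler resistance:

```python
# How much power a cooler can dissipate before the die hits its thermal
# limit. Tjmax and the thermal resistance are invented, plausible
# values for illustration -- not any vendor's spec.
def max_sustained_watts(r_theta, t_ambient, t_junction_max=75.0):
    """Steady state: heat flow = (T_die - T_ambient) / R_theta."""
    return (t_junction_max - t_ambient) / r_theta

r_theta = 0.5  # degrees C per watt, hypothetical tower cooler
print(max_sustained_watts(r_theta, 20.0))  # cool room: 110 W of headroom
print(max_sustained_watts(r_theta, 40.0))  # hot climate: only 70 W
```

Same cooler, 20°C hotter room, and you lose 40W of headroom — which is exactly why a single TDP number without a stated ambient is fuzzy.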
>wrestling manga
Source? I need this in my life. I have bike manga and fishing manga already.
Is the 1500 even worth getting or is the 1400/1400X just better?
The 1500X and 1400 are the same; you're just paying $20 more for (on average) better binning. I'd rather gamble and just get a 1400 personally.
>binning
what does this mean?
Why didn't they get a high-frequency 4-core?
Intel would be on suicide watch. WHY IS IT ONLY 65W
1 4 0 W A T T S
Isn't this a good thing?
DOESN'T THIS LITERALLY INDICATE THAT A 4C CPU IS A SINGLE CCX
It means some chips are able to reach higher clocks than others. They're tested in bulk, so the stock frequency is the absolute minimum; some chips will be able to go higher than the top Ryzen.
It's probably a good thing that bulldozer was such a power hungry housefire, because now AMD has better scaling than intel
Basically due to unavoidable flaws in manufacturing processes, some chips can't enable all cores, some chips need more or less voltage to run, reach higher or lower frequencies etc.
The entire Ryzen lineup is essentially an 1800X, but:
the 1700X was an 1800X that couldn't stably clock as high
the 1700 was an 1800X that couldn't stably clock as high as a 1700X
the 1600X was an 1800X that had 1-2 failed cores
the 1600 was an 1800X that had 1-2 failed cores and couldn't stably clock as high as a 1600X
the 1500X was an 1800X that had 3-4 failed cores
the 1400 was an 1800X that had 3-4 failed cores and couldn't stably clock as high as a 1500X
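That ladder, written out as a lookup. This is only the poster's model of the binning, with hypothetical clock cutoffs standing in for whatever AMD actually tests against:

```python
# SKU assignment for a Zeppelin die, per the thread's model above.
# The GHz cutoffs are invented placeholders, not AMD's real criteria.
def bin_die(working_cores, stable_ghz):
    """Map (healthy core count, max stable all-core clock) to a SKU."""
    if working_cores >= 8:
        if stable_ghz >= 4.0: return "1800X"
        if stable_ghz >= 3.8: return "1700X"
        return "1700"
    if working_cores >= 6:
        return "1600X" if stable_ghz >= 4.0 else "1600"
    return "1500X" if stable_ghz >= 3.9 else "1400"

print(bin_die(8, 4.1))  # 1800X
print(bin_die(6, 3.6))  # 1600
print(bin_die(4, 3.5))  # 1400
```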
Thing is, these categories were set on older BIOS revisions than the one you'd be running if you bought one now. So by the time it hits the shelves you could probably get a better frequency out of the failure chips than expected.
On the same note, you may be able to get disabled cores back up and running, especially if AMD starts forcing cores shut just to keep market segments flowing. A perfect example would be the Phenom II X2 B55. It came stock at 3GHz with two cores, but a simple BIOS trick would unlock the dormant cores AMD had fused off, turning it into an X4; they were fused just so AMD could offer a budget CPU without hurting their upper lines. My B55 ran happily at 4.2GHz on air (Corsair A50) with all four cores. Cost me all of around £50 at the time. Let that sink in -- a 4.2GHz quad for £50 in early 2010. Would fucking love it if that happened again.
CCX architecture is for modularity. 1 CCX for 4 cores, 2 for 8, 3 for 12 etc...
This is the most retarded thing I have read today, good job.
dumb frogposter go back to r eddit
CCX units are in blocks of 4. You think that if they got a 2-CCX chip with 1 healthy core in CCX1 and 3 healthy cores in CCX2 they would just trash it? Nope. Say hello to a shiny new 1500X/1400
There is no way for the 6 core, 16MB L3 variants to exist if this weren't the case. This is what they do. Die shots for the FX-4300, 6300 & 8300 are all identical, for example.
the 4 cores are a single ccx. Hence the 8MB cache
It is possible that they have unique 4 core silicon (TO BE SEEN), but that isn't a valid argument against the 6 core just being a failed Ryzen 7
Of course all the R3/R5/R7 variants are just different binnings of Zeppelin dies, but you're oversimplifying what a "failed" core is.
It's not always that a part flat out won't work; more often there are a few too many leaky transistors that suck down too much power at the voltage needed to hit a certain clock.
There are likely plenty of samples where 7 cores will hit 3.6 GHz and 1 only realistically goes to 2.8 or whatever, so you just fuse off that one and the 2nd worst one and get a 1600X or a 1600.
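A sketch of that fuse-the-duds logic for a 6-core salvage, assuming one core gets disabled per CCX to keep a symmetric 3+3 config (the clock numbers are invented for illustration):

```python
# Given each core's max stable clock on a 2-CCX die, fuse off the
# weakest core in each CCX and rate the chip at the slowest survivor.
# Per-CCX disabling (3+3) is an assumption about how the cut is made.
def salvage_six_core(ccx0, ccx1):
    """Drop the worst core per CCX; return surviving clocks and the bin clock."""
    keep0 = sorted(ccx0, reverse=True)[:3]
    keep1 = sorted(ccx1, reverse=True)[:3]
    survivors = keep0 + keep1
    return survivors, min(survivors)

cores, chip_clock = salvage_six_core([3.7, 3.6, 2.8, 3.6],
                                     [3.7, 3.5, 3.6, 3.6])
print(cores)       # the 2.8 GHz dud and the 3.5 GHz core are fused off
print(chip_clock)  # the weak core no longer drags the bin down
```

The point: the one leaky 2.8 GHz core would otherwise set the whole chip's rating, so fusing it off yields a perfectly sellable 3.6 GHz hexacore instead of a trashed octacore.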