What went wrong?
Your parents were related and somehow an extra chromosome inserted itself into your DNA. I'm a dick, I know, but someone needed to tell you before you reached adulthood. Learning the truth too late in life could be soul-crushing.
I fell for the cores meme. When you hear "8 cores for the same price as an Intel 2-core" it seems too good to pass up. They're OK, but nowhere near as good as even Intel's mid-to-low-tier CPUs.
that's what you get for buying a multi-core for single-threaded workloads
Shared scheduler, cache misses, and GlobalFoundries not delivering on 20nm. They tried to build a speed demon, but it couldn't reach its clock goals at reasonable power on the 32nm process. Bad communication between design divisions didn't help.
AMD bet money that software devs would not be lazy pieces of shit and would optimize for more than 2 CPU threads.
They were wrong, and that's why they made Zen.
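For what "optimize for more than 2 CPU threads" actually means in practice, here's a minimal sketch (the workload and chunking are made up for illustration, not anything from this thread) of splitting a CPU-bound job across every core the OS reports — the kind of code Bulldozer's 8 integer cores were betting on:

```python
# Hypothetical example: parallelize a CPU-bound task (prime counting)
# across all available cores instead of the 1-2 threads most
# 2011-era desktop software used. Uses processes to sidestep the GIL.
import os
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in the half-open range [lo, hi) by trial division."""
    lo, hi = bounds
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

def parallel_count(limit, workers=None):
    """Split [0, limit) into one chunk per worker and sum the results."""
    workers = workers or os.cpu_count()  # e.g. 8 on an FX-8350
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # last chunk absorbs the remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))
```

On an embarrassingly parallel job like this, 8 slow cores genuinely beat 2 fast ones — the problem was how little shipping software looked like this in 2011.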
Other than that, FX processors were good for the multi-core performance they offered, and despite consuming more electricity they made virtually no difference to users' electric bills.
Nothing, FX was competitive and generally good when it came out.
I don't get the hate. Semi-8-core CPUs for the price of an i3 is pretty good. Never had any problems either; the thing works nicely, and you can overclock. Except, obviously, that it starts melting after 4.6GHz, but that's 140 bucks for you.
They started focusing on more cores instead of focusing on being the first to implement graphene and break the single-core performance threshold.
The problem is that it's basically 4 cores + HT, not actually 8 cores. Intel does the same thing, except they call their 4 cores + HT an i7.
Nothing, absolutely nothing
Pic related
Not even remotely accurate.
AMD being retarded
Closest I can think of would be the slow-as-fuck, huge-ass L3 cache.
They could have cut it and instead made the L1/L2 as fast as possible. Lose a percent in some benchmarks, but easily shave ten bucks off every chip.
The L3 is terribly slow, but it's a victim cache. It's not used the way Intel's Core i arch uses its L3.
Overall, low bandwidth in the L1 and L2 hinders IPC greatly in the Bulldozer family. The weak memory controller is also a major bottleneck.
But the AMD approach worked better for multithreading, since it used 8 physical threads instead of one physical + one virtual.
AMD processors with half the IPC were still outperforming Intel Ivy Bridge i7s in raw multithreaded application performance.
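A back-of-the-envelope model of why that can happen. The per-core IPC and HT-uplift figures below are assumed round numbers for illustration, not measurements:

```python
# Toy throughput model (assumed numbers, not benchmarks): why 8 slower
# physical cores can roughly match 4 fast cores + Hyper-Threading on an
# embarrassingly parallel workload.
def throughput(cores, ipc_per_core, smt_uplift=0.0):
    """Relative multithreaded throughput: cores x per-core IPC,
    scaled by a fractional gain from SMT sibling threads."""
    return cores * ipc_per_core * (1.0 + smt_uplift)

# Assumed: FX has ~60% of Intel's per-core IPC; HT is worth ~25% extra.
fx_like = throughput(cores=8, ipc_per_core=0.6)                 # 8 * 0.6 = 4.8
i7_like = throughput(cores=4, ipc_per_core=1.0, smt_uplift=0.25)  # 4 * 1.25 = 5.0
```

Under those assumptions it's roughly a wash in threaded apps, which is consistent with the era's benchmarks where the FX traded blows with i7s despite losing badly per-thread.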
...
This is pretty much it. Plus the worst part: the shared FPU.
You could not be more wrong.
In a performance test bench I get an FPU score around the same as an i7 3770K, which ain't bad. Never had issues with my 8320 @ 4.2GHz; it does great when I run many applications side by side, which I often do, and does fine in lightly threaded environments too. I play SSBB on Dolphin with ease even though it only uses 2 threads. The 5-10% difference in FPS between Intel and AMD CPUs has more to do with driver overhead than it does the FPU. I don't really see where this is a huge issue otherwise.
I can't actually remember where I heard the shared FPU thing was a big problem, so maybe I got that wrong. I'd imagine it'd be pretty workload-dependent, though.
>being worse than phenom2
>overshilled
Same is happening with zen.
The FX chips - for all their flaws - were (and are) years ahead of the software curve. The real issue, though, is that GloFo a shit, and a speed-demon design (a la P4) is not very efficient. The current trend of efficiency over performance means there is no way in hell any CMT design (Bulldozer or otherwise) could ever compete.
It's why SMT (particularly IBM's implementation) is such hot shit.
>the worst part, shared FPU
No it wasn't. Shared FPU wasn't an issue on most workloads.
>being worse than phenom2
Worse single-thread IPC does not = worse than Phenom II. Bulldozer was designed as a speed demon (high clock rate). The fact that it didn't reach its clock goals (4 to 5.5GHz) was GlobalFoundries' fault. They never delivered on 20nm.
power draw
They thought software would start taking advantage of multi-core CPUs; that didn't happen.
this
>Stock vs overclocked
Don't you think that's a little misleading? One could also make a case for comparing the FX x3xx series to the Core i 2xxx series instead of the 3xxx series, especially when you're talking about Sandy-E, which uses DDR4 instead of DDR3.
they stopped using Amada
The BD arch was never intended for any 20nm node.
20nm planar processes were never intended for high performance parts.
GlobalFoundries did have enormous issues running IBM's 32nm PD-SOI process, but the process inherently aimed for a certain degree of leakage. They did end up reaching decent frequencies, though several years too late to matter.
The biggest issues were within the arch itself. The Bulldozer that ended up launching in 2011 was nothing like the arch AMD originally spoke about in 2005. It's a hacked-together core that was rushed to market after being revised and restructured numerous times.
You're too retarded to read.
>You're too retarded to read.
Aside from the part where I was wrong about it using DDR4, please, elaborate. Tell me how one of them being overclocked by 25% isn't disingenuous. Tell me how comparing an enthusiast chip to a novelty chip isn't misleading.
Vishera is Vishera, you spastic child.
The FX Centurion line is the exact same die as the FX 8350 with a factory OC.
The real gems of the Bulldozer line are the Vishera 'E' chips (read: 8320E and 8370E). Even today they're a fucking marvel of process maturity and the ONLY 8-core x86 chips made to fit within a 95W envelope (Intel's 8c/16t chips are 140W rated).
It's why the current Zen ES leaks are so amazing - even with the low(ish) clocks, an 8c/16t chip at 95W is a serious world-shaker for the server world.
they're just too good for their price, how dare they?
Pic related.
AMD claimed Zen has the same energy per clock as Excavator.
Excavator in Carrizo needs 10W to hit 3150MHz, with efficiency increasing at lower clocks; at 2450MHz only 5W is required.
A recent Summit Ridge ES leak showed a 3150MHz clock. 10W x 8 cores + 15W uncore adds up to a 95W chip.
If they manage to improve on this any further, it'll be fairly impressive for their first go at a new arch on a new process that forum shills claimed couldn't reach moderately high clocks.
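The power arithmetic above, written out (all figures are this thread's claims about Carrizo and the leaked ES, not official AMD specs):

```python
# Sanity check of the thread's estimate: per-core watts times core
# count plus an assumed uncore budget gives the package power.
def package_power(core_w, cores, uncore_w):
    """Estimated package power in watts."""
    return core_w * cores + uncore_w

high = package_power(core_w=10, cores=8, uncore_w=15)  # 3150 MHz case -> 95 W
low = package_power(core_w=5, cores=8, uncore_w=15)    # 2450 MHz case -> 55 W
```

So the claimed 95W TDP is at least internally consistent with the Carrizo per-core numbers, with the 15W uncore figure being the softest assumption in the chain.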
yes
I would like to point out that their +40% IPC number is against their latest Excavator chips, as opposed to Vishera, which makes a 1:1 comparison impossible given Excavator's lack of L3. The Blender test isn't a bad test, but it hardly covers all instructions (a lot of Intel's gains come from baking in AVX2).
Don't they still have a botnet engine on-die like every post-Excavator CPU does?
>this is what amdpoojeets actually believe
>security coprocessors are botnet!
This tech illiterate freetard meme needs to die.
If you don't trust the company then don't use the product.
>I want my CPU to be entirely controlled by a secondary CPU that can't be monitored, examined, audited, shut off, disabled or stopped in any possible way (except by the NSA of course)
>comparing 8 half-cores to true 8 core CPU
amdfags everyone
Vishera is literally just Bulldozer plus a respin. Whatever Bulldozer shipped with, Vishera has.
>I don't understand what could constitute as a core
By your logic some early CPUs have zero cores due to lacking an FPU.
If you don't trust the company, don't use their products.
You shouldn't be on the internet under any circumstances whatsoever if you actually believed a single word you just typed.
Matter of fact, you shouldn't be using any modern motherboard that has its own bootable OS and self-check functions.
It is impressive, but only from a rating perspective. My 8350 runs at 4.4ish GHz and doesn't use more than 65W.
>lack of hardware still qualifies it as true core
Every amdpoojeet excuse. Funny how amdpoojeets point out how Nvidia doesn't have "true HW async" but make excuses for their gimped cores.
Unless you have a golden chip, that's bullshit - at full load an 8350 is a 125W chip. Most 8350s run in 1.3V territory at stock; the E versions run closer to 1.1V and really are 95W chips.
Pic semi-related - my 8320E running at 4.7GHz (in practical terms, good luck getting a 9590 to boost to 5GHz).
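The 125W-vs-95W gap tracks that voltage difference: CMOS dynamic power scales roughly with the square of the supply voltage at a fixed clock. A rough sketch using the post's typical voltages (assumed ballpark figures, not per-chip measurements):

```python
# First-order CMOS dynamic-power scaling: P ~ f * V^2, so at the same
# frequency, power scales with the voltage ratio squared.
def scaled_power(base_w, base_v, new_v):
    """Estimate power at a new voltage, holding frequency constant."""
    return base_w * (new_v / base_v) ** 2

# A 125 W 8350-class die at 1.3 V, rebinned to run at ~1.1 V:
e_series_estimate = scaled_power(base_w=125, base_v=1.3, new_v=1.1)
```

That lands around 89-90W, comfortably inside a 95W envelope, which is consistent with the E-series being the same silicon binned for lower voltage rather than a different design.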
I know what the rating is, but AMD always rounds way up on their power ratings, like parts manufacturers should. Keep in mind this is without an OC.
This, all these faggots on here do is get mad that 32nm hardware bottlenecks their 1337 gaming GTX meme cards.
>Why
way better performance
way better efficiency
same with Nvidia
Depends on what gen you're talking about broham... If you mean the FX-55 or FX-57, the answer is nothing. At the time they were walking all over Intel's face. Bulldozer however, was a complete disaster for reasons already discussed in this thread.
The E was binned and cheaper correct? Good deal.
NOTHING at all
How to check CPU ASIC Quality ?
I'm never buying an AMD CPU again.
>(Intel's 8c/16t chips are 140W rated)
False - I have one with a 115W TDP.
Everything!
>an extra cromosome inserted into DNA
My goodness...
Why do idiots like you still have access to electronic devices?