>>63956978

I hope 7nm isn't a big disappointment.
I know CPUs won't have 5.0GHz base clocks like AMD fans say, but being able to overclock past 5.0GHz would be nice.

Depends on the power target. IBM POWER chips hit something like 5.6GHz.

It will either have a ~40% power drop at the same frequency,
or
~55% more performance at the same power draw as today.
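
A quick back-of-envelope reading of those two claims (normalized to today's node; the "performance scales with frequency" bit at the end is my simplifying assumption, not GloFo's claim):

# Back-of-envelope reading of GloFo's two 7nm claims.
iso_freq_power = 1.0 * (1 - 0.40)  # claim 1: same frequency, 40% less power
iso_power_perf = 1.0 * (1 + 0.55)  # claim 2: same power, 55% more performance

print(iso_freq_power)        # 0.60 -> today's clocks at 60% of the power
# If performance scaled purely with frequency (it doesn't, quite):
print(4.0 * iso_power_perf)  # 6.2 -> a ~4 GHz Zen could reach ~6.2 GHz, best case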

Pretty sure 5.0GHz will be the norm unless GloFo fucked up AMD once more.

We had this thread yesterday.
Yes, BK desperately needs to step down at IDF.
>unless GloFo fucked up AMD once more
Fucking up 7LP means fucking up IBM too, and boy, that was a deal with the devil.

Btw, I have a weird feeling GloFo will be first to GAA geometry.

It's not like the manufacturing process is everything. For example, Polaris uses the same process but doesn't run at 3 GHz.

>For example, Polaris uses the same process but doesn't run at 3 GHz.
You're comparing a CPU with relatively narrow cores against veeeery wide (Ume-sensei would like it) SIMD crunchers.

If Polaris ran at 3 GHz, given how wide it is, hell, it could run every game at 16K 240 FPS.

I mean, they can always port it to 7HPC for lulz.
400-500W of UNLIMITED POWER.

>7HPC
Apply the Hovis method to that and you have a monster.

Sadly, AMD won't bring the Hovis method to the mainstream anytime soon.

>Apply the Hovis method to that and you have a monster.
Only Sapphire is dedicated enough to cool even the wildest flamethrowers; see their Vega 64 Nitro+.

I'm simply pointing out there may be design limitations on the frequency.

What design limitations?
Zen is 14-19 stages deep and it hits ~4 GHz on a POS 14LPP node with tiny fins.

>I know CPUs won't have 5.0GHz base clocks like AMD fans say, but being able to overclock past 5.0GHz would be nice.

Base clocks mean literally nothing.
They're set assuming the average housewife will never open the case of her Dell and that it has to keep working for 20 years with every fan clogged shut with dust and zero airflow. Plus a safety margin so it totally won't fail even on a 250W fake knockoff PSU.

But if you can set it to 5 GHz, then it's a 5 GHz CPU.
GloFo themselves say 5 GHz is the target for their 7nm.

Not really. They look at the perf/watt curve, pick reasonable points for different segments, choose thermal limits and finally bin the dies.
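
Something like this toy sketch is all "picking points on the curve" means (the f/P points and segment limits are made-up illustrative numbers, not AMD's actual binning flow):

# Toy binning sketch: pick operating points from a measured f/P curve.
vf_curve = [  # (frequency in GHz, package power in W) for one die
    (3.0, 45), (3.5, 65), (3.9, 95), (4.2, 140), (4.5, 210),
]
segments = {"mobile": 45, "desktop": 95, "enthusiast": 180}

def bin_die(curve, power_limit):
    # Highest-frequency point that fits under the segment's power limit.
    fitting = [pt for pt in curve if pt[1] <= power_limit]
    return max(fitting) if fitting else None

for name, limit in segments.items():
    print(name, bin_die(vf_curve, limit))
# mobile (3.0, 45), desktop (3.9, 95), enthusiast (4.2, 140)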

I've seen tons of talk about this lately but I don't understand most of it.

tl;dr is this gonna be like the halcyon days of Athlon 64 eating Intel's lunch?

Does this mean AMD is again a serious contender?

Will I be able to get a reasonably fast/capable/advanced set-up with low power requirements?

Cutting edge APUs when?

That means jack shit.
If they made a 300W CPU they could simply bundle it with a Le Grand Macho with pre-applied liquid metal as the box cooler, or even a three-fan closed loop, and it would run just fine and quietly.


There's one real limitation: safe voltage.
For current CPUs, running above 1.35-1.4 V is considered harmful and limits lifetime.
So the real limit is what clock the chip can hit at the maximum safe voltage.
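
Rough illustration with the classic dynamic-power approximation P ~ C*V^2*f (the overclock points are made-up; only the 1.35-1.4 V safe range comes from the post above):

# Why the voltage ceiling is the real wall: power grows with V^2 * f.
def rel_power(v, f, v0=1.35, f0=4.0):
    # Power relative to a 4.0 GHz / 1.35 V baseline (simplified model).
    return (v / v0) ** 2 * (f / f0)

print(rel_power(1.35, 4.0))  # 1.00 -> baseline
print(rel_power(1.40, 4.4))  # ~1.18 -> mild OC, still inside the safe range
print(rel_power(1.50, 5.0))  # ~1.54 -> past the 1.35-1.4 V safe limit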

>tl;dr is this gonna be like the halcyon days of Athlon 64 eating Intel's lunch?
Worse, Intel has nobody working on a P6 iteration besides the Hillsboro guys now.
>Does this mean AMD is again a serious contender?
So serious that Intel pooped its pants over EPYC before it even passed the validation cycles.
>Will I be able to get a reasonably fast/capable/advanced set-up with low power requirements?
You will choke on shitty DRAM/NAND prices.
>Cutting edge APUs when?
Soon™, if the leaks are anything to go by.
Though it's probably an MCM.

>tl;dr is this gonna be like the halcyon days of Athlon 64 eating Intel's lunch?
Ryzen is selling like hot cakes NOW.
Right now they'd offer, say, 130 FPS against Intel's 140 FPS in the same game (like it matters), but at a much better price and with much better support and upgradability.
>Does this mean AMD is again a serious contender?
Yes
>Will I be able to get a reasonably fast/capable/advanced set-up with low power requirements?
Define low.
Ryzen 5 chips are 65W TDP high-performance desktop parts. That's lower than ever.
>Cutting edge APUs when?
A console slayer isn't coming.
An advanced office machine that runs Minecraft, Hearthstone and Kerbal better than Intel's parts has already been reviewed and will be on shelves very soon.
But you would be way better off with a low-end GPU anyway.

Nice digits, thanks for answering my questions.

DRAM prices suck, but maybe the Chinese fabs can ramp up production, or are they just price-gouging? Whatever, I'm mildly stoked in any case; I just need a nice, quiet, low-power yet capable desktop.

>A console slayer isn't coming.
>He doesn't know
Samsung is scared shitless of the Chinese fabs and is actively ramping DRAM/NAND production.
Their capex is through the fucking roof.

>A console slayer isn't coming.
>An advanced office machine that runs Minecraft, Hearthstone and Kerbal better than Intel's parts has already been reviewed and will be on shelves very soon.
>But you would be way better off with a low-end GPU anyway.
Not really worried about games, just need some power for programming and 3D shit. So a cheap GPU is the way to go? Having it combined with the CPU just seems like such a nice idea... Oh well.

I will die of sheer laughter if AMD's rumored MCM package is using SLIM/SLIT/iCube 2.0 for 2.5D integration.
That would be a very pleasant surprise.

Where can I learn more about all this shit?

>and 3D shit
Stop right there.

Almost any 3D task is accelerated by a GPU.
If we're talking Blender, or anything really, a GPU does the job several times faster than a CPU. It's the same for other software too, and it applies to rendering dense wireframes, rendering the viewport, and ray casting for the final render.

So you do need a powerful GPU, or better yet a bunch of them.
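
For a feel of what the offload looks like, here's a minimal PyOpenCL sketch (assumes the pyopencl package and a working OpenCL driver; the kernel is a trivial vector add standing in for real render work):

# Minimal PyOpenCL example: run a trivial kernel on whatever device is found.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()   # picks an available OpenCL device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)
out = np.empty_like(a)

prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)
cl.enqueue_copy(queue, out, out_buf)  # blocking copy back to host
print(np.allclose(out, a + b))        # True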

>Almost any 3D task is accelerated by a GPU.
Except the ones that don't fit into its memory footprint. :^)

AMD's MCM already exists; it's called EPYC.
All they need to do is stick four low-end GPUs into one monster, downclock and undervolt it for lower power usage, and they've won the game.

>AMD's MCM already exists; it's called EPYC.
MCMing GPUs is an orders-of-magnitude more complex task than MCMing CPUs.
Besides, EPYC uses a simple organic substrate, and a GPU will need something more complex.
>All they need to do is stick four low-end GPUs into one monster, downclock and undervolt it for lower power usage, and they've won the game.
How the fuck will you maintain coherency between the plethora of fixed-function units living in the chiplets?
They need fucking terabytes per second of bandwidth.
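
Crude numbers to put "terabytes per second" in perspective (the 25% spill ratio is a pure assumption; the HBM2 and Infinity Fabric figures are approximate public numbers):

# How many CPU-class links would a split GPU need? All figures approximate.
hbm2_bw = 484e9    # ~484 GB/s: Vega 64's local memory bandwidth
if_link = 42e9     # ~42 GB/s: roughly one EPYC Infinity Fabric link
spill   = 0.25     # assumption: 25% of memory traffic crosses chiplets

cross_traffic = hbm2_bw * spill
print(cross_traffic / if_link)  # ~2.9 -> several CPU-class links per chiplet pair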

I don't think that's the case.
1. The latest GPUs have huge memory, like 16 GB.
2. They've already learned to work in tandem with the CPU, rendering tiles of one image on both devices simultaneously, so I guess a GPU can access RAM, can't it?

>1. The latest GPUs have huge memory, like 16 GB.
That's not a lot. One (uno) EPYC can handle 2 terabytes per socket.
>2. They've already learned to work in tandem with the CPU, rendering tiles of one image on both devices simultaneously, so I guess a GPU can access RAM, can't it?
HBCC/pinned memory, or any fucking DMAing really, is painfully slow.
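
Easy to see with round numbers (approximate figures; the 32 GB working set is a made-up example of a scene that doesn't fit in VRAM):

# Local VRAM vs. DMA over the bus, order-of-magnitude only.
pcie3_x16 = 16e9    # ~16 GB/s: PCIe 3.0 x16, one direction
hbm2      = 484e9   # ~484 GB/s: Vega 64 local VRAM

working_set = 32e9  # 32 GB scene, too big for a 16 GB card
print(working_set / hbm2)       # ~0.07 s if it were all local
print(working_set / pcie3_x16)  # ~2.0 s streamed over the bus instead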

Well yeah, I know that, but my needs aren't extreme. I'd love a high-end GPU, but it's simply not in my budget nor high enough on my priorities. And yeah, by 3D shit I mean modeling and animation, plus programming 3D for fun. And OpenCL (I guess; CUDA is the industry standard, but AMD seems like the better path for the future over NVIDIA, so why get locked in) for simulations and that sort of thing.

But yeah, obviously I don't want to do that on the CPU or be stuck with a shitty Intel iGPU like in my current situation.

>MCMing GPUs is an orders-of-magnitude more complex task than MCMing CPUs.
Explain please?
Anyway, I'm fairly certain that's what they're going to do with Navi.
>EPYC uses a simple organic substrate,
If by organic you mean plastic, then no, it uses silicon. It's some low-quality silicon with a process probably measured in micrometers rather than nanometers, but it's silicon.
>How the fuck will you maintain coherency between the plethora of fixed-function units living in the chiplets?
By lowering the clocks a lot?

Well, I would say if you're on a budget and want low power for some reason, you're better off getting a refreshed R5 1600 when the Ryzen refresh hits the shelves soon, plus a mid-range GPU like an RX 570, or maybe a Vega 56 if you can afford it and the miners haven't bought all of them.

Thank god their bubble market is crashing; the miner scum deserve to suffer.
I hope they have to pay back the loans they took out for their mining farms with their kidneys.

>Explain please?
That would be a very long post you won't understand either way.
>Anyway, I'm fairly certain that's what they're going to do with Navi.
I doubt it.
>If by organic you mean plastic, then no, it uses silicon. It's some low-quality silicon with a process probably measured in micrometers rather than nanometers, but it's silicon.
No, you dum-dum. Organic substrate as in your average green-colored PCB.
>By lowering the clocks a lot?
How the fuck would that help maintain coherency?
You still need links with terabytes of available bandwidth per second.
And let's not talk about fucking load balancing.

>No, you dum-dum. Organic substrate as in your average green-colored PCB.
Technically it's a composite made of inorganic glass fibers and organic resin.

And I would think for Navi they would go for a silicon substrate like they use on Vega.
But who knows; if EPYC manages to work on a plain PCB just fine, then maybe they can pull it off with Navi.

After all, EPYC/Threadripper inter-die latency is only about 5 times greater than latency within the same CCX, and it's measured in nanoseconds.
That's several orders of magnitude below the frame times the thing has to meet, so I guess it doesn't matter too much.
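
Spelled out with approximate published figures (exact latencies vary by test, so treat this as order-of-magnitude only):

# Inter-die latency vs. frame time.
intra_ccx   = 50e-9   # ~50 ns core-to-core within a CCX (approximate)
inter_die   = 250e-9  # ~250 ns across dies, roughly 5x worse
frame_60fps = 1 / 60  # ~16.7 ms per frame

print(inter_die / intra_ccx)    # ~5x
print(frame_60fps / inter_die)  # ~67000x -> the fabric hop is noise per frame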

Si interposers are shit and are in dire need of being replaced by RDL interposers.
The main problem with an MCM GPU is not latency (GPUs are exceptionally good at hiding latency) but bandwidth and coherency.
GPUs were never designed to maintain coherency between fixed-function units on multiple chiplets.
Scheduling and load balancing would also be very complex.
MCM GPUs are like EUV: a holy grail that's an absolute fucking pain in the ass to achieve.
Fuck EUV.