THANK YOU BASED INTEL

THANK YOU BASED INTEL

AYYMD IS FINISHED & BANKRUPT

AYYMDPOORFAGS CONFIRMED ON SUICIDE WATCH

>4 cores
Pfthahah

>4 laptop cores
>in the U-series
Shit's getting real. I wonder what the TDPs are.

The actual clocks are absolutely anemic on these.
anandtech.com/show/11738/intel-launches-8th-generation-cpus-starting-with-kaby-lake-refresh-for-15w-mobile

d-dont talk bad about A-AMD

>it's literally two Kaby Lake i7s
>they even call it "Kaby Lake refresh"
>LESS THAN 2GHZ BASE CLOCK

Call me when they make a CPU that is good for 3D rendering that doesn't cost more than my car

Under a good cooler, they should be able to maintain a good 3+ GHz

And it will compete with RR.
1950x?

>1.6GHz base
wat in tarnation...

>15w cuckbooks
>good cooler
?

>compete with Raven Ridge
>less than 2GHz
You realize below 3GHz, Zen gets insanely efficient, right? At 15W, AMD could realistically push a 6C/12T part at a 2-2.6GHz base.

Don't forget that they run at up to 4GHz
These are effectively replacing the Q series, and you'll see them in high end gaming notebooks.

That's what I implied.
It's DOA.
Like really.
Hell, AMD even has time to polish Vega right now.
They are effectively U-series. Toys for 15w cuckbooks.

so, what's the difference between an i5 and an i7 if both of them have 4 cores?

I said from Intel. Ryzen is great for rendering, and very cheap. Intel is all about muh gaymers.

Clocks.
>current year
>Intel
Nah.

>4 cores beat a 6 quadrillion core AMD processor
Simply keking at your life m8

Coffee Lake new box design

I think it's safer having lava in your house than AMD

Show me how those 4 cores beat an 8-core Ryzen in this

...

>4 cores at ONE POINT SIX GIGAKEKS

ONE

POINT

SIX

No they won't. The Surface Pro 4 has the best liquid cooling out of any tablet PC out there and it still has to throttle the CPU to 1.2 GHz when the system is under load.

>4c/8th in 2017

I'm literally shaking right now

Raven Ridge got 3.2GHz base
ha. ha .ha.

Show me that ryzen doing anything but showing a still image lel

THANK YOU BASED MINERS

GAYMERS ARE FINISHED & BANKRUPT

>CPU
>mining
?

Here's a sneak peek at our coverage of the next-gen Intel processor line-up!

That's really gonna tear Intel a new one. The 1700 already consumes less than 65W at the base 3.0GHz frequency in prime95. I think someone actually put it in a laptop too.

I support everything which prevents gaymers and mactoddlers from creating endless useless tech illiterate consumerist threads on Sup Forums

>the i3-8350k will cost only about $10 less than an i5-8400
intel really hates overclockers

Hey bruh. Since weed is legal now how do I into crypto to make free money, yo.

>I think someone actually put it in a laptop too.
ANUS did.
It's one hell of a mobile werkstation...
...that is marketed as gaymen laptop.

>AYYMD
holy shit this meme got stale fast.

>be intel
>bleeding desktop, HEDT, and server market share fast
>only thing left is mobile market
>increase frequencies
>moar coars
>sell chips at a loss
>still get curb stomped by quad-core raven ridge chips with base frequencies of 3 GHz and double the iGPU performance

How could this happen to me?
I made my mistakes
Got nowhere to run
The night goes on
As I'm fading away
I'm sick of this life
I just wanna scream
How could this happen to me?

You realize that's the DEFINITION of booty blasted, right

Yes, they run at up to 4GHz, but only with Turbo; that is, only one or two cores boost up to that value.
Stutter city in games, so no way they'll be using U-series CPUs in gaymen laptops.

>want to upgrade my skylake to the 6-core i5
>intel doesn't even change the socket, it straight up locks away your motherboard if it's too old
>motherboard producers come out saying they can't release a BIOS update

y-you too

I was in the same position. Now I have an R5 1600. Sniff my balls Intel, your socket-a-day shit lost you a customer.

>intel HD graphics
>24 execution units
>~384 FP32 GFLOPS
Is this a joke? I thought Intel was gonna use AMD iGPUs from now on, what happened to that?

A Vega iGPU would double the TDP, so they had to use their own.

It was a meme.
AMD would never lease their actual GPU tech to Intel.
Feeding a lamb that's about to be slaughtered is retarded.

Vega is generations ahead of Intel's GPU efforts in absolutely everything.

Yeah but most 3dfx cards are, too.

Good joke.

>Turbo boost technology 2.0

B-buy Skylaeg-X for 3.0.

8th gen i7 is supposed to be 6 cores?

why is that shit 4 cores?

That's Kaby Lake Refresh, and not Covfefe Lake.
CFL-S comes later, this fall.

They're ULV 15W laptop quad cores

I completely lost any overview of intel's lineup.

It's shit on any level isn't it?

i would be happy for this but
>less than 2.0ghz base clock

>Plus, on all SKUs:
>VT-x not listed

It's shit.

>intel is actually forced to innovate now

What a time to be alive

AMD and Intel are not going to do business together, AMD was burned by Intel too many times.

>innovate
What?
Refreshes are now innovation?

>intel is actually forced to increase core count
fixed that one for you

so as a newfag to custom building what should I get after installing Gentoo?

To be competitive, that is. This isn't anywhere near competitive.

an iphone

Vega 64 consumes less than 300W in furmark. This is with 64 CUs running at a base frequency of 1,247 MHz.

8 CUs running at 800 MHz would probably only consume ~10 watts max running furmark.

8 VEGA CUs @ 800 MHz = ~819 FP32 GFLOPS btw.
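The arithmetic above checks out, assuming the standard GCN layout of 64 ALUs per CU and 2 FLOPs per ALU per clock (one FMA); a quick sketch:

```python
# Theoretical peak FP32 throughput for a GCN/Vega part:
# GFLOPS = CUs * 64 ALUs per CU * 2 FLOPs per clock (FMA) * clock in GHz
def peak_fp32_gflops(cus, clock_ghz, alus_per_cu=64):
    return cus * alus_per_cu * 2 * clock_ghz

print(peak_fp32_gflops(8, 0.8))     # 8 Vega CUs @ 800 MHz -> 819.2 GFLOPS
print(peak_fp32_gflops(64, 1.247))  # Vega 64 @ 1247 MHz base -> ~10215 GFLOPS
```

Peak numbers only; real throughput depends on keeping the ALUs fed.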

>somebody actually does the math
Whoa.
Math is forbidden in Sup Forums.

>looking at GFLOPS on a AMD card
that's not a very smart thing to do
remember that a rx580 has more gflops than a 1070 but is way slower in most scenarios

This is Vega we're talking about moron.
For Vega it's the more ALUs - the better.
Processing geometry in shaders is sure fucking nice.

then vega 64 should be faster than a titan x, but in some cases it's slower than a 1080.

Guess what: NGG Fast Path is currently disabled for gaymen.

>looking at GFLOPS on a AMD card
>that's not a very smart thing to do
Correct, because AMD cards are such a massive flop already, so you aren't finding anything new by it

Though I don't even think 704 SPs will be bottlenecked by 4 frontends.
What matters is how much BW can DSBR conserve.

Owning AMD products is like having your very own bright sun in a box.

>>looking at GFLOPS on a AMD card
>that's not a very smart thing to do
>remember that a rx580 has more gflops than a 1070 but is way slower in most scenarios
Nope, both have about 5.8 TFLOPS of FP32. The reason some games get better FPS on one card or the other is that devs optimize them to run well on those specific cards and drivers.

Unfortunately AMD has had a history of making bad drivers and not bribing game devs enough to optimize more for their GPU.

In the end the raw compute performance is there and bitcoin miners know this all too well which is why the vega 56 and 64 cards sold out so quickly.
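For what it's worth, the ~5.8 TFLOPS claim holds at reference base clocks; the shader counts and clocks below are the publicly listed reference specs, so treat boost-clock numbers as higher:

```python
# Peak FP32 TFLOPS = shader count * 2 FLOPs per clock (FMA) * clock in GHz / 1000
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000

rx580 = tflops(2304, 1.257)    # RX 580: 2304 SPs @ 1257 MHz base
gtx1070 = tflops(1920, 1.506)  # GTX 1070: 1920 cores @ 1506 MHz base
print(round(rx580, 2), round(gtx1070, 2))  # ~5.79 vs ~5.78 TFLOPS
```

So on raw compute the two cards are a wash; any FPS gap comes from the rest of the pipeline and the drivers.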

This meme is getting old.

>i was in the same position
>now i have a worse cpu
>and i still had to buy a new motherboard
You're not the smartest kid around.

There's more to GPUs than shader math and hand-picked optimizations.

You do realize that the theoretical peak compute isn't the end all be all of GPU performance, right?
The reason why Nvidia is much faster on the same TFLOPS is because they have a much better geometry throughput and because they don't have retarded ROP bottlenecked designs with massive shader arrays that are never utilized fully.

>GCN
>ROP bottlenecked
?
ROPs are the last thing that can be a bottleneck in GCN.
No, you don't need retardedly overkill fillrates of Maxlel-likes.

GCN is memory starved.
It's designed for massive memory throughput with massive datasets, not the piddly little 4-12GB that a videogame uses.

>GCN is memory starved.
Not Vega though.
~500GB/s of bandwidth is enough for 4k ALUs, courtesy of DSBR.

>massive memory throughput
>not the piddly little 4-12GB
Why are you talking about two different things in the same fucking sentence?

And the award for biggest retard in the thread goes to...

Look up HBCC. Vega was designed around swapping memory in and out of cache.

HBCC has nothing to do with fucking bandwidth conservation.
Like H O L Y F U C K never go full retard.

True. ROPs, TMUs, ALUs, vRAM bandwidth, cooling, throttling, etc. can all affect performance even after drivers and optimizations have been refined and perfected over and over again.

Anyway instead of taking time to research all this I find it easier to just look at the FP32 GFLOP rating because at least by this metric I can say "This GPU has up to XXXX GFLOPS of possible raw compute performance". Also it's not fair to judge a GPU by how many FPS it gets in the newest cowa dooty or dark like my soul calibur 3 XXL.

>has nothing to do with fucking bandwith conservation
No, it's about letting the cores do work while memory ops happen in the background.

>Also it's not fair to judge a GPU by how many FPS it gets in the newest cowa dooty or dark like my soul calibur 3 XXL.
It's a lot more reliable than comparing a few dry numbers you don't know anything about.

Vega is not a gaming architecture. It's a GPGPU simulation monster.

Yeah, because games optimized for a specific GPU are a better metric, and GPUs are solely used by basement dwellers to play video games all day long.

>Vega was designed around swapping memory in and out of cache.
You just described a memory controller.

It's literally for all things 3D. Why would they bother with primitive shaders for a compute uarch?

Yep. That's kind of the entire point of Vega, though they've focused heavily on cache swapping as opposed to core feeding.

great way to miss my point

The entire point of Vega was to finally make a balanced fucking design.
Things like HBCC are a neat bonus.

Aren't those dry numbers used by miners and CAD/3D program users?

>cations releasing botnet
Good luck with your life, failure.

>balanced fucking design
I don't think so. If it was balanced, it would be good at gaming, which it isn't. It's a platform for simulations, machine learning, and industrial rendering, all of which benefit far greater from the "neat bonuses" than gaming does. For a large dataset application (not gaming), HBCC is the single most important feature of Vega.

All I'm saying is the first metric you should look at on a GPU is the FP32 rating. Then look at the GPU architecture and other complicated shit to determine how much of the total possible FP32 compute performance will most likely be used by specific tasks (like gaymen or buttcoin mining).

IT LITERALLY SUCKS IN GAMING DUE TO USING A FUCKING LEGACY GEOMETRY PIPELINE
LIKE FUCK
ARE YOU ALL TOO FUCKING RETARDED TO LOOK FOR B3D SUITE NUMBERS?

Why do you think the geometry pipeline wasn't the main focus?
Oh, because it's not a gaming GPU.

I don't understand.
Entire point of vega is to have a memory controller?

IT LITERALLY
WAS
THE
FUCKING MAIN FOCUS
ALONG WITH DSBR
OF THE FUCKING VEGA UARCH