AMD, Intel, future of computers

Now that AMD has caught Intel with their pants down, will we finally start seeing significant gains in PC performance?

What does the return of AMD as a competitive CPU manufacturer mean for the computer industry?

On a side note, will VIA ever make an attempt to come back, or are they just going to struggle along until they die out? They're the only other company I know of with a valid x86-64 license that's still around.


why would anyone bother?

ARM is taking over the norms


Can't compete for desktop performance

Not for a long time because Muh Games.

No ARM processor is powerful enough for Fallout 4 at 1080p ultra settings.

Good for netbooks though.

streaming 4K 60fps on my mobile phone — what more would most consumers want before 2030, apart from the VR meme that's been emerging and disappearing since the '80s?

Intel is a company that tries to stifle performance gains by releasing as little as they can get away with and charging as much as they can for it.

They are an abusive monopoly that needs to be killed off.

What x86 processor fully supports what you said at the moment, either? Including shaders.


The latest iris pro shit can do that on med settings
My laptop does that at 30fps minimum settings

You raise a very valid point.

So we openly agree that no CPU has hit what you said?

(btw, Iris Pro is full of shader bugs due to poor support for tessellation and anti-aliasing)


What about a laptop with a bundle of four of those SoCs Nvidia introduced a while back? They were the Tegra X ones, if I remember right.

Love how they compare CPUs like that when a 6600k easily hits 90fps with a better gpu

Lol if you're arguing that ARM is dead because they don't feel like adding a GPU for gaymes then you might as well /thread now

The issue with ARM is CPU performance. Adding a GPU through PCIe is something any PC would have if it's expected to max out modern games.

True. A GTX 1080 and Vega 64 should be fine on ultra 1440p even.

1080 is what gets 90fps at 1440p on my shit btw
The point is it's a moot test

>through PCIe
Are you serious?
The point of ARM is low power; why would they take that and add a non-integrated GPU, a cooler, and PCIe power?
There are more graphics processing units in the world than the 300W PCIe masturbation devices you spend your whole paycheck on.

Yeah, I always noticed my i7 860 + GTX 670 seemed faster than what was on review sites; I just stopped caring after that. I wonder if it's even valid for comparing performance between AMD and Nvidia anymore if home performance is going to be so vastly different.

Because PCIe isn't limited to just graphics cards.

The point with consumer electronics buyers is that they can't think past the sockets they've already used or seen.

No one in this thread can imagine what a quad Tegra X mobo with just some extra RAM and storage plugged onto it could do.

>Overclocks only certain processors
Throw that shit in the trash. If you're going to include even a single overclocked benchmark you better overclock every goddamn chip that can be

Look harder.
There are stock CPU data points too, just, you know, WAY WAY down the list.

Look harder, not all overclockable chips are benchmarked after an overclock. Either include them all or don't include them.

And now we're back to the more coarz problem.

Name one mainstream consumer application that can scale perfectly across 4 cores, let alone a cluster of 4 processors.

The fact is most software can take advantage of multiple cores, but it's still heavily reliant on the performance of core 1, as that's where most of the load gets dumped. Doubling cores won't magically double performance.
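The diminishing-returns point above is just Amdahl's law: total speedup is bounded by the serial fraction of the workload. A minimal Python sketch (the 60% parallel fraction is an illustrative assumption, not a measured number for any real game):

```python
def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    """Upper bound on speedup with `cores` cores when only
    `parallel_fraction` of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assume 60% of a frame's work parallelizes (illustrative only):
for cores in (1, 2, 4, 8):
    print(cores, round(amdahl_speedup(cores, 0.6), 2))
# 1 -> 1.0, 2 -> 1.43, 4 -> 1.82, 8 -> 2.11
```

Even with 60% of the work parallel, 8 cores buy barely a 2x speedup, which is why piling on cores doesn't fix a core-1-bound game.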

It's for diligence in the data; you don't need a single hyperthreaded CPU in that Fallout chart either, but there are still plenty — just needed to have that 0.5 fps to make "#1 CPU GUYS CHECK MY BENCH".
This isn't a direct authoritative comparison of CPUs; it's a screening, of a video game, on a single GPU.
It's a video game benchmark and he added all the data he felt like testing; it's not a bad thing that he added data.
Do you complain about this because it doesn't cover every single GPU there is in all circumstances?
Jesus Christ, people don't know how to actually receive information anymore; they just assume it's God's will and parrot it.

DX11 *can* scale perfectly across 4 cores and DX12 *can* scale perfectly across 12 threads.
Honestly it's lazy devs relying on CPUs being fast, and being too single-minded to properly optimize.

so let's get cucked by software devs and never move on with tech. Seems it's about the reason you are buying your niece a threadripper hur hur hur

>name a single GPU use that scales across over 4 cores
But bro what about 4 thousand corelets

>It's for diligence in the data
Diligence would be including all benchmarks possible. The benchmarker did not give all chips the same treatment. I understand not being able to do all possible processors or graphics cards. However, if you are going to overclock some processors you should overclock all that are possible and include those. There is absolutely no reason to not include them. If you are willing to take the time to overclock 9/10 of the processors don't fucking say that last one is too much to include.

Do you grasp the concept of an SoC, or is it too much for users who rely on boasting about the TWO different PC components they bought?

This.
More data is always good, although I somehow doubt the FX series would really be that shit...
Color me surprised.

Which ones were not overclocked?

Yeah it's a shame devs are so shit but it's what we're stuck with. Even DX12 is seeing shitty adoption. I'm just glad emulators are picking up vulkan though.

Literally every single CPU on that chart has stock clock benchmarks, though.
Again, it's more data; take the data and make your own decisions. I know you think this benchmark picture is absolute law, and you feel empty that it doesn't take a totalitarian approach to your technological life, but maybe, just maybe, you can pick out the data and look at a second benchmark.
The god that is a picture of video game benchmarks isn't always 100% omniscient.

But he didn't include every CPU ever made??????
Wtf?????
If you are gonna benchmark 9/10000 CPUs, don't just say the last 9990 are too much, dude.

Vulkan is basically the same idea as DX12 except prepared for a much wider device market, when did this meme that it's the best thing ever start?
Was it doom?
It was doom wasn't it

4790k, athlon x4 860k, phenom II x4 965

I know it has stock speeds, and that I'm okay with. What I'm not okay with is putting in the effort to overclock most but not all. It feels lazy, like they were okay with getting most of the data but not all of it. It's just a bad way to go about accurate testing.

I never said it's the best thing; I just said I'm glad it's made its way to emulators. PPSSPP and Dolphin on Android have Vulkan support.

Only the 4790k is true; the other two were overclocked.
The reviewer is a person just like you and I. I can accept forgetting to overclock one processor in a list of many.

>accurate testing
ONE VIDEO GAME (based on popularity)
ONE CORRIDOR (based on convenience)
ONE GPU (based on popularity)
if you think this creates an "accurate" and thorough comparison of CPUs then maybe choosing computer hardware is too much for you to handle
Benchmarks aren't there to make choices for you; they're there to answer questions when dumbasses ask "but how does it run Fallout 4?"

...

Look harder.

>a single game works like all games on cpu graphics


sure buddy... tell that to yourself while I'm playing 3 games full of artifacts and no tessellation support on my Iris Pro setup

i don't blame you, for you it probably takes 10 whole minutes to read "4790k"

>VIA
Holy shit guys they're still around!

Works on my laptop famalam
Which year was yours

>AMD still rekt
What are via CPUs even for? Chinkphones? Embedded shit that's never going to be edited?

this is a test of GTX 960 vs r9 nano
Probably just used the worst CPUs that could use PCIe x8

Nah Kent the c4650 is fairly new.

I'm looking forward to Ryzen laptop APUs

It's a bad camera angle on a phone-sized optic. Warping has occurred.

VIA will never make a comeback in the performance mainstream, no.

>What does the return of AMD mean?
Intel diverting their massive R&D funds (and pulling out more hidden, as-yet-unused IP à la Bingmesh?) to leapfrog architectures instead of stagnating and relying on node performance.

>Will we finally start seeing significant gains in PC performance?
There's not much more to be had in single-threaded performance design without sacrificing multi-thread performance as a result.
Logically, we might see the creation of separate architectures designed to leverage Single-over-Multi and Multi-over-single performance, but only if the market demands such a split and only if both markets are fiscally viable to develop for.

ARM will never compete on pure performance unless the ISA continues to bloat and it becomes the opposite of what it was meant to be, making it just a (bad) competitor with no niche.

Ishiiruka has preliminary Vulkan support now.

Vulkan is AMAZING from a software/programming perspective. It's smooth and simple to code and very sensible, unlike the clusterfuck that OpenGL and DirectX have become. I can't fucking wait for it to become the Next Big Thing so we can stop relying on those ancients.

>Separate architectures designed to leverage Single-over-multi and Multi-over-single
You mean kinda like ARM? A73 vs A53 vs A35?

I don't think I've seen an ARM chip with a PC-class (64-bit x 2+ bank) memory controller on it. Have you?

I'm just excited that all these are coming to Android and it'll only keep getting better.

Sure, the difference between ARM and x86 being

>Arm takes ~18 months from design to pre-production testing
>Between several million and the low tens of millions

>x86 takes 24-30 months
>Between the tens of million to the low hundreds of millions

>(64-bit x 2+ bank)
DDR channels are 32-bit. Do you mean dual channel? And, more than two banks? Do you ultimately mean something more than dual channel with more than one slot per channel? ARM isn't generally designed for such high power uses.
ARM has had plenty of dual-channel designs capable of using high clocked DDR for years. The latest custom SoCs use PHYs rated for DDR4-1866.
The question is, are the CPUs actually capable of utilizing 30+GB/s, and more importantly are they designed with such performance in mind?
>generally, no
and more importantly
>no

Many custom server-oriented designs are still in the development phase, trying to reach competitive parity even after multiple generations, and regardless, ARM is meant for low power. You simply don't need, for example, a quad-channel interface running at DDR4-2666. ARM designs are too small to need it; the FPUs are relatively anemic.

Eh, kill me.
For some reason I thought a DDR channel was 32 bits wide instead of 64.
I calculate bandwidth when overclocking all the time; I should know better.
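For anyone following along, the arithmetic being corrected here is just channels × bus width × transfer rate. A quick sketch of that formula (plain arithmetic, no vendor-specific assumptions):

```python
def ddr_bandwidth_gbs(channels: int, bus_bits: int, mt_per_s: int) -> float:
    """Peak DDR bandwidth in GB/s:
    channels * bytes per transfer * megatransfers per second / 1000."""
    return channels * (bus_bits // 8) * mt_per_s / 1000.0

# Dual-channel 64-bit DDR4-1866, like the ARM PHYs mentioned above:
print(ddr_bandwidth_gbs(2, 64, 1866))   # 29.856 -> the "30+GB/s" figure
# Quad-channel DDR4-2666, the server-class example:
print(ddr_bandwidth_gbs(4, 64, 2666))   # 85.312
```

With the channel width corrected to 64 bits, the "30+GB/s" number quoted above for dual-channel DDR4-1866 falls straight out of the formula.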


ARM servers are growing in popularity, though, as the performance increases, and I'm not just talking about those Pi computers that seem to be getting rather popular.

I wonder how modern ARM processors compete against core 2 quad...

>DDR channels are 32-bit
Really? I am holding in my hand a device that has only a 16-bit wide DDR3 bus on it. As a result, performance kinda sucks.
>ARM isn't generally designed for such high power uses.
>meant for low power
Typecasting much? There are many, many ARM designs with four cores or more, and they are fine general purpose compute devices. It'd be really, really nice to have 64 bits coming in and out of them especially when doing vector ops.
>the FPUs are relatively anemic.
There are too many applications that don't do floating point all the time for that to matter generally. Not even a desktop environment will generally use floating point heavily enough to matter.

They'll be a bit hobbled due to narrow RAM interfaces, but stronk in compute/dollar and compute/watt.

So would a Snapdragon 820 be able to compete against a e8200?

>dual channel LPDDR4-1866
Interesting. Just maybe it could.

Fuck yes.

Good to know my phone is possibly at least on par with my dad's computer (my old one).

After Vega AMD are just as shady as Intel.

that's a dude


It really isn't.

Can you blame them? Intel got such a massive lead on everyone else in the industry by being shady, cut them some slack here and just hope they don't do it again.

For fucks sake.

Low powered Intel platform IoT. Talking 20W max.

If I was building an HTPC with a low-end GPU (1050 or Vega 64) I would look into VIA CPUs.

It's not. If it was I'd save the image.

Where would you even buy it from?

you said
>I don't think I've seen an ARM chip with a PC class memory controller
With your definition being 64-bit * dual channel.

Let me google that for you.
>cavium.com/ThunderX_ARM_Processors.html
Specifically paragraph two, bullet four:
>"Four DDR3/4 72-bit memory controllers capable of supporting 2400 MHz memories"
72-bit obviously denoting ECC functionality, with the product catalog showing 4 full-width DRAM links per SoC.
The Cavium products being throughput-focused, this suggests that custom high-power ARM designs require a full-width DDR4-2400 link for every 12 cores when not targeting embedded, low-power, or mobile applications.

The point being, and also backing up my original response, the majority of ARM devices do not need that kind of bandwidth. Devices are designed to their capabilities, with the truly high end ARM SoCs having exactly what you're asking for.

Your embedded A57 quad core cannot utilize the bandwidth a 64-bit×2 PHY provides, and thus is not built with one.
Even Apple designs their (high-end) SoCs with a quad 32-bit PHY instead of dual 64-bit, suggesting that latency and lower power per channel matter more than absolute throughput per channel, for the target market.
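Plugging in the ThunderX figures quoted above — four 72-bit (64 data + 8 ECC) controllers at 2400 MT/s — gives a back-of-the-envelope per-core share. The 48-core count is this sketch's assumption, inferred from the one-link-per-12-cores observation, not from the linked page:

```python
def thunderx_memory_bandwidth(controllers: int = 4, data_bits: int = 64,
                              mt_per_s: int = 2400, cores: int = 48):
    """Peak bandwidth for the quoted ThunderX config, total and per core, in GB/s.
    data_bits excludes the 8 ECC bits of the 72-bit bus; cores=48 is an
    assumption based on one full-width link per 12 cores."""
    total = controllers * (data_bits // 8) * mt_per_s / 1000.0
    return total, total / cores

total_gbs, per_core_gbs = thunderx_memory_bandwidth()
print(total_gbs, per_core_gbs)   # ~76.8 GB/s total, ~1.6 GB/s per core
```

Roughly 1.6 GB/s per core is modest by desktop standards, which fits the point that these are throughput-focused, not per-core-performance, designs.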

Now go be a little ditch digger somewhere else.


DX12 is literally the same shit just not applicable to other devices (practically speaking, mobile)

>Now that AMD has caught Intel with their panties down, will finally start seeing significant gains in PC performance?
yes
>What does the return of AMD as a competitive CPU manufacturer mean for the computer industry?
see above
>On a side note, will VIA ever make an attempt to come back or are they just going to struggle along until they die out?
ded


So, embedded garbage then?
What socket do these CPUs even fit in?
