1000-core CPU

ucdavis.edu/news/worlds-first-1000-processor-chip

AMD should hire them

...

What's the FLOPS/W of this thing? Because I'm willing to bet it's abysmal.

Maybe for a CPU.
GPUs have had more compute units for ages now.
And 600 million transistors? WTF
A CPU today has at least 1,500 million and a GPU 6,000 million.
I could literally get an FPGA with more transistors and program it as a thousand-core CPU.

>not x86
into the trash it goes

1.78 trillion operations per second at 0.8 volts.

It's not the world's first. This was made 10 years ago.

Yeah, I remembered one from 2010 but didn't know about the one from six years ago. The source I posted is shit, but I needed one in English.

This is a general-purpose CPU. GPUs are like surgical tools for specific tasks, while a CPU is like a utility knife, capable of all tasks at an acceptable level.

x86 is not a good option for multi-core designs.
It's purely propped up by software support these days.

Nobody gives a shit what voltage it uses. Modern laptop processors run at less than 1.0 volt.

How many watts does it consume running at 100% CPU load?

Also, can it do FP64 stuff? Otherwise this is fucking useless.

So what could it do?

>World's first
About ten years late.

10 years ago*
Don't shitpost and drink.

>Also, can it do FP64 stuff? Otherwise this is fucking useless.
This kind of legacy shit is what's keeping hardware and software from advancing. Why not something new and faster?

The source in the OP doesn't include this either.

General computing. Maybe even Windows if it's the right architecture, but I doubt it.

10 years ago in 2010?
Get off the booze if you're gonna post...

I don't run 1000 processes on my Windows machine at once, so it wouldn't be useful to me.

>x86 is not a good option for multi-core designs.
Then why is it the most used CPU architecture in supercomputers and servers?

No consumer would ever use that many cores at once. And it's not even designed to be used under full load, since each core is clocked independently to accommodate the main CPU load. It is mainly meant for server and enterprise environments. Should it be made a consumer product, that trickle-down will take time, and by the time we have "100-core CPUs" we may have tasks that use that many cores, and many more than that in some cases.

Because marketing and legacy shit.

That has been a problem for years already.

People are even trying to make ARM a useful option for servers.

World's first? Pretty much all GPUs are chips with many thousands of tiny cores.

It should have said "world's first general-computing processor".

It uses like 17W max

Because 10% faster than super fast means jack fucking shit when it's totally useless?

You can have all the big benchmark numbers in the world, doesn't mean they can do anything.

Could have just said you meant multiple cores on a single die.

>Because marketing and legacy shit.
All the marketing and legacy shit couldn't save itanic.

>People are even trying to make ARM a useful option for servers.
ARM is fucking dogshit; nobody wants to use that shit. I don't think it can even do FP64 ops, and its performance-per-watt is abysmal.

>inb4 cisc vs risc
Sorry kiddo, that debate died when Intel started using a RISC core with a CISC decoder in front.

Find me an ARM chip with better performance-per-watt than pic related.

>It uses like 17W max
like I don't care. Wake me up when they actually stress this raging housefire to 100% and report how many kilowatts of electricity it used.

>FP64
>legacy shit

What were you trying to say?

It used under 20 W at full load when they measured its performance.
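Quick back-of-envelope using the numbers in this thread (1.78 trillion ops/s, roughly 17-20 W at full load), so treat it as a rough estimate rather than a benchmark:
    1.78e12 ops/s / 17 W ≈ 1.0e11 ops per second per watt ≈ 105 billion ops per joule
    (using 20 W instead gives ≈ 89 billion ops per joule)
And those are the chip's native operations, not FP64 FLOPS, so it isn't directly comparable to GPU FLOPS/W figures.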

The sole purpose of China's Tianhe-2 was simply that: to hold the title of the world's "fastest" machine, despite doing fuck all for the three years it held the title.

That something new does not need to support current technologies, especially if it can do something new and faster.

That's the problem: people are scared of supporting new things, so they won't develop anything for them. Known-working things are a safe bet; that's why legacy support is actually stopping progress.

What would constitute an alternative to floating point arithmetic?
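(Partly answering my own question: the usual candidate is fixed-point arithmetic, i.e. plain integers with an implied binary point, which is what integer-only hardware tends to use. A minimal C sketch assuming a Q16.16 format; purely illustrative, nothing here is taken from this chip's actual ISA.)
    #include <stdint.h>
    #include <stdio.h>

    /* Q16.16 fixed point: a 32-bit integer with 16 fractional bits.
       Add/subtract are ordinary integer ops; multiply needs a re-scaling shift. */
    typedef int32_t q16_16;

    #define Q_ONE (1 << 16)

    static q16_16 q_from_double(double x)   { return (q16_16)(x * Q_ONE); }
    static double q_to_double(q16_16 x)     { return (double)x / Q_ONE; }
    static q16_16 q_mul(q16_16 a, q16_16 b) { return (q16_16)(((int64_t)a * b) >> 16); }

    int main(void) {
        q16_16 a = q_from_double(3.25);
        q16_16 b = q_from_double(0.5);
        printf("3.25 * 0.5 = %f\n", q_to_double(q_mul(a, b))); /* prints 1.625000 */
        return 0;
    }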

>Dotcom bubble and PCs finally reaching a "good enough" baseline severely weakens Sun, SGI, HP, Compaq, and other traditional non-x86 HPC vendors
>Intel, HP and Red Hat finish them off with the Itanium hype train
>Intel shitcans Itanium
>Inexpensive x86, locked down to shit SPARC, and expensive as all fuck POWER are all that remain for high-performance computing
>the majority of supercomputer vendors naturally choose x86, cheap enough and fast enough combined with faster interconnects and GPGPU computing that you can outrun better architectures by sheer brute force

supercomputing doesn't give a fuck about legacy compatibility, software is generally written for one time and one time only

compatibility > speed outside of HPC, always
one would think you over-optimistic tards would have figured that out after watching what happened to x86's betters in the '90s and 2000s

>they're just scared!
stop this stupid shit

they're not "scared" of anything, they just don't give a fuck, it brings nothing to the table, progress for the sake of progress is no progress at all

people just want to get work done, as long as it gets done in a reasonable time frame, it doesn't matter anymore, for most non-HPC jobs you're more inhibited by your thought process than your system's hardware speeds anyway

actual numbers

vcl.ece.ucdavis.edu/pubs/2016.06.vlsi.symp.kiloCore/2016.vlsi.symp.kiloCore.pdf

What you've just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul.

It's not very useful; what you posted only gives a small idea of the actual performance.
Post FLOPS.

I doubt Windows would run on 1000 processors.
Even Linux needs to be compiled specially to support that many processors (I think the default limit is 64; see the config note below).
>what is multithreading
Performance would be shit though, since each processor would suck balls.
It wouldn't be 1000 times faster than a Xeon core; it would just be 1000 cores that are each a ten-thousandth as fast as a Xeon core, or less.
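For reference, the Linux limit in question is baked in at kernel build time; roughly like this (exact defaults vary by kernel version and architecture, so don't quote me on the numbers):
    # Processor type and features ---> Maximum number of CPUs
    CONFIG_NR_CPUS=64        # common x86-64 default when MAXSMP isn't selected
    # CONFIG_MAXSMP=y        # selecting this raises the limit into the thousands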

>it's not very useful that what you posted only gives a small idea of the actual performance
>post one of the most useless metrics of comparing performance across architectures there is instead

No, NASA should hire them. I mean, a CPU like this, if radiation-hardened, would be wonderful for space probes.

Good job, user! You sure showed him!

Why?
It would probably be shittier than the current RAD750s and similar chips being used in spacecraft; spaceflight is not computationally expensive.

>621M transistors
That's not a lot.

I would like to eat that up

Stop posting your high school bully


fag

Bait

You still need high parallelization for something like that. Also, the more free threads you have, the better, especially when you need almost perfect synchronization. If one core fails you, the system is fucked.
Also, the RAD750 is power-hungry as fuck compared to this CPU, which could run on just AAA batteries. So that means better efficiency and, maybe, longer trips.

We came here to discuss, not to insult.

That's not a CPU. It can't even run anything.

It's definitely a CPU.

No, it isn't. It doesn't even have any modern CPU logic. The cores aren't even able to do floating point and only operate on 16-bit integers. The whole thing can't even run any OS. The most this thing could be called is a coprocessor.

Modern GPU ALUs are more advanced than this.

It doesn't need to be able to do floating point ops to be a CPU.
>can't run an OS
Was the first x86 CPU not a CPU until an OS was made for it? Then it magically gained this new title even though none of the hardware changed, right? Oh wait, no. It's perfectly possible for a CPU to have no available operating systems.

It has 1000 cores, but they're probably retarded-simple, using an instruction set with only add, sub, and mov. They also likely dedicated zero gates to IPC... 1000 cores is pretty worthless when NONE of them can talk to one another.

This is some idiot journalist reporting on some shit they know nothing about. All these chucklefucks did was make the dumbest sub-par GPU ever.

You need to define a CPU in the context of modern computers, or else every DSP/SIMD unit/ALU is a CPU; at that point, where do you even draw the arbitrary line?

A CPU should be able to act as the central processing unit in some way. If it doesn't run an OS, then why would you call it the central processing unit? You'd still need a real CPU to program it and run anything on it.

You're an idiot.

Running an OS is not a barometer for what makes a CPU a CPU. In this day & age the thing that separates a GPU and a CPU is dedicated IPC. If dedicated IPC does not exist on die, it's a GPU. If it does exist, it's a CPU.

WTF is a dedicated IPC?

Modern GPUs can easily decode and run instructions and have superscalar pipelines. What makes this thing any more of a CPU?

Inter-process communication, you dumb motherfucker.

L3 cache, mutex locks, HyperTransport, etc. are forms of dedicated on-die IPC.
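To make that concrete, here is roughly what that kind of core-to-core coordination looks like from the software side. Plain pthreads on a stock CPU, nothing specific to KiloCore; the mutex ultimately rests on the hardware's atomic and cache-coherence machinery:
    #include <pthread.h>
    #include <stdio.h>

    /* Two threads (stand-ins for two cores) bump a shared counter.
       The mutex keeps the increments from stomping on each other. */
    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("%ld\n", counter); /* 2000000 every run, thanks to the lock */
        return 0;
    }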

GPUs are great for anything that can be solved using map-reduce. Anything else can get fucked.
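For anyone wondering what "map-reduce shaped" means here: apply the same function to every element independently, then fold the results together. A trivial C sketch of the pattern; a GPU would just run the map step across thousands of lanes at once:
    #include <stdio.h>

    /* map: square every element; reduce: sum the results. */
    static float square(float x) { return x * x; }

    int main(void) {
        float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float mapped[8];
        float sum = 0.0f;

        for (int i = 0; i < 8; i++)   /* map: independent per element, trivially parallel */
            mapped[i] = square(data[i]);
        for (int i = 0; i < 8; i++)   /* reduce: combine the results */
            sum += mapped[i];

        printf("sum of squares = %.1f\n", sum); /* 204.0 */
        return 0;
    }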

If you have 6+ cores and they can't share high-speed memory, or even send high-speed messages to one another, then it's not a CPU.

You realize GPUs have inter-process communication down to the ALU level across a compute unit through the CU data share, right?

>L3 cache
Nope, this thing doesn't even have L2.
>mutex locks
I think you are thinking of something else. Everything that has coherent caches and memory has to have some form of this.
>HyperTransport
Every modern processor has a form of this for moving data to and from caches.

So I ask again: what makes this thing any more of a CPU than a GPU?

>No consumer would ever use that many cores at once
I'd like to add that 640K is more memory than anyone will ever need.

The key difference is that KiloCore is MIMD, while GPUs are SIMD. The only other MIMD machine I can think of is the Xeon Phi.
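Rough illustration of the difference, with pthreads standing in for independent cores; a hedged sketch of the two models, not how either chip is actually programmed:
    #include <pthread.h>
    #include <stdio.h>

    /* SIMD flavor: one operation applied across many data elements in lockstep. */
    static void simd_style(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i++)      /* a vector unit would do these lanes in one instruction */
            out[i] = a[i] + b[i];
    }

    /* MIMD flavor: independent instruction streams doing different work at the same time. */
    static void *task_sum(void *arg) {
        const int *v = arg; int s = 0;
        for (int i = 0; i < 4; i++) s += v[i];
        printf("sum=%d\n", s); return NULL;
    }
    static void *task_max(void *arg) {
        const int *v = arg; int m = v[0];
        for (int i = 1; i < 4; i++) if (v[i] > m) m = v[i];
        printf("max=%d\n", m); return NULL;
    }

    int main(void) {
        int data[4] = {3, 1, 4, 1};
        pthread_t t1, t2;
        pthread_create(&t1, NULL, task_sum, data);  /* two different programs running... */
        pthread_create(&t2, NULL, task_max, data);  /* ...at the same time: that's MIMD */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, out[4];
        simd_style(a, b, out, 4);                   /* same op over many elements: SIMD */
        printf("%.0f %.0f %.0f %.0f\n", out[0], out[1], out[2], out[3]);
        return 0;
    }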

>You realize GPUs have inter-process communication down to the ALU level across a compute unit through the CU data share, right?
Keyword was HIGH SPEED. High throughput isn't high speed.

As far as I've seen, and as far as Google says, GPUs still only have communication between separate physical GPUs. Individual cores cannot communicate without hitting low-speed RAM.

>I think you are thinking of something else. Everything that has coherent caches and memory has to have some form of this.
Cache synchronization has an explicit instruction on modern ISAs. It's all done on the metal (rough sketch of the software-visible version at the end of this post).

>every modern processor has as form of this for moving data to and from caches
high speed was the keyword, again.

>So I ask again. What makes this thing any more of a cpu than a GPU?
I'm saying it is a fucking GPU. I'm also saying you're dumb as shit.
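(The "explicit instruction" sketch mentioned above, in portable C11 form; on x86 a sequentially-consistent fence typically lowers to an MFENCE or a locked instruction, but the exact instruction is the compiler's and ISA's business. The flag name is made up for the example.)
    #include <stdatomic.h>

    static _Atomic int shared_flag = 0;  /* hypothetical flag another core polls */

    void publish(void) {
        /* full fence: everything written before this is visible before the flag flips;
           this is the software-visible face of the hardware coherence machinery */
        atomic_thread_fence(memory_order_seq_cst);
        atomic_store_explicit(&shared_flag, 1, memory_order_relaxed);
    }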

Can't think of a single mission that has needed single-chip parallelism on that scale, though; most components have their own individual control systems, processing instructions that were prepared and sent weeks or even months in advance from very far away.

From a safety perspective, most probes today tend to carry multiple backups of their entire computer control system, which I would trust more than a single multi-core design that seems vulnerable to cascading failure.

But it could have some use for more advanced software decision-making, to compensate for those barriers, and maybe even improved onboard image processing.

not every application for a CPU requires an operating system, 64-bit registers or dedicated floating-point hardware, and there are many "modern" embedded CPU designs that have none of these

>If it doesn't run an OS, then why would you call it the central processing unit?
because it controls the entire system? because it processes the instructions that govern what all the other pieces do? many early computers didn't run operating systems at all, neither do most embedded chipsets, an operating system just makes controlling computing resources easier for the end user

this out-of-context quote is dumber than the "everyone ripped off Xerox" meme and I wish it would stop