California Researchers Build The World's First 1,000-Processor Chip

>The 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 Watts, low enough to be powered by a single AA battery...more than 100 times more efficiently than a modern laptop processor... The energy-efficient "KiloCore" chip has a maximum computation rate of 1.78 trillion instructions per second and contains 621 million transistors.

>Programs get split across many processors (each running independently as needed with an average maximum clock frequency of 1.78 gigahertz), "and they transfer data directly to each other rather than using a pooled memory area that can become a bottleneck for data." Imagine how many mind-boggling things will become possible if this much processing power ultimately finds its way into new consumer technologies.

slashdot.org/story/312681
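
For a rough idea of what "transfer data directly to each other" means in practice, here's a toy Python sketch of a core-to-core pipeline. Everything in it (the core count, the queue-per-core structure, the pretend work) is made up for illustration and is not the actual KiloCore toolchain:

from queue import Queue
from threading import Thread

NUM_CORES = 4                      # stand-in for 1,000; purely illustrative
inboxes = [Queue() for _ in range(NUM_CORES)]

def core(core_id):
    # Each "core" runs its own small loop and hands results straight to the
    # next core's inbox instead of going through one shared memory pool.
    while True:
        item = inboxes[core_id].get()
        if item is None:                       # shutdown token: pass it on and stop
            if core_id + 1 < NUM_CORES:
                inboxes[core_id + 1].put(None)
            break
        result = item + 1                      # pretend work
        if core_id + 1 < NUM_CORES:
            inboxes[core_id + 1].put(result)   # direct core-to-core transfer
        else:
            print("pipeline output:", result)

threads = [Thread(target=core, args=(i,)) for i in range(NUM_CORES)]
for t in threads:
    t.start()
inboxes[0].put(0)                              # feed the first core
inboxes[0].put(None)                           # then let the pipeline drain
for t in threads:
    t.join()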

Das it mane

What instruction set?

Finally they get the Kilocore concept to the full 1000. Been hoping to see them get that for years.

how much fps is that

How can it be powered by a single AA? Is this sorcery?

Let's run down the list:

>1000 cores
>0.7W
>they transfer data directly to each other
>1.78GFLOPS
>1.78GHz clock speed
>one op per clock

This sounds like a non-von Neumann switch-on-event processor, which is nothing out of the ordinary. The "cores" are so tiny that ARM's Cortex-A7 looks like a behemoth in comparison.
The A10 Micro-6700T with a 4.5W TDP pulls 70.4 GFLOPS across its 4 small, outdated 28nm cores. Note also that the A10 includes a GPU in that TDP, and it accounts for another 128 GFLOPS, giving a total of nearly 200 GFLOPS.

Compare FLOPS per watt between these two chips:
A10 Micro-6700T: ~44 GFLOPS per watt
KiloCore: ~2.5 GFLOPS per watt

And that shitty old AMD chip isn't even remotely close to the highest perf/watt part out there. Tons of ARM designs now completely trash it.
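
If you want to check the math on those two numbers yourself, here's the arithmetic. The KiloCore side uses the 1.78 GFLOPS assumption from the list above, not a published spec:

# FLOPS-per-watt for the comparison above. The KiloCore entry assumes the
# 1.78 GFLOPS figure from the list (an assumption, not a published spec);
# the A10 Micro-6700T entry uses ~200 GFLOPS (CPU + GPU) over a 4.5 W TDP.
chips = {
    "KiloCore (assumed)": (1.78, 0.7),    # (GFLOPS, watts)
    "A10 Micro-6700T":    (200.0, 4.5),
}

for name, (gflops, watts) in chips.items():
    print(f"{name}: {gflops / watts:.1f} GFLOPS per watt")

# KiloCore (assumed): 2.5 GFLOPS per watt
# A10 Micro-6700T: 44.4 GFLOPS per watt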

Davus fags btfo

Is this the same CPU they were trying to develop for phones years ago or a different project? iirc the one I'm thinking of had one 32-bit core with 1023 8-bit cores

And now that I think about it, that name KiloCore was way too familiar.

www-03.ibm.com/press/us/en/pressrelease/19508.wss
tomshardware.com/news/ibm-rapport-kilocore,2575.html

These are from 2006. They had 1024 tiny 8-bit ALUs with a single PPC core.
These researchers do not appear to have done a single original thing. In fact, I'm not sure what they've done at all.
What exactly is the news story here? University "researchers" take an existing IBM design and clock it higher, while still using IBM as a fab?

I too am entirely confused by this, because these KiloCore parts actually existed. They were too shitty to do anything a decade ago, and nothing has changed now.
This is a fluff article everyone is just passing around the echo chamber, or it's one of the laziest scams I've seen in a long time.

It does not mention anything about floating point operations; I imagine FP throughput is much lower than 1.78 GFLOPS.

This processor seems to be the opposite of SIMD: a sort of multiple instruction, single data setup instead. It simply processes a very small number of data points over a thousand cores, utilising as many instructions in one cycle as there are cores.

I suspect that its instruction set is quite a minimal one for this reason. The cores are very small and likely only deal with very simple logic on their own, with potential for complexity because they feed each other information.

This also leads back to why I doubt it does 1.78 GFLOPS: it will most likely spend several cycles on each floating point operation (rough numbers below).

As far as I can see the "breakthrough" was that they reached a thousand cores.
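
Rough numbers on that floating point point. The cycles-per-FLOP count is a pure guess, not anything published:

# Pure guesswork to illustrate the point above: if each tiny core needs
# several cycles per floating point operation (the 20 below is an arbitrary
# assumption, not a published figure), FP throughput lands well below the
# peak instruction rate.
clock_hz = 1.78e9             # average maximum clock per core, from the article
cycles_per_flop = 20          # assumed: multi-cycle or software FP on a tiny core
cores = 1000

per_core_flops = clock_hz / cycles_per_flop     # ~89 MFLOPS per core
chip_flops = per_core_flops * cores             # ~89 GFLOPS chip-wide, at best

print(f"per core:   {per_core_flops / 1e6:.0f} MFLOPS")
print(f"whole chip: {chip_flops / 1e9:.0f} GFLOPS")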

IBM made these 10 years ago with 1025 cores.
There is nothing new about this.

Yeah, I was dumb enough to not click your links before commenting.

Very much non-news. They basically just clocked them higher.

>The 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 Watts

So in effect, each of the 1,000 cores is only doing about 115 million instructions per second at that power level, the equivalent of a 115 MHz clock. A lot of applications aren't parallel enough to handle that shit.
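
On the parallelism point, a quick Amdahl's law check. The serial fractions below are example values, not measurements of anything:

# Amdahl's law: with 1,000 cores, even a small serial fraction caps the
# speedup hard, which is the problem with a pile of ~115 MHz-equivalent cores.
def amdahl_speedup(serial_fraction, n_cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for serial in (0.5, 0.1, 0.01, 0.001):
    print(f"{serial:6.1%} serial -> {amdahl_speedup(serial, 1000):6.1f}x on 1000 cores")

#  50.0% serial ->    2.0x on 1000 cores
#  10.0% serial ->    9.9x on 1000 cores
#   1.0% serial ->   91.0x on 1000 cores
#   0.1% serial ->  500.3x on 1000 cores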

Is it x86?
Will dx12 work?

>group of researchers unaffiliated with Intel, AMD, or even VIA
>x86
Of course it isn't.

Oh.
>into the trash

is that way

Amd TRASH! All talk and take a look at their fps on tomshardware.

lance

>muh fps
See

Gee user, can't I feel excited for a moment?

Why? This has already been done a decade ago.

I want to believe, even if it's an illusion, that technology is really moving forward and we're not being held back by Intel and its marginal generational increments.

Find better things, things that aren't bullshit.

Enterprise hardware, bleeding edge mobile SoCs, advances in DRAM, and 3D packaging. FinFETs driving leakage current lower than it has ever been, SOI making substantial performance gains over bulk while reducing cost, body biasing turning a single transistor into a low power or high frequency switching device on the fly, high perf SOI-based FinFETs. CMOS image sensors with more pixels than a smartphone's objective lens can supply photons for in a given span of time, and built-in optical image stabilization capable of handling over 1g.

These are all things which are genuinely interesting and exciting. Clickbait headlines about rehashed old tech that is a dead end are not what you should get riled up over.

Does it run Crysis?

No, you can't, because blowing things out of proportion is something they should stop doing.

Oh yeah, I know about some of those. Intel's XPoint is something I eagerly await, and Samsung's also been doing some nice stuff with their NAND. And although I'm not a phonefag, I hope the day comes soon when they cram in x86 chips with ultra low power consumption and enough performance to replace a low-end laptop.

As far as I know there's still lots of room for improvement in many areas, but CPUs don't seem to be one of them; it's all about reducing power consumption, 5% here and there, and that's all. That's why I got excited, pardon my autism.

>Advances in RAM
Like the phase-change RAM currently being experimented with? I believe there are some working examples already, and the current push is toward reliability and scaling it down.

x86 can't scale

it's probably just 1000 single-instruction-set CPUs

x86 with all the legacy cruft removed could actually scale very well.

>California researchers desperate for grant, create non-discovery.

shit I can't remember the last time something was only 2.5 GFLOP/watt

I mean even G80 back in 2006 could hit 3.2 GFLOP/watt
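
Rough sanity check on that G80 figure, using the commonly quoted 8800 GTX numbers. Both the peak and the board power are approximate, so treat the result as ballpark only:

# Ballpark check on the G80 claim above, using commonly quoted 8800 GTX specs
# (approximate figures, not measurements).
peak_gflops = 518.4        # 128 SPs * 1.35 GHz * 3 flops/clock (MAD + MUL)
board_power_w = 155.0      # approximate board power

print(f"{peak_gflops / board_power_w:.1f} GFLOPS per watt")   # ~3.3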

>x86 with all the legacy cruft removed
Then it wouldn't be x86 anymore, user.

Tell that to AMD; they haven't had dedicated x87 hardware since 2006. They just run x87 ops on the SIMD hardware if they ever come up.

Reminder that this chip uses a completely different architecture from anything you find in a consumer or even server processor, one you'd have to learn from scratch.

>x87
Why do we still have x86 if x87 exists?

x87 was a floating point coprocessor; now it's basically integrated into modern x86 processors.

reeks of a gpu chip that got named as a cpu