>>55178436

Nobody needs more than 1 petaflop


It's never about what one needs, only about what one wants.

They also said no one would need more than 5 MB of storage. People like you have no real imagination; you just take little linear steps and assume the next step will always be like the last one. When information technology gets more powerful we create new applications that use it, always. We'll never run out of imagination and uses for more computing power.

>>RISC architecture
>we future nao
You do realize that modern x86 processors use a RISC core with a CISC interpreter right? The RISC vs CISC debate died a long time ago.

Also this supercomputer is power hungry as fuck. They should have used pic related but I guess the chinks love wasting electricity.

>arstechnica

REDDIT SHILLS GET THE FUCK OUT

Uh, it's literally triple the performance of the system it's replacing, for only ~15 MW. That's pretty crazy good.

The CPU (ShenWei SW26010) looks like a Chinese clone of the Cell.

I'm curious, why are supercomputers able to do so much fucking work despite GPUs shitting on them in terms of GFLOP performance?

Example:
This chink supercomputer: 93 GFLOPS
R9 295X2: 1408 GFLOPS (FP64)

geeks3d.com/20140305/amd-radeon-and-nvidia-geforce-fp32-fp64-gflops-table-computing/

On that note why not just use a shitload of GPUs for supercomputers instead?

Per-node performance isn't all that impressive.

The impressive part is that they figured out a way to connect 2.5X more nodes together than the fastest American supercomputer without the between-node latency shooting out of control and dragging down overall performance.
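To put a number on that, here's a toy strong-scaling model (all constants made up for illustration, not measurements of any real machine): per-node compute time shrinks as you add nodes, but the communication cost doesn't, so speedup flattens unless the interconnect improves.

[code]
# Toy strong-scaling model: total time = compute (shrinks with node count)
# plus communication (doesn't). Constants are illustrative guesses only.
def run_time(nodes, total_work=1e6, comm_per_step=50.0, steps=1000):
    compute = total_work / nodes        # perfectly divisible work
    comm = comm_per_step * steps        # latency paid regardless of node count
    return compute + comm

base = run_time(1)
for n in (1_000, 10_000, 40_000, 100_000):
    print(f"{n:>7} nodes: speedup {base / run_time(n):6.1f}x")
[/code]

Past a point the only way to keep scaling is to cut the communication term, which is exactly what the interconnect work is about.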

>93 GFLOPS

It's 93 PFLOPS, not GFLOPS.

That's 93,000,000 GFLOPS.

Whoops, I'll go sit in a corner by myself now.

American supercomputers look better

I thought Obabo was working on a new supercomputer? I bet he's pushing for that shit hard now.

>I bet he's pushing for that shit

You mean IBM, Nvidia, and Intel are pushing for that shit for those sweet, sweet federal bucks.

Did you just ask why we use CPUs when GPUs are faster?

>tfw you will never look upon a marvel of engineering and swell with pride knowing your country built it
>tfw you will always be irrelevant

I'm surprised to see Saudi Arabia in the top 10.

oil and gas discovery requires a ton of computational power

I'm not. You have any idea how much money that country has?

The progress being made here is absurd.
It was only 10 years ago that
>With 280.6 TFlop/s on the Linpack benchmark it is still the only system ever to exceed the 100 TFlop/s mark.
And now this system is 330x faster!

Can we expect the first exaflop system to arrive in ~5 years?

Lots of countries build supercomputers for all kinds of purposes. The US will probably have the fastest one next year, then the year after that China will have the fastest, and so on.

They just bought a computer from Cray.

Is this enough to run Devil May Cry 3?

Supposedly the US will reach the first exaflop system by 2020.

New Intel supercomputer: only 5% faster than the old one

It can even run Crysis, on medium at 30fps.

>DEC alpha
we cyberpunk nao
too bad it runs on linux

I can't think of any country outside of Africa and South America that doesn't have a megastructure of some sort somewhere.


I mean even Liechtenstein has a pimpin ass castle

And nowadays, you can make a faster supercomputer for about 12k.

Nigga everything runs on Linux. It's literally everywhere except your personal computers. I don't know when we're having sentient AI, but I'm sure it'll be running Linux.

Chile has the ESO, which hosts many of the largest telescopes in the world. Peru has Nazca and Machu Picchu. Rio has that famous statue of Jesus.

Egypt has the Pyramids, Sphinx, etc.

>I don't know when we're having sentient AI, but I'm sure it'll be running Linux.

It would probably commit suicide when it realises that it's powered by systemd

I meant a modern marvel of engineering.

top kek

I wonder what the africans are up to, beside starving.

> I don't know when we're having sentient AI, but I'm sure it'll be running Linux.
so it means that Linux® will finally get usable drivers?

Fuck off, man, Linux has drivers. Even my shitty Chinese tablet has built-in drivers, which is more than I can say for Windows.

I'd probably just kill myself if I were born an African.

same. You gotta hand it to them though, they are pretty mentally strong to not kill themselves despite years of poverty and unstable/abusive governments.

you do know that drivers for the Zhang Wei Shenzen Network Eletronics wi-fi module don't count

>tfw your country has the LHC

Now we just need to turn it into a superweapon

In terms of performance per watt, this is quite a bit more efficient than the US Titan.


In any case, if the power is utilized, it could very well outstrip the US's computing capability soon. Remember not only does China have the top two, but they also have more supercomputers than the US in the TOP500.

This could very well lead to a snowball effect if the US doesn't keep up.

>reddit shills
>tripfag

Can I use this for my condensed matter physics simulations?

>supercomputer-hits-93-petaflops-tripling-speed-record/
public record maybe.

That's a European project, no country can lay claim to it.

the country of Europe

The EUssr

> netlib.org/utk/people/JackDongarra/PAPERS/sunway-report-2016.pdf
> Each core group is composed of a Management Processing Element (MPE), a 64-bit RISC core supporting both user and system modes, with 256-bit vector instructions, a 32 KB L1 instruction cache, a 32 KB L1 data cache, and a 256 KB L2 cache, plus a Computing Processing Element (CPE) cluster: an 8x8 mesh of 64-bit RISC cores supporting only user mode, each with 256-bit vector instructions, a 16 KB L1 instruction cache and 64 KB of Scratch Pad Memory (SPM).

Literally just a mountain of PS3-style chips.
State of the art this is not.
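For what it's worth, a back-of-envelope peak from the layout quoted above roughly matches the published numbers. The clock speed, flops per cycle, the 4-core-group layout and the node count aren't in the quote; the figures below are the commonly cited ones for TaihuLight, so treat them as approximate.

[code]
# Rough peak-FLOPS estimate for the SW26010 layout quoted above.
# Clock, flops/cycle, core-group count and node count are assumed figures.
CORE_GROUPS_PER_CHIP = 4            # each: 1 MPE + an 8x8 mesh of CPEs (assumed)
CORES_PER_GROUP      = 1 + 8 * 8    # 65
CLOCK_HZ             = 1.45e9       # assumed
FLOPS_PER_CYCLE      = 8            # 256-bit FMA on doubles (assumed)
NODES                = 40_960       # one chip per node (assumed)

per_chip = CORE_GROUPS_PER_CHIP * CORES_PER_GROUP * CLOCK_HZ * FLOPS_PER_CYCLE
print(f"peak per chip:   {per_chip / 1e12:.2f} TFLOPS")          # ~3.0
print(f"peak for system: {per_chip * NODES / 1e15:.0f} PFLOPS")  # ~124; Linpack hits ~93
[/code]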

>In terms of performance per watt, this is quite a bit more efficient than the US Titan.
True, but that's not really shocking. The US Titan uses ancient AMD Opteron and Nvidia Tesla housefires, and despite that it only uses 8.2 MW of electricity for ~20 PFLOPS of raw computing power.

So 5 x US Titan supercomputers would use about 41 MW of electricity for ~90 PFLOPS of raw computing power. That makes this chink supercomputer only ~2.66X more energy efficient than a US Titan built on ancient, outdated hardware.

Anyway, this chink supercomputer would get BTFO if a US supercomputer were made with AMD Polaris GPUs and super energy-efficient Xeon-D processors, which might happen soon.

Shit, maybe the US won't even need to use many jewtel Xeon-D processors either. A single Rx 480 GPU can supply ~5 TFLOPS and only uses a max of 150 Watts. So 18,000 Rx 480 GPUs would provide ~90 TFLOPS and only use about 3 MW of electricity. In addition, those GPUs alone would only cost about $3 million dollars too. Maybe toss in a few Xeon-D processors just to oversee things and do the few calculations the GPUs can't do efficiently or at all.
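Rough GFLOPS-per-watt check on the figures being thrown around here (all numbers are the approximate ones from this thread, and note the Rx 480 figure is single precision, which matters further down):

[code]
# GFLOPS-per-watt from the rough figures in this thread (approximate).
systems = {
    "Sunway TaihuLight (FP64)": (93_000_000, 15_000_000),  # ~93 PFLOPS, ~15 MW
    "US Titan (FP64)":          (20_000_000,  8_200_000),  # ~20 PFLOPS, ~8.2 MW
    "18,000x Rx 480 (FP32!)":   (90_000_000,  2_700_000),  # ~5 TFLOPS / 150 W each
}
for name, (gflops, watts) in systems.items():
    print(f"{name:<26} {gflops / watts:5.1f} GFLOPS/W")
[/code]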

CHINK SHIT

>*So 18,000 Rx 480 GPUs would provide ~90 PFLOPS
typo

i for one accept our asian overlords

Wait so does this mean this chinkshit is DOA because of ?

The Rx 480 has an energy efficiency of ~33 GFLOPS/watt, right?

This 6 GFLOPS/watt chinkshit is starting to look like a joke.

chinks fell for the 'big data gives you insight' meme

wew

OpenCL nightmare? It would be a giant waste of resources because 60% of the cores would be unused. Something like that would be unstable and unusable for super large datasets, I assume.

Are you a burger?

Meanwhile England has Stonehenge, and even that is broken

Because GPUs only do things like move textures around really fast. They don't even have the opcodes required for the precise mathematical calculations real science needs.

Specifically, in chink applications like making nuclear weapons, you have to solve stochastic equations that compute the probability that neutron decays will be in the same place at the same time, to arbitrary precision. You can't do that with a fucking OpenGL call

Are you implying what I think you're implying?

It was probably expensive; my guess is the system's CPUs and mainboard chips have higher bus speeds at the cost of per-core performance, and bigger physical chips to decrease latency.

I imagine this was an expensive undertaking: all the CPUs are the same, and the power consumption per core must be wasteful.

It would be interesting to know whether chips with wider buses in large systems need more power to reduce latency than a smaller system with fewer, faster cores/chips.

The human brain can't process more than 30 petaflops per second.

How come certain GPUs were better for bitcoin mining than CPUs?

Not trying to be a smartass, I genuinely want to know.

>You can't do that with a fucking OpenGL call
No, but you can simplify it enough for it to be done on GPUs. Modern GPUs are capable of FP64 operations, so things like ray tracing can be done on them now.

I think at some point the advancement of technology will make its operators dumber. You can already see this nowadays with how many developers work at a high level and can still get away with only a rudimentary knowledge of what the generation before them had to know.

There might come a time where lousy data storing practices will be accepted because reading speed and storage size will be way too big for any application or use to ever fill it all.

Wonder if that's what happened in Warhammer 40K. There was the Dark Age of Technology where shit like Terminator Armour and Land Raiders were pumped out like lasguns, and the Baneblade was the small, scout tank of the Imperial ground forces, and over time the techpriests and enginseers basically went "Alright newbie, there's a long, convoluted explanation as to how this piece of tech works, but all you need to know is you need to hit this big red button when the light flashes green," and then four hundred years later, the light doesn't flash green anymore and nobody fucking understands why, because the knowledge of how to repair the system was lost in the simplicity of 'hit button when green'.

We must not allow a supercomputer gap!

>A single Rx 480 GPU can supply ~5 TFLOPS and only uses a max of 150 Watts.
That's single precision, so it's mostly irrelevant for HPC. These computations are all double precision.

You wouldn't use a gaming card for a supercomputer. When you have thousands of cards in one machine, reliability is crucial. Cards for HPC are less likely to break and have features like ECC RAM to ensure correct results.

And, as I said above, you need double precision. Gaming GPUs are optimized for single precision performance.
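A minimal illustration of the single vs double precision point (Python/NumPy standing in, nothing GPU-specific): float32 simply stops resolving small updates once values get large, which is fatal for long-running simulations.

[code]
# Why FP64 matters: float32 silently loses small updates to large values.
import numpy as np

big = np.float32(2**24)                 # 16,777,216: last integer float32 stores exactly
print(big + np.float32(1) - big)        # 0.0 -- the +1 is absorbed in single precision
print(np.float64(2**24) + np.float64(1) - np.float64(2**24))   # 1.0 in double precision
[/code]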

Of course not, I wouldn't be saying that if I was.

Where are you from then?

A third world country.

You know what I find very disturbing? Nobody has bothered to mention that we now have ~3X the estimated processing power to simulate a human brain for only 15 MW. That means a single human brain could be simulated with ~5 MW. HOLY FUCK.

Soon a human brain will be able to run on just 5 kW, or about $1/hour. The singularity is near, EVERYBODY PANIC!!!
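Spelling out that arithmetic with the 36.8 PFLOPS brain estimate cited further down the thread (an estimate that gets disputed below, so this just takes the claim at face value):

[code]
# The arithmetic behind the post above, taking the 36.8 PFLOPS estimate at face value.
machine_pflops = 93.0    # TaihuLight Linpack
machine_mw     = 15.4    # reported power draw
brain_pflops   = 36.8    # one (disputed) estimate of the brain's compute

brains = machine_pflops / brain_pflops
print(f"brain-equivalents of compute: {brains:.1f}")                  # ~2.5
print(f"power per simulated brain:    {machine_mw / brains:.1f} MW")  # ~6 MW
[/code]

So closer to ~2.5X and ~6 MW than ~3X and ~5 MW, but the same ballpark.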

How do you estimate the processing power required to simulate the human brain when you know almost nothing about how it works?

You could probably look at your country's mortality rate and swell with pride at that, I dunno.

>~3X the estimated processing power to simulate a human brain
Estimates from where? The estimates of the brain's processing power that I've seen vary by many orders of magnitude. No one has a clue how powerful the brain is, let alone how hard it would be to simulate it.

I wish we could round up all singularityfags and shoot them over a ditch.

Modern GPUs like the Rx 480 do double precision computations now you dumbass. In fact even ancient graphics cards like the HD 6990 can do double precision computations as well, like over 1 TFLOPS too. Not sure how many TFLOPS of FP64 the Rx 480 can do though.

>yfw a truly logical and self-learning AI unguided by any human initialization would probably turn into a nihilistic statue the moment its turned on

>Modern GPUs like the Rx 480 do double precision computations now you dumbass.
Of course they can do it. It's just far slower than single precision. The 480 is nowhere close to 5 TFLOPS double precision.

HPC GPUs are built with double precision in mind and are far better at it than gaming cards.

that is completely false, I think you should relearn what CISC means.

The English have a couple good Castles, don't worry about them

We know HOW it works, we just haven't mapped it yet so we don't know what it's doing.

But we're getting there. There is at this moment a fully mapped brain of a worm that was dissected in the 80s, and it's being used to operate robots.
Singularity SOON™

>"Researchers estimate that it would require at least a machine with a computational capacity of 36.8 petaflops (a petaflop is a thousand trillion floating point operations per second) and a memory capacity of 3.2 petabytes – a scale that supercomputer technology isn’t expected to hit for at least three years."

hplusmagazine.com/2009/04/07/brain-chip/

That article was written 7 years ago when Cray Jaguar was struggling to reach 2 PFLOPS.

Now we have this chinkshit churning out close to 100 PFLOPS, and we can also cram over 1 PB of RAM into just a few thousand nodes.

Anyway that estimate was done by researchers who knew what the fuck they were talking about.

>not running your own Gadolinium Gallium Garnet Quantum Electric Processing Unit
fucking Sup Forums leave my board REEEEEEEEEEEEEEE

>petaflops per second
:^)

They do use GPUs, dumbass. Most modern supercomputers are just glorified Beowulf clusters loaded with Xeons, Phis, and Teslas; the latter are great at simple computations that can be parallelized to embarrassing levels, but they suck ass when the work can't be. Those ideal synthetic benchmark scores aren't worth a shit when your code can't utilize the hardware.
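A tiny sketch of the distinction being made (NumPy standing in for the accelerator): independent per-element work parallelizes trivially, a tight recurrence doesn't, no matter how many cores you throw at it.

[code]
# Embarrassingly parallel vs. serially dependent work (NumPy as a stand-in).
import numpy as np

x = np.random.rand(10_000_000)

# Independent per-element work: maps cleanly onto thousands of GPU cores
# (or one vectorized NumPy call).
y = np.sqrt(x) * 2.0 + 1.0

# Serially dependent work: each step needs the previous result,
# so extra cores don't help at all.
acc = 0.0
for v in x[:100_000]:       # trimmed so the serial loop finishes quickly
    acc = acc * 0.999 + v

print(y.mean(), acc)
[/code]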

Oh, "researchers" say it! How silly of me. I didn't realize you had "researchers" to back you up. I'm convinced now!

Obviously a transhumanist magazine would claim the Singularity is right around the corner, but they don't represent the scientific community. There is no consensus.

I'm guessing he's going off of what experts have told us about how much a human brain is capable of processing. One such expert is one of the IBM chief scientists working on the DARPA SyNAPSE project. He says it's roughly equivalent to about 38 petaflops (yes, humans don't do floating-point calculations, but this is a rough equivalent).

>blogs.scientificamerican.com/news-blog/computers-have-a-lot-to-learn-from-2009-03-10/

He predicts that by 2020, we'd have super-industrial grade computers capable of it.

They aren't anything like Beowulf clusters. Beowulf clusters are, by definition, made of commodity machines. Modern supercomputers are all about the interconnect. They have some really exotic hardware and network topologies. They do not look much like commodity hardware.

Remember when that giant fireball shut down China's supercomputer?

>LEADING BRAIN COMPUTER SCIENTISTS ESTIMATED IT SO IT MUST BE TRUE
lmao

Processing power isn't the problem, it's developing the software to run on it.

That isn't the consensus, though. That is the opinion of one expert.

Besides, I think his prediction is pretty clearly wrong. He claimed we would be simulating full human brains by 2018, but we can't yet simulate the brain of C. elegans despite years of work.
artificialbrains.com/openworm
It would take some incredible breakthroughs to stay on his schedule.

Of course, the interconnect is drastically different, but it is definitely much more commoditized than it used to be in the era of Cray vector systems or Thinking Machines.

What other "exotic" hardware are you talking about? Custom ASICs?

Mostly the interconnect, but that's nothing to sneeze at. I don't see when you could call it a Beowulf cluster when the most important component is so exotic.

first part of that computer looks like the old WTC

The IBM/DARPA SyNAPSE project is a redesign of the processor to match the human brain. That's the expert in question, so he is working on exactly the kind of "incredible breakthrough" that would let him stay on course with his prediction, not only in terms of raw power but also in terms of power efficiency.

>IBM
IBM is chinks now.

Just a bit to add: the SyNAPSE processor is supposed to be more than 1000x more efficient in compute per watt than current CPUs.

Didn't we explode this last year?

Is there any evidence this breakthrough is coming? It is nowhere to be seen. Simulating a human brain is millions of times harder than simulating the nervous system of C. elegans. At the current pace of progress, it seems unlikely we'll be simulating C. elegans by 2018. It is crazy to expect us to simulate entire human brains within the next 18 months.

And "millions of times harder" is conservative. C. Elegans has 308 neurons compared to the ~90 billion of a human, but these neurons are far simpler than human neurons and are connected in simpler ways.