My friend explained to me he saw this shit online where you can have bootup quicker and load faster


>data from storage goes to ram
>data held in ram will go to cpu to be processed

he explained that if that's true, then if you install your cpu on the ram itself, things will be processed faster because it won't need to send the data from ram to cpu, it will just process in its place. It seems logical to me, but is there something that will prevent that from working? And if this is actually a thing, are there any sources or a procedure name I can search for? Because I can't seem to find anything related.

Other urls found in this thread:

en.wikipedia.org/wiki/Connection_Machine
youtube.com/watch?v=D7J9NxYjNBk

>What is L1/L2/L3$$

You can't find anything related because this genius idea fundamentally goes against computer science theory. You may as well go nail a toaster to your mom's face to make her more aerodynamic.

It sounds like you are describing the architecture of a Connection Machine: en.wikipedia.org/wiki/Connection_Machine

They failed because they were too hard to program, and not useful for general tasks.

RAM is bad. You do not want to use RAM. Every use of RAM is a failure from a speed perspective. It's a necessity sometimes, but you still avoid it at all costs. This is why chips have multiple levels of cache: to avoid having to reach out to RAM like it's the devil itself.

t. someone who has designed a basic processor that handles things like branch prediction.

>You may as well go nail a toaster to your mom's face to make her more aerodynamic.

lmfao

>people are still using Von Neumann architecture.

sad!

>you install your cpu on the ram itself, things will be processed faster because it wont need to send the data from ram to cpu,
hahaha

can you have 32 GB of on-chip L1 tho?

Putting the cpu on the ram, or more accurately putting the ram on the cpu, has already been done. That's your 3 levels of cache (L1, L2, L3). They're much faster than regular ram, but prohibitively expensive. You're not going to get 16GB of ram on the same chip as the cpu, especially at the speeds any of the caches operates at.

No.

the problem is that the clock used in ram is slower than the cpu, so you cannot install the cpu inside the ram, only outside.

Fill up ram slots starting with the one closest to the cpu. It's faster because it doesn't need to go as far.

>install your cpu on the ram itself
Man I love Sup Forums

can you have a 32GB stick of ram transfer data as fast as my 12MB of L1? oh i guess your computer's going to be slowed the fuck down then

>if you install your cpu on the ram itself

youtube.com/watch?v=D7J9NxYjNBk

ripping yarn old chum

Your computer does something similar (note similar) when you put it to sleep

This fucked me up cause it’s technically true

why not just install the cpu in the ssd?

The difference is negligible.

It's a conceptual architecture called PIM, processing in memory. AMD (and other firms) have been pursuing research into it since the early 2010s as a proposed way of achieving several times the compute power efficiency of current processors. The reason is that data transfer from RAM to CPU is not only high latency (relative to data movement within one component) but also very costly in energy, like dozens of times more than the actual computing done by the CPU or data transfer within the CPU (cache to cores).

As for why it's hard, no clue. It's got some big names behind it, but I don't think it's really gone anywhere since like 2015. I'm sure they're still researching it, just not making it public.

Negligible, but still there. I mean, take the speed of light, account for the latency of traveling through circuits, and then factor in the distance from the closest ram slot to the one furthest from your cpu. The difference will only be a couple hundred picoseconds at most, but still, it'll be there.
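Back-of-the-envelope, here's a sketch of that estimate (the extra trace length and the signal speed in copper are assumed numbers, not measured from any real board):

```python
# Rough propagation-delay estimate between the nearest and farthest RAM slot.
SPEED_OF_LIGHT = 3.0e8                       # m/s in vacuum
SIGNAL_SPEED = (2.0 / 3.0) * SPEED_OF_LIGHT  # ~2e8 m/s in a PCB trace (assumed)
EXTRA_TRACE = 0.05                           # m of extra trace to the far slot (assumed)

delay_s = EXTRA_TRACE / SIGNAL_SPEED
print(f"extra delay: {delay_s * 1e12:.0f} ps")  # -> extra delay: 250 ps
```

A couple hundred picoseconds, like the post says, which is dwarfed by the ~10 ns of DRAM access latency itself.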

>still there
I bet your neanderthal genes are 'still there' famalam

They sure as shit are, you most likely have them too. That's what makes us... us.

>You may as well go nail a toaster to your mom's face to make her more aerodynamic.
sometimes it may work...

It's called a RAM Disk. Decent programs will import and export all the data on the ramdisk from/to your non-volatile memory (HDD, SSD...) at system launch and shutdown; better ones will also do this intermittently during low I/O usage.
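A minimal sketch of that intermittent write-back idea, with temp directories standing in for the actual mounts (on Linux the ramdisk would typically be a tmpfs like /dev/shm; the paths here are placeholders):

```python
import shutil
import tempfile
from pathlib import Path

def sync_ramdisk(ramdisk: Path, backing_store: Path) -> None:
    """Copy everything on the (volatile) ramdisk out to non-volatile storage."""
    shutil.copytree(ramdisk, backing_store, dirs_exist_ok=True)

# Demo: temp dirs stand in for the tmpfs mount and the HDD/SSD.
ram = Path(tempfile.mkdtemp())
disk = Path(tempfile.mkdtemp())
(ram / "cache.dat").write_text("hot data")

sync_ramdisk(ram, disk)                  # what a "better" program does during low i/o
print((disk / "cache.dat").read_text())  # -> hot data
```

A real implementation would do this on a timer or on idle, and again at shutdown, so a power loss only costs you the data written since the last sync.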

>branch prediction
You're the cause of Spectre. Admit it.

if you're willing to pay 6 gorillian dollarydoos for a rampu

If RAM is the fastest type of memory why even have other kinds? Just store everything on the RAM.

CPU registers are the fastest memory in a computer lol

Why don't computers just do a sleep mode style dump when you turn off your computer (into the HD rather than RAM)?

Light travels 30cm in a nanosecond. If your RAM was on one side of the Earth and your CPU on the other, it would still only take ~42 microseconds to communicate.

That's the kind of negligible we are talking about.

that is literally what windows 10's fast startup does

I thought this was what Windows fastboot did.

>linux fags dreaming of features that were added to windows years ago

oops, I'm still on 7

Computer boots so fast with an ssd it doesn't matter anyway

Not saying it is a bad feature or anything like that, just saying you won't miss Windows fastboot.

This, I spend more time on POST than windows load screen.

Exactly. My Linux installation takes about 3 seconds to boot, but my laptop spends 10 seconds posting

>his mobo doesnt support ultrafast boot

I'd rather have bios inputs in case I need them.
>2 seconds fan spinup on black screen
>2 seconds finding AHCI drives
>bios splash screen again
>black screen
>windows desktop
The fuck is going on in my setup

RAM typically is not a big bottleneck to computation.

The way modern computers work is that applications are loaded into RAM, and then the CPU takes some of that data and transfers it to its own internal cache, which is significantly faster than RAM and, more importantly, has way less latency because it's on the chip itself and operates at the same speed as the CPU.

Results of computation are then ferried back off the chip back into RAM.

This system does not bottleneck the CPU; the real bottleneck is how much actual computation it can do with the transistors it has and what clock speed it's running at. The on-board L1/L2 cache is increased as chips get faster in order to ensure that memory is not a bottleneck for computation. And RAM speeds are also increased to make sure they're not a bottleneck for feeding the L1/L2 cache.

Not only would more on-chip memory generally be a waste, it's also extremely expensive, and because of the limitations of physics and materials we have limited space to play with, so increasing L1/L2 significantly would also necessarily mean decreasing the number of transistors on the same chip that are dedicated to actual calculation.
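The trade-off those posts describe can be put in numbers with the textbook average-memory-access-time formula; the cycle counts below are illustrative ballpark figures, not specs of any real chip:

```python
def amat(hit_time: float, miss_rate: float, miss_penalty: float) -> float:
    """Average memory access time: cost of a hit plus the amortized cost of misses."""
    return hit_time + miss_rate * miss_penalty

# Assume an L1 hit costs 4 cycles and going all the way out to RAM costs 200 cycles.
print(amat(hit_time=4, miss_rate=0.02, miss_penalty=200))  # -> 8.0
print(amat(hit_time=4, miss_rate=0.10, miss_penalty=200))  # -> 24.0
```

Even a 2% miss rate doubles the effective access time, which is why dies spend so much area on cache instead of more compute transistors.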

>RAM typically is not a big bottleneck to computation
oh boy

Both cases in that image are the same thing.

>is there something that will prevent that from working
Costs. CPUs already have a small amount of memory that is used as cache to speed things up, but this memory is expensive in large quantities.

CPUs operate at around 3 cycles per nanosecond. At that timescale it's noticeable.

>The absolute state of Sup Forums

lmao, you're three orders of magnitude off, try again

Get a stock i7 8700k
Put 3200MHz RAM in it and benchmark
Then put 4000MHz RAM in it and benchmark

You'll get the same score because the CPU simply can't crunch through that much data. Now you can overclock your CPU, but even at quite aggressive overclocks there are still diminishing returns past about 3600MHz, and we have RAM that runs at speeds up to like 4200MHz.

Memory isn't a bottleneck for CPUs

I think this is what the guy's looking for.

Everyone is circle jerking about architectures and whatnot but since the guy can barely describe what he's talking about I assume he just wants a faster campooter.

>You may as well go nail a toaster to your mom's face to make her more aerodynamic.

This sounds wrong but I don't know enough about science and engineering to say anything definitively.

For the most average and general of cases I imagine that this won't work but I dare say OP's mother might be mogoloided enough to see the benefits.

OP, how mutated is your mother on an average day?

Latency, bitch.

>if you install your CPU on the ram
OK user. What you've done now (assuming you've done it well) is given yourself a very large last level cache at a cost of a lot of development resources and reduced modularity.
It's not something you're likely to do as a CPU manufacturer. This mostly helps poorly written software. Admittedly those are the ones that bother users the most. But you won't appeal to Google server farms for instance.
Wrong way to put it. It's not realistic. It doesn't 'go against computer science theory'.
Saying that just makes it obvious you don't know what computer science is. It's unrelated.

Why would he not mention the disk? His friend would surely have mentioned how slow the ssd or hdd is.

Or you could just download more RAM

ram is the second slowest type of memory, the slowest being the hdd/ssd
it's tiered, from registers, to the various on-die cpu caches, to main ram, to mass storage
when what you want isn't in a register, you fetch it from cache, if it's not in cache, you fetch it from main ram, if it's not in main ram, you fetch it from mass storage (... if it's not there, get it from network/whatever else)
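That fall-through fetch chain can be sketched in a few lines; the tier contents and addresses here are made up purely for illustration:

```python
# Each tier maps addresses to data; earlier tiers are faster but much smaller.
registers = {0x00: "in a register"}
cache     = {0x10: "in L1/L2/L3 cache"}
main_ram  = {0x20: "in main ram"}
storage   = {0x30: "on the hdd/ssd"}

def fetch(addr: int) -> str:
    """Walk the hierarchy from fastest to slowest until something has the data."""
    for tier in (registers, cache, main_ram, storage):
        if addr in tier:
            return tier[addr]
    raise KeyError(f"{addr:#x} not found anywhere; go ask the network")

print(fetch(0x20))  # -> in main ram
```

A real hierarchy also promotes whatever it fetches into the faster tiers on the way back up, so the next access to the same data hits higher in the chain.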

Faster RAM is lower latency...

It doesn't help, once the bandwidth is high enough to sustain constant CPU usage there's literally no benefit.

No, it's not. Get the fastest RAM you can, the latency will still be a little over 10ns.

'windows fastboot' is basically secret forced hibernate to cover up for atrocious load times
>feature
lmao

>but also very high on energy costs, like dozens of times higher than the actual computing itself done by a CPU or data transfer within a CPU (cache to cores).

That may be true, but the power consumption of ram operations is still small compared to the overall power consumption of the CPU when doing CPU-only workloads.

Even if ram speed can speed up a system (by like 10-20%), it's almost never a "true" bottleneck. A faster CPU is still faster with slow ram.

CPUs are an order of magnitude faster than any I/O, but branch prediction and asynchronous I/O mitigate the effects of it

The RAM is also used for frequently accessed data, because it's faster than accessing the main disk all the time. It's a good way of doing things. The problem with what you've proposed is that the silicon circuitry that makes up the memory itself also takes up space. If your data is going from point A to B and that takes 5 seconds, then doubling that distance to include points C and D would mean that it now takes 10 seconds to get from A to D. The example isn't to scale, but I'm sure you get the point.

The cache is fast because it's so close to the CPU. Making a larger cache means making a slower cache. The memory there is also absurdly expensive and difficult to manufacture, and adding more wouldn't really be beneficial in a lot of cases, as described above. It's all about striking a balance in the system as a whole so that you have the fewest bottlenecks.

Having a RAM disk was nice back before SSDs were available outside of laboratories as experiments. These days it's pretty shitty since RAM is so fucking expensive and software has bloated up so bad that it takes a while to write it all back to the disk (intermittent writes help a little). I wouldn't even try it unless I had at least 64GB of RAM. Maybe I'll look into it once my Talos II arrives in the mail.

It depends entirely on your work. RAM latency is a huge problem, and probably more than half your CPU die is dedicated to avoiding a stall for the ~300 clocks it takes to read it. Cache, HT, pipelined prefetch, and branch prediction all reduce the likelihood of a cache miss or the penalty if one occurs.

>he explained if that is true, then if you install your cpu on the ram itself, things will be processed faster because it wont need to send the data from ram to cpu, it will just process in its place.
It is installed directly next to it, dude...
The distance is irrelevant; the signal propagation time on the bus between the CPU and the RAM is negligible compared to the RAM's cycle time.

All of that isn't _that_ important anyway, since hopefully most of the stuff you do will be in cache already, that's why it exists...

>This mostly helps poorly written software. Admittedly those are the ones that bother users the most. But you won't appeal to Google server farms for instance.
I beg to differ. Surely badly-written software will benefit from this, but so will well-written software, to a significant degree may I add.

This is essentially the principle of package-on-package. (plus space-saving concerns and minus retarded logic)

better yet, have an L1 cache of 500GB and store everything there

I wonder if we will ever see a cpu like Threadripper with 16 or 32 gigs of HBM

this is the state of Sup Forums
take a free computer architecture course online