My friend told me he saw this shit online about how you can make your bootup quicker and things load faster.
>data from storage goes to ram
>data held in ram will go to cpu to be processed
He said that if that's true, then if you install your cpu on the ram itself, things will be processed faster because it won't need to send the data from ram to the cpu; it will just process it in place. It seems logical to me, but is there something that would prevent that from working? And if this is actually a thing, is there a source or procedure name I can search for? Because I can't seem to find anything related.
You can't find anything related because this genius idea fundamentally goes against computer science theory. You may as well go nail a toaster to your mom's face to make her more aerodynamic.
They failed because they were too hard to program, and not useful for general tasks.
Michael Diaz
RAM is bad. You do not want to use RAM. Every use of RAM is a failure from a speed perspective. It's a necessity sometimes, but you still avoid it at all costs. This is why chips have multiple levels of cache: to avoid having to reach out to RAM like it's the devil itself.
t. someone who has designed a basic processor that handles things like branch prediction.
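To put rough numbers on "avoid RAM at all costs": a quick Python sketch of what each level of the hierarchy costs in stalled CPU cycles. The nanosecond figures and the 3.5 GHz clock are my own ballpark assumptions (commonly cited orders of magnitude), not measurements from any specific chip.

```python
# Ballpark latency of each memory-hierarchy level, converted into CPU
# cycles at an assumed ~3.5 GHz clock. Figures are rough orders of
# magnitude, not measurements of a real part.
CLOCK_GHZ = 3.5

latencies_ns = {
    "L1 cache":  1.0,
    "L2 cache":  4.0,
    "L3 cache": 12.0,
    "RAM":      70.0,
}

for level, ns in latencies_ns.items():
    cycles = ns * CLOCK_GHZ
    print(f"{level:>8}: ~{ns:4.0f} ns  (~{cycles:.0f} cycles stalled)")
```

With these assumed numbers a RAM access costs a couple hundred cycles, which is exactly why the cache levels exist.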
Charles Smith
>You may as well go nail a toaster to your mom's face to make her more aerodynamic.
lmfao
Levi Fisher
>people are still using Von Neumann architecture.
sad!
Benjamin Brooks
>you install your cpu on the ram itself, things will be processed faster because it wont need to send the data from ram to cpu
hahaha
Carter Foster
can you have 32 GB of on-chip L1 tho?
Carson Harris
Putting the cpu on the ram, or more accurately putting the ram on the cpu, has already been done. That's your 3 levels of cache (L1, L2, L3). They're much faster than regular ram, but prohibitively expensive. You're not going to get 16GB of ram on the same chip as the cpu, especially at the speeds any of the caches operate at.
Landon Wright
No.
Benjamin Nguyen
the problem is that the clock used in ram is slower than the cpu, so you cannot install the cpu inside the ram, only outside.
Ryan White
Fill up ram slots starting with the one closest to the cpu. It's faster because it doesn't need to go as far.
Levi Moore
>install your cpu on the ram itself
Man I love Sup Forums
Levi Gray
can you have a 32gb stick of ram transfer data as fast as my 12mb of L1? oh i guess your computer's going to be slowed the fuck down then
Your computer does something similar (note similar) when you put it to sleep
Ethan Turner
This fucked me up cause it’s technically true
Christopher Harris
why not just install the cpu in the ssd?
Robert Carter
The difference is negligible.
Asher Hernandez
It's a conceptual architecture called PIM, processing in memory. AMD (and other firms) have been pursuing research in it since the early 2010s as a proposed method of achieving several times the power efficiency in compute of current processors. The reason being that data transfer from RAM to CPU is not only high latency (relative to data movement within one component) but also very high on energy costs, like dozens of times higher than the actual computing done by a CPU or data transfer within a CPU (cache to cores).
As for why it's hard, no clue. It's got some big names behind it, but I don't think it's really gone anywhere since like 2015. I'm sure they're still researching it, just not making it public.
Jackson Nelson
Negligible, but still there. I mean, take the speed of light, add the latency from traveling through circuits, and then add the distance from the closest ram slot to the one furthest away from your cpu. The difference in speed will only be like a couple hundred picoseconds at most, but still, it'll be there.
Carter Brooks
>still there
I bet your neanderthal genes are 'still there' famalam
David Wilson
They sure as shit are, you most likely have them too. That's what makes us... us.
Jayden Green
>You may as well go nail a toaster to your mom's face to make her more aerodynamic.
sometimes it may work...
Nicholas Kelly
It's called a RAM Disk. Decent programs will import and export all the data on the ramdisk to your non volatile memory (HDD, SSD....) at system launch and shutdown, better ones will also intermittently do this during low i/o usage.
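The import/export half of that is simple to sketch. This is a minimal Python illustration of the idea, not any real ramdisk tool's code; the function names and paths are hypothetical, and a decent tool would also sync intermittently during low I/O as noted above.

```python
# Minimal sketch of ramdisk persistence: mirror a ramdisk directory to
# non-volatile storage at shutdown and restore it at launch. Paths and
# function names are illustrative, not from a real ramdisk program.
import shutil
from pathlib import Path

def export_ramdisk(ramdisk: str, backing_store: str) -> None:
    """Copy the ramdisk contents to persistent storage (run at shutdown)."""
    shutil.copytree(ramdisk, backing_store, dirs_exist_ok=True)

def import_ramdisk(backing_store: str, ramdisk: str) -> None:
    """Restore the ramdisk contents from persistent storage (run at launch)."""
    if Path(backing_store).exists():
        shutil.copytree(backing_store, ramdisk, dirs_exist_ok=True)
```

(`dirs_exist_ok` needs Python 3.8+; real tools would copy incrementally instead of re-copying everything.)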
Aiden Lopez
>branch prediction
You're the cause of Spectre. Admit it.
Camden King
if you're willing to pay 6 gorillian dollarydoos for a rampu
Caleb Gomez
If RAM is the fastest type of memory why even have other kinds? Just store everything on the RAM.
William Moore
CPU registers are the fastest memory in a computer lol
Brayden Foster
Why don't computers just do a sleep mode style dump when you turn off your computer (into the HD rather than RAM)?
Gavin Kelly
Light travels 30cm in a nanosecond. If your RAM was on one side of the Earth and your CPU on the other, it would still only take ~42 microseconds to communicate.
That's the kind of negligible we are talking about.
John Howard
that is literally what windows 10's fast startup does
Eli Walker
I thought this was what Windows fastboot did.
Isaac Gonzalez
>linux fags dreaming of features that were added to windows years ago
Adam Richardson
oops, I'm still on 7
Jason Bennett
Computer boots so fast with an ssd it doesn't matter anyway
Adrian Sullivan
Not saying it is a bad feature or anything like that, just saying you won't miss Windows fastboot.
Nathaniel James
This, I spend more time on POST than windows load screen.
Austin Powell
Exactly. My Linux installation takes about 3 seconds to boot, but my laptop spends 10 seconds posting
Julian Gray
>his mobo doesnt support ultrafast boot
Justin Murphy
I'd rather have bios inputs in case I need them.
>2 seconds fan spinup on black screen
>2 seconds finding AHCI drives
>bios splash screen again
>black screen
>windows desktop
The fuck is going on in my setup
Nolan Green
RAM typically is not a big bottleneck to computation.
The way modern computers work is that applications are loaded into RAM, and then the CPU will take some of that data and transfer it to its own internal cache which is significantly faster than RAM and more importantly has way less latency because it's on the chip itself and operates at the same speed as the CPU.
Results of computation are then ferried back off the chip back into RAM.
This system does not bottleneck the CPU; the bottleneck of the CPU is really how much actual computation it can do with the transistors it has and what clock speed it's running at. The on-board L1/L2 cache is increased as chips get faster in order to ensure that memory is not a bottleneck for computation. And RAM speeds are also increased to make sure they're not a bottleneck for feeding the L1/L2 cache.
Not only would more on-chip memory generally be a waste, it's also extremely expensive, and because of the limitations of physics and materials we have limited space to play with, so increasing L1/L2 significantly would also necessarily mean decreasing the number of transistors on the same chip which are dedicated to actual calculation.
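The "caches hide RAM" argument is the classic average memory access time (AMAT) formula. Here's a toy two-level version; the hit rates and latencies are illustrative assumptions, not real measurements.

```python
# Average memory access time for a toy hierarchy: L1, then L2, then RAM.
# AMAT = L1 hit time + L1 miss rate * (L2 hit time + L2 miss rate * RAM time)
# All numbers below are assumed for illustration.
L1_HIT_NS, L2_HIT_NS, RAM_NS = 1.0, 4.0, 70.0
L1_HIT_RATE, L2_HIT_RATE = 0.95, 0.90

amat = (L1_HIT_NS
        + (1 - L1_HIT_RATE) * (L2_HIT_NS
        + (1 - L2_HIT_RATE) * RAM_NS))
print(f"average access time: {amat:.2f} ns")  # 1.55 ns
```

Even with RAM at 70 ns, decent hit rates pull the average access down near the L1 latency, which is why RAM speed rarely shows up as the bottleneck.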
Joshua Perez
>RAM typically is not a big bottleneck to computation
oh boy
Samuel Johnson
Both cases in that image are the same thing.
Austin Anderson
>is there something that will prevent that from working
Costs. CPUs already have a small portion of memory that is used as cache to speed things up, but this memory is expensive in high quantities.
Colton Bailey
CPUs operate at around 3 cycles per nanosecond. At that timescale it's noticeable.
Jayden Morris
>The absolute state of Sup Forums
Chase Carter
lmao, you're three orders of magnitude off, try again
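Quick sanity check of the Earth-sized example above (assuming a mean diameter of ~12,742 km): at the speed of light it comes out in milliseconds, not microseconds.

```python
# Time for light to cross Earth's diameter in a vacuum.
C = 299_792_458          # speed of light, m/s
diameter_m = 12_742_000  # Earth's mean diameter, ~12,742 km

t = diameter_m / C
print(f"{t * 1e3:.1f} ms")  # ~42.5 ms, i.e. milliseconds, not microseconds
```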
Caleb Clark
Get a stock i7 8700K, put 3200MHz RAM in it, and benchmark. Then put 4000MHz RAM in it and benchmark.
You'll get the same score because the CPU simply can't crunch through that much data. You can overclock your CPU, but even at quite aggressive overclocks there are still diminishing returns past about 3600MHz, and we have RAM that runs at speeds up to like 4200MHz.
Memory isn't a bottleneck for CPUs
Isaiah Sanders
I think this is what the guy's looking for.
Everyone is circle jerking about architectures and whatnot but since the guy can barely describe what he's talking about I assume he just wants a faster campooter.
Aaron Gray
>You may as well go nail a toaster to your mom's face to make her more aerodynamic.
This sounds wrong but I don't know enough about science and engineering to say anything definitively.
For the most average and general of cases I imagine that this won't work but I dare say OP's mother might be mogoloided enough to see the benefits.
OP, how mutated is your mother on an average day?
Angel Peterson
Latency, bitch.
Henry Perry
>if you install your CPU on the ram
OK user. What you've done now (assuming you've done it well) is given yourself a very large last-level cache at the cost of a lot of development resources and reduced modularity. It's not something you're likely to do as a CPU manufacturer. This mostly helps poorly written software. Admittedly, those are the ones that bother users the most. But you won't appeal to Google server farms, for instance.
Wrong way to put it. It's not realistic. It doesn't 'go against computer science theory'. Saying that just makes it obvious you don't know what computer science is. It's unrelated.
Matthew Parker
Why would he not mention the disk? His friend would surely have mentioned how slow the ssd or hdd is.
Brayden Bennett
Or you could just download more RAM
Bentley Torres
ram is the second slowest type of memory, the slowest being the hdd/ssd
it's tiered: from registers, to the various on-die cpu caches, to main ram, to mass storage
when what you want isn't in a register, you fetch it from cache; if it's not in cache, you fetch it from main ram; if it's not in main ram, you fetch it from mass storage (... and if it's not there, get it from the network/whatever else)
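That fall-through lookup is easy to sketch. This is a toy Python model only: plain dicts standing in for registers, cache, ram, and mass storage, with the levels checked in order.

```python
# Toy model of the tiered fetch: try each level in order and fall
# through to the next on a miss. Dicts stand in for real hardware.
registers = {"x": 1}
cache     = {"y": 2}
ram       = {"z": 3}
disk      = {"w": 4}

def fetch(name):
    for level in (registers, cache, ram, disk):
        if name in level:
            return level[name]
    raise KeyError(f"{name} not found anywhere (go ask the network)")
```

A real hierarchy would also promote the value into the faster levels on a miss, which the sketch skips.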
Samuel Cruz
Faster RAM is lower latency...
It doesn't help, once the bandwidth is high enough to sustain constant CPU usage there's literally no benefit.
Samuel Powell
No, it's not. Get the fastest RAM you can, the latency will still be a little over 10ns.
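You can check that with the usual true-latency arithmetic: CAS cycles divided by the memory clock (DDR transfers twice per clock, so a DDR4-3200 stick clocks at 1600 MHz). The kits below are typical examples, not anything cited in the thread.

```python
# True RAM latency from transfer rate and CAS timing.
def true_latency_ns(transfer_rate_mt_s: int, cas_cycles: int) -> float:
    clock_mhz = transfer_rate_mt_s / 2      # DDR: two transfers per clock
    return cas_cycles * 1000 / clock_mhz    # ns per cycle * CAS cycles

print(true_latency_ns(3200, 16))  # DDR4-3200 CL16 -> 10.0 ns
print(true_latency_ns(4000, 19))  # DDR4-4000 CL19 -> 9.5 ns
```

So "faster" kits mostly trade a higher clock for more CAS cycles, and the absolute latency stays around 10 ns either way.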
Jeremiah Turner
'windows fastboot' is basically a secret forced hibernate to cover up for atrocious load times
>feature
lmao
Kevin Ward
>but also very high on energy costs, like dozens of times higher than the actual computing itself done by a CPU or data transfer within a CPU (cache to cores).
That may be true, but the power consumption of ram operations is still small compared to the overall power consumption of the CPU when doing CPU-only workloads.
Even if ram speed can speed up a system (by like 10-20%), it's almost never a "true" bottleneck. A faster CPU is still faster with slow ram.
Liam Jones
CPUs are an order of magnitude faster than any I/O, but branch prediction and asynchronous I/O negate the effects of it
Kevin Reed
The RAM is also used for frequently accessed data, because it's faster than accessing the main disk all the time. It's a good way of doing things. The problem with what you've proposed is that the silicon circuitry that makes up the memory itself also takes up space. If your data is going from point A to B and that takes 5 seconds, then doubling that distance to include points C and D would mean that it now takes 10 seconds to get from A to D. The example isn't to scale but I'm sure you'll get the point. The cache is fast because it's so close to the CPU. Making a larger cache means making a slower cache. The memory there is also absurdly expensive and difficult to manufacture, and adding more wouldn't really be beneficial in a lot of cases, as described above. It's all about striking a balance in the system as a whole so that you have the least amount of bottlenecks.
Having a RAM disk was nice back before SSDs were available outside of laboratories as experiments. These days it's pretty shitty since RAM is so fucking expensive and software has bloated up so bad that it takes a while to write it all back to the disk (intermittent writes help a little). I wouldn't even try it unless I had at least 64GB of RAM. Maybe I'll look into it once my Talos II arrives in the mail.
Adam Garcia
It depends entirely on your work. RAM latency is a huge problem, and probably more than half your CPU die is dedicated to avoiding a stall for the ~300 clocks it takes to read it. Cache, HT, pipelined prefetch, and branch prediction all reduce the likelihood of a cache miss, or the penalty if one occurs.
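Branch prediction in particular is simple enough to simulate. Here's the textbook 2-bit saturating-counter scheme in Python (a toy model, not any real CPU's predictor): states 0-1 predict not-taken, 2-3 predict taken, so a single mispredict doesn't flip a strongly-biased prediction.

```python
# 2-bit saturating-counter branch predictor. State 0-1 predicts
# not-taken, 2-3 predicts taken; the counter moves one step toward
# each actual outcome, so it takes two mispredicts to flip direction.
def predict_run(outcomes, state=0):
    correct = 0
    for taken in outcomes:
        prediction = state >= 2          # predict taken if counter >= 2
        correct += (prediction == taken)
        # saturate the counter toward the actual outcome
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return correct

# A loop branch: taken 9 times, then falls through once, repeated.
pattern = ([True] * 9 + [False]) * 10
print(predict_run(pattern), "of", len(pattern), "predicted correctly")
```

On this pattern it gets 88 of 100 right: after warming up, it only misses the loop exit once per iteration of the outer pattern.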
William Watson
>he explained if that is true, then if you install your cpu on the ram itself, things will be processed faster because it wont need to send the data from ram to cpu, it will just process in its place.
It is installed directly next to it, dude... The distance is irrelevant; the propagation delay of the signal on the bus between the CPU and the RAM is negligible compared to the frequency of the RAM.
All of that isn't _that_ important anyway, since hopefully most of the stuff you do will be in cache already; that's why it exists...
John Clark
>This mostly helps poorly written software. Admittedly those are the ones that bother users the most. But you won't appeal to Google server farms for instance.
I beg to differ. Surely badly-written software will benefit from this, but so will well-written software, and at a significant rate, may I add.
Aiden Price
This is essentially the principle of package-on-package. (plus space-saving concerns and minus retarded logic)
Jeremiah Nguyen
better yet, have an L1 cache of 500GB and store everything there
Andrew Kelly
I wonder if we will ever see a cpu like Threadripper with 16 or 32 gigs of HBM
Adrian Jones
this is the state of Sup Forums
take a free computer architecture course online