[-] Cethin@lemmy.zip 37 points 9 months ago

It'll never be fast enough. An SSD is orders of magnitude slower than RAM, which is orders of magnitude slower than cache. Internet speeds are orders of magnitude slower than even the slowest hard drives, which makes them far too slow for anything that needs its memory back relatively soon.

[-] TurtledUp@lemm.ee 11 points 9 months ago

Need faster than light travel speeds and we can colocate it on the moon

[-] barsoap@lemm.ee 6 points 9 months ago* (last edited 9 months ago)

A SATA SSD has ballpark 500MB/s, a 10g ethernet link 1250MB/s. Which means that it can indeed be faster to swap to the RAM of another box on the LAN than to your local SSD.

A Crucial P5 has a bit over 3GB/s but then there's 25g ethernet. Let's not speak of 400g direct attach.
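A rough back-of-the-envelope sketch of those line rates, just plugging in the ballpark figures above (raw rates, no protocol overhead):

```python
# Rough line-rate comparison, using the ballpark figures quoted above
# (raw rates, ignoring protocol overhead; 8 bits = 1 byte).
links_mb_per_s = {
    "SATA III SSD": 500,                # ~500 MB/s practical ceiling
    "10G Ethernet": 10_000 / 8,         # 1250 MB/s line rate
    "NVMe SSD (Crucial P5)": 3_000,     # "a bit over 3GB/s"
    "25G Ethernet": 25_000 / 8,         # 3125 MB/s
    "400G direct attach": 400_000 / 8,  # 50 000 MB/s
}

for name, mb_s in links_mb_per_s.items():
    print(f"{name:>22}: {mb_s:>8.0f} MB/s")
```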

[-] DaPorkchop_@lemmy.ml 10 points 9 months ago
  • modern NVMe SSDs have much more bandwidth than that, upwards of 3 GiB/s.
  • even an antique SATA SSD from 2009 will probably have much lower access latency than sending commands to a remote device over an ethernet link and waiting for a response
[-] barsoap@lemm.ee 1 points 9 months ago

Show me an SSD with 50GB/s; it'd need a PCIe6 x8 or PCIe5 x16 connection. By the time you RAID your swap you should really be eyeing that SFP+ port. Or muse about PCIe cards with RAM on them.

Speaking of: You can swap to VRAM.
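A quick sanity-check sketch of the lane math above, assuming nominal per-lane throughput of roughly 3.9 GB/s for PCIe 5.0 and 7.9 GB/s for PCIe 6.0 (real usable bandwidth is somewhat lower):

```python
# Sanity check: which PCIe widths can keep up with a 400G link?
# Per-lane figures are nominal (~3.9 GB/s for PCIe 5.0, ~7.9 GB/s for
# PCIe 6.0); real usable throughput is a bit lower.
per_lane_gb_s = {"PCIe 5.0": 3.9, "PCIe 6.0": 7.9}
target_gb_s = 400 / 8  # 400G Ethernet line rate = 50 GB/s

for gen, per_lane in per_lane_gb_s.items():
    for width in (4, 8, 16):
        total = per_lane * width
        verdict = "enough" if total >= target_gb_s else "not enough"
        print(f"{gen} x{width:<2}: {total:5.1f} GB/s -> {verdict}")
```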

[-] DaPorkchop_@lemmy.ml 3 points 9 months ago

My point was more that the SSD will likely have lower latency than an Ethernet link in any case, as you've got the extra delay of data having to traverse both the local and remote network stack, as well as any switches that may be in the way. Additionally, in order to deal with that bandwidth you'll need to kit out not only the local machine, but also the remote one with expensive 400GbE hardware+transceivers, plus switches, and in order to actually store something the remote machine will also have to have either a ludicrous amount of RAM (resulting in a setup which is vastly more complex and expensive than the original RAIDed SSDs while offering presumably similar performance) or RAIDed SSD storage (which would put us right back at square one, but with extra latency). Maybe there's something I'm missing here, but I fail to see how this could possibly be set up in a way which outperforms locally attached swap space.

[-] barsoap@lemm.ee 1 points 9 months ago

"Maybe there’s something I’m missing here"

SFP direct attach: you don't need a switch or transceivers, only two QSFP-DD ports and a cable. Also, this is a thought exercise, not a budget meeting. Start out with "We have this dual-socket EPYC system here with a full 12TB of memory and need to double that". You have *rolls dice* 104 free PCIe5 lanes, go.

[-] Cethin@lemmy.zip 7 points 9 months ago

Bandwidth isn't really the main issue. It's latency: the time from the CPU requesting a segment of memory to actually receiving it, which bandwidth doesn't affect.
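To put rough numbers on that (purely illustrative latencies, not benchmarks), here's a sketch of what fetching a single 4 KiB page costs on each tier; the fixed latency dwarfs the transfer time:

```python
# Cost of one 4 KiB page fetch = fixed access latency + transfer time.
# Latency/bandwidth figures are illustrative orders of magnitude only.
PAGE = 4096  # bytes

tiers = {
    # name: (access latency in seconds, bandwidth in bytes/s)
    "local DRAM": (100e-9, 50e9),
    "local NVMe SSD": (80e-6, 3e9),
    "remote RAM over 10G (RTT)": (200e-6, 1.25e9),
}

for name, (lat, bw) in tiers.items():
    transfer = PAGE / bw
    total = lat + transfer
    print(f"{name:>26}: {total * 1e6:8.2f} us total "
          f"({lat * 1e6:.2f} us latency + {transfer * 1e6:.2f} us transfer)")
```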

[-] barsoap@lemm.ee 1 points 9 months ago* (last edited 9 months ago)

Depends on your workload and access pattern.

...I'm saying can be faster. Not is faster.

[-] Cethin@lemmy.zip 1 points 9 months ago

Yeah, but the point of RAM is fast random access times (the R in RAM). There are ways to make slower memory work better for this by predicting what will be needed (grab a chunk of memory, since accesses will probably have closer locality than pure random), but the underlying latency can't be fixed. Cloud memory is fine for non-random storage, or storage that isn't time-critical.
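A little sketch of that prefetching trade-off (illustrative numbers again): fetching bigger contiguous chunks amortises the fixed latency only when the access pattern actually has locality; for pure random access it just wastes bandwidth.

```python
# Readahead trade-off: bigger requests amortise the fixed latency,
# but only if the extra pages actually get used. Numbers are illustrative.
LATENCY = 200e-6    # fixed cost per remote request, seconds
BANDWIDTH = 1.25e9  # bytes/s, roughly a 10G link
PAGE = 4096         # bytes

def us_per_useful_page(pages_per_request: int, hit_rate: float) -> float:
    """Average microseconds per page the program actually needed."""
    request_time = LATENCY + pages_per_request * PAGE / BANDWIDTH
    useful_pages = max(1.0, pages_per_request * hit_rate)
    return request_time / useful_pages * 1e6

for chunk in (1, 8, 64):
    seq = us_per_useful_page(chunk, hit_rate=1.0)          # good locality
    rnd = us_per_useful_page(chunk, hit_rate=1.0 / chunk)  # pure random
    print(f"{chunk:3} pages/request: {seq:7.1f} us/page sequential, "
          f"{rnd:7.1f} us/page random")
```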
