this post was submitted on 24 Jul 2023
46 points (94.2% liked)


TLDR: I am running some Docker containers on a homelab server, and the containers' volumes are mapped to NFS shares on my NAS. Is that bad for performance?

  • I have a Linux PC that acts as my homelab server, and a Synology NAS.
  • The server is fast but only has a 100 GB SSD.
  • The NAS is slow(er) but has oodles of storage.
  • Both devices are wired to their own little gigabit switch, using priority ports.

Of course running off HDDs is slower than an SSD, but I don't have a large SSD. The question is: (why) would it be "bad practice" to separate compute and storage this way? Isn't that pretty much what data centers do too?
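To be concrete, here's roughly the kind of setup I mean: a named volume backed by an NFS export on the NAS, mounted into a container. This is just a minimal sketch using the Docker Python SDK; the NAS address, export path, and image are placeholders rather than my real values.

```python
import docker

client = docker.from_env()

# Named volume backed by an NFS export on the NAS
# (placeholder address and export path).
nas_volume = client.volumes.create(
    name="appdata",
    driver="local",
    driver_opts={
        "type": "nfs",
        "o": "addr=192.168.1.10,rw,nfsvers=4",  # NAS IP, hypothetical
        "device": ":/volume1/appdata",          # NFS export on the NAS
    },
)

# Run a container with that volume mounted where the app keeps its data.
client.containers.run(
    "nginx:alpine",
    detach=True,
    volumes={"appdata": {"bind": "/usr/share/nginx/html", "mode": "rw"}},
)
```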

[–] tburkhol@lemmy.world 9 points 1 year ago* (last edited 1 year ago)

The main issues are latency and bandwidth, especially on a server with limited RAM. If you're careful to keep just the data on the NAS, it's probably fine, especially if the application's primary job is serving data over the same network to clients. That will reduce your effective bandwidth, as data has to go NAS -> server -> client over the same wires/wifi. If the application does a lot of processing, like a database, you'll start to compromise that function more noticeably.
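As a rough back-of-envelope (assuming a single shared 1 Gb/s link and ignoring protocol overhead; numbers are illustrative, not measured):

```python
# Data that goes NAS -> server -> client over the same 1 Gb/s link
# crosses it twice, so effective client throughput is roughly halved.
link_gbps = 1.0                            # shared gigabit link
link_mbytes_per_s = link_gbps * 1000 / 8   # ~125 MB/s raw

traversals = 2                             # NAS->server, then server->client
effective_mbytes_per_s = link_mbytes_per_s / traversals

print(f"~{effective_mbytes_per_s:.0f} MB/s effective to clients")  # ~62 MB/s
```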

On applications with low user count, on a home network, it's probably fine, but if you're a hosting company trying to maximize the data served per CPU cycle, then having the CPU wait on the network is lost money. Those orgs will often have a second, super-fast network to connect processing nodes to storage nodes, and that architecture is not too hard to implement at home. Get a second network card for your server, and plug the NAS into it to avoid dual-transmission over the same wires. [ed: forgot you said you have that second network]

The real issue is having the application code itself on the NAS. Anytime the server has to page apps in or out of memory, you impose millisecond-scale network latency on top of microsecond-scale SSD latency, and you put a 1 Gb/s network cap on top of 3-6 Gb/s SSD bandwidth. If you're running a lot of containers in small RAM, there can be a lot of memory paging, and introducing millisecond delays every time the CPU switches context will be really noticeable.
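To put very rough numbers on that, here's a toy model of paging a working set back in as a stream of page-sized reads, each paying one access latency plus its transfer time. All the latencies and bandwidths are illustrative guesses, not benchmarks of any particular hardware.

```python
# Toy model: page in a working set as sequential page-sized reads.
# Each read pays one access latency plus its share of the transfer time.
def page_in_seconds(size_mb, latency_s, bandwidth_mb_s, page_kb=64):
    pages = size_mb * 1024 / page_kb                        # number of reads
    per_page = latency_s + (page_kb / 1024) / bandwidth_mb_s
    return pages * per_page

# Guessed figures: ~0.1 ms / ~500 MB/s for a local SSD,
# ~1 ms / ~110 MB/s for NFS over gigabit.
ssd = page_in_seconds(200, latency_s=0.0001, bandwidth_mb_s=500)
nfs = page_in_seconds(200, latency_s=0.001,  bandwidth_mb_s=110)

print(f"local SSD: ~{ssd:.1f} s, NFS over gigabit: ~{nfs:.1f} s")  # ~0.7 s vs ~5 s
```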