this post was submitted on 26 Aug 2024
28 points (85.0% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.


After upgrading my internet connection I immediately noticed that my HDD tops out at 40 MB/s, bottlenecking download speed in qBittorrent. Is it possible to use an SSD as a cache drive for the 12 TB HDD, so downloads run at SSD speed and files get moved to the HDD later on? If yes, does it make sense? Is anyone using anything similar? Would 512 GB be enough, or could I benefit from a 2 TB SSD?

The HDD is just for Jellyfin (movies/shows), not in RAID; I don't need a backup for that drive and can afford to risk the data, if that matters at all.

All suggestions are welcome, thanks in advance.

EDIT: I've obviously upset some of you; that wasn't my intention and I'm sorry about it. I love to tinker and learn new things, though I could live with much lower speeds... Please don't hate me if I misunderstood your comment or wasn't clear with my question.

The HDD being the bottleneck at 40 MB/s was a wrong assumption (found that out in the meantime). I'm still trying to figure out why the download was that slow, but I'm interested in learning about the main question anyway. I just thought I was experiencing the same issue as many people today: faster internet than storage. Some of you provided solutions I will look into, but I need time for that, and I also have to fix whatever else I'm having an issue with.

Keep this community awesome because it is <3

[–] ShortN0te@lemmy.ml 36 points 2 weeks ago (4 children)

40 MB/s is very, very low, even for an HDD. I would definitely look into why it's that low.

Yes, it's possible. Filesystems like ZFS, Btrfs, etc. support that.

[–] catloaf@lemm.ee 13 points 2 weeks ago (2 children)

It's probably a 5400 rpm drive, and/or SMR. Either one will make it slower.

[–] Appoxo@lemmy.dbzer0.com 4 points 2 weeks ago

5400 rpm + SMR would explain it on writes, but not on reads.

[–] Markaos@lemmy.one 3 points 2 weeks ago

In my very limited experience with my 5400 rpm SMR WD disk, it's perfectly capable of writing at over 100 MB/s until its cache runs out; then it pretty much dies until it has time to properly write out the data, rinse and repeat.

40 MB/s sustained is weird (but maybe it's just different firmware? I think my disk could actually sustain 60 MB/s for a few hours when I limited the write speed; 40 could be a conservative setting that doesn't even slowly fill the cache).

[–] acosmichippo@lemmy.world 9 points 2 weeks ago* (last edited 2 weeks ago)

Agreed, I think there is something else going on here. Test the write speed with another application; I doubt the drive actually maxes out at 40 MB/s unless it's severely fragmented or failing.

Incidentally, what OP wants is how most people set up Unraid servers: an SSD cache takes incoming files for write speed, then at a later time the OS moves the files to the spinning-disk array.

[–] rambos@lemm.ee 4 points 2 weeks ago (1 children)

It's the cheapest drive I could find (a refurbished Seagate from Amazon). I thought that was the reason for it being slow, but I wasn't aware it was that low. I'm also getting 25-40 MB/s (200-320 Mbps) when copying files from this drive over the network. Streaming works great, so it's not too slow overall. Is there a better way of debugging this? What speeds can I expect from a good drive, or the best drive?

I'll research BTRFS and ZFS some more, thanks.

[–] acosmichippo@lemmy.world 2 points 2 weeks ago (1 children)

Can you copy files to it from another local disk?

[–] rambos@lemm.ee 4 points 2 weeks ago (1 children)

Yeah, but I need to figure out how to see the transfer speed over SSH. Sorry, noob here :)

[–] not_fond_of_reddit@lemm.ee 2 points 2 weeks ago (1 children)

If you use scp (cp over ssh) you should see the transfer speed.
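For example, something like this will show a progress meter with the current MB/s (the paths here are made up, adjust for your setup):

$> scp /mnt/ssd/bigfile.bin localhost:/mnt/hdd/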

[–] rambos@lemm.ee 4 points 2 weeks ago (2 children)

I managed to copy with rsync and got 180 MB/s. I guess my initial assumption was wrong; the HDD is obviously not the bottleneck here, it can get close to my ISP speed. Thank you for pointing this out, I'll do more testing over the next few days. I'm kind of shocked, because I never knew an HDD could be that fast. Going to reread all the comments as well.
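(In case anyone wants to reproduce this: rsync only prints the per-file speed with --progress; the paths here are just examples.)

$> rsync -ah --progress /mnt/ssd/bigfile.bin /mnt/hdd/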

[–] not_fond_of_reddit@lemm.ee 2 points 2 weeks ago (1 children)

The cool thing about rsync is that it goes ”BRRRRRRRRR!” like a Warthog… the plane… and it can saturate the receiving drive or array, depending on your network and client. And getting 180 MB/s with rsync on a SATA drive? Can't really hope for more.

And you can run a quick and dirty test using dd:

$> dd if=/dev/zero of=1g-testfile bs=1G count=1 conv=fdatasync
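(conv=fdatasync makes dd flush to disk before it reports, so you measure the drive rather than your RAM; dd prints the elapsed time and MB/s when it finishes.)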

[–] rambos@lemm.ee 2 points 2 weeks ago (1 children)

Thx. I've seen dd commands in guides on how to test drive speed, but I'm not sure how to specify which drive I want to test. I see I could change "if" and "of", but I don't trust myself enough to use my own modified commands before understanding them better. Will read more about that. Honestly, I'm surprised drive speed testing isn't easier, but it's probably just me still being a noob xD

[–] not_fond_of_reddit@lemm.ee 2 points 2 weeks ago (3 children)

Let’s say you want to test a drive that is mounted on /tmp: you just cd into that directory and you can use my example.

You can use

$> df -h
$> mount

to check how your drives are mounted in the OS. Most ”default” installations will have 1-4 partitions, with / being partition 3 or 4.

So if you look at the mount command's output and / is /dev/sdX3 (where X can be a-z, depending on how many drives you have connected) and no other mounts appear in the output, then every directory under / is on that drive, so you can run my example from your home directory if you fancy that.
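So, putting it together (the mount point here is just an example):

$> cd /mnt/disk01
$> dd if=/dev/zero of=testfile bs=1G count=1 conv=fdatasync
$> rm testfile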

[–] ShortN0te@lemmy.ml 2 points 2 weeks ago

The limitation of HDDs was never sequential read/write when it comes to day-to-day use on a PC.

The huge difference compared to an SSD shows up when data is written or read non-sequentially, often referred to as random I/O.
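If you want to measure the difference yourself, fio can do it; a minimal sketch (the test-file path is a placeholder, and swapping --rw=read gives you the sequential figure for comparison):

$> fio --name=randread --filename=/mnt/hdd/testfile --size=1G --rw=randread --bs=4k --ioengine=libaio --direct=1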

[–] johntash@eviltoast.org 13 points 2 weeks ago (1 children)

Unraid has this with their cache pools. ZFS can also be configured to have a cache drive for writes.

You can also DIY with something like mergerfs and separate file systems.

[–] rambos@lemm.ee 4 points 2 weeks ago

I've heard about all of these before; going to do more research. Thank you.

[–] slazer2au@lemmy.world 9 points 2 weeks ago (3 children)

You can, and qBittorrent has this functionality built in: you set your in-progress download folder to the SSD, then set the "move when completed" location to your HDD.

As for the size, that would depend on how much you are downloading.

[–] braindefragger@lemmy.world 9 points 2 weeks ago (22 children)

Yes. It's part of the application and well documented. What did you try that didn't work?

[–] Maxy@lemmy.blahaj.zone 6 points 2 weeks ago (6 children)

qBittorrent has exactly the option you're looking for, I believe it's called "incomplete download path" in the settings, letting you store incomplete downloads at a temporary path and moving them to their regular location when the download finishes. Aside from the download speed improvement, this will also lead to less fragmentation on your HDD (which might be part of the reason why it is so slow when downloading directly to it). Pre-allocating space could have the same effect, but I would recommend only using one of these two solutions at once (pre-allocating space on your SSD would only waste space).
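For reference, the same thing can be set in qBittorrent's config file; in recent 4.x versions the keys look roughly like this (paths are placeholders):

[BitTorrent]
Session\TempPathEnabled=true
Session\TempPath=/mnt/ssd/incomplete/
Session\DefaultSavePath=/mnt/hdd/media/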

[–] capital@lemmy.world 5 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I do this with mergerfs.

I then periodically use their prewritten scripts to move things off the cache and to the backing drives.

I should say it's not really caching, but it effectively takes care of this issue. Bonus: all that storage isn't just used for cache, but also for long-term storage. For me, that's a better value proposition.
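The mover step boils down to something like this (the age threshold and paths are examples; the prewritten scripts in the mergerfs docs handle more edge cases):

$> find /mnt/ssd1 -type f -mtime +7 -printf '%P\n' | rsync -a --files-from=- --remove-source-files /mnt/ssd1/ /mnt/stor/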

[–] rambos@lemm.ee 2 points 2 weeks ago (1 children)
[–] schizo@forum.uncomfortable.business 5 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

<3 mergerfs and <3 my setup, but just a warning: make sure you read the documentation and ensure you've got all the proper options set in your fstab entry for the mergerfs mount.

There's a lot of stuff in there that can interact weirdly with various pieces of software and lead to the most insane debug sessions, because, well, why would a drive mount break other software? (In my case it was qBittorrent in Docker, when an upgrade required me to change the mount options to not include direct_io.)

[–] capital@lemmy.world 2 points 2 weeks ago (1 children)

Yeah that was fun times.

Luckily, thanks to using Docker, it was easy enough to "pin" a working version in the compose file while I figured out what had just broken.
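Pinning looks something like this in the compose file (the tag is just an example):

services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:4.6.5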

For everyone's reference, here's my fstab, to give you an idea of what works with linuxserver.io's qbittorrent:

## Media disks setup for mergerfs and snapraid

# Map cache to 1TB SSD
/dev/disk/by-id/ata-Samsung_SSD_860_EVO_1TB_S3Z8NB0K820469N-part1 /mnt/ssd1 xfs defaults 0 0

# Map storage and parity. All spinning disks.
/dev/disk/by-id/ata-WDC_WD100EZAZ-11TDBA0_JEK39X4N-part1 /mnt/par1         xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD100EZAZ-11TDBA0_JEK3TY5N-part1 /mnt/disk01       xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD100EZAZ-11TDBA0_JEK4806N-part1 /mnt/disk02       xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD100EZAZ-11TDBA0_JEK4H0RN-part1 /mnt/disk03       xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4XFT0TS-part1 /mnt/disk04 xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4XFT1YS-part1 /mnt/disk05 xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4XFT3EK-part1 /mnt/disk06 xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N6CKJJ6P-part1 /mnt/disk07 xfs defaults 0 0

# Setup mergerfs backing pool
/mnt/disk* /mnt/stor fuse.mergerfs defaults,nonempty,allow_other,use_ino,inodecalc=path-hash,cache.files=off,moveonenospc=true,dropcacheonclose=true,link_cow=true,minfreespace=1000G,category.create=pfrd,fsname=mergerfs 0 0

# Setup mergerfs caching pool
/mnt/ssd1:/mnt/disk* /mnt/cstor fuse.mergerfs defaults,nonempty,allow_other,use_ino,inodecalc=path-hash,cache.files=partial,moveonenospc=ff,dropcacheonclose=true,minfreespace=10G,category.create=ff,fsname=cachemergerfs 0 0

Yeah, it took me FOREVER to finally land on a useful search result for WTF was going on (thanks Google, you pile of junk!) because the impact was that everything looked perfectly fine, you just... couldn't download anything?

No errors, no faults, nothing in the logs, just adding anything resulted in absolutely nothing happening.

Really freaking weird.

[–] possiblylinux127@lemmy.zip 5 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

bcachefs will fill this role someday.

For now there is ZFS, which has a cache drive option. Keep in mind it will absolutely destroy the cache drive by wearing out the flash.

You could also look into ZFS special vdevs. However, if you are going that way already, you might as well get a bunch of disks.

[–] rambos@lemm.ee 2 points 2 weeks ago

I'll look into ZFS, but in the meantime I found out my HDD is probably not the bottleneck. I still want to learn about this, so thanks for your comment.

[–] fiddlesticks@lemmy.dbzer0.com 3 points 2 weeks ago (2 children)

Depends on the file system. I know for a fact that ZFS supports SSD caches (in the form of L2ARC and SLOG), and I believe LVM does something similar (although I've never used it).

As for the size, it really depends how big the downloads are; if you're not downloading the biggest 4K movies in existence, you should be fine with something reasonably small like a 250 or 500 GB SSD (although I'd always recommend going bigger, for durability and speed).

[–] lemmylommy@lemmy.world 2 points 2 weeks ago (1 children)

L2ARC is a read cache. SLOG is only for synchronous writes.
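For anyone curious, attaching them looks like this (pool name and device paths are placeholders):

$> zpool add tank cache /dev/disk/by-id/nvme-ssd-part1
$> zpool add tank log /dev/disk/by-id/nvme-ssd-part2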

[–] fiddlesticks@lemmy.dbzer0.com 2 points 2 weeks ago

Welp, guess I should do my research next time. Thanks for the heads up.

[–] nitrolife@rekabu.ru 3 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I used LVM with an SSD cache for a few years, but from time to time I had problems with load after a reboot. If you forget about reboots, it all works great with LVM RAID + LVM cache. The cache can be configured without RAID, and you can add or remove the cache at any time. Docs: https://man.archlinux.org/man/lvmcache.7
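The basic setup from that man page boils down to something like this (VG/LV names and devices are placeholders):

$> lvcreate -n fast -L 100G vg0 /dev/sdb
$> lvconvert --type cache --cachevol fast vg0/media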
