
As the title says, I want to know the most paranoid security measures you've implemented in your homelab. I can think of SDN solutions with firewalls covering every interface, ACLs, locked-down/hardened OSes etc but not much beyond that. I'm wondering how deep this paranoia can go (and maybe even go down my own route too!).

Thanks!

[–] JoeKrogan@lemmy.world 16 points 5 months ago* (last edited 5 months ago) (4 children)

Remote access is only by WireGuard, and SSH runs on a non-standard port with key-based access.
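As a rough sketch, the sshd side of that looks something like this (the port number is a placeholder, not my actual one):

```
# /etc/ssh/sshd_config -- sketch; the port is a placeholder
Port 49152                       # non-standard port cuts log noise from bots scanning 22
PasswordAuthentication no        # key-based access only
PubkeyAuthentication yes
KbdInteractiveAuthentication no  # no fallback prompts
PermitRootLogin no
```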

Fail2ban bans after 1 attempt, for a year. I tweaked the log filters to ban on stricter patterns.
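A jail.local sketch of that one-strike policy (the port should match whatever sshd is actually using):

```
# /etc/fail2ban/jail.local -- sketch of the one-strike, one-year policy
[sshd]
enabled  = true
port     = 49152          # match the non-standard ssh port
maxretry = 1              # ban on the first failure
findtime = 86400          # look back one day
bantime  = 31536000       # one year, in seconds
```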

Logs are encrypted and mailed off-site daily.
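One way to wire that up, as a sketch (log path, key, and addresses are placeholders): a daily cron job that encrypts to a public key before anything leaves the box.

```
# crontab sketch: gzip the day's logs, encrypt to an offsite key, mail them
# (paths and addresses are placeholders; % must be escaped in crontab)
0 4 * * * tar czf - /var/log/myservice | gpg --armor --encrypt -r logs@example.org | mail -s "logs $(date +\%F)" logs@example.org
```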

System updates go over Tor, connecting to onion repos.

Nginx has only one exposed port (443), accessible via WireGuard or the LAN. Certs are signed by Let's Encrypt. Paths are IP-whitelisted to various LAN or WireGuard IPs.
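A sketch of what that looks like in nginx (subnets, hostname, and upstream are placeholders):

```
# nginx sketch: one exposed port, Let's Encrypt certs, per-path IP whitelists
server {
    listen 443 ssl;
    server_name svc.example.com;

    ssl_certificate     /etc/letsencrypt/live/svc.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/svc.example.com/privkey.pem;

    location / {
        allow 192.168.1.0/24;   # LAN
        allow 10.8.0.0/24;      # WireGuard subnet
        deny  all;
        proxy_pass http://127.0.0.1:8080;
    }
}
```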

Only one program is allowed sudo access, and it still requires a password. Every other privileged action requires switching to the root user.
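In sudoers terms that's a one-liner, something like the following (user and program are placeholders; note there is no NOPASSWD tag, so a password is still required):

```
# /etc/sudoers.d/one-program -- sketch; edit with visudo
# "alice" may run exactly one command via sudo, password required
alice ALL=(root) /usr/bin/some-program
```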

I don't allow devices I don't admin on the main network, so they go on their own subnet. That covers guests' phones and their Windows laptops.
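The isolation itself can be as simple as one forward-chain rule on the router, e.g. in nftables (both subnets here are placeholders):

```
# nftables sketch: guests can reach the internet but not the main LAN
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy accept;
        ip saddr 192.168.50.0/24 ip daddr 192.168.1.0/24 drop
    }
}
```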

Linux only on the main network.

I also make sure to back up often.

[–] constantokra@lemmy.one 8 points 5 months ago (2 children)

Can you explain why you use onion repos? I've never heard of that, and I've heard of kind of a lot of things.

[–] JoeKrogan@lemmy.world 10 points 5 months ago* (last edited 5 months ago) (2 children)

Onion repositories are package repositories hosted as Tor hidden services. The connection goes through six hops and is end-to-end encrypted. In addition to further legitimizing the Tor network through normal everyday usage, it has the benefit of hiding which packages have been installed on a system.

Here are some notes about them if you want to read more.

https://blog.torproject.org/debian-and-tor-services-available-onion-services/

https://www.whonix.org/wiki/Onionizing_Repositories
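For Debian, the sources.list ends up looking something like this sketch (the .onion hostname is a placeholder; the current official addresses are listed at onion.debian.org):

```
# /etc/apt/sources.list sketch -- needs apt-transport-tor and a running tor daemon
# the .onion hostname is a placeholder; see onion.debian.org for real addresses
deb tor+http://EXAMPLEADDRESS.onion/debian bookworm main
deb tor+http://EXAMPLEADDRESS.onion/debian-security bookworm-security main
```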

[–] constantokra@lemmy.one 2 points 5 months ago

That's pretty neat. I might start doing that, just for kicks.

[–] MigratingtoLemmy@lemmy.world 1 points 5 months ago

That is very interesting, thanks!

[–] BautAufWasEuchAufbaut@lemmy.blahaj.zone 7 points 5 months ago* (last edited 5 months ago)

With Debian it's just the apt-transport-tor package, and the project maintains an official list at onion.debian.org, iirc?
I don't know whether serving onion traffic is more expensive for Debian/mirror maintainers, so I'm not sure this is something everybody should use.

[–] peter@feddit.uk 5 points 5 months ago (3 children)

Linux only on the main network.

Is that a security benefit?

[–] semperverus@lemmy.world 8 points 5 months ago

If big corporations hoovering up your data should be on everyone's threat list, then yeah, I'd say it's a huge benefit.

[–] JoeKrogan@lemmy.world 5 points 5 months ago* (last edited 5 months ago)

Well, I don't trust closed-source software and do what I can to avoid it. At least FOSS can be audited. Also, all the Linux devices on the main network are devices I admin.

[–] NOPper@lemmy.world 5 points 5 months ago

I guess it cuts the attack surface down a bit?

[–] MigratingtoLemmy@lemmy.world 4 points 5 months ago (1 children)

System updates over tor connecting to onion repos.

How does this help, assuming your DNS isn't being spoofed?

[–] JoeKrogan@lemmy.world 1 points 5 months ago (1 children)

Please see my reply below with links.

[–] MigratingtoLemmy@lemmy.world 2 points 5 months ago

Thanks, never thought of that before. I'll certainly try it, great way to help the network!

[–] rekabis@lemmy.ca 2 points 5 months ago (1 children)

Fail2ban bans after 1 attempt, for a year.

Fail2ban yes; one year, however, is IMO a bit excessive.

Most ISP IP assignments do tend to linger (even with DHCP, the same IP will be reassigned to the same gateway router quite a number of times in a row), but most IPs do eventually change within a few months. I personally use 3 months as a happy medium for any blacklist I run. Most dynamic IPs don't last that long, almost all attackers will rotate through IPs pretty quickly anyhow, and if you run a public service (website, etc.), blocking for an entire year may inadvertently catch legitimate visitors.

Plus, you also have to consider the load such a large blocklist will have on your system. If most entries no longer represent legitimate threat actors, you'll only bog down your system by keeping them in there.

Fail2ban can be configured to allow initial issues to cycle back out quicker, while blocking known repeat offenders for a much longer time period. This is useful in keeping block lists shorter and less resource-intensive to parse.
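In jail.local that's only a few lines (requires fail2ban 0.11+; the durations here are just illustrative):

```
# /etc/fail2ban/jail.local sketch -- short first ban, escalating repeat bans
[DEFAULT]
bantime.increment = true    # multiply the ban length on each repeat offense
bantime           = 10m     # first offense
bantime.maxtime   = 12w     # cap repeat offenders at ~3 months
```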

[–] JoeKrogan@lemmy.world 1 points 5 months ago (1 children)

My block list is actually very small, due to the non-standard SSH port. Everything else goes through WireGuard.

If it were open to the public, then yes, I'd have to reconsider the ban length.

[–] rekabis@lemmy.ca 1 points 5 months ago

That makes a lot more sense for your setup, then.