19
submitted 2 months ago* (last edited 2 months ago) by Molecular0079@lemmy.world to c/selfhosted@lemmy.world

I've been trying to migrate my services over to rootless Podman containers for a while now and I keep running into weird issues that always make me go back to rootful. This past weekend I almost had it all working until I realized that my reverse proxy (Nginx Proxy Manager) wasn't passing the real source IP of client requests down to my other containers. This meant that all my containers were seeing requests coming solely from the IP address of the reverse proxy container, which breaks things like Nextcloud brute force protection, etc. It's apparently due to this Podman bug: https://github.com/containers/podman/issues/8193

This is the last step before I can finally switch to rootless, so it makes me wonder what all you self-hosters out there are doing with your rootless setups. I can't be the only one running into this issue, right?

If anyone's curious, my setup consists of several docker-compose files, each handling a different service. Each service has its own dedicated Podman network, but only the proxy container connects to all of them to serve outside requests. This way each service is separated from each other and the only ingress from the outside is via the proxy container. I can also easily have duplicate instances of the same service without having to worry about port collisions, etc. Not being able to see real client IP really sucks in this situation.
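For reference, the layout described above looks roughly like this as a compose sketch (service and network names here are just placeholders, not my actual config):

```yaml
# proxy compose file (sketch): only the proxy joins every service network
services:
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"
    networks:
      - nextcloud_net
      - other_service_net

networks:
  nextcloud_net:
    external: true   # created by the nextcloud compose file
  other_service_net:
    external: true   # created by the other service's compose file
```

Each service's own compose file declares just its own network, so services can't talk to each other directly, and duplicate stacks never collide on ports.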

[-] Molecular0079@lemmy.world 51 points 3 months ago

No kidding. This solves a major issue with the Steam Deck as well, because now someone else can be playing on the Deck while you use your main PC for another game.

[-] Molecular0079@lemmy.world 67 points 3 months ago

If you use firewalld, both docker and podman apply rules in a special zone separate from your main one.

That being said, podman is great. Podman in rootful mode, along with podman-docker and docker-compose, is basically a drop-in replacement for Docker.
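If you want to see this for yourself, firewalld can list the active zones (assuming firewall-cmd is installed; the container zone may be named docker, podman, or netavark depending on your setup and version):

```shell
# Show all active zones; container traffic lands in its own zone,
# so rules in your default zone don't apply to container interfaces
sudo firewall-cmd --get-active-zones
```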

[-] Molecular0079@lemmy.world 167 points 4 months ago

God, it's like they don't want RCS to succeed.

[-] Molecular0079@lemmy.world 50 points 4 months ago* (last edited 4 months ago)

Your issues stem from going rootless. Podman Compose creates rootless containers when run as a regular user, and that may or may not be what you want. A lot more configuration is needed to get rootless containers working well for persistent services that use low ports, like enabling lingering for specific users or allowing non-root users to bind low ports.

If you want the traditional Docker experience (which is rootful) and figure out the migration towards rootless later, I'd recommend the following:

  1. Install podman-docker. This provides a seamless Docker compatibility layer for Podman, letting you use regular docker commands that get translated behind the scenes into Podman calls.
  2. Install regular docker-compose. This works via podman-docker and gives you the native docker compose experience.
  3. Enable podman.socket and podman-restart.service. The first socket-activates the Podman API service; the second restarts, on boot, any containers with a restart policy of always.
  4. Run your docker-compose commands with sudo, so sudo docker-compose up -d etc. You can run this with sudo podman compose as well if you're allergic to hyphenation. Podman supports both rootful and rootless containers, and you choose between them by running the commands with or without sudo.
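On Arch, for example, the steps above might look like this (package names assumed; adjust for your distro):

```shell
# Install Podman, the Docker compatibility shim, and regular docker-compose
sudo pacman -S podman podman-docker docker-compose

# Socket-activate the Podman API and restart restart=always containers on boot
sudo systemctl enable --now podman.socket podman-restart.service

# Rootful: run compose with sudo so containers are created as root
sudo docker-compose up -d
```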

This gets you to a very Docker-like experience and is what I am currently using to host my services. I do plan on getting familiar with rootless and systemd services and Kubernetes files, but I honestly haven't had the time to figure all that out yet.

[-] Molecular0079@lemmy.world 43 points 5 months ago

Yes! I've been waiting for more devices to ship with SteamOS. I am tired of these unpolished handheld experiences on Windows. It always ends up being a mishmash of random vendor apps and lengthy Windows updates.

63

Currently, I have SSH, VNC, and Cockpit set up on my home NAS, but I have run into situations where I lose remote access because I did something stupid to the network connection or an update broke the boot process, leaving the machine stuck in the BIOS or bootloader.

I am looking for a separate device that will not only let me access the NAS as if I had a keyboard, mouse, and monitor attached, but also let me power cycle it in extreme situations (hard freeze, etc.). Some googling has turned up the term KVM-over-IP, but I was wondering if any of you have trustworthy recommendations.

[-] Molecular0079@lemmy.world 71 points 8 months ago

ProtonVPN is a no log VPN according to their privacy policy: https://protonvpn.com/privacy-policy

They have servers specifically for port forwarding and P2P traffic. I use them and I haven't gotten a DMCA request yet so 🤷🏻‍♂️

[-] Molecular0079@lemmy.world 43 points 8 months ago

    changing settings in Network Manager.

What's wrong with this method? I feel like this is the main one and it works well for me. Even if you were using systemd-resolved, I believe it still works.

[-] Molecular0079@lemmy.world 81 points 8 months ago

So...how long before Apple realizes that game devs are notoriously time-crunched and forcing them to target yet another proprietary graphics API is a stupid move for their gaming ambitions?

[-] Molecular0079@lemmy.world 123 points 9 months ago

It’s always memory management

No wonder everyone's crazy about Rust.

123

I mean, come on, this has to be a joke right XD

[-] Molecular0079@lemmy.world 233 points 9 months ago

Lol, I dunno if it's them expressing their feelings so much as them taking advantage of a business opportunity.

[-] Molecular0079@lemmy.world 49 points 10 months ago

It's because RSS doesn't allow you to serve ads and every tech company right now is either feeling the squeeze or feeling the greed.

14
submitted 10 months ago* (last edited 10 months ago) by Molecular0079@lemmy.world to c/selfhosted@lemmy.world

I am using one of the official Nextcloud docker-compose files to setup an instance behind a SWAG reverse proxy. SWAG is handling SSL and forwarding requests to Nextcloud on port 80 over a Docker network. Whenever I go to the Overview tab in the Admin settings, I see this security warning:

    The "X-Robots-Tag" HTTP header is not set to "noindex, nofollow". This is a potential security or privacy risk, as it is recommended to adjust this setting accordingly.

I have X-Robots-Tag set in SWAG. Is it safe to ignore this warning? I am assuming that Nextcloud is complaining because it still thinks it's communicating over an unsecured port 80 and isn't aware that it's only talking via SWAG. Maybe I am wrong though. I wanted to double check and see if there was anything else I needed to do to secure my instance.

SOLVED: Turns out Nextcloud is just picky about what's in X-Robots-Tag. I had set it to SWAG's recommended value of noindex, nofollow, nosnippet, noarchive, but Nextcloud expects exactly noindex, nofollow.
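For anyone hitting the same warning, the fix in SWAG's nginx config amounts to something like this (exactly where the directive lives depends on your proxy conf):

```nginx
# Nextcloud's security check wants exactly this value, not SWAG's longer default
add_header X-Robots-Tag "noindex, nofollow" always;
```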

1
submitted 10 months ago* (last edited 10 months ago) by Molecular0079@lemmy.world to c/linux@lemmy.ml

I've been messing around with podman in Arch and porting my self-hosted services over to it. However, it's been finicky and I am wondering if anybody here could help me out with a few things.

  1. Some of my containers aren't getting properly started by podman-restart.service on reboot. I realized they were the ones that depend on my slow external BTRFS drive. Currently it's mounted with x-systemd.automount,x-systemd.device-timeout=5 so that it doesn't hang the boot if I disconnect it, but Podman doesn't seem to like this. If I remove the systemd options, the containers boot up automatically as expected, but I risk boot hangs if the drive ever gets disconnected from my system. I have already tried x-systemd.before=podman-restart.service and x-systemd.required-by=podman-restart.service, and even tried increasing the device-timeout, to no avail.

When it attempts to start the container, I see this in journalctl:

Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
(the two lines above repeat eight times)
Aug 27 21:15:46 arch-nas systemd[1]: libpod-742b4595dbb1ce604440d8c867e72864d5d4ce1f2517ed111fa849e59a608869.scope: Deactivated successfully.
Aug 27 21:15:46 arch-nas conmon[3124]: conmon 742b4595dbb1ce604440 : runtime stderr: error stat'ing file `/external/share`: Too many levels of symbolic links
Aug 27 21:15:46 arch-nas conmon[3124]: conmon 742b4595dbb1ce604440 : Failed to create container: exit status 1
  2. When I shut down my system, it has to wait 90 seconds for libcrun and libpod-conmon-.scope to time out. Any idea what's causing this? This delay gets pretty annoying, especially on an Arch system, since I am constantly restarting due to updates.

All the containers are started using docker-compose with podman-docker, if that's relevant.

Any help appreciated!

EDIT: So it seems like podman really doesn't like systemd automount. Switching to nofail, x-systemd.before=podman-restart.service seems like a decent workaround if anyone's interested.
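In /etc/fstab terms, the workaround looks something like this (device UUID and mount point are placeholders):

```
UUID=xxxx-xxxx  /external  btrfs  defaults,nofail,x-systemd.before=podman-restart.service  0 0
```

nofail keeps a disconnected drive from hanging the boot, while the before= ordering makes systemd mount it ahead of podman-restart.service.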

0
submitted 10 months ago by Molecular0079@lemmy.world to c/linux@lemmy.ml

cross-posted from: https://lemmy.world/post/3754933

While experimenting with ProtonVPN's Wireguard configs, I realized that my real IPv6 address was leaking while IPv4 was correctly going through the tunnel. How do I prevent this from happening?

I've already tried adding ::/0 to the AllowedIPs option and IPv6 is listed as disabled in the NetworkManager profile.
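For anyone else hitting this, the relevant part of a wg-quick-style config is the peer's AllowedIPs; routing all IPv6 through the tunnel needs ::/0 alongside the IPv4 default route (key and endpoint here are placeholders):

```ini
[Peer]
PublicKey = <server public key>
Endpoint = <server address>:51820
# Route both IPv4 and IPv6 through the tunnel so v6 can't leak outside it
AllowedIPs = 0.0.0.0/0, ::/0
```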

[-] Molecular0079@lemmy.world 127 points 11 months ago

Jesus Christ. Why does it feel like the tech industry is just getting shittier and more expensive while all the cool consumer options are being axed? Intel NUCs were a relatively cheap way to get a cute little desktop machine or a home server. I am sad that they're going away. I guess there's always Minisforum, but still...


Molecular0079

joined 1 year ago