
Currently I'm running some services through Docker on a Proxmox VM. Before I had Proxmox, I thought containers were a very clean way of organizing my system. Now I'm wondering if I can just install the services I always use directly on the VM. What are the pros and cons of that?

top 15 comments
[–] sylver_dragon@lemmy.world 7 points 7 hours ago (1 children)

I see containers as having a few advantages:

  1. Separation of dependencies - while not as big of an issue as it used to be, just knowing that you won't end up with the requirements for one application conflicting with another is one less thing to worry about. Additionally, you can do anything you want to one container without having an effect on another container. You don't get stuck wanting to reboot or revert the system, but not wanting to break a different running service.
  2. Portability - Eventually, you are going to replace the OS of that VM (at least, you should). Moving a container to a new OS is dead simple. Re-installing an application on a new OS and migrating its data and configs can be anywhere from easy to a pain in the arse, depending on the software.
  3. Easier fallback - Have you ever upgraded an application and had everything go to shit? In my years working as a sysadmin, I lost way too many evenings to this sort of bullshit. And while VM snapshots should make reverting easy, sometimes it just didn't work out that way. Containers force enough separation of applications that you can do just about anything to one container and not affect the others.
  4. Less dependency on a single install - Have you ever had a system just get FUBAR, and after a few hours of digging the answer seems to be to just format the drive and start over? Maybe you tried some weird application and the uninstall wasn't really clean. By having all that crap happen in containers, you isolate the damage. Nuke the container, nuke the image, and the base OS is still clean.
  5. Easier version testing - Want to try out version 2 of an application, but worried that it may not be fully baked yet or that the new configs may take a while to get right? Do it off in a separate container on a copy of the data (see the sketch just below). You can do this with VMs and snapshots, but I find containers to be less overhead.
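
To make point 5 concrete, testing a candidate version against a copy of the data can be as simple as the following (a rough sketch; the image tag, container name and paths are placeholders, not anything specific to my setup):

    # work on a copy so the live data can't be touched
    cp -a /srv/myapp/data /srv/myapp/data-test

    # run the candidate version against the copy, on a spare port
    docker run -d --name myapp-test \
        -v /srv/myapp/data-test:/data \
        -p 8081:8080 \
        myapp:2.0

If the new version checks out, repoint the real container at the new tag; if not, remove myapp-test and the copied directory and nothing else was affected.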

That all said, if an application does not have an official container image, the added complexity of creating and maintaining your own image can be a significant downside. One of my use cases for containers is running game servers (e.g. Valheim). There isn't an official image, so I had to roll my own. The effort to set this up isn't zero and, when trying to sort out an image for a new game, it does take me a while before I can start playing. And those images need to be updated when a new version of the game releases. Technically, you can update a running container in a lot of cases, but I usually end up rebuilding it at some point anyway.
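
For what it's worth, a home-rolled image like that usually boils down to a steamcmd install step plus a start command. A rough sketch of the idea (not my exact Dockerfile; the base image, app ID and start line are examples you would swap per game):

    FROM debian:bookworm-slim

    # steamcmd is a 32-bit binary, so it needs lib32gcc
    RUN apt-get update && \
        apt-get install -y --no-install-recommends ca-certificates curl lib32gcc-s1 && \
        rm -rf /var/lib/apt/lists/*

    # install steamcmd
    RUN mkdir -p /opt/steamcmd && \
        curl -fsSL https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz | \
        tar -xz -C /opt/steamcmd

    # download the dedicated server; 896660 should be the Valheim server app ID,
    # and it's the value you swap out for other games
    RUN /opt/steamcmd/steamcmd.sh +force_install_dir /opt/server \
        +login anonymous +app_update 896660 validate +quit

    WORKDIR /opt/server
    # the real start command and flags differ per game; treat this line as a placeholder
    CMD ["./valheim_server.x86_64", "-nographics", "-batchmode"]

Updating for a new game release is then a rebuild of the image (with --no-cache so the app_update step actually runs again) and recreating the container.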

I'd also note that careful use of VMs and snapshots can replicate or mitigate most of the advantages I listed. I've done both (a decade and a half as a sysadmin). But part of that "careful use" usually meant spinning up a new VM for each application. Putting multiple applications on the same OS install was usually asking for trouble. Eventually, one of the applications would get borked, and having the flexibility to just nuke the whole install saved a lot of time and effort. Going with containers removed the need to nuke the OS along with the application to get a similar effect.

At the end of the day, though, it's your box; you do what you are most comfortable with and want to support. If that's a monolithic install, then go for it. While I, or others, might find containers a better answer for us, maybe it isn't for you.

[–] macgyver@federation.red 1 points 1 hour ago (1 children)

Man back when I played there was a community image at least

[–] sylver_dragon@lemmy.world 1 points 42 minutes ago

I'm sure there are several out there. But when I was starting out, I didn't see one and just rolled my own. The process was general enough that I've been able to mostly just replace the Steam app ID of the game in the Dockerfile and have it work well for other games. It doesn't do anything fancy like automatic updating, but it works and doesn't need anything special.

[–] scott@lem.free.as 22 points 10 hours ago* (last edited 10 hours ago) (2 children)

Containers are just processes with flags. Those flags isolate the process's filesystem, memory [1], etc.

The advantage of containers is that software dependencies can be unique per container and won't conflict with those of other containers. There are no significant disadvantages.

Without containers, if software A and software B share a dependency but need different versions of it, you'll have issues.

[1] These all depend on how the containers are configured. These are not hard isolation but better than just running on the bare OS.
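
To make that concrete: each container ships its own copy of whatever runtime it needs, so two otherwise-conflicting versions can run side by side without the host having either installed (the node images are just an example):

    docker run --rm node:18-alpine node --version    # prints a v18.x version
    docker run --rm node:20-alpine node --version    # prints a v20.x version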

[–] machinin@lemmy.world 1 points 8 hours ago* (last edited 8 hours ago) (1 children)

Thanks for this - the one disadvantage I'm noticing is that to update the services I'm running, I have to rebuild the container. I can't really just update from the UI when an update is available. I can do it, it's just somewhat of a nuisance.

How often are there issues with dependencies? Is that a problem with a lot of software these days?

[–] Passerby6497@lemmy.world 2 points 5 hours ago* (last edited 5 hours ago)

But rebuilding your container is pretty trivial from the command line, all said and done. I have something like this aliased in my .bashrc to smooth it along:

docker compose pull; docker compose down; docker compose up -d
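
In .bashrc form that might look something like this (the alias name is made up; it has to be run from the directory holding the compose file):

    alias dcup='docker compose pull; docker compose down; docker compose up -d'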

I regularly check on my systems, go through my docker dirs and run my alias to update everything fairly simply. Add in periodic scheduled image cleanups and it has been humming along for a couple of years for the most part (aside from the odd software issue and hardware failure).

How often are there issues with dependencies? Is that a problem with a lot of software these days?

I started using docker 3-4 years ago specifically because I kept having issues with dependencies of one app breaking others, but I also tend to run a lot of services per VM. Honestly, the overhead of container management is infinitely preferable to the overhead that comes with managing OS level stuff. But I'm also not a Linux expert, so take that for what you will.

[–] callcc@lemmy.world -2 points 6 hours ago

I beg to disagree about the disadvantages. An important one is that you cannot easily update shared libraries globally. This is a problem with things like libssl or similar. Another disadvantage is the added complexity, both in terms of operation and, in general, the amount of code running. It can also be problematic that many people just run containers without doing any auditing. In general, containers are pretty opaque compared to OS-packaged software, which is usually compiled individually for the OS.

That being said, systemd offers a lot of isolation features that allow similar isolation to containers, but without having to deal with Docker.
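
For example, a plain unit file can opt into a lot of sandboxing on its own. A minimal sketch (the service name and binary path are placeholders, and which options a given service tolerates varies):

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=Example self-hosted service

    [Service]
    ExecStart=/usr/local/bin/myapp
    # run as a throwaway unprivileged user
    DynamicUser=yes
    # read-only view of the OS, no access to /home, private /tmp
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    NoNewPrivileges=yes
    # the one writable directory (/var/lib/myapp)
    StateDirectory=myapp

    [Install]
    WantedBy=multi-user.target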

[–] Nephalis@discuss.tchncs.de 3 points 6 hours ago (1 children)

Just to throw another option in: LXCs are containers too, and they are the other major option Proxmox comes with.

They feel more like bare-metal installations, but they are more lightweight and share the resources they don't use.
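
On Proxmox, creating one is basically a single pct command (a sketch; the VMID, template file name and storage names are placeholders for whatever your node actually has):

    pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
        --hostname myservice --unprivileged 1 \
        --cores 2 --memory 1024 \
        --rootfs local-lvm:8 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp
    pct start 200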

I never got the point of having Proxmox and one VM with several Docker containers, except if you absolutely don't want to deal with installations at all.

On the other hand, I wanted to learn about Linux and the basics of handling Proxmox.

[–] atzanteol@sh.itjust.works -5 points 5 hours ago

I wish the phrase "bare metal" would die..

[–] traches@sh.itjust.works 7 points 8 hours ago

The cons of containers are slightly higher disk and memory consumption.

Pros:

  • ease of installation
  • declarative configuration
  • security
  • dependency management is solved

Stick with the containers
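
On the "declarative configuration" point: the whole service ends up described in one small file you can keep in git and re-apply on any box. A minimal sketch (service name, image and paths are illustrative):

    # docker-compose.yml
    services:
      myapp:
        image: myapp:1.4            # pin a tag you've tested
        restart: unless-stopped
        ports:
          - "8080:8080"
        volumes:
          - ./data:/data            # all state lives next to this file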

[–] trilobite@lemmy.ml 3 points 10 hours ago (1 children)

I've been asking myself the same question for a while. Containers inside a VM is my setup too. It feels like container-in-VM-in-OS is a bit of an onion approach, which has pros and cons. If you are on low-powered hardware, I suspect having too many onion layers just eats up the little resources you have. On the other hand, as scott@lem.free.as suggests, it's easier to run a system, update it and generally maintain it. It would be good to have other opinions on this. Note that not everyone who has a home lab has powerful hardware. I'm still using two T110s (32GB ECC RAM) that are now quite dated but are sufficient for my uses. They have TrueNAS Scale installed and one VM running 6 containers. It's not fast, but it's reliable.

[–] CameronDev@programming.dev 1 points 7 hours ago

Container overhead is near zero. They are not virtualized or anything like that; they are just processes on your host system that are isolated. It's functionally not much different from chroot.
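
Easy to see for yourself: start a container and its processes show up in the host's process list like any others (nginx is just an example image here):

    docker run -d --name demo nginx
    ps aux | grep '[n]ginx'    # ordinary host processes, just namespaced
    docker rm -f demo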

[–] Voroxpete@sh.itjust.works 3 points 10 hours ago (1 children)

Personally, I always like to use containers when possible. Keep in mind that, unlike VMs, containers have very minimal overhead, so there really is no practical cost to using them, and they provide better (though not perfect) security and some amount of sandboxing for every application.

Containers mean that you never have to worry about whether your VM is running the right versions of certain libraries. You never have to be afraid of breaking your setup by running a software update. They're simpler, more robust and more reliable. There are almost no practical arguments against using them.

And if you're running multiple services the advantages only multiply because now you no longer have to worry about running a bespoke environment for each service just to avoid conflicts.

[–] machinin@lemmy.world 2 points 8 hours ago (1 children)

Copying a response I wrote on another comment -

Thanks for this - the one disadvantage I'm noticing is that to update the services I'm running, I have to rebuild the container. I can't really just update from the UI when an update is available. I can do it, it's just somewhat of a nuisance.

How often are there issues with dependencies? Is that a problem with a lot of software these days?

[–] Voroxpete@sh.itjust.works 1 points 5 hours ago

There's no good answer to that because it depends entirely on what you're running. In a magical world where every open source project always used the latest versions of everything while also maintaining extensive backwards compatibility, it would never be a problem. And I would finally get my unicorn, and rainbows would cure cancer.

In practice, containers provide a layer of insurance that it just makes no sense to go without.