I think you can send a SIGUSR1 signal to the mumble process to tell it to reload the SSL certificate without actually restarting mumble's process. You can use docker kill --signal="SIGUSR1" <container name or id>, but then you still need to add your user to the docker group. Maybe you could set up a monthly cron job as root to run that command every month?
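If you go that route, a minimal sketch of a root crontab entry (the container name mumble and the schedule are just examples):

```
# in root's crontab (crontab -e as root), so no docker group membership is needed
# 04:00 on the 1st of every month
0 4 1 * * /usr/bin/docker kill --signal=SIGUSR1 mumble
```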
Yeah, this seems like the easiest solution for mumble specifically.
Create a sudo rule which allows the user to run only this exact command: "sudo docker restart mycontainer"
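A sketch of what that rule could look like in a sudoers drop-in (edit with visudo -f; "alice" and "mycontainer" are placeholders):

```
# /etc/sudoers.d/mumble-restart
alice ALL=(root) NOPASSWD: /usr/bin/docker restart mycontainer
```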
Have you thought about automating the certificate generation? You could put everything into a cron script. That's more or less the reason certificates have such a short lifetime: to force automation.
I am already doing it in a cron script, but the mumble server doesn't support automatic reloading of the SSL certs, so I need to restart the mumble server to load the new certs. That's where my problem comes from, since I run it inside docker.
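If the certs happen to come from certbot, one way to tie the renewal and the restart together is a deploy hook, so the restart (or the SIGUSR1 from above) only fires when a cert was actually renewed. A sketch for root's crontab, with "mumble" as a placeholder container name:

```
0 3 * * * certbot renew --deploy-hook "docker restart mumble"
```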
You could write a script that just restarts your container, make sure unprivileged users can't edit it, and do one of two things (sketch after the list):
- make a sudoers entry that lets your unprivileged account call just that script with sudo, as a user in the docker group
- use setuid on the script to have it execute with the docker group even when run by normal users
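For the first option, the script itself is tiny (paths and names are examples, not a vetted setup):

```
#!/bin/sh
# /usr/local/bin/restart-mumble - owned by root, chmod 755, so unprivileged users can run but not edit it
exec /usr/bin/docker restart mumble
```

plus a sudoers entry that allows the unprivileged account to run only /usr/local/bin/restart-mumble with NOPASSWD.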
Note that Linux ignores the setuid bit on interpreted scripts for security reasons, so the second option won't actually work.
Not sure how your stack works together, but sudo will let you run particular commands as a different user, and you can be pretty specific with the privileges. For example, you can have a script that's only allowed to run docker compose -f /path/to/compose.yml restart containername as a user in the docker group. Maybe there's some docker-specific approach, but this should work with traditional Unix tools and a little scripting.
If a user is in the docker group, they can also run docker commands directly.
Note: adding a user to the docker group is effectively giving them root.
Kubernetes has service accounts that you could use to restart containers in an unprivileged way. Create a Role and RoleBinding that give the "delete pods" permission to a service account. Kind makes it very easy to run Kubernetes without any setup. You'd just need to convert your docker compose files to Deployments, Services, and PersistentVolumes.
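A rough sketch of what that RBAC could look like (the namespace, names, and service account are made-up examples):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-restarter
  namespace: mumble
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-restarter
  namespace: mumble
subjects:
  - kind: ServiceAccount
    name: restarter
    namespace: mumble
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-restarter
```

Deleting the pod with that service account's credentials then just makes the Deployment spin up a fresh one, which picks up the new certs.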
If converting to a kubernetes setup is too big of a leap, you could maybe try to write a C program that uses setuid to gain docker privileges in a restricted way.
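A rough sketch of that idea, swapping full setuid for setgid to the docker group (root-owned binary, chmod 2755), so whoever runs it can reach the docker socket but only through this one fixed command; all names and paths are examples, not a vetted implementation:

```c
/* restart-mumble.c - hypothetical setgid wrapper sketch.
 * Build/install: gcc -O2 -o restart-mumble restart-mumble.c
 *                chown root:docker restart-mumble && chmod 2755 restart-mumble
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    /* Fixed command, absolute path, empty environment: the caller can't
     * change what gets executed or sneak in a different PATH. */
    char *const argv[] = {"/usr/bin/docker", "restart", "mumble", NULL};
    char *const envp[] = {NULL};
    execve(argv[0], argv, envp);
    perror("execve");            /* only reached if execve() failed */
    return EXIT_FAILURE;
}
```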
Probably easiest to just have a cronjob that restarts the container regularly, though.
I have been thinking about moving to kubernetes; this just gives me another nudge toward doing it.
Yeah, there's definitely a learning curve since it's so different from docker, but there are some good tutorials, and once it clicks everything just makes sense. The error messages are googlable and the pieces fit together well.