Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
Hi there,
that's an interesting question. I suppose it depends on what you need to do.
If you can, remap the ports in the docker run command with -p 4430:443, or with the ports: section in your compose file.
Then you tell the apps that need this port to use the remapped one instead.
That's the easiest solution I know of.
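For example, with 4430 on the host side and the app still listening on 443 inside the container (the service and image names here are just placeholders):

```yaml
services:
  yourapp:
    image: your-image        # placeholder
    ports:
      - "4430:443"           # host port 4430 -> container port 443
```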
If you want a more elegant solution, you use custom domains with a reverse proxy like npm (Nginx Proxy Manager).
You spin up npm and start defining hosts like cloud.yourhomedomain.com, and point those names at your server via your DNS if possible (your router, or in my case, Pi-hole).
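If it helps, a minimal compose sketch for spinning up npm looks roughly like this (ports and volume paths are the usual quick-setup ones; adjust to taste):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"      # http
      - "443:443"    # https
      - "81:81"      # admin web UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```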
Docker is a universe unto itself and you can invest hundreds of hours and still feel like a noob (good game mechanic btw: easy to get into, hard to master).
Hit me up if you need more info. Get familiar with Stack Overflow and the like, because you will need them. :)
Good luck
Thanks a ton! I did not realize you could have a different listening port vs. the port used internally.
I have done what you mentioned and used a random port internally while keeping 443 as the listening port. I'm using Caddy to reverse proxy the traffic to it.
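Roughly like this in the Caddyfile, if I understood it right (the domain and port are placeholders for my actual ones):

```
cloud.yourhomedomain.com {
    # Caddy listens on 443 and forwards to the app's remapped port
    reverse_proxy 127.0.0.1:4430
}
```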
Thanks again!
if you're only going to be using those services through the proxy, it can also be a useful security upgrade to not forward their ports at all, and run caddy inside docker so it connects to them directly!
if you forward the ports (without firewalling them), people can connect to the services directly, which can be a security risk. for example, many services rely on the proxy to add the x-forwarded-for header showing which IP address originally made the request… if users can reach the service directly, they can set this header themselves and make it appear as though they came from anywhere, even 127.0.0.1, which can sometimes bypass things like admin authentication
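just to illustrate how trivial that spoofing is, anyone who can reach the forwarded port directly could do something like this (hostname and port are placeholders):

```sh
# pretend the request was proxied on behalf of localhost
curl -H "X-Forwarded-For: 127.0.0.1" http://yourserver:8443/admin
```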
Thank you! Just to clarify - I should only forward 443 & 80 for Caddy, and then in the Caddy config point each reverse proxy at the backend's port. Is that correct?
How safe/secure is it to host a public website or services like a Lemmy instance doing this?
For services I don't care to have available outside of my network, I am not adding them to Caddy and am accessing them directly via internal IP.
so what you ideally want is for people to ONLY be able to access your backend services through caddy, so caddy should be the only one with publicly accessible ports, yes
caddy running in the same docker network as your services can talk to those services on their original ports; they don't even need to be mapped to the host! in this case you have 3 containers: caddy, service 1, service 2. caddy is the only one that needs to have ports forwarded, so you just publish caddy's 443 and there's no need to worry about the others. caddy can then talk directly to service1:80 or service1:443. docker containers show up to other docker containers by their container name, so if you run e.g. docker run … --name lemmy, caddy in the same docker network would be able to connect to http://lemmy:80!
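a rough compose sketch of that layout, just to make it concrete (the image names and the service called "lemmy" are placeholders for whatever you actually run):

```yaml
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"        # the only published ports are caddy's
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
  lemmy:
    image: your-lemmy-image    # placeholder; note: no ports: section at all
  nextcloud:
    image: your-other-image    # also only reachable over the compose network
```

then the Caddyfile entries just point at the container names, e.g. reverse_proxy lemmy:80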
.. but if you forward, say, service 1 and 2 on :8443 and :9443 (without a firewall, and even with one it makes me uncomfortable; that's 1 step away from a subtle security problem), someone could access <yourserver>:8443, right? so they don't have to go through caddy to get to the backend service… for some services that can be a big deal in ways that are difficult to anticipate, so it's best to just not allow it if possible
an alternative is to make sure your services are firewalled so that nobody from the internet can hit them but caddy still can… but i like this less, because what's happening is less explicit, so it's easier to forget about
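if you do go that route, be aware that docker's published ports tend to sidestep tools like ufw, so the documented place to filter them is the DOCKER-USER iptables chain… something like this, assuming eth0 is your internet-facing interface and the ports are the ones you published:

```sh
# drop outside traffic to the services' published ports; caddy on the same
# host/network doesn't arrive via eth0, so it still gets through
iptables -I DOCKER-USER -i eth0 -p tcp --dport 8443 -j DROP
iptables -I DOCKER-USER -i eth0 -p tcp --dport 9443 -j DROP
```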
Thank you for all of this info. 443 is now my only open port and directs to my Caddy server. For extra security, I'm going to look into implementing an authentication portal for each backend service that is not "public" for all.
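If I end up using something like Authelia for that, it looks like Caddy's forward_auth directive is the usual hook. Roughly this, though the names and the verify URI below are placeholders until I actually pick a portal:

```
myapp.yourhomedomain.com {
    # ask the auth portal to approve each request before proxying it
    forward_auth authelia:9091 {
        uri /api/verify?rd=https://auth.yourhomedomain.com
    }
    reverse_proxy myapp:80
}
```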