this post was submitted on 21 Feb 2025
242 points (100.0% liked)

Selfhosted

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


AFAIK every NAS just uses unauthenticated connections to pull containers; I'm not sure how many even let you log in (which would raise the limit to a whopping 40 pulls per hour).

So hopefully systems like /r/unRAID handle the throttling gracefully when you click "update all".

Anyone have ideas on how to set up a local docker hub proxy to keep the most common containers on-site instead of hitting docker hub every time?

[–] GreenKnight23@lemmy.world 14 points 15 hours ago

and now I don't sound so fucking stupid for setting up local image caches on my self-hosted gitlab server.

[–] lambalicious@lemmy.sdf.org 26 points 1 day ago

Forgejo gives you a registry built-in.

Also is it just me or does the docker hub logo look like it's giving us the middle finger?

[–] warmaster@lemmy.world 79 points 1 day ago (1 children)

Fortunately linuxserver's main hosting is no longer Docker Hub.

[–] TheHobbyist@lemmy.zip 15 points 1 day ago (1 children)

Would you be able to share more info? I remember reading about their issues with Docker, but I don't recall whether they switched, or what they switched to. What is it now?

[–] narc0tic_bird@lemm.ee 45 points 1 day ago (2 children)

They run their own registry at lscr.io. You can essentially prefix all your existing linuxserver image names with lscr.io/ to pull them from there instead.
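
For example, with sonarr standing in for whatever linuxserver image you actually use:

    # pull straight from linuxserver's registry instead of Docker Hub
    docker pull lscr.io/linuxserver/sonarr:latest

    # or in a compose file
    services:
      sonarr:
        image: lscr.io/linuxserver/sonarr:latest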

[–] erre@programming.dev 13 points 1 day ago

It's actually a redirect service around GHCR that gives them analytics. There's more info in their FAQ.

https://docs.linuxserver.io/FAQ/

[–] Dirk@lemmy.ml 1 points 1 day ago (1 children)

They've been doing that for quite some time now, right?

[–] narc0tic_bird@lemm.ee 2 points 1 day ago

Couple of years, yeah.

[–] muntedcrocodile@lemm.ee 53 points 1 day ago (1 children)

How long after getting an Oracle CEO did this take?

[–] scrubbles@poptalk.scrubbles.tech 26 points 1 day ago (1 children)

Did they really? Oh my god, please tell me you're joking, that a company as modern as Docker got a freaking Oracle CEO. They pulled a Jack Barker. Did he bring his conjoined triangles of success?

[–] mac@lemm.ee 5 points 22 hours ago

A "jack barker" 🤣

[–] possiblylinux127@lemmy.zip 21 points 1 day ago

Use a service that's not Docker Hub.

[–] PassingThrough@lemm.ee 26 points 1 day ago (2 children)

Huh. I was just considering establishing a caching registry for other reasons. Ferb, I know what we’re going to do today!

[–] Daughter3546@lemmy.world 4 points 1 day ago (2 children)

Do you have a good resource for how one can go about this?

[–] jaxxed@lemmy.ml 4 points 14 hours ago (1 children)

You can host your own with Harbor and set up replication per repo (pulling upstream tags). If you need a commercial product/support, you can use MSR v4.

Harbor can be installed on any K8s cluster using helm, with just a couple of dependencies (cert-manager, a Postgres operator, a Redis operator). The replication stuff is easy to add.

I have some no-warranty terraform I could share if there is some interest.
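
For reference, a rough (no-warranty, like the terraform) sketch of the helm side; hostnames and values are placeholders:

    # add the official Harbor chart and install it into its own namespace
    helm repo add harbor https://helm.goharbor.io
    helm repo update
    helm install harbor harbor/harbor \
      --namespace harbor --create-namespace \
      --set expose.ingress.hosts.core=harbor.example.lan \
      --set externalURL=https://harbor.example.lan

    # then in the Harbor UI: add Docker Hub as a registry endpoint and either
    # enable proxy-cache on a project or add pull-based replication rules per
    # repo, then pull through harbor.example.lan/<project>/<image>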

[–] femtech@midwest.social 1 points 3 hours ago

That's what we do internally for our OpenShift deployment. If an image isn't in Harbor, it reaches out upstream and then caches it there for everyone else to use.

[–] PassingThrough@lemm.ee 8 points 1 day ago (2 children)
[–] carzian@lemmy.ml 3 points 1 day ago
[–] Daughter3546@lemmy.world 2 points 1 day ago

Much appreciated <3

[–] Uli@sopuli.xyz 1 points 1 day ago

Same here. I've been building a bootstrap script, and each time I test it, it tears down the whole cluster and starts from scratch, pulling all of the images again. Every time I hit the Docker pull limit after 10 - 12 hours of work, I treat that as my "that's enough work for today" signal. I'm going to need to set up a caching system ASAP or the hours I work on this project are about to suddenly get a lot shorter.
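
If it helps: once a pull-through cache is running somewhere, the client side is mostly just pointing the Docker daemon at it. A sketch, with the mirror address as a placeholder (containerd-based clusters configure mirrors differently):

    # /etc/docker/daemon.json
    {
      "registry-mirrors": ["http://registry-cache.local:5000"]
    }

    # restart the daemon so Docker Hub pulls go through the mirror first
    sudo systemctl restart docker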

[–] Shading7104@feddit.nl 35 points 1 day ago (1 children)

Instead of using a sort of Docker Hub proxy, you can also use GitHub's container registry (GHCR) or Quay. If the project publishes there, you can easily switch to these alternatives. Alternatively, you can build the Docker image yourself from source; it's usually not a difficult process, as most of it is automated. Or, what I personally would probably do, is just update the image a day later if I hit the limit.
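
A sketch of the build-it-yourself route (repo URL and image name are made up):

    # clone the project and build the image locally instead of pulling it
    git clone https://github.com/example/some-project.git
    cd some-project
    docker build -t some-project:local .

    # then reference some-project:local in your compose file or run command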

[–] jaxxed@lemmy.ml 1 points 14 hours ago

You can also host your own with Harbor (or MSR v4 if you want a commercial product). You can set them up to replicate from upstream.

[–] merthyr1831@lemmy.ml 7 points 1 day ago (1 children)

I'm quite new to Docker for NAS stuff - how many pulls would the average person do? Like, I don't think I even have 10 containers 🤨

[–] Darkassassin07@lemmy.ca 23 points 1 day ago (1 children)

I'm running ~30 containers, but they don't typically all get new updates at the same time.

Updates are grabbed nightly, and I think the most I've seen update at once is like 6 containers.

Could be a problem for setting up a new system, or experimenting with new toys.

[–] lemmyvore@feddit.nl 11 points 19 hours ago* (last edited 19 hours ago)

The problem is that the main container can (and usually does) rely on other layers, and you may need to pull updates for those too. Updating one app can take 5-10 individual pulls.

[–] KingThrillgore@lemmy.ml 7 points 1 day ago* (last edited 1 day ago) (1 children)

Well shit, I still rely on Docker Hub even for automated pulls, so this is just great. I guess I'm going back to managing VMs with OpenTofu and package managers.

What are our alternatives if we use Podman or K8s?

[–] wireless_purposely832@lemmy.world 14 points 1 day ago* (last edited 1 day ago) (1 children)

The issue isn't Docker vs Podman vs k8s ~~vs LXC~~ vs others. They all use OCI images to create your container/pod/etc. This new limit impacts all containerization solutions, not just Docker. EDIT: removed LXC as it does not support OCI

Instead, the issue is Docker Hub vs Quay vs GHCR vs others. It's about where the OCI images are stored and pulled from. If the project maintainer hosts the OCI images on Docker Hub, then you will be impacted by this regardless of how you use the OCI images.

Some options include:

  • For projects that do not store images on Docker Hub, continue using the images as normal
  • Become a paid Docker member to avoid this limit
  • When a project uses multiple container registries, use one that is not Docker Hub (see the sketch after this list)
  • For projects that have community or 3rd party maintained images on registries other than Docker Hub, use the community or 3rd party maintained images
  • For projects that are open source and/or have instructions on building OCI images, build the images locally and bypass the need for a container registry
  • For projects you control, store your images on other image registries instead of (or in addition to) Docker Hub
  • Use an image tag that is updated less frequently
  • Rotate the order of pulled images from Docker Hub so that each image has an opportunity to update
  • Pull images from Docker Hub less frequently
  • For images that are used by multiple users/machine under your supervision, create an image cache or image registry of images that will be used by your users/machines to mitigate the number of pulls from Docker Hub
  • Encourage project maintainers to store images on image registries other than Docker Hub (or at least provide additional options beyond Docker Hub)
  • Do not use OCI images and either use VM or bare metal installations
  • Use alternative software solutions that store images on registries other than Docker Hub
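
To illustrate a couple of these: switching a compose service off Docker Hub is usually a one-line image change (the ghcr.io path below is made up; use whatever registry the project actually publishes to):

    services:
      app:
        # before: pulled from Docker Hub (implicit docker.io)
        # image: someproject/app:1.2.3
        # after: pulled from GitHub's container registry instead
        image: ghcr.io/someproject/app:1.2.3
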
[–] interdimensionalmeme@lemmy.ml 3 points 1 day ago (3 children)

LXC doesn't use OCI images? I always end up using Docker in LXC when Docker is the only option (which I haven't figured out how to make work on my airgapped side).

[–] shertson@mastodon.world 1 points 10 hours ago

@interdimensionalmeme @wireless_purposely832

I believe Graber did a talk at FOSDEM this year about using OCI images in Incus.

Ah, you're right. I'll edit my comment.

incus may be an option for you though. It supports both LXC/LXD and OCI (although not nearly as well as Docker/Podman/Kubernetes - I don't think it supports any compose files).

[–] Dunstabzugshaubitze@feddit.org 17 points 1 day ago (1 children)

https://distribution.github.io/distribution/ is an open-source implementation of a registry.

You could also self-host something like GitLab, which bundles this, or Sonatype Nexus, which can serve as a repository for several kinds of artifacts, including container images.
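
For the pull-through-cache use case specifically, that distribution registry can run in proxy mode; a minimal sketch (ports and paths are placeholders):

    # run the upstream registry image as a read-through cache of Docker Hub
    docker run -d --name registry-cache -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      -v /srv/registry-cache:/var/lib/registry \
      registry:2

    # note: in proxy mode it's pull-only; you can't push your own images to it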

[–] PassingThrough@lemm.ee 5 points 1 day ago (3 children)

Gitea, and therefore Forgejo, also has container registry functionality; I use that for private builds.

[–] possiblylinux127@lemmy.zip 3 points 1 day ago

Codeberg has Woodpecker CI.

[–] macattack@lemmy.world 2 points 1 day ago

Jumping on the forgejo love train

Oh, that's good to know. Forgejo seems way nicer for self-hosting than the limited GitLab open source core.

[–] hedgehog@ttrpg.network 3 points 1 day ago (1 children)

"local docker hub proxy"

Do you mean a Docker container registry? If so, here are a couple options:

[–] dangling_cat@lemmy.blahaj.zone 3 points 1 day ago (3 children)

Is there a project that acts like a registry? One that can proxy requests with a TTL, and that you can also push images to?

[–] scrubbles@poptalk.scrubbles.tech 9 points 1 day ago (2 children)

Almost all of them. Forgejo handles containers already, for example.

[–] robador51@lemmy.ml 4 points 1 day ago* (last edited 1 day ago)

How? I was looking for this (although not very thoroughly)

[Edit] found it https://forgejo.org/docs/v1.21/user/packages/container/
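
From those docs it's basically standard docker login/tag/push against your instance (hostname and names below are placeholders):

    # authenticate against your Forgejo instance's registry
    docker login forgejo.example.lan

    # tag and push an image into a user's (or org's) package registry
    docker tag myimage:latest forgejo.example.lan/myuser/myimage:latest
    docker push forgejo.example.lan/myuser/myimage:latest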

[–] beeng@discuss.tchncs.de 1 points 1 day ago

Pull through Cache / proxy is what you're looking for.

[–] heavydust@sh.itjust.works 6 points 1 day ago

Artifactory is mandatory in some industries because it keeps all versions of the images forever, so you can build your projects reliably without an internet connection.

[–] renegadespork@lemmy.jelliefrontier.net 3 points 1 day ago (1 children)

I think most self-hosted Git+CI/CD platforms have container registry as a feature, but I'm not aware of a service that is just a standalone registry.

[–] tofuwabohu@slrpnk.net 4 points 1 day ago* (last edited 1 day ago)

It's easy to overlook because of the generic name, but this is pretty much exactly that: https://hub.docker.com/_/registry

Edit: forgot there's JFrog Artifactory as well.

[–] Mubelotix@jlai.lu -2 points 1 day ago

If only they used a distributed protocol like IPFS, we wouldn't be in this situation.