this post was submitted on 09 Jul 2023
52 points (100.0% liked)

Selfhosted


I'm currently struggling with upgrading some Postgres DBs on my home-k3s and I'm seriously considering throwing it all away since it's such a hassle.

So, how do you handle DBs? K8s? Just a regular daemon?

top 31 comments
[–] bookworm@feddit.nl 25 points 1 year ago* (last edited 1 year ago) (3 children)

I just run one mariadb container via docker-compose that all my other services use as their database.

version: "2"
services:
  mariadb:
    image: lscr.io/linuxserver/mariadb:latest
    container_name: mariadb
    environment:
      - TZ=####/####
      - PUID=###
      - PGID=###
      - MYSQL_ROOT_PASSWORD=############
    volumes:
      - /docker/mariadb:/config
    ports:
      - 3306:3306
    restart: unless-stopped

Off-topic, but I don't really get the appeal of running Kubernetes (or similar technologies) in a homelab. Unless it's something you want to learn for work, of course.

[–] agressivelyPassive@feddit.de 9 points 1 year ago (3 children)

I'm running kubernetes simply because the other options are worse.

Proxmox takes too many resources.

Docker Compose caused countless issues for me when running multiple services (especially network related).

Bare metal is annoying, because you're forced to keep all the services in lockstep, dependency wise.

I'm using Kubernetes at work; the overhead is rather small (with k3s) and mostly it's working pretty great.

[–] poVoq@slrpnk.net 5 points 1 year ago

Use Podman with systemd and Quadlet. Like bare metal, but without the annoyances you mention.
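
A Quadlet unit is just a small systemd-style file dropped into ~/.config/containers/systemd/ (rootless) or /etc/containers/systemd/ (rootful). A minimal sketch for a Postgres container, with the image, path and password as placeholders:

postgres.container

[Unit]
Description=PostgreSQL via Podman Quadlet

[Container]
Image=docker.io/library/postgres:16
ContainerName=postgres
PublishPort=5432:5432
Volume=/srv/postgres-data:/var/lib/postgresql/data:Z
Environment=POSTGRES_PASSWORD=changeme

[Service]
Restart=always

[Install]
WantedBy=default.target

After a systemctl --user daemon-reload, Quadlet generates a regular postgres.service you can start and stop like any other unit, and the [Install] section takes care of starting it on boot.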

[–] theterrasque@infosec.pub 5 points 1 year ago (1 children)

As a bonus, you can just join multiple machines to the cluster and have work spread out over them.

[–] Auli@lemmy.ca 1 points 1 year ago

Ah yes the clusters of my homelab.

[–] keyez@lemmy.world 1 points 1 year ago

That's funny to hear. For work I use k3s and RKE2 daily for deployments and testing, but at home I use Unraid, specifically because of all the k3s work I do; even k3s has too much overhead for updates, backups and all that, IMO.

[–] VexCatalyst@lemmy.fmhy.ml 3 points 1 year ago (1 children)

That, and you have to take into account each person’s available hardware and resources.

I have an underpowered 10-year-old desktop, a reasonably specced 5-year-old laptop with a busted screen, and 8 Raspberry Pis (3s and 4s), and I can't currently afford better hardware. Sometimes clustering those Pis makes sense.

You can use whatever you have to hand.

[–] bookworm@feddit.nl 3 points 1 year ago* (last edited 1 year ago) (1 children)

That's a great point I hadn't considered, tbh! And learning new technologies, even if there is no "purpose" to it, can be... fun! :)

[–] metaStatic@kbin.social 1 points 1 year ago (1 children)

I want to learn docker but don't have anything that can run docker

What do you have? Almost all computers can run docker.

[–] MigratingtoLemmy@lemmy.world 3 points 1 year ago

I don't like Docker as a company, the networking seems unnecessarily obtuse to me, and k3s is a smaller version of k8s, which is here to stay in my opinion (though it has a bigger learning curve) and will help me in my career. Those would be my reasons, but if someone doesn't have a use for k3s, I suppose there's not much of a point, considering everything is still written for Docker.

[–] andrew@lemmy.stuart.fun 7 points 1 year ago (1 children)

I'm a big fan of the zalando postgres operator. A lot of the critical features you'd want in production databases are handled and very nicely abstracted.

https://github.com/zalando/postgres-operator
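
Once the operator is installed, a cluster is just a small custom resource. A minimal sketch along the lines of the example manifests in that repo (team, user, database names and sizes are placeholders):

apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  teamId: "acid"
  numberOfInstances: 2        # operator handles replication and failover
  volume:
    size: 1Gi                 # PVC size per instance
  users:
    myuser:                   # role the operator creates
      - superuser
      - createdb
  databases:
    mydb: myuser              # database owned by myuser
  postgresql:
    version: "15"

The operator then creates the StatefulSet, PVCs and credential Secrets for you.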

[–] billygoat@lemmy.fmhy.ml 3 points 1 year ago* (last edited 1 year ago) (1 children)

Did they get it working with multi-arch setups? I have a few Pis in my cluster, and last time I looked at using it, it wasn't ready for arm64.

[–] andrew@lemmy.stuart.fun 2 points 1 year ago (1 children)

I'm not sure, actually. My personal cluster is all x86 so I'm not usually that aware of the multiarch stuff. 😬

[–] billygoat@lemmy.fmhy.ml 3 points 1 year ago

I have found that some things just aren’t ready for arm and I’ll probably swap my worker nodes to x86 only. Should be okay to keep etcd and control nodes as mixed.

[–] bigredgiraffe@lemmy.world 7 points 1 year ago* (last edited 1 year ago) (1 children)

Are we talking database schema migrations or migrating a database between Postgres instances?

If it's the former, the pattern is usually to run them in init containers or Jobs, but I have been wanting to try out SchemaHero for a while, which is a tool to orchestrate this and looks pretty neat.
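
The init container flavour of that pattern is pretty small. A rough sketch, with the migration image and command standing in for whatever tool the app actually uses:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
        - name: migrate                    # runs to completion before the app starts
          image: example/myapp-migrations  # placeholder migration image
          command: ["/migrate", "up"]      # placeholder migration command
          envFrom:
            - secretRef:
                name: myapp-db             # placeholder Secret with the DB credentials
      containers:
        - name: myapp
          image: example/myapp             # placeholder app image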

ETA: Thought I was replying to your below comment but Memmy deleted it the first time for some reason, my bad.

[–] agressivelyPassive@feddit.de 1 points 1 year ago (2 children)

It's about PostgreSQL upgrades.

The "pattern" there is to either dump and reinsert the entire DB, or to upgrade by having two installations (old and new version), which doesn't exactly work well in k8s. It's possible, but seems hacky.

[–] bigredgiraffe@lemmy.world 4 points 1 year ago

I can't think of any situation, other than maybe wanting better indexing or changing the storage engine, where I would need to re-create and re-insert that way. I'm not sure if you have a constraint that necessitates it, but now I'm curious (I'm always curious to find new or better methods): why do you do it that way?

At home, to upgrade Postgres I would just make a temporary copy of the data directory as a backup, then change the version of the container and, if needed, run pg_upgrade as a Job in Kubernetes.

In a work environment there is more likely to be clustering involved, so the upgrade path depends on that, but it's similar: there really isn't a need to re-create the data; the new version starts with the same PVCs using whatever rollout strategy applies. Major version upgrades can sometimes require extra steps, but the engine is almost always backwards compatible for at least several versions.
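
For the pg_upgrade-as-a-Job route, a rough sketch (the image name is a placeholder for something shipping both versions' binaries, the new data directory is assumed to already be initdb'ed, and the database has to be stopped while it runs):

apiVersion: batch/v1
kind: Job
metadata:
  name: pg-upgrade-15-to-16
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pg-upgrade
          image: example/postgres-upgrade:15-to-16   # placeholder image with both versions
          command: ["pg_upgrade"]
          args:
            - --old-datadir=/var/lib/postgresql/15/data
            - --new-datadir=/var/lib/postgresql/16/data
            - --old-bindir=/usr/lib/postgresql/15/bin
            - --new-bindir=/usr/lib/postgresql/16/bin
          volumeMounts:
            - name: old-data
              mountPath: /var/lib/postgresql/15/data
            - name: new-data
              mountPath: /var/lib/postgresql/16/data
      volumes:
        - name: old-data
          persistentVolumeClaim:
            claimName: postgres-15-data   # placeholder PVC names
        - name: new-data
          persistentVolumeClaim:
            claimName: postgres-16-data

Afterwards you point the regular Postgres workload at the new PVC and the new image version.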

[–] cdombroski@programming.dev 3 points 1 year ago

I've always used this docker image to do pg upgrades. It runs pg_upgrade to recreate the system tables and copy the user tables (which normally don't have any storage changes). It does require that the database isn't running during the upgrade so you're going to have a bit of downtime. Make sure you redo any changes to any configuration files, especially pg_hba.conf

[–] macro@lemmy.sdf.org 6 points 1 year ago
[–] wgs@lemmy.sdf.org 5 points 1 year ago

I have a single database server because I can't afford two servers with a lot of storage. The servers that need access to it connect over a WireGuard VPN. This is slow as f**k; don't do that.

[–] otl@lemmy.sdf.org 4 points 1 year ago

I avoid software which requires a relational database altogether. For me that’s part of the fun of self hosting: what’s the simplest possible system I can get away with at my tiny scale?

[–] metaStatic@kbin.social 4 points 1 year ago (1 children)

I google "why doesn't mysqld work?", then copy-paste terminal commands from the first result, then google "why doesn't my machine boot?", then turn around 360 degrees and walk away.

[–] AnUnusualRelic@lemmy.world 3 points 1 year ago (1 children)

> then turn around 360 degrees and walk away.

And how does that work for you?

[–] otl@lemmy.sdf.org 2 points 1 year ago

I imagine they feel like they’re not getting anywhere.

[–] AnUnusualRelic@lemmy.world 3 points 1 year ago

Cautiously.

[–] saint@group.lt 2 points 1 year ago

Mostly using the operator from Percona for Kubernetes, sometimes just a simple Deployment. Running PostgreSQL for Lemmy as a container from docker-compose.

[–] BrianTheeBiscuiteer@lemmy.world 2 points 1 year ago (1 children)

Never tried it, but kubegres seems like a good implementation for Kubernetes. I guess if you just have a single-node cluster there won't be much benefit, but it seems a periodic backup to NFS is key (you can run NFS on most anything).
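
Going by its getting-started docs, the resource is roughly this shape (worth double-checking the field names against the current docs; the names, sizes and backup PVC here are placeholders):

apiVersion: kubegres.reactive-tech.io/v1
kind: Kubegres
metadata:
  name: mypostgres
spec:
  replicas: 2                    # 1 primary + 1 replica
  image: postgres:16.1
  database:
    size: 8Gi
  backup:
    schedule: "0 3 * * *"        # daily dump at 03:00
    pvcName: my-nfs-backup-pvc   # e.g. a PVC backed by that NFS share
    volumeMount: /var/lib/backup
  env:
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mypostgres-secret
          key: superUserPassword
    - name: POSTGRES_REPLICATION_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mypostgres-secret
          key: replicationUserPassword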

[–] agressivelyPassive@feddit.de 3 points 1 year ago

What currently pisses me off is the fact that it's almost impossible to do proper migrations for Postgres in k8s. I'd have to look into kubegres, but all the approaches I've seen so far involve basically copying the entire PVC and the data inside it into a new structure - and doing so involves hacked-together scripts.

[–] exi@feddit.de 2 points 1 year ago

For personal use, I don't bother with databases on k8s. They are waaay easier to manage if you just let your host distribution run them as a regular service and upgrade them through that.

[–] RogerSik@lemmy.sikorski.cloud 2 points 1 year ago

Own VM as a regular daemon + acme.sh for TLS.

Makes k3s more fun if the DBs are outside and files (when possible) are on S3 (MinIO in Docker on a Synology).

For the rest, PVCs with Longhorn as the storage driver.
