FancyGUI

joined 1 year ago
[–] FancyGUI@lemmy.fancywhale.ca 1 points 11 months ago

Thanks for trying though! Appreciate it. I'm happy to use nextcloud just for regular files for now anyways.

[–] FancyGUI@lemmy.fancywhale.ca 1 points 11 months ago (2 children)

Nope, all I have is the default phone region not set.

[–] FancyGUI@lemmy.fancywhale.ca 1 points 11 months ago (4 children)

Great to know! Something is really odd on my instance. The DB is definitely working well on its own; all the queries return quite fast from what I can see on the monitoring side, so it's probably something on the server side. I'm using Redis for caching, which helps a bit; the instance grinds to a halt without it.
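For anyone setting up the same thing: Nextcloud's caching is configured in config.php. This is a minimal sketch, not my exact config — the Redis host/port and the APCu local cache are assumptions, so check the Nextcloud admin docs against your own setup:

```php
<?php
// Excerpt to merge into config/config.php's $CONFIG array.
// 'host' is a placeholder: use your Redis container/service name or IP.
$CONFIG = array(
  'memcache.local'   => '\OC\Memcache\APCu',   // fast per-process cache
  'memcache.locking' => '\OC\Memcache\Redis',  // transactional file locking via Redis
  'redis' => array(
    'host' => 'redis',
    'port' => 6379,
  ),
);
```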

[–] FancyGUI@lemmy.fancywhale.ca 3 points 11 months ago* (last edited 11 months ago) (10 children)

Well, it just stopped working after the 2k-item mark for me. I had to increase the PHP memory limit to almost 6GB to make it work, and it's still sluggish AF. It'll just be my file bucket from now on. EDIT: I'd like to add that Immich is now at 32k assets for me, working flawlessly and using only about 300MB of memory when active.

[–] FancyGUI@lemmy.fancywhale.ca 2 points 11 months ago

I’m exactly on the same page! There was a book in Portuguese that I couldn’t find to purchase as an ebook, but found on Anna’s archive! And the feel of the e-ink display scratches that itch for a paper-like reading experience.

[–] FancyGUI@lemmy.fancywhale.ca 11 points 11 months ago

Ahhh! That does make more sense. I’d suggest dropping in on their GitHub and raising this. I’m not sure it’s on their roadmap, since sharing metadata that is personal with others would be tricky. But the “partner sharing” feature could potentially be what you are looking for; I’m not sure if it’s available yet. But yeah, that could be a challenge.

[–] FancyGUI@lemmy.fancywhale.ca 15 points 11 months ago (2 children)

Hey! I am not sure what you are saying about the shared library and user passwords. I’ve been using Immich with my wife just fine for a while, and it’s been working well for both of us to have our own libraries and share the ones we want. Care to go into more detail on what you mean?

[–] FancyGUI@lemmy.fancywhale.ca 1 points 11 months ago

!selfhosted@lemmy.world is a favorite

[–] FancyGUI@lemmy.fancywhale.ca 1 points 11 months ago

I completely understand what you are saying! I’ve been using OwnTracks for this. I’ll post about it in the selfhosted community later for other people who might be searching for the same thing. All you need is to set up your own server, and it’ll upload your location there to keep it safe and private. Search and views are not as great as Google’s IMO, but I’m tinkering with the data I have now to build something more friendly.
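If it helps anyone, the server side can be as small as a single OwnTracks Recorder container. This is a hedged sketch — ports, paths, and the HTTP-only mode variable are from my reading of the project’s README, so verify against it before use:

```shell
# Sketch: run OwnTracks Recorder in HTTP-only mode (no MQTT broker needed).
# OTR_PORT=0 disables the MQTT connection; the phone app then posts
# locations over HTTP to port 8083. The volume path is a placeholder.
docker run -d --name owntracks-recorder \
  -e OTR_PORT=0 \
  -p 8083:8083 \
  -v "$PWD/otr-store:/store" \
  owntracks/recorder
```

Put it behind your reverse proxy with auth, since the recorder itself does not authenticate HTTP posts out of the box.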

[–] FancyGUI@lemmy.fancywhale.ca 2 points 11 months ago

HighAvailability

[–] FancyGUI@lemmy.fancywhale.ca 1 points 11 months ago

Buddy! Self-hosted Renovate FTW, with custom regex managers for k8s YAML files using HelmRelease. MAGNIFICENT
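For anyone curious, a custom regex manager along these lines can teach Renovate to bump chart versions inside HelmRelease YAML. The file patterns and match strings below are an illustrative sketch, not my exact config:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "customManagers": [
    {
      "customType": "regex",
      "fileMatch": ["\\.ya?ml$"],
      "matchStrings": [
        "chart: (?<depName>\\S+)\\n\\s*version: (?<currentValue>\\S+)"
      ],
      "datasourceTemplate": "helm"
    }
  ]
}
```

With the helm datasource you’d also need a `registryUrlTemplate` (or equivalent) pointing at the chart repository so Renovate knows where to look up versions.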

[–] FancyGUI@lemmy.fancywhale.ca 4 points 11 months ago (1 children)

It’s more about people being able to interact with your public projects without having to sign up to your instance. I’ve been wanting this for a few tools that I create and maintain; it’s awkward for people to open issues if they can’t sign up to my GitLab instance.


As the title says: I build containers for my platforms, my clients, and my self-hosted setup at home, and you would not believe how much smaller you can get your images. Here's an example from slimming one of my images:

cmd=build info=results status='MINIFIED' by='18.97X' size.original='1.0 GB' size.optimized='55 MB' 

That's a Python app for which I didn't have to write a multi-stage Docker build, thanks to the slim command. And it's the working version of that app that I'm using today.
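For anyone who wants to try it, the basic flow is a single command against an existing image. The image name here is a placeholder, and flags vary between versions, so check `slim build --help` for yours:

```shell
# Sketch: "my-python-app:latest" is a placeholder image name.
# slim runs the container, observes what it actually uses, and emits
# a minified image (tagged with a .slim suffix by default).
# Enable --http-probe instead if your app serves HTTP, so slim can
# exercise the endpoints while it watches the container.
slim build --target my-python-app:latest --http-probe=false
```

Run your test suite against the `.slim` image before shipping it; slim only keeps what it saw the container touch.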

Same for one of my Flutter apps that I thought was already as small as it could be:

cmd=build info=results status='MINIFIED' by='1.98X' size.original='66 MB' size.optimized='33 MB'

TLDR: slim your container images!! https://github.com/slimtoolkit/slim
