This post was submitted on 28 May 2024.

This is just a follow-up to my prior post on latencies increasing with increasing uptime (see here).

There was a recent update to lemmy.ml (to 0.19.4-rc.2) ... and everything is so much snappier. AFAICT, there isn't any obvious reason for this in the update itself(?) ... so it'd be a good bet that there's a memory leak or something similar that slows down some actions over time.

Also ... interesting update ... I hadn't picked up that there'd be some web-UI additions, and they seem nice!

top 17 comments
[–] nutomic@lemmy.ml 26 points 4 months ago

There were optimizations related to database triggers; these are probably responsible for the speedup.

https://github.com/LemmyNet/lemmy/pull/4696
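
For context, a common shape for this kind of trigger optimization in PostgreSQL is replacing row-level triggers with statement-level triggers that read transition tables, so an aggregate count is updated once per statement instead of once per inserted row. A minimal sketch of the pattern (the table and column names here are illustrative, not necessarily Lemmy's actual schema):

```sql
-- Statement-level trigger: fires once per INSERT statement and batches
-- the count update via the "new_comment" transition table, instead of
-- a row-level trigger firing once per inserted comment.
CREATE OR REPLACE FUNCTION bump_comment_counts() RETURNS trigger AS $$
BEGIN
    UPDATE post_aggregates AS pa
    SET comment_count = pa.comment_count + batch.added
    FROM (
        SELECT post_id, count(*) AS added
        FROM new_comment
        GROUP BY post_id
    ) AS batch
    WHERE pa.post_id = batch.post_id;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER comment_insert_stmt
AFTER INSERT ON comment
REFERENCING NEW TABLE AS new_comment
FOR EACH STATEMENT
EXECUTE FUNCTION bump_comment_counts();
```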

[–] davel@lemmy.ml 23 points 4 months ago (2 children)

For the moment at least. Whatever problem we had before, it seemed to get worse over time, eventually requiring a restart. So we’ll have to wait and see.

[–] maegul@lemmy.ml 10 points 4 months ago (1 children)

Well, I've been on this instance through a few updates now (since Jan 2023), and my impression is that it's a pretty regular pattern (i.e., certain API calls, like those for replying to a post/comment or even posting, show increasing latencies as uptime goes up).

[–] dullbananas@lemmy.ca 1 points 4 months ago (1 children)

Sounds exactly like the problem I fixed and mostly caused.

https://github.com/LemmyNet/lemmy/pull/4696

[–] maegul@lemmy.ml 1 points 4 months ago

Nice! Also nice to see some SQL wizardry getting involved with Lemmy!

[–] kate@lemmy.uhhoh.com 6 points 4 months ago (1 children)

My server seems to get slower until it requires a restart every few days; hoping this provides a fix for me too 🤞

[–] poVoq@slrpnk.net 5 points 4 months ago (1 children)

Try switching to PostgreSQL 16.2 or later.
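
If you're not sure which version your instance is running, PostgreSQL can report it directly (a minimal check, run from any psql session):

```sql
-- Prints the running server version, e.g. "16.2"
SHOW server_version;
```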

[–] kate@lemmy.uhhoh.com 3 points 4 months ago (1 children)
[–] poVoq@slrpnk.net 4 points 4 months ago (1 children)

Nothing particular, but there was a strange bug in previous versions that, in combination with Lemmy, caused a small memory leak.

[–] kate@lemmy.uhhoh.com 1 points 4 months ago (1 children)

In my case it's Lemmy itself that needs to be restarted, not the database server. Is this the same bug you're referring to?

[–] poVoq@slrpnk.net 1 points 4 months ago (1 children)

Yes, restarting Lemmy somehow resets the memory use of the database as well.
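
A plausible mechanism here (an assumption, not something confirmed in the thread) is that Lemmy keeps a pool of long-lived database connections, and PostgreSQL's per-connection backend processes accumulate memory over their lifetime; restarting Lemmy closes those backends, which frees memory on the database side as well. A query like this shows how old an instance's connections are (the 'lemmy' database name is an assumption):

```sql
-- How long each of the instance's connections has been alive;
-- long-lived backends are the ones that accumulate memory.
SELECT pid,
       now() - backend_start AS connection_age,
       state
FROM pg_stat_activity
WHERE datname = 'lemmy'   -- assumed database name
ORDER BY connection_age DESC;
```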

[–] kate@lemmy.uhhoh.com 1 points 4 months ago

Hm, weird bug. Thanks for the heads up ❤️ I've been using the official Ansible setup, but it might be time to switch away from it.

[–] Blaze@reddthat.com 5 points 4 months ago (1 children)

Reddthat has 0.19.4 too; it does indeed feel snappier.

[–] maegul@lemmy.ml 2 points 4 months ago (1 children)

Interesting. It could be for the same reason I suggested for lemmy.ml, though. Do you notice latencies getting longer over time?

[–] Blaze@reddthat.com 3 points 4 months ago (1 children)

It's a smaller server, so I guess latency issues would appear at a slower pace than on lemmy.ml.

[–] maegul@lemmy.ml 2 points 4 months ago (1 children)

Makes sense ... but still ... you're noticing a difference. Maybe a "boiling frog" situation?

[–] Blaze@reddthat.com 2 points 4 months ago

I would say it still feels snappier today than before the update (a couple of weeks ago?), so definitely an improvement.