[-] ruud@lemmy.world 5 points 1 month ago

I did register writefreely.world, planning to host it one day, but I need some more self-hosting nerds to help run all these instances :-) The foundation is already running a few dozen Fedi instances :-D

[-] ruud@lemmy.world 4 points 2 months ago

Well thanks! ;-)

[-] ruud@lemmy.world 6 points 5 months ago

Ahh yes I meant as opposed to Twitter and Facebook. But I worded it badly 😁

[-] ruud@lemmy.world 7 points 9 months ago

Very cool!!

[-] ruud@lemmy.world 6 points 9 months ago

Ahh nice. I know what I’ll be doing tomorrow.

[-] ruud@lemmy.world 7 points 11 months ago

(I'll add links / descriptions later)

I host the following fediverse stuff:

  • Lemmy (you're looking at it)
  • Mastodon (3 instances)
  • Calckey (oh sorry, now Firefish)
  • Pixelfed
  • Misskey
  • Writefreely
  • Funkwhale
  • Akkoma (2 instances)
  • Peertube

And these are other things I host:

  • Kimai2
  • Matrix/Synapse
  • Silver Bullet
  • XWiki (3 instances)
  • Cryptpad (2 instances)
  • Gitea
  • Grafana
  • Hedgedoc
  • Minecraft
  • Nextcloud
  • Nginx Proxy Manager
  • Paperless-ngx
  • TheLounge
  • Vaultwarden
  • Zabbix
  • Zammad

[-] ruud@lemmy.world 10 points 11 months ago* (last edited 11 months ago)

I'm still running Synapse. Could I migrate this to Dendrite or others? Or would I have to just reinstall and lose all messages?

1
The .world blog: June overview (blog.mastodon.world)
submitted 11 months ago by ruud@lemmy.world to c/lemmyworld@lemmy.world

I blogged about what happened in June, including the financial overview.

1
submitted 11 months ago* (last edited 11 months ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

It's always the small things you overlook...

The docker-compose.yml I copied from somewhere when setting up lemmy.world was apparently missing the external network for the pictrs container. So pictrs worked as long as it got the images via Lemmy, but fetching images via URL didn't work...

Looks like it's working now. Looks a whole lot better with all the images :-)

Edit For existing posts: edit the post, then Save (no need to change anything). This also fetches the image.
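
For anyone wondering what the missing piece looks like: below is a minimal sketch of the relevant part of a Lemmy docker-compose.yml, assuming the common two-network layout. Image tags and network names are illustrative, not our exact config.

```yaml
# Sketch only, not the actual lemmy.world compose file.
# The point: pictrs needs to be on a network with internet access,
# otherwise it can only serve images handed to it by Lemmy and cannot
# fetch images/thumbnails from remote URLs itself.
services:
  lemmy:
    image: dessalines/lemmy:0.18.1
    networks:
      - lemmyinternal
      - lemmyexternalproxy

  pictrs:
    image: asonix/pictrs   # tag omitted in this sketch
    networks:
      - lemmyinternal
      - lemmyexternalproxy   # <- the line that was missing

networks:
  # internal-only network between the Lemmy services
  lemmyinternal:
    internal: true
  # network with outbound internet access
  lemmyexternalproxy: {}
```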

1
submitted 11 months ago* (last edited 11 months ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

(Duplicate post :-) see https://lemmy.world/post/1375042)

4
submitted 11 months ago by ruud@lemmy.world to c/voyagerapp@lemmy.world

cross-posted from: https://lemmy.world/post/1303201

We've installed Voyager and it's reachable at https://m.lemmy.world. You can browse Lemmy and log in there, even if your account isn't on lemmy.world.

1
submitted 11 months ago* (last edited 11 months ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

We've installed Voyager and it's reachable at https://m.lemmy.world. You can browse Lemmy and log in there, even if your account isn't on lemmy.world.

PS Thanks go out to @stux@geddit.social, who came up with the idea (see https://m.geddit.social).

1
submitted 11 months ago* (last edited 11 months ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

While I was asleep, the site was apparently hacked. Luckily, a (big) part of the lemmy.world team is in the US, and some early birds in the EU also helped mitigate this.

As I am told, this was the issue:

  • There was a vulnerability which was exploited
  • Several people had their JWT cookies leaked, including at least one admin
  • Attackers started changing site settings and posting fake announcements, etc.

Our mitigations:

  • We removed the vulnerability
  • Deleted all comments and private messages that contained the exploit
  • Rotated the JWT secret, which invalidated all existing cookies (a sketch of that step follows below)
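
As a rough illustration of that last step: in a standard Lemmy setup the signing secret lives in a `secret` table in Postgres, so rotating it can look roughly like this (a sketch assuming that schema, not necessarily the exact command we ran):

```sql
-- Sketch: replace Lemmy's JWT signing secret with a fresh random value.
-- Every token signed with the old secret becomes invalid, so all existing
-- sessions are logged out. Assumes the standard "secret" table/column.
UPDATE secret SET jwt_secret = gen_random_uuid();
-- Restart the Lemmy backend afterwards so it picks up the new secret.
```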

The vulnerability will be fixed by the Lemmy devs.

Details of the vulnerability are here

Many thanks to all who helped, and sorry for any inconvenience caused!

Update While we believe the admin accounts were what they were after, it could be that other users' accounts were compromised. Your cookie could have been 'stolen' and the attacker could have had access to your account, creating posts and comments under your name and accessing/changing your settings (which show your e-mail).

For this to happen, you would have had to be using lemmy.world at that time and load a page that had the exploit in it.

1
submitted 11 months ago* (last edited 11 months ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

We've updated Lemmy.world to Lemmy 0.18.1.

For the release notes, see https://lemmy.world/post/1139237

[-] ruud@lemmy.world 8 points 1 year ago

  • Powerful: Organizations & team permissions, CI integration, Code Search, LDAP, OAuth and much more. If you have advanced needs, Forgejo has you covered.

Selfhosters wanna host, but many people don't. (Ergo: lemmy.world, mastodon.world... GitHub, anyone?) So maybe people would like forgejo.world. And if not, I'll use it myself! :-)

64
submitted 1 year ago by ruud@lemmy.world to c/selfhosted@lemmy.world

Anyone running Forgejo? I think it's a fork of Gitea. They are also implementing federation.

1
submitted 1 year ago* (last edited 11 months ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

For those who find it interesting, enjoy!

1
submitted 1 year ago* (last edited 1 year ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

Another day, another update.

More troubleshooting was done today. Here's what we did:

  • Yesterday evening @phiresky@lemmy.world did some SQL troubleshooting with some of the lemmy.world admins. After that, phiresky submitted some PRs on GitHub.
  • @cetra3@lemmy.ml created a Docker image containing 3 PRs: Disable retry queue, Get follower Inbox Fix, Admin Index Fix
  • We started using this image, and saw a big drop in CPU usage and disk load.
  • We saw thousands of errors per minute in the nginx log for old clients trying to access the websockets (which were removed in 0.18), so we added a return 404 in nginx conf for /api/v3/ws.
  • We updated lemmy-ui from RC7 to RC10, which fixed a lot, including the issue with replying to DMs.
  • We found that the many 502 errors were caused by an issue in Lemmy/markdown-it.actix or whatever, causing nginx to temporarily mark an upstream as dead. As a workaround we can either 1) only use 1 container or 2) set ~~proxy_next_upstream timeout;~~ max_fails=5 in nginx (a rough sketch of both nginx tweaks follows after this list).
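
For reference, a rough sketch of what those two nginx tweaks can look like. Ports, upstream names and the plain-HTTP listen are illustrative, not our actual config:

```nginx
# Sketch only; illustrative ports and names, TLS config omitted.

upstream lemmy_backend {
    # max_fails=5: only mark a backend as dead after 5 failed attempts
    # within fail_timeout, instead of after a single hiccup.
    server 127.0.0.1:8536 max_fails=5 fail_timeout=10s;
    server 127.0.0.1:8537 max_fails=5 fail_timeout=10s;
}

server {
    listen 80;
    server_name lemmy.world;

    # Old 0.17 clients still poll the websocket endpoint that was removed
    # in 0.18; answer them here instead of bothering the backend.
    location /api/v3/ws {
        return 404;
    }

    location / {
        proxy_pass http://lemmy_backend;
    }
}
```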

Currently we're running with 1 Lemmy container, so the 502 errors are completely gone so far, and because of the fixes in the Lemmy code everything seems to be running smoothly. If needed, we could spin up a second Lemmy container using the ~~proxy_next_upstream timeout;~~ max_fails=5 workaround, but for now it seems to hold with 1.

Thanks to @phiresky@lemmy.world, @cetra3@lemmy.ml, @stanford@discuss.as200950.com, @db0@lemmy.dbzer0.com, @jelloeater85@lemmy.world and @TragicNotCute@lemmy.world for their help!

And not to forget, thanks to @nutomic@lemmy.ml and @dessalines@lemmy.ml for their continuing hard work on Lemmy!

And thank you all for your patience, we'll keep working on it!

Oh, and as a bonus, an image (thanks phiresky!) of the change in bandwidth after implementing the new Lemmy Docker image with the PRs.

Edit So as soon as the US folks woke up (hi!), we turned out to need the second Lemmy container for performance. That's now started, and I noticed the proxy_next_upstream timeout setting didn't work (or I didn't set it properly), so I used max_fails=5 for each upstream, which does actually work.

1
submitted 1 year ago* (last edited 1 year ago) by ruud@lemmy.world to c/lemmyworld@lemmy.world

Status update July 4th

Just wanted to let you know where we are with Lemmy.world.

Issues

As you might have noticed, things still don't work as desired. We see several issues:

Performance

  • Loading is mostly OK, but sometimes things take forever
  • We (and you) see many 502 errors, resulting in empty pages etc.
  • System load: The server is at roughly 60% CPU usage and around 25 GB RAM usage. (That is, if we restart Lemmy every 30 minutes, otherwise memory climbs to 100%; a sketch of that scheduled restart follows below.)
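
(As an aside: such a periodic restart can be as simple as a cron entry. This is only a sketch and assumes the backend runs as a Docker container named `lemmy`, which may not match our actual setup.)

```
# Hypothetical crontab entry: restart the "lemmy" container every 30 minutes.
*/30 * * * * docker restart lemmy >/dev/null 2>&1
```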

Bugs

  • Replying to a DM doesn't seem to work. When hitting reply, you get a box with the original message which you can edit and save (which does nothing)
  • 2FA seems to be a problem for many people. It doesn't always work as expected.

Troubleshooting

We have many people helping us with (site) moderation, sysadmin work, troubleshooting, advice, etc. There are currently 25 people in our Discord, including admins of other servers. In the sysadmin channel there are 8 of us. We do troubleshooting sessions with them, and sometimes with others. One of the Lemmy devs, @nutomic@lemmy.ml, is also helping with the current issues.

So, everything isn't yet running as smoothly as we hoped, but with all this help we'll surely get there! Also, thank you all for the donations; they make it possible to use the hardware and tools needed to keep Lemmy.world running!

1
Need support? (lemmy.world)
submitted 1 year ago by ruud@lemmy.world to c/lemmyworld@lemmy.world

If you need support, it's best not to DM me here or mention me in comments. I now have 300 notifications and probably won't have time to read them soon. Also, I don't do moderation, so I have to forward any moderation questions to the moderation team.

Where to get support

There’s the !support@lemmy.world community, and another option is to send mail to info@lemmy.world. Mail is converted to tickets which can be picked up by admins and moderators.

Thanks! Enjoy your day!

[-] ruud@lemmy.world 6 points 1 year ago

Your mail address is only stored in the database, to which no one but me has access, and it's shown on your Settings page when you're logged in. So unless there's a security flaw in Lemmy, your mail address should be safe.

[-] ruud@lemmy.world 3 points 1 year ago

Really awesome work. We need more Lemmy servers!

[-] ruud@lemmy.world 7 points 1 year ago

All on Hetzner.

[-] ruud@lemmy.world 5 points 1 year ago

I run lemmy.world on a VPS at Hetzner. They are cheap and good. Storage: I now (after 11 days) have 2GB of images and 2GB of database.
