submitted 10 months ago* (last edited 10 months ago) by db0@lemmy.dbzer0.com to c/selfhosted@lemmy.world

I posted the other day that you can clean CSAM out of your object storage using my AI-based tool. Many people expressed the wish to use it on their local filesystem-based pict-rs storage as well, so I've just extended its functionality to allow exactly that.

The new lemmy_safety_local_storage.py will go through your pict-rs volume on the filesystem, scan each image for CSAM, and delete any flagged files. The requirements are:

  • A Linux account with read-write access to the volume files
  • Private-key authentication for that account
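Conceptually, the local-storage scan boils down to walking the volume, classifying each image, and deleting (or, in dry-run mode, only logging) the matches. Here is a minimal sketch of that loop; `looks_unsafe()` is a hypothetical stand-in for the tool's actual AI check, not its real API:

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp", ".gif"}

def looks_unsafe(path: Path) -> bool:
    # Stub: the real tool runs an image-interrogation model here.
    return False

def scan_volume(volume: str, dry_run: bool = True) -> list[str]:
    """Walk a pict-rs volume and return the paths the classifier flags.

    With dry_run=True nothing is deleted; flagged paths are only collected.
    """
    flagged = []
    for path in Path(volume).rglob("*"):
        if path.suffix.lower() not in IMAGE_EXTS:
            continue  # skip non-image files
        if looks_unsafe(path):
            flagged.append(str(path))
            if not dry_run:
                path.unlink()  # actually delete the flagged file
    return flagged
```

The real script also records results in lemmy_safety.db so reruns can skip already-scanned files; that bookkeeping is omitted here.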

As my main instance is using object storage, my testing is limited to my dev instance, and there it all looks OK to me. But do run it with --dry_run if you're worried. If you did a dry run first, you can delete lemmy_safety.db and rerun to perform the actual deletions (a method to reuse the --dry_run results is coming soon).

PS: if you were using the object storage cleanup, that script has been renamed to lemmy_safety_object_storage.py

[-] XaeroDegreaz@lemmy.world 20 points 10 months ago

I'm curious... How does one even test such a thing before distributing it, without having offending files to test against?

Like during the development process of this project, how on earth can you test it properly? 😂

[-] Rescuer6394@feddit.nl 18 points 10 months ago

it uses a model that describes a photo, then searches the generated description for certain terms and ranks the image into safety levels.

to test it, you use a more general filter, for all nsfw content for example, and check whether the matches are correct.
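That caption-then-keyword approach can be sketched roughly like this. `rank_safety()` and the term lists are illustrative assumptions, not the tool's actual names or word lists, and a `describe_image` model is assumed to have produced the caption:

```python
# Illustrative term lists only -- the real tool's lists differ.
UNSAFE_TERMS = {"explicit", "nude"}
MINOR_TERMS = {"child", "young", "teen"}

def rank_safety(caption: str) -> str:
    """Rank a model-generated image description into coarse safety levels."""
    words = set(caption.lower().split())
    nsfw = bool(words & UNSAFE_TERMS)
    minor = bool(words & MINOR_TERMS)
    if nsfw and minor:
        return "unsafe"  # candidate for deletion / human review
    if nsfw:
        return "nsfw"
    return "safe"
```

Testing with a broad filter, as described above, then just means lowering the bar (e.g. flagging anything "nsfw") and eyeballing whether the matches make sense.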

[-] poVoq@slrpnk.net 9 points 10 months ago

Hmm, thinking out loud... Wouldn't that also make it easy to remove scat porn and Hitler + Nazi flag images?

There has been a lot of spam like that on Lemmy and at least the latter is somewhat illegal to host in Germany as well.

[-] Rescuer6394@feddit.nl 7 points 10 months ago

yes... maybe.

as the dev said, it flags a lot of false positives, so a human should look at them anyway.

maybe when this is a bit more mature, we can use it to preprocess posts: if a post gets flagged for something, a mod / admin needs to approve the post manually.

maybe for CSAM, flagged images get sent to an external service specialized in that stuff, so the mod / admin doesn't have to look at them.

[-] Amaltheamannen@lemmy.ml 3 points 10 months ago

While it's not the case for this project, I'm sure there's some poor researcher out there who trained a model on actual confiscated CSAM. Or, more likely, overworked, traumatized third-world content moderators employed by the likes of Meta.

this post was submitted on 31 Aug 2023
365 points (99.5% liked)
