[–] IHeartBadCode@kbin.social 17 points 3 months ago* (last edited 3 months ago) (1 children)

Quick things to note.

One, yes, some models were trained on CSAM. In AI you'll have checkpoints in a model: as a model learns new things, you get a new checkpoint. SD1.5 was the base model used in this. SD1.5 itself was not trained on any CSAM, but people have given SD1.5 additional training to create new checkpoints that have CSAM baked in. Likely, this is what this person was using.

Two, yes, you can get something out of a model that was never in the model to begin with. It's complicated, but one way to think about it: a program draws raw pixels to the screen, and your GPU applies some math to smooth that out. That math adds information that the program never explicitly pushed to your screen.
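A toy illustration of that (made-up numbers, nothing to do with any real renderer or model): take two pixels a program actually drew and let a smoothing filter sample between them.

```python
import numpy as np

# Two neighbouring pixels the program actually drew: pure black and pure white.
left = np.array([0, 0, 0], dtype=np.float32)
right = np.array([255, 255, 255], dtype=np.float32)

# A smoothing/upscaling filter samples halfway between them.
blended = (left + right) / 2
print(blended)  # [127.5 127.5 127.5] -- a grey the program never drew
```

The grey came purely out of the math, which is the same sense in which a model can output something that was never fed into it.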

Models have tensors, which, long story short, are a way to express the average way pixels should land to arrive at some object. This is why you see six-fingered people in AI art. There wasn't any six-fingered person fed into the model; what you are seeing is the averaging of weights pushing pixels between two different relationships for the word "hand". That averaging adds new information in the form of an additional finger.

I won't deep dive into the maths of it, but there are ways to coax new ways of averaging weights to arrive at new outcomes. Training is what tells the model that the relationship between A and C is B'. If we wanted D' as the outcome instead, we could retrain the model on C and E, OR we could use something called a LoRA (low-rank adaptation) to nudge B' toward D'. That doesn't require retraining the model; we are just providing guidance on how to average things the model has already seen. Retraining on C and E to get D' is the route old models and checkpoints had to take, and that requires a lot of images. Taking the outcome B' and putting a thumb on the scale to push it toward D' is the easier route: it just requires a generalized way of skewing the weights.
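If you want to see the shape of that "thumb on the scale", here's a minimal sketch of the LoRA idea (toy sizes and plain tensors, not the actual Stable Diffusion code): the original weight matrix stays frozen and a small low-rank correction gets added on top.

```python
import torch

d_out, d_in, rank, alpha = 64, 64, 4, 1.0

# Frozen weight matrix from the pretrained model (stand-in for one layer of the base checkpoint).
W = torch.randn(d_out, d_in)

# The LoRA: two small matrices whose product is a low-rank update to W.
A = torch.randn(rank, d_in) * 0.01
B = torch.zeros(d_out, rank)  # starts at zero, so at first the model behaves unchanged

def adapted_forward(x):
    # Original behaviour plus the learned low-rank "thumb on the scale".
    return x @ W.T + alpha * (x @ A.T @ B.T)

x = torch.randn(1, d_in)
print(adapted_forward(x).shape)  # torch.Size([1, 64])
```

Only A and B get trained, which is why steering a model this way needs far less data than retraining a whole checkpoint.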

I know this is massively summarizing things, and yeah, I get it: it's a bit hard to conceptualize how we can go from something like MSAA to generating CSAM. And yeah, I'm skipping over a lot of steps here. But at the end of the day, those tensors are just numbers that tell the program how to push pixels around given a word. You can maths those numbers into results they weren't originally arranged to produce. AI models are not databases; they aren't recalling pixel-for-pixel images they've seen before, they're averaging out averages of averages.

I think this case will be a slam dunk, because it's highly likely this person's model was an SD1.5 checkpoint that was trained on very bad things. But now that you can change how the averaging itself works, rather than the source tensors in the model, you can teach a model new ways to average weights and obtain results it didn't originally have, without any kind of source material to train on. It's like the difference between spatial antialiasing and MSAA.

[–] DarkCloud@lemmy.world 6 points 3 months ago* (last edited 3 months ago) (1 children)

Shouldn't the companies who have the CSAM face consequences for possessing it? Seems like a double standard.

The government should be shutting down the source material.

[–] ricecake@sh.itjust.works 4 points 3 months ago (1 children)

In the eyes of the law, intent does matter, as well as how it's responded to.
For CSAM, you have to knowingly possess it or have sought to possess it.

The AI companies use a project that indexes everything on the Internet, like Google does, but with its output publicly available for free.

https://commoncrawl.org/

They use this data via another project, https://laion.ai/ , which uses the data to find images with descriptions attached, do some tricks to validate that the descriptions make sense, and then publish a list of "location of the image, description of the image" pairs.
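To make that concrete, a record in one of those lists is basically just a URL plus a caption. Rough sketch below; the field names and the validation check are illustrative stand-ins, not LAION's actual schema or code.

```python
# Made-up records with made-up field names, just to show the shape of the data.
records = [
    {"url": "https://example.com/pony.jpg", "caption": "a brown pony in a field"},
    {"url": "https://example.com/glider.jpg", "caption": "hang glider over sea cliffs"},
]

def plausible(record):
    # Stand-in for the real validation step (LAION scores image/caption similarity
    # with CLIP); here we just require a non-trivial caption.
    return len(record["caption"].split()) >= 3

pairs = [(r["url"], r["caption"]) for r in records if plausible(r)]
print(pairs)
```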

The AI companies use that list to grab the images and train an AI on them in conjunction with the descriptions.

So, people at Stanford were doing research on the LAION dataset when they found the instances of CSAM. The LAION project pulled its datasets from availability while things were checked and new safeguards were put in place.
The AI companies also pulled their models (if public) while the images were removed from the dataset and new safeguards were implemented.
Most of the CSAM images in the dataset were already gone by the time the AI companies would have attempted to access them, but some were not.

A very obvious lack of intent to acquire the material (in fact, a lack of awareness the material was possessed at all), transparency in the response, taking steps to prevent further distribution, and taking action to prevent it from happening again both provide a defense against accusations and make anyone interested less likely to want to make those accusations.

On the other hand, the people who generated the images were knowingly doing so, which is a nono.

[–] DarkCloud@lemmy.world 1 points 3 months ago (1 children)

They wouldn't be able to generate it had there been none in the training data, so I assume the labelling and verification systems you talk about aren't very good.

[–] ricecake@sh.itjust.works 1 points 3 months ago (1 children)

That's not accurate. The systems are designed to generate previously unseen concepts or images by combining known concepts.

It's why it can give you an image of a pony using a hangglider, despite never having seen that. It knows what ponies look like, and it knows what hanggliding looks like, so it can find a way to put both into the image. Where it doesn't know, it will make stuff up from what it does know, often requiring a very detailed explanation from the user to describe how a horse would fit in a hangglider, or that it shouldn't have a little person sticking out of its back.
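For a sense of what that looks like in practice, here's a rough sketch using the open source diffusers library (the model id and prompts are just illustrative placeholders): the prompt combines the known concepts, and the negative prompt steers the model away from the failure modes you don't want, like the little person on its back.

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative model id; any SD1.5-class checkpoint behaves similarly.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a pony flying a hang glider over green hills, detailed, daylight",
    negative_prompt="person riding, extra limbs, extra fingers, deformed",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("pony_hang_glider.png")
```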

[–] DarkCloud@lemmy.world 0 points 3 months ago (1 children)

I think it would just create naked adults with children's faces unless it actually had CSAM... which it probably does have.

[–] ricecake@sh.itjust.works 1 points 3 months ago* (last edited 3 months ago) (1 children)

Again, that's not how it works.

Could you hypothetically describe CSAM without describing an adult with a child's head, or specifying that it's a naked child?
That's what a person trying to generate CSAM would need to do, because the model doesn't have those concepts.
If you just asked it directly, like the "horse flying a hangglider" example I gave before, you would get what you describe, because it's using the only "naked" it knows.
You would need to specifically ask it to de-emphasize adult characteristics and emphasize child characteristics.

That doesn't mean that it was trained on that content.

For context from the article:

The DOJ alleged that evidence from his laptop showed that Anderegg "used extremely specific and explicit prompts to create these images," including "specific 'negative' prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults."