Google says its AI image-generator would sometimes 'overcompensate' for diversity

Google apologized Friday for its faulty rollout of a new artificial intelligence image-generator, acknowledging that in some cases the tool would "overcompensate" in seeking a diverse range of people even when such a range didn't make sense.

[–] merc@sh.itjust.works 37 points 5 months ago (2 children)

The SALAMI situation is so bad.

Problem: Our training data is super racist, so it always generates white people!

Solution: Modify the prompts so that when a user asks for "a picture of a man" 10% of the time it is changed to "a picture of a BLACK man".

New problem: When the user says "A picture of a Nazi" 10% of the time our fix interprets that as "A picture of a BLACK Nazi"
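A minimal sketch of what such a naive prompt rewrite could look like (the function, word list, and 10% rate are illustrative, taken from the description above, not Google's actual code):

```python
import random

# Hypothetical demographic qualifiers injected to "diversify" outputs.
QUALIFIERS = ["Black ", "Asian ", "Indigenous "]

def rewrite_prompt(prompt: str, rate: float = 0.10) -> str:
    """Naive diversity patch: some fraction of the time, splice a
    demographic qualifier in front of the prompt's subject."""
    if random.random() < rate:
        # Blindly rewrite, with no check of whether the qualifier
        # makes any sense for this particular prompt.
        qualifier = random.choice(QUALIFIERS)
        return prompt.replace("a picture of a ", "a picture of a " + qualifier, 1)
    return prompt

# Works as intended for a generic request...
print(rewrite_prompt("a picture of a man"))
# ...but the same blind rewrite fires on historical prompts too:
print(rewrite_prompt("a picture of a Nazi"))  # sometimes "a picture of a Black Nazi"
```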

[–] InfiniWheel@lemmy.one 12 points 5 months ago (1 children)

Also, when the prompt is modified to include "Native American", it seems to mostly return the most stereotypically dressed people possible: traditional garb and headdresses, while everyone else in the image wears setting-appropriate clothing.

[–] merc@sh.itjust.works 5 points 5 months ago

Yep, it's racism piled on top of racism. Aboriginal people are rarely included in the training data, and when they are, it's mostly in the clothing they wear for tourists, rarely what they wear day-to-day in the modern world. As a result, that's what you get in the output.

The real fix would be to fix the training data, but that's difficult. It's much easier to train the SALAMI on the racist things you find all over the web than to be selective and say "sure, this may be on the web, but it isn't representative of reality".
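To make the garbage-in, garbage-out point concrete, here's a toy sketch (the percentages are invented for illustration): a generator that just samples from its training distribution reproduces whatever skew that distribution has, which is why patching prompts treats the symptom rather than the data.

```python
import random
from collections import Counter

# Invented toy numbers: how depictions of "a person" might
# break down in an uncurated web scrape.
TRAINING_DATA = ["white"] * 90 + ["black"] * 6 + ["asian"] * 3 + ["aboriginal"] * 1

def generate(n: int) -> Counter:
    """A 'model' that simply mirrors its training distribution."""
    return Counter(random.choice(TRAINING_DATA) for _ in range(n))

print(generate(10_000))
# Output skews ~90% "white": the model reproduces the data it saw,
# so the durable fix is curating the data, not rewriting prompts.
```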

[–] TakiMinase@slrpnk.net 11 points 5 months ago (1 children)

So it's a glorified chat database.

[–] merc@sh.itjust.works 2 points 5 months ago

The input to an LLM is effectively a huge quantity of text, including chats. What the generative LLM does is nothing more than fancy auto-complete: finding the next word, then the next word, then the next word...
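A real LLM predicts tokens with a neural network over learned probabilities, but the generation loop has the same shape as this toy bigram auto-complete (the corpus and names are made up for illustration):

```python
import random
from collections import defaultdict

# Tiny made-up "training" corpus standing in for web-scale text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which: the crudest possible language model.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def autocomplete(word: str, length: int = 8) -> str:
    """Find the next word, then the next word, then the next word..."""
    out = [word]
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:
            break
        # An LLM samples from learned probabilities; this just picks
        # uniformly among observed continuations.
        out.append(random.choice(candidates))
    return " ".join(out)

print(autocomplete("the"))
```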