
I'm rather curious to see how the EU's privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn't have a paywall)

GoosLife@lemmy.world 30 points 1 year ago (last edited 1 year ago)

If there's something illegal in your dish, you throw it out. It's not a question. I don't care that you spent a lot of time and money on it. "I spent a lot of time preparing the circumstances leading to this crime" is not an excuse, and neither is "if I have to face consequences for committing this crime, I might lose money".

Robaque@feddit.it 5 points 1 year ago

Perhaps long pig stew could serve as an apt comparison, lol

Marsupial@quokk.au -1 points 1 year ago

Fuck no.

It’s illegal to be gay in many places; should we throw out any AI that isn’t homophobic as shit?

GoosLife@lemmy.world 1 point 1 year ago

No, because it's not the same thing at all. You're talking about the output; we're talking about the input.

The training data was illegally obtained. That's all that matters here. They can train it on fart jokes or Trump propaganda; it doesn't really matter, as long as the Trump propaganda in question was legally obtained by whoever trained the model.

Whether we should then allow chatbots to generate harmful content, and how we regulate that by limiting acceptable training data, is a much more complex issue that can be discussed separately. To address your specific example, it would make the most sense for a chatbot to be guided towards a viewpoint that aligns with its intended userbase. This just means that certain chatbots might be more or less willing to discuss certain topics. In the same way that an AI for children probably shouldn't be able to discuss certain topics, a chatbot made for use in a highly religious area, where homosexuality is very taboo, would most likely refuse to discuss gay marriage at all, rather than being made intentionally homophobic.

Marsupial@quokk.au 1 point 1 year ago

The output only exists because of the input.

If you feed your model only “legal” content, that would in many places ensure it had no LGBT+ positive content.

The legality of training data (given the dubious nature of justice systems) is not the angle to go for.

GoosLife@lemmy.world 1 point 1 year ago

You seem to think the majority of LGBT+ positive material is somehow illegal to obtain. That is not the case. You can feed it as much LGBT+ positive material as you like, as long as you have legally obtained it. What you can't do is train it on LGBT+ positive material that you've stolen from its original authors. Does that make more sense?

Marsupial@quokk.au 0 points 1 year ago

You do know that being LGBT+ is illegal in many places, right? It can even carry the death penalty.

Legality is not what matters here, and we shouldn’t care whether something is considered legal or not, because what’s legal isn’t necessarily what’s right or ethical.

GoosLife@lemmy.world 1 point 1 year ago

Yes, I am aware of that. However, I'm not sure what that has to do with the fact that it is also illegal to steal data and then keep using that data for profit after having been found out. The two are not connected in any logical way, which makes it hard for me to address your concerns in a way that makes sense.

The way I see it, you're either completely missing what we're talking about, or you have some misunderstanding of what AI language models actually are and what they can do.

For the record, I'm in no way disagreeing with your views or your statement that legal and ethical don't always overlap. It is clear to me that you are open-minded and well-intentioned, which I appreciate, and I hope you don't take this the wrong way.