this post was submitted on 31 Aug 2023
595 points (97.9% liked)

Technology

I'm rather curious to see how the EU's privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn't have a paywall)

[–] Pichu0102@kbin.social 6 points 1 year ago (3 children)

I feel like one way to do this would be to break up models and their training data into mini-models and mini-batches instead of one big model, and also to restrict training data to material used with permission or drawn from public domain sources. Then, whenever a company is required to take down information because its permission to use that data was revoked or expired, it could identify the relevant training data in the mini-batches, remove it, and retrain only the corresponding mini-model, which would be much faster and cheaper than retraining the entire massive model.

A major problem with this, though, would be figuring out how to efficiently query multiple mini-models and combine their outputs into a single response. I'm not sure how you could do that well.
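For what it's worth, this idea resembles the "sharded" training schemes discussed in the machine-unlearning literature (e.g. SISA): each shard of training data gets its own small model, predictions are aggregated across shards, and a deletion request only forces a retrain of the one affected shard. Here is a minimal, hypothetical sketch of that pattern; the per-shard "trainer" is a toy one-parameter least-squares fit standing in for any real training procedure, and all names are illustrative:

```python
def fit_shard(points):
    # Toy "mini-model": least-squares slope w for y = w * x on one shard.
    # In practice this would be a full training run on the shard's data.
    num = sum(x * y for x, y in points)
    den = sum(x * x for x, _ in points)
    return num / den

def train_all(shards):
    # One independent mini-model per shard of training data.
    return [fit_shard(shard) for shard in shards]

def predict(models, x):
    # The hard part the comment mentions: combining mini-model outputs.
    # Here it's simple averaging; voting or learned aggregation also work.
    return sum(w * x for w in models) / len(models)

def forget(shards, models, shard_id, point):
    # A takedown request: drop the point, retrain ONLY its shard.
    shards[shard_id] = [p for p in shards[shard_id] if p != point]
    models[shard_id] = fit_shard(shards[shard_id])

shards = [[(1, 2), (2, 4)], [(3, 6), (4, 8)]]
models = train_all(shards)
forget(shards, models, 0, (1, 2))  # only shard 0 is retrained
```

The cost of a deletion is one shard's retraining rather than the whole model's, which is exactly the trade-off the comment is proposing; the open question of how well averaged shard models approximate one big model is the weak point.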

[–] Strawberry@lemmy.blahaj.zone 3 points 1 year ago

You could certainly break up the training data, but splitting the model itself into mini-models based on which training data each one saw wouldn't work for neural networks trained with gradient descent. The state of the model depends on the totality of the training data it has been trained on, and on the order it was seen in. It isn't possible to remove the effect of a specific training data point without retraining on all of the data that followed it, and even that assumes you stored a snapshot of the model before every single training example, which I doubt anyone does.
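The order-dependence is easy to see even in a toy case. In this illustrative sketch, the same one-parameter model is trained with per-sample gradient descent on the same three points in two different orders, and ends up at different weights, so no single point's contribution can be cleanly subtracted after the fact:

```python
def sgd(points, lr=0.1):
    # One-parameter model y = w * x, trained by per-sample gradient
    # descent from w = 0 on the squared error (w*x - y)**2.
    w = 0.0
    for x, y in points:
        grad = 2 * x * (w * x - y)  # derivative of (w*x - y)**2 w.r.t. w
        w -= lr * grad
    return w

data = [(1, 2), (2, 3), (3, 5)]
w_forward = sgd(data)
w_reversed = sgd(list(reversed(data)))
# Each gradient depends on the weight produced by all earlier updates,
# so the final weight depends on every point AND the order seen.
```

Because every update is conditioned on the weights left behind by all previous updates, "removing" one point means replaying training from the snapshot just before that point, which is the retraining cost the comment describes.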

However, that's no excuse: it is of course possible to retrain a network from scratch on a clean dataset, and that is what these companies should do.

[–] HerbalGamer@lemm.ee 2 points 1 year ago

Am I correct in assuming that sounds a bit like libraries used in programming?

[–] eltimablo@kbin.social 1 points 1 year ago

I believe this is how the Tesla FSD beta AI works.