this post was submitted on 28 Nov 2023
91 points (96.9% liked)

[–] kibiz0r@lemmy.world 27 points 11 months ago (24 children)

So judges are saying:

If you trained a model on a single copyrighted work, then that would be a copyright violation because it would inevitably produce output similar to that single work.

But if you train it on hundreds of thousands of copyrighted works, that’s no longer a copyright violation, because output won’t closely match any single work.

How is something a crime if you do it once, but not if you do it a million times?

It reminds me of the scheme from Office Space: https://youtu.be/yZjCQ3T5yXo

[–] S410@kbin.social 14 points 11 months ago* (last edited 11 months ago)

"AI" models are, essentially, solvers for mathematical system that we, humans, cannot describe and create solvers for ourselves.

For example, a calculator for pure numbers is a pretty simple device, all of whose logic can be designed by a human directly. A language, though? Or an image classifier? Those are not possible to create by hand.

With "AI" instead of designing all the logic manually, we create a system which can end up in a number of finite, yet still near infinite states, each of which defines behavior different from the other. By slowly tuning the model using existing data and checking its performance we (ideally) end up with a solver for some incredibly complex system.

If we were to try to build a regular calculator that way, and all we gave the model was "2+2=4", it would memorize the equation without understanding it. That's called "overfitting", and it's something people building AI try their best to prevent. It happens when the training data contains too many repeats of the same thing.

However, if there is little repetition in the training set, the model is forced to actually learn the patterns in the data, instead of the data itself.

Essentially: if you're training a model on a single copyrighted work, you're making a copy of that work via overfitting. If you're using terabytes of diverse data, overfitting is minimized, and the resulting model instead has an actual understanding of the system you're training it on.
