this post was submitted on 16 Jul 2023
94 points (100.0% liked)

"Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease," they added. "We term this condition Model Autophagy Disorder (MAD)."

Interestingly, this could become a more pressing problem as generative AI output makes up a growing share of the content online.
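
For intuition, here is a minimal toy of the autophagous loop the authors describe. This is my illustration, not the paper's code: a Gaussian fit stands in for a real generative model, and each generation is trained only on samples from the previous one, so the estimated spread of the data erodes over time.

```python
import random
import statistics

def fit(data):
    """'Train' the toy generative model: estimate a Gaussian's parameters."""
    return statistics.mean(data), statistics.stdev(data)

def sample(mu, sigma, n):
    """'Generate' n points from the toy model."""
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
real = [random.gauss(0.0, 1.0) for _ in range(200)]

# Fully autophagous loop: each generation is fitted only to the previous
# generation's samples. The fitted std dev performs a noisy, downward-biased
# walk, so diversity erodes over enough generations -- the loss of "recall"
# the paper calls MAD.
data = real
for gen in range(30):
    mu, sigma = fit(data)
    data = sample(mu, sigma, len(real))
    print(f"gen {gen:2d}: mean={mu:+.3f} std={sigma:.3f}")
```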

[–] h3ndrik@feddit.de 0 points 1 year ago* (last edited 1 year ago) (1 children)

Wow. How is this going to affect all the projects that fine-tune Meta's Llama model with synthetic training data?

[–] lloram239@feddit.de 0 points 1 year ago (1 children)

Not much at all, I would think. The Llama models are trained on output from the superior GPT-4, not on their own output. In general I think it's a bit of an artificial problem: nobody really expects to train an AI on its own output and get good results. What actually happens is that AI is used to curate real-world data, and that curated data is used as the training input. This gives much better results than feeding raw data directly into the model, as can be seen with early LLMs that go completely off track and start regurgitating comment sections and HTML code that have nothing to do with your prompt, but just happen to be part of raw web pages.
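
To make that curation idea concrete, here is a minimal sketch. The `quality_score` heuristic is a hypothetical stand-in for whatever trained classifier or LLM judge actually does the filtering in a real pipeline.

```python
# Sketch of model-assisted curation: a model scores raw scraped text, and
# only high-scoring documents go into the training set.

def quality_score(doc: str) -> float:
    """Hypothetical quality model; in practice this would be a trained
    classifier or an LLM prompted to rate the document."""
    junk_markers = ("<html", "cookie policy", "log in to comment")
    penalty = sum(marker in doc.lower() for marker in junk_markers)
    return max(0.0, 1.0 - 0.5 * penalty)

def curate(raw_docs, threshold=0.8):
    """Keep only documents the quality model rates above the threshold,
    instead of feeding raw pages (boilerplate, comment sections, markup)
    straight into training."""
    return [doc for doc in raw_docs if quality_score(doc) >= threshold]

raw = [
    "A clear explanation of gradient descent...",
    "<html><body>Accept our cookie policy to continue</body></html>",
]
print(curate(raw))  # only the first document survives
```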

[–] h3ndrik@feddit.de 1 points 1 year ago

Thank you for explaining. Yes, now that I have skimmed through the paper, I'm somewhat disappointed in their work. It's no surprise to me that quality degrades if you design a feedback loop around low-quality data. Does this even tell us anything about the distinction between human and synthetic data? Isn't it obvious that a model will deteriorate if you feed it progressively lower-quality input, regardless of where that input came from? I'm fairly sure that's the mechanism at work here.

Better questions to ask would be: Is there some point where synthetic output gets good enough to train something with? How far away is that point? Or can we rule it out because of some properties we can't get around?

I'm not sure learning from one's own output is even possible like this. As a human, I certainly can't teach myself from nothing; I need input like books or curated assignments and examples prepared by other people. There are intrinsic barriers to teaching oneself. I can certainly practice things, but that's a different mechanism and difficult to compare to the AI case.

I'm glad I can continue to play with the language models, have them tuned to follow instructions (with the help of GPT-4 data), etc.
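
On h3ndrik's question of whether synthetic output could ever get good enough on its own: the quoted conclusion suggests the known-safe regime is mixing fresh real data into every generation. A variant of the toy loop from the top of the thread (again my illustration, with a Gaussian standing in for the model) shows how that anchors the loop.

```python
import random
import statistics

random.seed(0)

def fit(data):
    return statistics.mean(data), statistics.stdev(data)

def next_generation(data, real_pool, fresh_fraction, n=200):
    """One training generation: mostly synthetic samples from the current
    model, plus a slice of fresh real data, per the paper's conclusion."""
    mu, sigma = fit(data)
    n_fresh = int(fresh_fraction * n)
    synthetic = [random.gauss(mu, sigma) for _ in range(n - n_fresh)]
    return synthetic + random.sample(real_pool, n_fresh)

real_pool = [random.gauss(0.0, 1.0) for _ in range(10_000)]
data = random.sample(real_pool, 200)
for _ in range(30):
    data = next_generation(data, real_pool, fresh_fraction=0.3)

mu, sigma = fit(data)
# The fresh real data pulls each generation back toward the true
# distribution, instead of letting the fit drift toward collapse.
print(f"after 30 generations: mean={mu:+.3f} std={sigma:.3f}")
```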