It's a form of coattail riding. It's no different from tracing.
As an artist who has had her art stolen before for use in AI output, being against any and all art theft is the default and perfectly reasonable standpoint for an artist. On some art websites, AI-generated images fall under the rule against art theft. This is because AI models scrape artists' work without their consent, and the output of a prompt is reliant on the amalgamation of the aforementioned scraped artworks. I've personally seen some AI images in which the mangled remains of artists' signatures are still visible.
The best analogy I can offer to explain why this is theft is this: typing a prompt into an AI image generator is like commissioning an artist to draw something for you, except the artist turns out to be someone who traces other people's art and picks stolen artwork to trace from to match the prompt, and you then claim that it was you who created the image.
AI feels like a Lovecraftian horror to me. It's trying to look authentic, but it's wrong on a fundamental level. Nothing's the right shape, nothing's the right texture, nothing's consistent, nothing belongs together... But somehow, nobody else has noticed what should be blatantly obvious! And when you try to point it out, you get a hivemind responding that it's good actually, and you're just a luddite.
But let's assume AI stops being awful in a technical sense. It's still awful in a moral sense.
Artists are poor. That's a well known sentiment you see a lot and, given how many times I see commission postings, it's pretty accurate. That artist needs to work to live, and that work is creating art.
AI is deliberately depriving these artists of work in order to give the AI's owner a quick, low-quality substitute. In some cases, it will copy an artist's style, so you're deliberately targeting a specific artist because they're good at their job. And it's using the artist's work in order to replace them.
Isn't this point also valid for any kind of automation? Machines removed work from manual workers, and software engineers have been removing work from manual and office workers since the profession started, way before LLMs. The point that artists actually love their work could also be made for other people whose work has been automated before.
I think the real issue is that automation should benefit everyone equally, and not only its owners.
The key in my mind is that this technology cannot work independently. A bucket excavator can replace the work of many people digging by hand. But the operation of the machine truly replaces the laborers. Some hand labor is still required in any excavation, but the machine itself is capable of operating just fine without the workers it is replacing.
But LLM image generators? They are only possible from the work of artists. They are directly trained off of artists' work. Even worse, the continued existence of LLMs requires the never-ending contribution of humans. When AI image generators are trained off the results from AI image generators, things rapidly degenerate into literal static. It's making a copy of a copy. If all art becomes made by LLMs, then the only new data to train future models will be the output of other LLMs, and the whole thing collapses like a snake devouring its own tail.
This is also the crucial difference between how image generators and actual artists work. Some will say that LLMs simply use the same learning process that humans do: the image generator trains off pre-existing art, and so does a human artist, proponents of AI will say.
But we can see the flaw in this in that real artists do not suffer generational decay. Human artists have trained off the work of other artists, in a chain unbroken since before the rise of civilization. Yes, artists can learn technique and gain inspiration from the work of other artists, but humans are capable of true independent creation. Image generators OTOH are just blindly copying and summarizing the work of others. They have no actual sense of what art is, what makes it good, or what gives it soul. They don't even have a sense of what makes an image comprehensible. They're just playing a big blind correlation game of inputs and outputs. And so, if you train one AI off another AI's output, it decays like making a copy of a copy.
This is a crucial difference between AI "art" and human art. Human art is an original creation. As such, new art can be endlessly created. AI "art" can only blindly copy. So unless the AI can get continual references from actual real human art, it quickly diverges into uselessness.
The ditch digger replaced by an excavator has no real means to legally object. They were paid for their previous jobs, and are simply no longer needed. But real human artists and AI? This software is going to be a never-ending vampire on their creative output. It has only been created by stealing their past work, and it will only remain viable if it can continue to steal their work indefinitely into the future.
Wow, thank you, I think this is the first argument that clicked for me.
But it does raise two questions for me:
- If the technology ever gets to a point where it does not degenerate into static by creating its own feedback loop, would it then be more like an excavator?
- What if this is the start of a future (understandably a bad start) where you have artists who get paid to train AI models? Kind of like an engineer who designs a factory
About your first point: think of it like inbreeding, you need fresh genes in the pool or mutations occur.
A generative model will generate some relevant results and some irrelevant ones; it's the job of humans to curate that.
However, the more content the LLM generates, the more of it ends up on the web and thus becomes part of its training data.
Imagine that 95% of the generated results are accurate. Of the inaccurate ones, a small share never gets fact-checked and gets released onto the internet anyway, where other humans will complain, but it still becomes input for the next model regardless. So the next model starts from slightly degraded input (say 99% accurate), and only 95% of what it produces on top of that will be accurate, and so on for every generation.
It's literally a sequence that reaches very inaccurate values very fast:
x_1 = 1
x_n = x_(n-1) * 0.95
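To make the compounding concrete, here's a minimal Python sketch of that sequence (the 0.95 per-generation factor is just the number used above, not a measured property of any real model):

# Purely illustrative: each training generation keeps 95% of the previous
# generation's accuracy, per the sequence above.
RETENTION = 0.95  # assumed per-generation factor, not a measurement

def accuracy_after(generations: int, retention: float = RETENTION) -> float:
    # Accuracy after n generations, starting from 1.0 at generation 0.
    return retention ** generations

for n in (1, 5, 10, 20, 50):
    print(f"generation {n:>2}: accuracy ~ {accuracy_after(n):.2f}")
# generation  1: accuracy ~ 0.95
# generation  5: accuracy ~ 0.77
# generation 10: accuracy ~ 0.60
# generation 20: accuracy ~ 0.36
# generation 50: accuracy ~ 0.08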
You can mitigate it by not training on generated data, but as long as AI content replaces genuine content, especially with images, AI will train itself on its own output and it will degenerate fast.
About the second point: you can pay artists to train models, sure, but that's not so clear when talking about text-based generative models that depend on expert input to give relevant responses. The same applies to voice models: no amount of money would be enough for a voice actor, because training a model on their voice would effectively destroy their future jobs and thus their future income.
-
Can it ever get to the point where it wouldn't be vulnerable to this? Maybe. But it would require an entirely different AI architecture than anything that any contemporary AI company is working on. All of these transformer-based LLMs are vulnerable to this.
-
That would be fine. That's what they should have done to train these models in the first place. Instead they're all built on IP theft. They were just too cheap to do so and chose to build companies based on theft instead. If they hired their own artists to create training data, I would certainly lament the commodification and corporatization of art. But that's something that's been happening since long before OpenAI.
Thank you, out of all of these replies I feel like you really hit the nail on the head for me.
Technically, yes, but I would argue that this is worse.
An excavator saves you days of digging a single hole. An assembly line saves you from having to precisely construct a toy. A printer saves you from having to precisely duplicate a sheet of paper. All of this is monotonous and soul-destroying work that people are happy they don't need to do.
But you still need to decide where to dig the hole. You still need to design the toy. You still need to fill in the first sheet of paper. All of the work left over is more creatively fulfilling.
We are now attempting to automate creativity.
Yes, this. This particular comment best summarises how I feel about the topic.
Why would you need an argument beyond AI art is lifeless?
Because it means nothing to me. Sorry to disappoint, but I don't even understand that argument; I've seen plenty of AI images that looked full of life to me. So what does it even mean that it's lifeless? Maybe explain it instead of just being condescending about it.
When a human creates art, there is some intent in it, some emotions they felt when they decided on the color palette, the form... The fact that someone created it and that there's some story behind it gives the piece weight.
Why is an abstract monument created by humans something other humans like to see, while the same doesn't happen with a landslide? Because there's a story behind it.
AI art is lifeless because there's no intent behind it and no author whose skill you can appreciate. It's just prompt mastery; anyone can replicate it. It's cheap.
It's like comparing human-made sculptures with 3D-printed sculptures, if 3D printers could render fine detail and work at large sizes. It's cheap.
Okay, I guess I just don't connect to that argument because intent and understanding the artist is rarely a thing I look for in day to day art. 99% of the images I see that make me feel anything do so because of the imagery itself plus sometimes my own experience that might come to mind from it.
It's the difference between having a friend, and having an AI friend.
AI art proved beyond a doubt that death of the author was always 99% bullshit justifying media illiteracy. Now we have art without an author, and it is totally devoid of expression.
Death of the author is the idea that the reader's interpretation matters more than the author's intent, and it's absolutely fair for media analysis. Sadly, too many people bundle it together with the idea that the author didn't mean anything at all.
Heck, "the curtains were blue" applies authorial intent that there was no meaning behind the curtains. The death of the author reading shows that the curtains had a symbolic reason to be blue.
Who uses the Death of the Author to justify media illiteracy? I think you may be misunderstanding what the term means?
When people say "the author is dead", what they mean is that, when interpreting a piece of art, it doesn't matter what the original artist meant to say with it - for the purpose of the interpretation they are dead and you cannot ask them what they meant.
It's always a personal matter what you see in art, any interpretation that makes sense to you is valid, even if it may not be what the artist intended. (That does not mean you can bullshit your way through poem analysis in school, different situation)
Art is largely about feeling and emotion, but you insist on rejecting arguments that are arguing about emotion.
Interesting.
From an artist's view, it basically makes them obsolete. Sucks. Also, legally trained AI has a lot less training data and therefore worse output, so illegal models will always be preferred.
From a tech view, AI does not create anything new. It remixes. If we remove artists, which will happen since AIs are simply cheaper, we won't have anything new. From there on, you can imagine it like this: an artist creates images that are 99-100% of what the goal was, as dictated by clients or digitally identified by tags, thanks to logic, reason, creativity and communication. And they only get better. AIs, on the other hand, reach maybe 90% accuracy, due to technical limitations. And once a generated image, which only has 90% accuracy, is used as training data for new images, it only gets worse.
For example, if there are enough images with 6 fingers, created by AI, in training data, that will become the norm.
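To see how that feedback loop plays out, here's a toy Python simulation (every number in it is invented for illustration; real training pipelines are vastly more complicated, so treat it as a sketch of the mechanism, not a claim about any actual model):

import random

random.seed(0)

FLAW_INTRO_RATE = 0.05   # assumed chance a generated image introduces a new flaw (made up)
pool = [False] * 1000    # training pool: 1000 human-made images, none with the "six fingers" flaw

def generate(pool):
    # A generated image imitates a random training example and may add a flaw of its own.
    source_has_flaw = random.choice(pool)
    return source_has_flaw or (random.random() < FLAW_INTRO_RATE)

for generation in range(1, 11):
    new_images = [generate(pool) for _ in range(500)]
    pool.extend(new_images)  # generated images flow back into the training data
    flawed = sum(pool) / len(pool)
    print(f"generation {generation}: {flawed:.0%} of the training pool now has the flaw")

The flawed share only ever grows, because nothing in the loop removes bad examples; that is the "becomes the norm" effect described above.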
Basically, authors, artists etc. will be obsolete for a few years, until the AI bubble mostly collapses and quality is so bad that companies and individuals hire professionals again. Then AIs will be used for low-requirement things only again, eg. private memes or roleplay.
So artists are probably angry because they are being replaced by much inferior things that leeched off of them and will be gone in a few years anyway. AI just does not make sense, in most cases.
I heard a bunch of explanations but most of them seem emotional and aggressive, and while I respect that this is an emotional subject, I can’t really understand opinions that boil down to “theft” and are aggressive about it.
Why the anger?
How do you earn a living yourself? Or even better, what is your most precious hobby? Whatever it is that you love doing for the love of it (that's the definition of a hobby), try imagining being told one day, out of the blue: 'Guys, my fancy but completely soulless computer can do it as well as many of you. And it can do it in seconds. Wanna compete?'
Now, imagine it's your job and not your hobby, the way you earn your living (and pay your rent/mortgage and those ever more expensive bills), and imagine being told: 'That way you used to earn a living? It's gone now. It instantly vanished in a magical cloud of 1s and 0s. This AI thing can do in mere seconds something that would take you weeks, and it can do it well enough that quite a few of your customers may not want to spend (a lot more) money to pay you for doing the exact same job, even if you do it much better.' How happy would you feel about that?
So, yeah, like you said it's kinda 'emotional' topic...
is there an argument against models that were legally trained?
Being 100% sure that such a database, containing no stolen creations, even exists, and then that AIs were indeed restricted to it for their training, is already something worth debating and doubting (the second it is not open source), imho.
There was a similar problem nearly two centuries ago: when photography first appeared, many painters rightfully considered it a threat to their business model, as one could have their portrait (edit: or a picture of a landscape) made in mere minutes (it took longer than that, early-days photography was far from being as quick as we know it now, but you get the idea).
What happened to them and their practice?
- Some painters had to find rich sponsors who were willing to pay for a portrait that would be more unique than a photograph (I know what I would prefer between having my photo taken by even a decent photographer or, say, a painted portrait made by Sargent), others found niche domains where they could still earn a living, while others simply went out of business.
- Others decided painting could be much more than just being realistic, like it (mostly) was before photography became a thing, and they quickly started offering us amazing new kinds of paintings (impressionism, abstract painting, cubism, expressionism, ...).
And here we are in the 21st century. Painting is still doing fine in its own way (exhibited in art galleries and in the homes of rich people). There are also a lot more hobbyist painters who will paint anything they can, including realistic scenes, no matter how much 'better' a photo could be. They don't care. Next to those, there are many photographers taking countless photos (many of which are worthless too), some of them trying (and many failing) to earn a living selling them.
is it something past the saying that AI art is lifeless?
Maybe it will get better, most probably it will, but so far I feel real sad for people that are unable to see, to feel and to understand how lifeless and how clueless AI art is.
Edit: typos (yeah, this was handwritten without the help of any AI :p)
I get what you are saying. But does it not sound like the horse farmers when the car came out? It sucks, I don't blame artists for fighting it and for hating it, but isn't it inevitable that it will happen to most jobs at some point? I work in cyber security, and it would suck a lot once AI gets good enough to start taking me out of business, but I also accept that it is inevitable and the solution of fighting against technological advances has rarely worked historically.
But does it not sound like the horse farmers when the car came out?
but I also accept that it is inevitable
Look where we're heading with regard to pollution (to which all our engines are no small contributor) and ask yourself: had we known what we know today, was this 'inevitable' path we decided to follow (ultimately it was a choice, nothing more: the choice of using much cheaper energy and labor as a way to gain more power/money faster) really the smartest one? Or should we have tried to follow another path, less obvious but maybe less destructive? Destructive like AI is with regard to the OP's question, though it obviously is not limited to AI.
fighting against technological advances has rarely worked historically.
That's one of the most glaring lies (not yours, I mean it in a general way) regarding tech: criticizing it, or one form of it, is not being 'against tech'. It's a critique of tech and/or a refusal of a certain type of tech. The choice is not between 'using tech' and 'being a caveman'. It's about questioning the way we use tech (to do what? Do we really need machines to do creative work?), how we control it (who decides what it's allowed to do and how it is trained), and who owns it (who gets all the money? Not the artists the models were trained upon, obviously). And who controls all of that?
Also, keep in mind that exactly like AI or the smartphone are considered 'high tech' today, the horse and the cart were also considered high-tech back in their days. Do you think their users were hostile to tech? I don't think so ;)
Interesting thought about the lie, I guess sometimes it's hard to determine what is a criticism against a use case of a tech and what is criticism against the tech itself.
You can't understand why people don't like being stolen from by corporations, and why others don't want to buy stolen work?
You can't understand the difference between digital piracy, humans taking media from corporations for personal use, and the above, corporations taking from humans for commercial use?
For professional artists, AI art is taking away their livelihood. Many of them already lived in precarious conditions in a tough job market before and this is only getting worse now, with companies increasingly relying on cheaper AI art for things like concept art etc.
For me, as a hobbyist and art consumer, the main issue is AI art invading "my" spaces. I want to look at Human-made art and have no interest in AI-generated content whatsoever. But all the platforms are getting flooded with AI content and all the filters I set to avoid it barely help. Many users on these platforms roleplay as real artists as well and pretend their art isn't AI, which annoys me quite a bit. I don't mind if people want to look at AI art, but they should leave me alone with it and don't force it down my throat.
If you worked hard, learned a craft, and spent countless hours honing it and I took your work without asking you and used it to enrich myself and my talentless tech bro buddies, how would you feel?
It would suck, but I wouldn't blame others for enjoying a service they perceive as convenient. Of course I would blame you for the theft/piracy, just as I think artists should blame those behind illegally trained models.
You don't make LLMs with the enormous amount of training data they require to work well without theft/piracy.
Are you starting to understand why people are upset about this?
When a comedian becomes good enough at doing a Stephen Hawking impression, you don't suddenly expect them to start publishing science studies.
I have a one-hour video from a digital artist/programmer that essentially explains why it matters that AI art is lifeless.
Essentially, everything before AI was either of mechanistic natural beauty, derived from biological, chemical, and physical processes, like the leaf of a plant, the winding of rivers, the shape of mountains, etc., or it was made by human decisions; there was intentionality, thought, and perseverance behind every sentence you read, every object you held or owned, every depiction you would look at.
And this made the things made by humanity inherently understandable as the result of human decision-making and creativity. You might not agree with the causes and the outcomes of those decisions, but there was something there to retrace, and this retracing, this understanding, made it beautiful, unique or interesting.
Same with natural objects and phenomena: you could retrace their existence to causes, causes that unfold a world in their own right, leading you to ask questions about their existence, their creation, their process.
In this retracing, in these real links to people, to land, and to nature, lies the real beauty. The life, so to say, is them being part of this network for you to take a peek into, through their art, their creation, their mere existence.
Now we have a third category: a thing or text or image that exists solely because an imitation machine, an AI, is able to create it and it can fulfill some profit motive. There is no thought and no intentionality behind this writing, this art, and so on; it's the result of statistical models built on what existed in the real world, and it robs most if not all of these building blocks just by existing. It fills their place, takes the energy they needed and the intelligence and decision-making they can create, and uses it to replace them, gradually over time.
And it doesn't really give back. It doesn't create value in the sense that we can retrace and understand what it makes; it's a statistical result, and there are no causes to peek into besides pretty boring math and a collection of data it was trained on, a collection so big and varied that looking at its entirety might as well just be looking at everything. It tells us nothing; it doesn't lead us to ask what there is behind it in the same way.