221 points · submitted 30 Jul 2023 (11 months ago) by trashhalo@beehaw.org to c/technology@beehaw.org

Greg Rutkowski, a digital artist known for his surreal style, opposes AI art, but his name and style have been frequently used by AI art generators without his consent. In response, Stability AI removed his work from the training dataset for Stable Diffusion 2.0. However, the community has now created a tool to emulate Rutkowski's style against his wishes using a LoRA model. While some argue this is unethical, others justify it since Rutkowski's art has already been widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.

[-] doeknius_gloek@feddit.de 108 points 11 months ago

While some argue this is unethical, others justify it since Rutkowski's art has already been widely used in Stable Diffusion 1.5.

What kind of argument is that supposed to be? We've stolen his art before so it's fine? Dickheads. This whole AI thing is already sketchy enough, at least respect the artists that explicitly want their art to be excluded.

[-] FaceDeer@kbin.social 26 points 11 months ago

His art was not "stolen." That's not an accurate word to describe this process with.

It's not so much that "it was done before so it's fine now" as "it's a well-understood part of many peoples' workflows" that can be used to justify it. As well as the view that there was nothing wrong with doing it the first time, so what's wrong with doing it a second time?

[-] Pulse@dormi.zone 34 points 11 months ago

Yes, it was.

One human artist can, over a lifetime, learn from a few artists to inform their style.

These AI setups are taking ALL the art from ALL the artists and using it as part of a for-profit business.

There is no ethical stance for letting billion-dollar tech firms hoover up all the art ever created to try and remix it for profit.

[-] FaceDeer@kbin.social 23 points 11 months ago

No, it wasn't. Theft is a well-defined word. When you steal something you take it away from its owner so that they don't have it any more.

It wasn't even a case of copyright violation, because no copies of any of Rutkowski's art were made. The model does not contain a copy of any of the training data (with an asterisk for the case of overfitting, which is very rare and which trainers do their best to avoid). The art it produces in Rutkowski's style is also not a copyright violation because you can't copyright a style.

There is no ethical stance for letting billion-dollar tech firms hoover up all the art ever created to try and remix it for profit.

So how about the open-source models? Or in this specific instance, the guy who made a LoRA for mimicking Rutkowski's style, since he did it free of charge and released it for anyone to use?

[-] Pulse@dormi.zone 28 points 11 months ago

Yes, copies were made. The files were downloaded, one way or another (even as a hash, or whatever digital asset they claim to translate them into), then fed to their machines.

If I go into a Ford plant, take pictures of their equipment, then use those to make my own machines, it's still IP theft, even if I didn't walk out with the machine.

Make all the excuses you want; you're supporting the theft of other people's life's work and then trying to claim it's ethical.

[-] FaceDeer@kbin.social 16 points 11 months ago* (last edited 11 months ago)

Yes, copies were made. The files were downloaded, one way or another (even as a hash, or whatever digital asset they claim to translate them into), then fed to their machines.

They were put on the Internet for that very purpose. When you visit a website and view an image there, a copy of it is made in your computer's memory. If that's a copyright violation then everyone's equally boned. When you click this link you're doing exactly the same thing.
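(To make that concrete: here's a minimal Python sketch, with a placeholder URL, of what any client does when it "views" an image online, whether it's a browser or a training-data scraper. The bytes are copied into memory before anything can be rendered or learned from.)

```python
import requests

# Fetching an image the way any browser or scraper does: the bytes end up
# as an in-memory copy on the client machine before anything is rendered.
url = "https://example.com/some-artwork.jpg"  # placeholder URL
response = requests.get(url, timeout=10)
image_bytes = response.content  # a full copy of the file now lives in RAM

print(f"Downloaded {len(image_bytes)} bytes into memory")
```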

[-] jarfil@beehaw.org 19 points 11 months ago* (last edited 11 months ago)

One human artist can, over a lifetime, learn from a few artists to inform their style.

These AI setups [...] ALL the art from ALL the artists

So humans are slow and inefficient, what's new?

First the machines replaced hand weavers, then ice sellers went bust, all the calculators got sacked, now it's time for the artists.

There is no ethical stance for letting billion-dollar tech firms hoover up all the art ever created to try and remix it for profit.

We stand on the shoulders of generations of unethical stances.

[-] Pulse@dormi.zone 17 points 11 months ago

"other people were bad so I should be bad to."

Cool.

[-] Kara@kbin.social 20 points 11 months ago* (last edited 11 months ago)

I don't like when people say "AI just traces/photobashes art." Because that simply isn't what happens.

But I do very much wish there were some sort of opt-out process, though ultimately any attempt at that just wouldn't work.

[-] Zeus@lemm.ee 13 points 11 months ago

pirating photoshop is a well-understood part of many peoples' workflows. that doesn't make it legal or condoned by adobe

[-] Otome-chan@kbin.social 24 points 11 months ago

no one's art is being "stolen". you're mistaken.

[-] grue@lemmy.ml 22 points 11 months ago

That's true, but only in the sense that theft and copyright infringement are fundamentally different things.

Generating stuff from ML training datasets that included works without permissive licenses is copyright infringement though, just as much as simply copying and pasting parts of those works in would be. The legal definition of a derivative work doesn't care about the technological details.

(For me, the most important consequence of this sort of argument is that everything produced by Github Copilot must be GPL.)

[-] rikudou@lemmings.world 19 points 11 months ago

That's incorrect in my opinion. AI learns patterns from its training data. So do humans, by the way. It's not copy-pasting parts of images or code.

[-] MJBrune@beehaw.org 13 points 11 months ago

At the heart of copyright law is the intent. If an artist makes something, someone can't just come along and copy it and resell it. The intent is so that artists can make a living from their innovation.

AI training on copyrighted images and then reproducing works derived from those images, in order to compete with those images in the same style, breaks the intent of copyright law. Equally, it does not matter if a picture is original. If you take an artist's picture and recreate it with pixel art, there have already been cases where copyright infringement settlements have been made in favor of the original artist, despite the original picture not being used at all, just studied. See the Kind of Bloop cover art, a pixel-art recreation of the photo from Miles Davis's Kind of Blue.

[-] RygelTheDom@midwest.social 48 points 11 months ago

What blurry line? An artist doesn't want his art stolen from him. Seems pretty cut and dry to me.

[-] fades@beehaw.org 36 points 11 months ago* (last edited 11 months ago)

I don't disagree, but stolen is a bit of a stretch.

[-] KoboldCoterie@pawb.social 22 points 11 months ago

I don't fully understand how this works, but if they've created a way to replicate his style that doesn't involve using his art in the model, how is it problematic? I understand not wanting models to be trained using his art, but he doesn't have exclusive rights to the art style, and if someone else can replicate it, what's the problem?

This is an honest question, I don't know enough about this topic to make a case for either side.

[-] jamesravey@lemmy.nopro.be 32 points 11 months ago* (last edited 11 months ago)

TL;DR The new method still requires his art.

LoRA is a way to add small additional weights (low-rank adapters) to a neural network that effectively let you fine-tune its behaviour. Think of it like a "plugin" or a "mod".

LoRAs require examples of the thing you are targeting. Lots of people in the SD community build them for particular celebrities or art styles by collecting examples of that celebrity or style from online.

So in this case Greg has asked Stability AI to remove his artwork, which they have done, but some third party has created an unofficial LoRA that does use his artwork to mod the functionality back in.

In the traditional world the rights holder would presumably DMCA the plugin but the lines are much blurrier with LoRA models.
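(For anyone wondering what a LoRA actually is under the hood, here's a rough PyTorch-style sketch. The class name, rank, and dimensions are illustrative assumptions, not taken from any particular Stable Diffusion implementation; the idea is just that a small trainable low-rank "patch" gets added on top of frozen base weights.)

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer with a small trainable low-rank adapter added on top."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # original model weights stay untouched
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)   # down-projection
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)  # up-projection
        nn.init.zeros_(self.lora_b.weight)   # start as a no-op, so behaviour is unchanged
        self.scale = alpha / rank

    def forward(self, x):
        # Base model output plus the small learned correction.
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Usage: wrap an existing layer; only the tiny adapter matrices get trained.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(1, 768))
```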

[-] teichflamme@lemm.ee 21 points 11 months ago

Nothing was stolen.

Drawing inspiration from someone else by looking at their work has been around for centuries.

Imagine if the Renaissance couldn't happen because artists didn't want their style stolen.

[-] FaceDeer@kbin.social 19 points 11 months ago

His art was not "stolen."

[-] falsem@kbin.social 18 points 11 months ago

If I look at someone's paintings, then paint something in a similar style, did I steal their work? Or did I take inspiration from it?

[-] Pulse@dormi.zone 15 points 11 months ago

No, you used it to inform your style.

You didn't drop his art onto a screenprinter, smash someone else's art on top, then try to sell t-shirts.

Trying to compare any of this to how one, individual, human learns is such a wildly inaccurate way to justify stealing someone else's work product.

[-] falsem@kbin.social 14 points 11 months ago

If it works correctly it's not a screenprinter; the output is something unique.

[-] Pulse@dormi.zone 18 points 11 months ago

The fact that folks can identify the source of various parts of the output, and that intact watermarks have shown up, shows that it doesn't work like you think it does.

[-] CapedStanker@beehaw.org 44 points 11 months ago

Here's my argument: tough titties. Everything Greg Rutkowski has ever drawn or made has been inspired by other things he has seen and the experiences of his life, and this applies to all of us. Indeed, one cannot usually have experiences without the participation of others. Everyone wants to think they are special, and of course we are to someone, but to everyone no one is special. Since all of our work is based upon the work of everyone who came before us, then all of our work belongs to everyone. So tough fucking titties, welcome to the world of computer science, control c and control v is heavily encouraged.

In that Beatles documentary, Paul McCartney said he thought that once you uttered the words into the microphone, it belonged to everyone. Little did he know how right he actually was.

You think there is a line between innovation and infringement? Wrong, they are the same thing.

And for the record, I'm fine with anyone stealing my art. They can even sell it as their own. Attribution is for the vain.

[-] hglman@lemmy.ml 32 points 11 months ago

Greg wants to get paid; remove the threat of poverty from the loss of control and it's a non-issue.

[-] kitonthenet@kbin.social 43 points 11 months ago

what I'm getting from all the AI stuff is the people in charge and the people that use it are scumbags

[-] MossyFeathers@pawb.social 22 points 11 months ago* (last edited 11 months ago)

Pretty much. There are ways of using it that most artists would be okay with. Most of the people using it flat out refuse to use it like that though.

Edit: To expand on this:

Most artists would be okay with AI art being used as reference material, inspiration, assisting with fleshing out concepts (though you should use concept artists for that in a big production), rapid prototyping and whatnot. Most only care that the final product is at least mostly human-made.

Artists generally want you to actually put effort into what you're making because, at the end of the day, typing a prompt into stable diffusion has more in common with receiving a free commission from an artist than it has with actually being an artist. If you're going to claim something AI had a hand in as your own art, then you need to have done the majority of the work on it yourself.

The most frustrating thing to me, however, is that there are places in art that AI could participate in which would send artists over the moon, but it's not flashy so no one seems to be working on making AI in those areas.

Most of what I'm personally familiar with has to do with 3d modeling, and in that discipline, people would go nuts if you released an AI tool that could do the UV work for you. Messing with UVs can be very tedious and annoying, to the point where most artists will just use a tool with conventional algorithms to auto-unwrap and pack UVs and then call it a day, even if the results aren't great.
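(For reference, that kind of conventional, non-AI auto-unwrap is roughly a few lines in Blender's Python API. A sketch below; operator options vary between Blender versions, so treat the exact parameters as assumptions.)

```python
import bpy

# Conventional (non-AI) auto-unwrap of the currently active mesh object.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Angle-based "smart" projection, then pack the islands into 0-1 UV space.
bpy.ops.uv.smart_project(island_margin=0.02)
bpy.ops.uv.pack_islands(margin=0.02)

bpy.ops.object.mode_set(mode='OBJECT')
```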

Another area is in rigging and weight painting. In order to animate a model, you have to rig it to a skeleton (unless you're a masochist or trying to make a game accurate to late 90s-early 00s animation), paint the bone weights (which bones affect which polygons, and by how much), add constraints, etc. Most 3d modelers would leap at the prospect of having high-quality rigging and UVs done for them at the touch of a button. However, again, because it's not flashy to the general public, no one's put any effort into making an AI that can do that (afaik at least).

Finally, even if you do use an AI in ways that most artists would accept as valid, you'll still have to prove it, because there are so many people who put a prompt into stable diffusion, do some minor edits to fix hands (in older versions), and then try to pass it off as their own work.

[-] AzureDusk10@kbin.social 32 points 11 months ago* (last edited 11 months ago)

The real issue here is the transfer of power away from the artist. This artist has presumably spent years and years perfecting his craft. Those efforts are now being used to line someone else’s pockets, in return for no compensation and a diminishment in the financial value of his work, and, by the sounds of it, little say in the matter either. That to me seems very unethical.

[-] millie@beehaw.org 21 points 11 months ago

Personally, as an artist who spends the vast majority of their time on private projects that aren't paid, I feel like it's put power in my hands. It's best at sprucing up existing work and saving huge amounts of time detailing. Because of stable diffusion I'll be able to add those nice little touches and flashy bits to my work that a large corporation with no real vision has at their disposal.

To me it makes it much easier for smaller artists to compete, leveling the playing field a bit between those with massive resources and those with modest resources. That can only be a good thing in the long run.

But I also feel like copyright more often than not rewards the greedy and stifles the creative.

[-] trashhalo@beehaw.org 25 points 11 months ago* (last edited 11 months ago)

Re: the "stolen" vs. "not stolen" comments: copyright law as interpreted by judges is still being worked out for AI. Stay tuned to see whether it gets defined as stolen or not. But even if the courts decide existing copyright law would define training on artists' work as legitimate use, the law can change, and it could still swing the way of the artist if Congress got involved.

My personal opinion, which may not reflect what happens legally, is that I hope we all get more control over our data and how it's used and sold, whether that's my personal data like my comments and location, or my artistic data like my paintings. I think that would be a better world.

[-] FaceDeer@kbin.social 16 points 11 months ago

Copyright law as interpreted by judges is still being worked out for AI. Stay tuned to see whether it gets defined as stolen or not.

You just contradicted yourself in two sentences. Copyright and theft are not the same thing. They are unrelated to each other. When you violate copyright you are not "stealing" anything. This art is not "stolen", full stop.

[-] fwygon@beehaw.org 25 points 11 months ago* (last edited 11 months ago)

AI art is factually not art theft. It is creation of art in the same rough and inexact way that we humans do it, except that computers and AIs do not run on meat-based hardware with an extraordinary number of features and demands hardwired to ensure that hardware's survival. It doesn't have our limitations, so it can create similar works in various styles very quickly.

Copyright, on the other hand, is an entirely different, and very sticky, subject. By default, "All Rights Are Reserved" is something that usually is protected by these laws. These laws, however, are not grounded in modern times. They are grounded in the past, before the information age truly began its upswing.

Fair use generally encompasses all usage of information that is one or more of the following:

  • Educational; so long as it is taught as a part of a recognized class and within curriculum.
  • Informational; so long as it is being distributed to inform the public about valid, reasonable public interests. This is far broader than some would like; but it is legal.
  • Transformative; so long as the content is being modified in a substantial enough manner that it is an entirely new work that is not easily confused for the original. This too, is far broader than some would like; but it still is legal.
  • Narrative or Commentary purposes; so long as you're not copying a significant amount of the whole content and passing it off as your own. Short clips with narration and lots of commentary interwoven between them is typically protected. Copyright is not intended to be used to silence free speech. This also tends to include satire; as long as it doesn't tread into defamation territory.
  • Reasonable, 'Non-Profit Seeking or Motivated' Personal Use; People are generally allowed to share things amongst themselves and their friends and other acquaintances. Reasonable backup copies, loaning of copies, and even reproduction and presentation of things are generally considered fair use.

In most cases AI art is at least somewhat Transformative. It may be too complex for us to explain simply, but the AI is basically a virtual brain that can, without error or certain human faults, ingest image information and make decisions based on input given to it in order to give a desired output.

Arguably, if I have license or right to view artwork, or that right is no longer reserved but is granted to the public through the use of the World Wide Web... then the AI also has those rights. Yes. The AI has license to view, and learn from, your artwork. It just so happens to be a little more efficient at learning and remembering than humans can be at times.

This does not stop you from banning AIs from viewing all of your future works. Communicating that fact to all who interact with your works is probably going to make you a pretty unpopular person. However, rightsholders do not hold or reserve the right to revoke rights that they have previously given. Once that genie is out of the bottle, it's out... unless you've got firm enough contract proof to show that someone agreed to otherwise handle the management of rights.

In some cases, that proof exists. Good luck in court. In most cases, however, that proof does not exist in a manner that is solid enough to please the court. A lot of the time, we tend to exchange, transfer and reserve rights ephemerally, that is, in a manner that is not strictly always 100% recognized by the law.

Gee, perhaps we should change that and encourage the reasonable adaptation and growth of copyright to fairly address the challenges of the information age.

[-] Thevenin@beehaw.org 19 points 11 months ago

It doesn't change anything you said about copyright law, but current-gen AI is absolutely not "a virtual brain" that creates "art in the same rough and inexact way that we humans do it." What you are describing is called Artificial General Intelligence, and it simply does not exist yet.

Today's large language models (like ChatGPT) and diffusion models (like Stable Diffusion) are statistics machines. They copy down a huge amount of example material, process it, and use it to calculate the most statistically probable next word (or pixel), with a little noise thrown in so they don't make the same thing twice. This is why ChatGPT is so bad at math and Stable Diffusion is so bad at counting fingers -- they are not making any rational decisions about what they spit out. They're not striving to make the correct answer. They're just producing the most statistically average output given the input.
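(The "most statistically probable next word, with a little noise thrown in" part maps directly onto temperature sampling. A toy sketch, with a made-up vocabulary and made-up scores:)

```python
import numpy as np

# Toy next-token step: scores ("logits") for a tiny made-up vocabulary.
vocab  = ["cat", "dog", "painting", "the"]
logits = np.array([2.1, 1.9, 0.3, 3.0])

temperature = 0.8                    # the "little noise": higher = more random
probs = np.exp(logits / temperature)
probs /= probs.sum()                 # softmax -> probability of each candidate

# Sample instead of always taking the argmax, so the output varies run to run.
next_token = np.random.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```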

Current-gen AI isn't just viewing art, it's storing a digital copy of it on a hard drive. It doesn't create, it interpolates. In order to imitate a person's style, it must make a copy of that person's work; describing the style in words is insufficient. If human artists (and by extension, art teachers) lose their jobs, AI training sets stagnate, and everything they produce becomes repetitive and derivative.

None of this matters to copyright law, but it matters to how we as a society respond. We do not want art itself to become a lost art.

[-] ParsnipWitch@feddit.de 13 points 11 months ago

Current AI models do not learn the way human brains do. And the way current models learn how to "make art" is very different from how human artists do it. To repeatedly try and recreate the work of other artists is something beginners do. And posting these works online was always shunned in artist communities. You also don't learn to draw a hand by remembering where a thousand different artists put the lines so it looks like a hand.

[-] AceFuzzLord@lemm.ee 20 points 11 months ago* (last edited 11 months ago)

All this proves to me, based on the context from this post, is that people are willing to commit copyright infringement in order to make a machine produce art in a specific style.

[-] arvere@lemmy.ml 16 points 11 months ago

my take on the subject, as someone who has worked both in design and arts, and in tech, is that the difficulty in discussing this is more rooted in what is art as opposed to what is theft

we mistakenly call illustrator/design work art work. art is hard to define, but most would agree it requires some level of expressiveness that emanates from the artist (from the condition of human existence, to social criticism, to beauty by itself) and that's what makes it valuable. with SD and other AIs, the control of this aspect is actually in the hands of the AI illustrator (or artist?)

whereas design and illustration are associated with product development and market. while they can contain art in a way, they have to adhere to a specific pipeline that is generally (if not always) for profit. to deliver the best-looking imagery for a given purpose in the shortest time possible

designers and illustrators were always bound to be replaced one way or another, as the system is always aiming to maximize profit (much like the now old discussions between taxis and uber). they have every right to whine about it, but my guess is that this won't save their jobs. they will have to adopt it as a very powerful tool in their workflow or change careers

on the other hand, artists that are worried: if they think the worth of their art lies solely in a specific style they've developed, they are in for an epiphany. they might soon realise they aren't really artists, but freelance illustrators. that's also not to mention other posts stating that we always climb on the shoulders of past masters - in all areas

both artists and illustrators that embrace this tool will benefit from it, either to express themselves quicker while skipping fine arts school or to deliver at a pace compatible with the market

all that being said I would love to live in a society where people cared more about progress instead of money. imagine artists and designers actively contributing to this tech instead of wasting time fighting over IP and copyright...
