emmy67

joined 1 month ago
[–] emmy67@lemmy.world 25 points 2 days ago (2 children)

Wild for a company that's never made a profit

[–] emmy67@lemmy.world 13 points 1 week ago* (last edited 1 week ago)

Nor should what they produce be copyrightable in any form. Even if it's the base upon which an artist builds.

Also, it should all be free.

[–] emmy67@lemmy.world 3 points 2 weeks ago

Yeah. The Dems needed to take him to task years ago. The fact that they waited until the election makes it a political move and decision. And a poorly thought-out one.

[–] emmy67@lemmy.world 1 point 2 weeks ago

The fundamental problem is that all those results are from people with abnormal brain function, because of the corpus callosotomy.

It can't be assumed things work that way in a normal brain.

People do often make things up in regard to themselves, especially in cases of dissonance. But that's in relation to themselves, not the things they know. Most people, if you asked them what OP did, will either admit they don't know or tell you to look it up. The more specific the question, the less likely they are to make something up.

[–] emmy67@lemmy.world 1 point 2 weeks ago (2 children)

> Funny thing is that the part of the brain used for talking makes things up on the fly as well 😁 There is a great video from Joe about this topic, where he shows experiments done on people whose two brain hemispheres were split.

Having watched the video, I can confidently say you're wrong about this, and so is Joe. If you want an explanation, though, let me know.

[–] emmy67@lemmy.world 6 points 2 weeks ago (5 children)

Or the words "I don't know" would work.

[–] emmy67@lemmy.world 0 points 2 weeks ago* (last edited 2 weeks ago)

You're right, it's not. It needs to know what things look like, which, once again, it's not going to without having seen them. Sorry dude, either CSAM is in the training data and it can do this, or it's not. But I'm pretty tired of this. Later, fool.

[–] emmy67@lemmy.world 0 points 2 weeks ago (2 children)

Once again you're showing the limits of AI. A dragon exists in fiction. It exists in the mind of someone drawing it. But in AI there is no mind, so the concept cannot exist independently.

[–] emmy67@lemmy.world 0 points 2 weeks ago (4 children)

> Generative AI, just like a human, doesn't rely on having seen an exact example of every possible image or concept.

If a human has never seen a dog before, they don't know what it is or what it looks like.

If AI works the same as a human, it won't be able to draw one either.

[–] emmy67@lemmy.world 0 points 2 weeks ago (6 children)

I wasn't the one attempting to prove that, though I think it's definitive.

You were attempting to prove it could generate things not in its data set, and I have disproved your theory.

> To me, the takeaway here is that you can take a shitty two-minute Photoshop doodle and, by feeding it through AI, improve its quality by orders of magnitude.

To me, the takeaway is that you know less about AI than you claim. Much less. Because we have actual instances, many of them, where CSAM is in the training data. Don't believe me?

Here's a link to it

[–] emmy67@lemmy.world 0 points 2 weeks ago* (last edited 2 weeks ago) (8 children)

> But you do know, because corn dogs as depicted in the picture do not exist, so there couldn't have been photos of them in the training data, yet it was still able to create one when asked.

Yeah, except Photoshop and artists exist. And a quick Google image search will find them. 🙄

[–] emmy67@lemmy.world -1 points 2 weeks ago (10 children)

Then if your question is "how many photographs of a hybrid creature that is a cross between corn and a dog were in the training data?"

I'd honestly say I don't know.

And if you're honest, you'll say the same.
