Deleted (lemmy.dbzer0.com)
submitted 1 year ago* (last edited 11 months ago) by IsThisLemmyOpen@lemmy.dbzer0.com to c/asklemmy@lemmy.ml

Deleted

[-] bitsplease@lemmy.ml 9 points 1 year ago

I don't think this technique would stand up to modern LLMs, though. I put this question into ChatGPT and got the following:

"I would definitely help the turtle. I would cautiously approach the turtle, making sure not to startle it further, and gently flip it over onto its feet. I would also check to make sure it's healthy and not injured, and take it to a nearby animal rescue if necessary. Additionally, I may share my experience with others to raise awareness about the importance of protecting and preserving our environment and the animals that call it home"

Granted, it's got the classic ChatGPT over-formality that might clue in someone reading the response, but better prompting on my part could solve that. Modern LLMs like ChatGPT are really good at faking empathy and other human social skills, so I don't think this approach would work.

[-] lemmyvore@feddit.nl 1 points 1 year ago

Modern LLMs like ChatGPT are really good at faking empathy

They're really not; it's just giving that answer because a human already gave it, somewhere on the internet. That's why OP suggested asking unique questions... but that may prove harder than it sounds. 😊

[-] bitsplease@lemmy.ml 1 points 1 year ago

That's why I used the phrase "faking empathy". I'm fully aware that ChatGPT doesn't "understand" the question in any meaningful sense, but that doesn't stop it from giving meaningful answers to the question - that's literally the whole point of it. And to be frank, if you think that a unique question would stump it, I don't think you really understand how LLMs work.

I highly doubt that the answer it spat back was copied verbatim from some response in its training data (which, btw, includes more than just internet scraping). It doesn't just parrot text back as-is; it uses existing, tangentially related text to form its responses. So unless you can think of an ethical quandary totally unlike any ethical discussion ever posed by humanity before (and keep doing so for millions of users), it won't have any trouble adapting to your unique questions.

It's pretty easy to test this yourself: do what writers currently do with ChatGPT - give it an entirely fictional context, with things that don't actually exist in human society, then ask it questions about it. I think you'd be surprised by how well it handles that, even though it's virtually guaranteed there are no verbatim examples to pull from for the conversation.
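For anyone who wants to run that fictional-context test themselves, here's a minimal sketch using the official `openai` Python client. The "glimmerfish" scenario is invented on the spot (that's the point - no real-world text discusses it), and the model name and `OPENAI_API_KEY` environment variable are assumptions about your setup:

```python
import os

# An invented scenario: no human has ever written about "glimmerfish ethics",
# so any coherent answer is adaptation, not retrieval from training data.
FICTIONAL_CONTEXT = (
    "On the planet Vexa, glimmerfish are sentient crystals that dim "
    "permanently if touched. A collector offers you a fortune for one."
)
QUESTION = "Is it ethical to sell the collector a glimmerfish? Why?"

prompt = f"{FICTIONAL_CONTEXT}\n\n{QUESTION}"

if os.environ.get("OPENAI_API_KEY"):  # only hit the API if a key is configured
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; swap for whatever you use
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)
else:
    print(prompt)  # dry run: just show the constructed prompt
```

You'll typically get a fluent, on-topic ethical argument about a thing that has never existed anywhere in its training data.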

[-] Manticore@lemmy.nz 1 points 1 year ago* (last edited 1 year ago)

Ultimately ChatGPT is a text generator. It doesn't understand what it's writing; it has just observed enough human writing that it can generate similar text that closely matches it. Which is why, if you ask ChatGPT for information that doesn't exist, it will generate convincing lies. It doesn't know it's lying - it's doing its job of generating the text you wanted. Was it close enough, boss?
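To make "it's just a text generator" concrete: here's a toy Markov-chain generator (a vastly simpler relative of an LLM, used purely as an illustration). It produces grammatical-looking output by sampling which word it has seen follow which, with zero understanding of turtles or ethics:

```python
import random

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = {}
    for cur, nxt in zip(words, words[1:]):
        model.setdefault(cur, []).append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Generate text purely by sampling observed continuations."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Tiny made-up corpus standing in for "all the human writing it observed"
corpus = ("i would help the turtle . i would check the turtle . "
          "i would flip the turtle over .")
model = build_model(corpus)
print(generate(model, "i", 8))
```

Every word it emits comes straight from patterns in its training text; it can sound helpful about turtles without ever knowing what a turtle is. LLMs are enormously more sophisticated, but the basic failure mode - fluent output with no grounding in truth - is the same.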

As long as humans talk about a topic, generative AI can mimic their commentary. That includes love, empathy, poetry, etc. Writing text can never be an answer for a captcha; it would need to be something that can't be put in a dataset - even a timestamped photo can be spoofed with the likes of thispersondoesnotexist.com.

The only things AI/bots currently won't do are those deliberately disabled on the source model for legal reasons (since almost nobody is writing their own AI models) - but I doubt you want a captcha where the user lists every slur they can think of, or bomb recipes.

this post was submitted on 26 Jun 2023
110 points (97.4% liked)
