this post was submitted on 04 Sep 2023
98 points (91.5% liked)

Technology

[–] LastYearsPumpkin@feddit.ch 29 points 1 year ago (1 children)

Don't use ChatGPT as a source; there's no reason to trust anything it says.

It might be right, it might have just strung together words that sound right, or it might be completely made up.

[–] metaStatic@kbin.social 2 points 1 year ago (4 children)

It just guesses the next probable word. Literally everything it says is made up.
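As a rough illustration of what "guessing the next probable word" means, here's a toy bigram model in Python. This is purely a sketch, not how ChatGPT actually works — real LLMs run neural networks over subword tokens — but the final step is the same idea: sample the next token from a probability distribution.

```python
import random

# Tiny "training corpus" for the toy model.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words follow which: a bigram table.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def next_word(prev, rng=random):
    """Sample the next word in proportion to how often it followed `prev`."""
    candidates = bigrams.get(prev)
    if not candidates:
        return None
    return rng.choice(candidates)

# Generate a continuation one "probable next word" at a time.
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Nothing in the table knows whether a sentence is true; it only knows which words tend to follow which. That's the sense in which the output is "made up" even when it happens to be right.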

[–] 8ender@lemmy.world 3 points 1 year ago

Words are how we communicate knowledge, so sometimes the most probable combinations of words end up being facts.

[–] thal3s@sh.itjust.works 1 points 1 year ago* (last edited 1 year ago)

“ChatGPT, please provide your rebuttal to this statement about you: […]”

Hey! That's a common misconception. While I do predict the next word based on patterns in the data I was trained on, I'm not just making things up. I provide information and answers based on the vast amount of text I've been trained on. It's more about recognizing patterns and providing coherent, relevant responses than just "guessing." Cheers!

[–] SkaveRat@discuss.tchncs.de 1 points 1 year ago

While it's technically true that it "just predicts the next word", that's a very misleading argument to make.

Computers are also "just some basic logic gates" and yet we can do complex stuff with them.

Complex behaviour can result from simple things.

I'm not defending the bullshit that LLMs generate, just pointing out that you have to be careful with your arguments.
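To make the logic-gates point concrete, here's a sketch that builds a working 4-bit adder out of nothing but NAND — one trivially simple primitive composed into actual arithmetic. (The helper names like `full_adder` and `add4` are just illustrative.)

```python
# The single primitive: everything else below is built from this.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# Standard gates, each defined only in terms of NAND.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    s1 = xor(a, b)
    return xor(s1, carry_in), or_(and_(a, b), and_(s1, carry_in))

def add4(x: int, y: int) -> int:
    """Add two 4-bit numbers with a ripple-carry chain of full adders."""
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add4(5, 6))  # 11
```

Every addition here is "just" NAND applied many times, the same way an LLM's output is "just" next-word prediction applied many times — the simplicity of the primitive says nothing about the complexity of the behaviour.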

[–] TrenchcoatFullofBats@belfry.rip -2 points 1 year ago (2 children)

You have just described how human brains work

[–] sky@codesink.io 7 points 1 year ago

Right, and they're actually pretty bad at remembering facts; that's why we have entire institutions dedicated to maintaining accurate reference material!

Why people throw all of that out the window for advice from a dumb program, I'll never understand.

[–] thbb@kbin.social 7 points 1 year ago

Not really. We also have deductive capabilities (aka "system 2") that let us establish some level of proof for our statements.