[–] vrighter@discuss.tchncs.de 3 points 1 week ago (5 children)

The problems with (the current forms of generative) AI will not be solved, because they cannot be solved. They are intrinsic to the whole framework.

[–] scarabic@lemmy.world 4 points 1 week ago (3 children)

Error correction is also intrinsic to all of computing and telecommunications, though. That's a loose comparison, but I hope we can make progress on this and get it to a manageable state, even if zero is impossible in principle. A lot of things in life only asymptotically approach zero, and yet we live.

[–] vrighter@discuss.tchncs.de 8 points 1 week ago* (last edited 1 week ago) (2 children)

This isn't an error correction issue, though. Error correction means taking known data and adding redundancy to it so that damaged pieces can be repaired. That redundancy makes the message longer.
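To make that concrete, here's a minimal sketch of the idea using a toy triple-repetition code (the bit strings and names are made up for illustration, not any real protocol):

```python
def encode(bits: str) -> str:
    # Each bit is sent three times: the added redundancy is what
    # makes the message longer, and what makes repair possible.
    return "".join(b * 3 for b in bits)

def decode(coded: str) -> str:
    # A majority vote over each block of three repairs any single
    # flipped bit per block.
    out = []
    for i in range(0, len(coded), 3):
        block = coded[i:i + 3]
        out.append("1" if block.count("1") >= 2 else "0")
    return "".join(out)

msg = "1011"
sent = encode(msg)                 # "111000111111"
damaged = "110000111101"           # two bits flipped in transit
assert decode(damaged) == msg      # repaired back to the original
```

Note that the repair only works because the *original* data was known at encoding time; the scheme protects a message, it doesn't decide whether the message was true.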

An llm's output does not contain error correction. It's just the output. And it doesn't contain any errors, mathematically speaking. The hallucination *is* the correct output: it is what the statistics gathered from the training set determined is most likely. A "correct" llm output is indistinguishable from a "hallucination", mathematically, and always will be.

A "hallucination" is simply "some output that some human, somewhere, doesn't like", and that's uncomputable. Outputs that people subjectively consider "hallucinations" cannot be eliminated, because an llm is, fundamentally, a probabilistic algorithm. If you added error correction to an llm's output, all you'd be able to recover is the llm's original output, "hallucinations" and all.
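A toy illustration of that last point, with invented probabilities (this is not a real model, just a sketch of why sampling treats a factually wrong continuation exactly like a right one):

```python
import random

# Invented next-token distribution after a prompt like
# "The capital of Australia is" -- the numbers are made up.
next_token_probs = {
    "Canberra": 0.55,  # factually correct
    "Sydney":   0.40,  # plausible-sounding "hallucination"
    "Mars":     0.05,
}

def sample(probs):
    # Draw one token, weighted by probability. This is all the
    # sampler does; there is no "truth" flag anywhere in the process.
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights)[0]

# "Sydney" comes out of the exact same mechanism as "Canberra";
# mathematically there is nothing to correct.
print(sample(next_token_probs))
```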

Tl;dr: "hallucinations" are a subjective thing. A "hallucination" is not an error that can be corrected after the fact, because it is not an error in the first place.

[–] xavier666@lemm.ee 1 points 1 week ago (1 children)

If anyone says "What if we make an AI which specifically catches these hallucinations and then-" I will personally take a flight and come to your house and slap you.

[–] vrighter@discuss.tchncs.de 1 points 1 week ago

All the advertised AI detection tools are exactly that. Happy slapping!
