[–] polygon6121@lemmy.world -2 points 2 months ago (1 children)

AI in general is definitely prone to hallucinations. It is most commonly seen in LLMs because they are the most widely used by the public, but it is a problem with all AI.

[–] Syntha@sh.itjust.works 2 points 2 months ago* (last edited 2 months ago) (1 children)

Besides generative AI, which models can hallucinate?

[–] polygon6121@lemmy.world 1 points 2 months ago (1 children)

Text-to-video, automated driving, object detection, language translation. I might be misusing the term; you could argue the word describes what LLMs commonly do and that that is where the term comes from. You could also argue that the AI is sometimes correct and humans just have trouble identifying the correct answer. But in my mind it is much the same thing, just in different applications. A car completely missing an approaching firetruck and an LLM spewing out wrong statements are the same to me.

[–] Syntha@sh.itjust.works 1 points 2 months ago (1 children)

Yeah, well, it's not the same. Models are wrong all the time; why use a different term at all when it's just "being wrong"?

[–] polygon6121@lemmy.world 1 points 2 months ago (1 children)

The model makes decisions as if it were right, but for whatever reason can't see a firetruck or a stop sign, or misidentifies the object... you know, almost like how a hallucinating human would perceive sensory input that is not there.

I don't mind giving it another term, but "being wrong" is misleading. You are correct, though, in the sense that it depends on the given case...
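
As an aside (a hypothetical sketch, not from either commenter): the "perceiving something that is not there" intuition usually refers to a model assigning high confidence to a class that is absent from the input. The class names and logit values below are made up for illustration.

```python
import numpy as np

# Hypothetical logits from an image classifier looking at an empty road;
# "firetruck" is not in the frame, yet its raw score dominates.
classes = ["empty_road", "pedestrian", "firetruck"]
logits = np.array([1.2, 0.4, 4.8])

# Softmax turns raw scores into a confidence distribution over classes.
probs = np.exp(logits) / np.exp(logits).sum()

for cls, p in zip(classes, probs):
    print(f"{cls:>11}: {p:.1%}")
# Prints ~96% confidence in a firetruck that is not there -- a confident
# false positive, which is the intuition behind calling it a hallucination.
```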

[–] Syntha@sh.itjust.works 1 points 2 months ago (1 children)

No, the model isn't "thinking"; no model in use today has anything resembling an internal cognitive process. It is making a prediction. A COVID test predicts whether or not you have the COVID-19 virus inside you. If its prediction contradicts your biological state, it is wrong. If an object-recognition algorithm does not predict that there is a firetruck, how is that not being wrong in the same way?
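
To make that framing concrete (a minimal sketch; `detect` and its outputs are hypothetical stand-ins, not a real API): under this view, any model simply emits a prediction that either matches the ground truth or doesn't.

```python
# Minimal sketch of "a model just makes a prediction that is right or wrong".
# detect() is a hypothetical stand-in for a real object-detection model
# (a real one would likewise return labels with confidence scores).

def detect(frame):
    return ("delivery_truck", 0.87)  # confidently mislabels the scene

ground_truth = "firetruck"
label, confidence = detect("dashcam_frame.jpg")

# No internal cognition involved: the prediction either matches the
# ground truth or it doesn't.
verdict = "correct" if label == ground_truth else "wrong"
print(f"predicted {label!r} at {confidence:.0%} -> {verdict}")
```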

[–] polygon6121@lemmy.world 1 points 2 months ago

Predicting? OK, if you say so.