[–] Womble@lemmy.world 6 points 2 months ago (1 children)

No, false positives and false negatives are not hallucinations. Otherwise something like a blood test that involves no ML at all would also be "hallucinating", which strips the term of all meaning.

[–] Ashelyn@lemmy.blahaj.zone 2 points 2 months ago

That's fair. I think fundamentally a false positive/negative isn't that much different. Pretty much all tests, especially those dealing with real-world conditions, are heuristic, as are all LLMs by the necessity of their design. Hallucination is a pretty specific term given to AI as an attempt to assign agency to a system that doesn't actually have any (implying it's crazy and making stuff up, rather than being a black box that deterministically maps inputs to outputs and sometimes spits out something factually wrong in the same format as its training data). I feel like any tool where "you can't trust this to be entirely accurate" should have an umbrella term that covers both ways of producing inaccurate info under certain conditions.

I suppose the difference is that AI is a lot more likely to go off at random, whereas a blood test is likelier to give repeated false positives for the same person because of their unique biology? There's also the fact that most medical tests boil down to a true/false dichotomy or a lookup table, whereas an LLM's output space is the entire bounds of language.
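To make that dichotomy concrete, here's a minimal sketch of a threshold-style test (the cutoff and measurements are made-up numbers, not any real assay): it's completely deterministic, yet it still produces false positives and false negatives whenever the threshold and reality disagree.

```python
# Hypothetical threshold test -- the cutoff and values below are invented.
THRESHOLD = 5.0  # e.g. some biomarker concentration

def test_positive(measurement: float) -> bool:
    """Deterministic heuristic: the same input always gives the same answer."""
    return measurement >= THRESHOLD

# (measurement, actually_has_condition)
samples = [(6.1, True), (4.2, False), (5.3, False), (4.9, True)]

for value, truth in samples:
    result = test_positive(value)
    if result and not truth:
        print(f"{value}: false positive")
    elif not result and truth:
        print(f"{value}: false negative")
    else:
        print(f"{value}: correct")
```

Nothing in there "made anything up"; the errors come entirely from the overlap between the two populations around the cutoff.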

Would an AI clustering algorithm (K-means, for instance) giving an inaccurate diagnosis be producing a false positive/negative or a hallucination? These models can be tuned along a sliding scale of complexity, and I feel like there's definitely an area where the line gets pretty blurry. A rough sketch of what I mean is below.
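Here's a bare-bones K-means in plain NumPy on synthetic data (every number is invented; this isn't any real diagnostic model) that can assign a borderline patient to the "healthy" cluster. Whether you'd call that a false negative or a hallucination is exactly the blurry part.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D "biomarker" values for two overlapping populations.
healthy = rng.normal(4.0, 0.8, size=50)
diseased = rng.normal(6.5, 0.8, size=50)
data = np.concatenate([healthy, diseased]).reshape(-1, 1)

# Bare-bones K-means with k=2, centroids seeded near the true means.
centroids = np.array([[4.0], [6.5]])
for _ in range(20):
    # Assign each point to its nearest centroid...
    labels = np.argmin(np.abs(data - centroids.T), axis=1)
    # ...then move each centroid to the mean of its assigned points.
    centroids = np.array([data[labels == k].mean(axis=0) for k in range(2)])

# A borderline patient who, in this made-up scenario, actually has the condition:
patient = np.array([[5.1]])
cluster = int(np.argmin(np.abs(patient - centroids.T), axis=1)[0])
print(f"patient assigned to cluster {cluster} (0 ~ 'healthy', 1 ~ 'diseased')")
# With overlapping populations this assignment can be wrong -- a false negative
# by the usual definition, even though nothing "crazy" happened inside the model.
```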