this post was submitted on 22 Dec 2024
360 points (95.7% liked)

Technology
[–] 2pt_perversion@lemmy.world 60 points 15 hours ago* (last edited 15 hours ago) (43 children)

There is this seeming need from some people to discredit AI that goes overboard. Some friends and family who have never really used LLMs outside of Google search feel compelled to tell me how bad it is.

But generative AIs are really good at tasks I wouldn't have imagined a computer doing just a few years ago. Even if they plateaued right where they are now, it would lead to major shakeups in humanity's current workflow. It's not just hype.

The part that is overhyped is companies trying to jump the gun and wholesale replace workers with unproven AI substitutes. And of course the companies that try to shove AI where it doesn't really fit, like AI-enabled fridges and toasters.

[–] Eldritch@lemmy.world 15 points 12 hours ago (15 children)

Computers have always been good at pattern recognition. This isn't new. LLMs are not a type of actual AI; they are programs capable of recognizing patterns and loosely reproducing them in semi-randomized ways. The reason these so-called generative AI solutions have trouble generating the right number of fingers is not only that they have no idea how many fingers a person is supposed to have. They have no idea what a finger is.

The same goes for code completion. They will just generate something that fills the pattern they're told to look for; it doesn't matter if it's right or wrong, because they have no concept of right or wrong beyond fitting the pattern. Not to mention that we've had code completion software for over a decade at this point. LLMs do it less efficiently and less reliably. The only upside is that they can sometimes recognize and suggest a pattern that those programming the other coding helpers might have missed. Outside of that, such as generating whole blocks of code or even entire programs, you can't even get an LLM to reliably spit out a hello world program.
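The distinction being drawn here can be sketched in a few lines of Python (a toy illustration, not how any real completion engine or LLM is implemented; the names `classic_complete` and `statistical_complete` are made up for this sketch). The classic tool only ever suggests symbols that verifiably exist; the statistical one suggests whatever tends to follow, valid or not:

```python
import random

# Classic completion: deterministic lookup against symbols known to exist.
KNOWN_SYMBOLS = ["print", "printf", "parse_int", "range"]

def classic_complete(prefix):
    """Suggest only identifiers that verifiably exist in scope."""
    return [s for s in KNOWN_SYMBOLS if s.startswith(prefix)]

def statistical_complete(token, corpus_pairs, rng=random.Random(0)):
    """Return a continuation that is statistically plausible given
    observed (token, next_token) pairs, with no check that it's valid."""
    followers = [b for a, b in corpus_pairs if a == token]
    return rng.choice(followers) if followers else None

print(classic_complete("pri"))  # ['print', 'printf']
```

`statistical_complete` will happily return a made-up identifier if that's what the training pairs contain, which is the "fits the pattern, right or wrong" point above.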

[–] JohnEdwa@sopuli.xyz 6 points 10 hours ago (7 children)

"It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'"
-Pamela McCorduck

"AI is whatever hasn't been done yet."
- Larry Tesler

That's the curse of the AI Effect.
Nothing will ever be "an actual AI" until we cross the barrier to an actual human-like general artificial intelligence like Cortana from Halo, and even then people will claim it isn't actually intelligent.

[–] ssfckdt@lemmy.blahaj.zone 4 points 8 hours ago (1 children)

I mean, I think intelligence requires the ability to integrate new information into one's knowledge base. LLMs can't do that; they have to be trained on a fixed corpus.

Also, LLMs have a pretty shit-tastic track record of being able to differentiate correct data from bullshit, which is a pretty essential facet of intelligence IMO

[–] JohnEdwa@sopuli.xyz 5 points 7 hours ago

LLMs have a perfect track record of doing exactly what they were designed to do: take an input and create a plausible output that looks like it was written by a human. They just completely lack the part in the middle that properly understands the input and makes sure the output is factually correct, because if they had that, they wouldn't be LLMs any more, they would be AGI.
The "artificial" in AI also carries the sense of "fake": something that looks and feels intelligent but actually isn't.
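A toy way to see "plausible output with nothing in the middle" is a character-level bigram sampler (my own minimal sketch; real LLMs are vastly larger transformer models, but the spirit of sampling the next token from learned statistics is similar). It produces text that locally resembles its training data while having no representation of meaning or truth at all:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which character follows each character in the training text."""
    table = defaultdict(list)
    for a, b in zip(text, text[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, rng=random.Random(42)):
    """Sample a 'plausible' continuation one character at a time.
    Each step only asks 'what tends to come next?', never 'is this true?'"""
    out = [start]
    while len(out) < length:
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return "".join(out)
```

Every adjacent pair in the output was seen in the training text, so it "looks right" locally, which is all the model optimizes for.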
