ChatGPT has meltdown and starts sending alarming messages to users: AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

[–] EnderMB@lemmy.world 19 points 4 months ago (2 children)

(Disclosure: I work on LLMs)

While you're not wrong, how is this different from many existing techniques and compositional models that are used practically everywhere in tech?

Similarly, it's probably safe to assume that the LLM's prediction isn't the only system in use. There will be lots of auxiliary services giving an orchestrator information to reason with. In that case, if you have a system that is trying to figure out what to say next, with several knowledge stores and feedback services telling it "you were just discussing this" or "you can access the weather from here", is that all that different from "intelligence"?
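To make that concrete, here's a minimal sketch of the pattern being described: an orchestrator that gathers hints from auxiliary services before handing a prompt to the model. Every name in it (the `Orchestrator` class, the stub services, the hint format) is hypothetical, chosen just to show the shape of the composition, not any real system:

```python
from dataclasses import dataclass


@dataclass
class Orchestrator:
    """Gathers hints from auxiliary services before calling the model."""
    services: list  # callables: each maps the user message to a hint string

    def respond(self, user_message: str, llm) -> str:
        # Each service contributes context such as "you were just
        # discussing this" or "you can access the weather from here".
        hints = [svc(user_message) for svc in self.services]
        context = "\n".join(h for h in hints if h)
        prompt = f"{context}\n\nUser: {user_message}"
        return llm(prompt)  # the LLM's prediction is one component among many


# Stub auxiliary services, purely illustrative:
def conversation_memory(msg: str) -> str:
    return "Note: you were just discussing the weather."


def weather_service(msg: str) -> str:
    return "Note: current weather data is available on request."


orchestrator = Orchestrator(services=[conversation_memory, weather_service])
reply = orchestrator.respond("Will it rain later?",
                             llm=lambda p: f"(model answer to: {p!r})")
print(reply)
```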

At a certain point, it's arguing semantics. Are any AI techniques true intelligence? Probably not, but then again, we don't really know what true intelligence is.

[–] Coreidan@lemmy.world 5 points 4 months ago (3 children)

> how is this different from many existing techniques and compositional models that are used practically everywhere in tech?

It's not. An LLM is just a statistical model. Nothing special about it. Nothing different from what we've already been doing for a while. This only validates my statement that we call just about anything "AI" these days.
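The "statistical model" claim can be shown in miniature: next-token prediction is just sampling from a conditional distribution. The hard-coded table below is a toy stand-in (purely illustrative) for the billions of learned parameters in a real LLM:

```python
import random

# Toy conditional distribution P(next token | context), hard-coded purely
# for illustration; a real LLM learns these probabilities from data.
toy_model = {
    "the cat": {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    "cat sat": {"down": 0.7, "still": 0.3},
}


def next_token(context: str) -> str:
    dist = toy_model.get(context, {"<unk>": 1.0})
    return random.choices(list(dist), weights=list(dist.values()))[0]


print(next_token("the cat"))  # e.g. "sat": chosen by probability, nothing more
```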

We don't even know what true intelligence is, yet we are quick to make claims that this is "AI". There is no consciousness here. There is no self-awareness. No emotion. No ability to reason or deduce. Anyone who thinks otherwise is just fooling themselves.

It's a buzzword to get people riled up. It's completely disingenuous.

[–] sailingbythelee@lemmy.world 8 points 4 months ago (1 children)

I think the point of the Turing test is to avoid thorny questions about the definition of intelligence. We can't precisely define intelligence, but we know that normally functioning humans are intelligent. Therefore, if we talk to a computer and it is indistinguishable from a human in conversation, then it is intelligent by definition.

[–] DragonTypeWyvern@literature.cafe 1 points 4 months ago

It's more that if you don't treat it as a person, just in case, you risk committing a great evil out of arrogance.

[–] EnderMB@lemmy.world 4 points 4 months ago

So, by your definition, no AI is AI, and we don't know what AI is, since we don't know what the I is?

While I hate that "AI" is just a buzzword for scam artists and tech influencers nowadays, dismissing the term entirely seems like overkill, especially when academics and scholars don't seem particularly bothered by it.

[–] QuaternionsRock@lemmy.world 3 points 4 months ago

> There is no consciousness here. There is no self-awareness. No emotion. No ability to reason or deduce.

Of all of these qualities, only the last one, the ability to reason or deduce, is a widely accepted prerequisite for intelligence.

I would also argue that contemporary LLMs demonstrate the ability to reason by correctly deriving mathematical proofs that do not appear in their training data. How would you accomplish such a feat without some degree of reasoning?

[–] fidodo@lemmy.world 3 points 4 months ago

The worrisome thing is that LLMs are being given control over more and more actions. With traditional programming, sure, there are bugs, but at least they're consistent. The context may make a bug hard to track down, but at the end of the day the code is interpreted by the processor exactly as it was written. LLMs can go haywire for reasons that are impossible to diagnose. Deploying them safely in utilities where they control external systems will require a lot of extra non-LLM safeguards, and I don't see those being added nearly enough, which is concerning.
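As a sketch of what such a safeguard could look like: the model proposes an action, but plain deterministic code validates it against an allowlist and hard limits before anything external happens. All the names below (`ALLOWED_ACTIONS`, `execute_safely`, the actions themselves) are made up for illustration:

```python
# The guard is ordinary, deterministic code, so it behaves consistently
# even when the model that proposed the action does not.

ALLOWED_ACTIONS = {"set_thermostat", "send_report"}
SAFE_RANGES = {"set_thermostat": (10.0, 30.0)}  # degrees Celsius


def execute_safely(action: str, value: float) -> str:
    """Validate a model-proposed action before touching any external system."""
    if action not in ALLOWED_ACTIONS:
        return f"blocked: {action!r} is not an approved action"
    low, high = SAFE_RANGES.get(action, (float("-inf"), float("inf")))
    if not low <= value <= high:
        return f"blocked: {value} is outside the safe range [{low}, {high}]"
    return f"executed: {action}({value})"  # the real external call would go here


print(execute_safely("set_thermostat", 55))  # blocked: outside the safe range
print(execute_safely("open_airlock", 1))     # blocked: not an approved action
print(execute_safely("set_thermostat", 21))  # executed
```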