[–] BitSound@lemmy.world 0 points 11 months ago (1 children)

Your concept of a chair is an abstract thought representation of a chair. An LLM has vectors that combine or decompose in some way to turn into the word “chair,” but are not a concept of a chair or an abstract representation of a chair. It is simply vectors and weights, unrelated to anything that actually exists.

Just so incredibly wrong. Fortunately, I can save myself the time of arguing with such a misunderstanding. GPT-4 is here to help:

This reads like a misunderstanding of how LLMs (like GPT) work. Saying an LLM's understanding is "simply vectors and weights" is like saying our brain's understanding is just "neurons and synapses". Both systems are trying to capture patterns in data. The LLM does have a representation of a chair, but it's in its own encoded form, much like our neurons have encoded representations of concepts. Oversimplifying and saying it's unrelated to anything that actually exists misses the point of how pattern recognition and information encoding works in both machines and humans.
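For concreteness, here is roughly what the "vectors" both sides are arguing about look like. A minimal sketch in Python using numpy; the four-dimensional vectors below are invented purely for illustration (real models use hundreds or thousands of dimensions), but the idea is the same: relatedness between words becomes geometry.

```python
import numpy as np

# Toy embeddings, invented for illustration only. In a real model
# these are learned from co-occurrence patterns in text.
vectors = {
    "chair": np.array([0.9, 0.1, 0.3, 0.0]),
    "stool": np.array([0.8, 0.2, 0.4, 0.1]),
    "galaxy": np.array([0.0, 0.9, 0.0, 0.8]),
}

def cosine(a, b):
    # Cosine similarity: close to 1.0 when two vectors point the
    # same way in embedding space, near 0.0 for unrelated directions.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["chair"], vectors["stool"]))   # high: related concepts
print(cosine(vectors["chair"], vectors["galaxy"]))  # low: unrelated concepts
```

Whether that geometric structure deserves the word "concept" is exactly what the two commenters disagree about.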

[–] Veraticus@lib.lgbt 0 points 11 months ago (1 children)

Are you kidding me? I quoted GPT-4 itself disagreeing with you about whether it's intelligent, and you told me it was lying. And here you are, using it to try to reinforce your point. Are you for real, or is this some kind of complicated game?

[–] BitSound@lemmy.world 1 points 11 months ago (1 children)
[–] Veraticus@lib.lgbt -1 points 11 months ago* (last edited 11 months ago) (2 children)

Here, let's ask GPT-4 itself, since you've decided it's suddenly an okay source:

Your statement is correct in asserting that the vector representation in a language model is not an abstract representation. It's purely a mathematical construct. However, saying it's "unrelated to anything that actually exists" might be an overstatement. These vectors do capture statistical patterns in human language, which are reflections of human thought and culture. They're just not capable of the deep, nuanced understanding that comes from human experience.

I accept it's an overstatement. But it is neither "incredibly wrong," nor is it thought. (Or intelligence.)
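GPT-4's middle position above, that the vectors "do capture statistical patterns in human language," is easy to demonstrate with classic word embeddings. A sketch assuming gensim is installed and can fetch one of its standard pretrained GloVe downloads (the model name below is one of gensim's stock downloads; it pulls the vectors on first run):

```python
# Assumes: pip install gensim; downloads pretrained vectors on first run.
import gensim.downloader

glove = gensim.downloader.load("glove-wiki-gigaword-50")

# Nearest neighbors of "chair" reflect how the word is actually
# used across the training corpus.
print(glove.most_similar("chair", topn=5))

# The classic analogy: king - man + woman lands near "queen",
# a relationship nobody hand-coded into the vectors.
print(glove.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```

None of this settles whether the patterns amount to understanding; it only shows they are real statistical structure rather than noise, which is the narrow point both GPT-4 answers seem to agree on.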

[–] BitSound@lemmy.world 1 points 11 months ago (1 children)

So you admit that you were wrong?

[–] Veraticus@lib.lgbt -1 points 11 months ago

I was in this case -- but the overall point I made is still correct. If winning this minor battle is what you were seeking, congratulations. You are no closer to understanding the truth of this or what we were actually talking about. Not that that was either your point or within your capabilities.

[–] SirGolan@lemmy.sdf.org 1 points 11 months ago (1 children)

I'd just like to step in here and mention that asking an LLM is probably not a good proof (and this is directed at both of you). Its understanding of AI is from before it was trained, so it is wildly out of date at this point given how much has happened in the space since.

[–] Veraticus@lib.lgbt 1 points 11 months ago (1 children)

GPT-4 has knowledge of its own training, since it was trained in 2022.

[–] SirGolan@lemmy.sdf.org 1 points 11 months ago* (last edited 11 months ago) (1 children)

Care to provide some proof of that? They did update its system prompt to include a few things, like the fact that it's now GPT-4 (it used to always say GPT-3). Other than that, I don't think it knows anything about its own training. But in general, I was talking more about developments in AI since it was trained, which it certainly does not know about.

Edit: Hmm, I just reviewed our discussion, and I note that you only provided one link, which was to the psychological definition of intelligence. Otherwise you are providing no sources to back up your claims, while my responses are full of them. Please start backing up your assertions, or provide some evidence that you are an expert in the field.

[–] Veraticus@lib.lgbt 1 points 11 months ago (1 children)
[–] SirGolan@lemmy.sdf.org 1 points 11 months ago (1 children)

I'm aware of that date.

The OpenAI GPT-4 video literally states that GPT-4 finished training in August 2022.

Either way, to clarify and reiterate: you're refuting a different point than the one I made. I said:

Its understanding of AI is from before it was trained, so it is wildly out of date at this point given how much has happened in the space since.

I'm not talking about whether it knows about its own training (I doubt that it does). I'm talking about it knowing about what's happened in the broader AI landscape since.
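For what it's worth, anyone can probe this directly instead of trading quotes. A minimal sketch assuming the openai Python package (the v1-style client) and an OPENAI_API_KEY set in the environment; whatever comes back reflects the system prompt plus training data, which is exactly why the model can't speak to AI developments after its cutoff:

```python
# Assumes: pip install openai (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "What is your training cutoff, and what do you know "
                   "about AI developments after that date?",
    }],
)

# The answer comes from the system prompt plus training data,
# not from live knowledge of the field.
print(resp.choices[0].message.content)
```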

[–] Veraticus@lib.lgbt 1 points 11 months ago (1 children)

I mean, your argument is still basically that it's thinking inside there; everything I've said is germane to that point, including what GPT-4 itself has said.

[–] SirGolan@lemmy.sdf.org 1 points 11 months ago

I mean, your argument is still basically that it's thinking inside there; everything I've said is germane to that point, including what GPT-4 itself has said.

My argument?

That doesn’t mean they’re having thoughts in there I mean. Not in the way we do, and not with any agency, but I hadn’t argued either way on thoughts because I don’t know the answer to that.

Are you assuming I’m saying that LLMs are sentient, conscious, have thoughts or similar? I’m not. Jury’s out on the thought thing, but I certainly don’t believe the other two things.

I'm not saying it's thinking or has thoughts. I'm saying I don't know the answer to that, but if it is, it definitely isn't anything like human thoughts.