this post was submitted on 29 Jul 2023
197 points (100.0% liked)

Technology

[–] conciselyverbose@kbin.social 6 points 1 year ago (3 children)

ChatGPT will never understand. LLMs have no capacity to do so.

To understand you need underlying models of real world truth to build your word salad on top of. LLMs have none of that.

[–] Mr_Will@feddit.uk 5 points 1 year ago (3 children)

What are your underlying models of the world built out of? Because I'm human, and mine are primarily built out of words.

How do you draw a line between knowing and understanding? Does a dog understand the commands it's been trained to obey?

[–] Parodper@foros.fediverso.gal 5 points 1 year ago

Your underlying model is not made out of words, but out of concepts. You can have multiple words that all map to the same concept, e.g. cosmos, universe, space. Or a single word that maps to different concepts.

[–] conciselyverbose@kbin.social 3 points 1 year ago

No, they aren't. You represent them with words. But you sure as hell aren't responding to someone throwing you a football by using words to figure out where it's going.

No, a dog (while many times more intelligent than ChatGPT) doesn't understand anything.

[–] Phroon@beehaw.org 1 points 1 year ago (1 children)

What are your underlying models of the world built out of?

As a Bayesian, my models of the world are built on priors. That is, assumptions I’ve made based on my existing information. From that, I make an educated guess about the world with that model and see what the world does. If my guess doesn’t match reality, I update my assumptions to rebuild my model and repeat the process until it’s close enough.

This is the way the best science is done, and I feel it's the way humans really work. Language is just a type of model we use to communicate the world to others; each of us may have a slightly different Bayesian understanding of the language, yet we can still communicate.
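The prior → guess → update loop described above can be sketched as a conjugate Beta-Bernoulli update. The coin and the observed flips here are illustrative assumptions, not something from the thread:

```python
# Minimal sketch of the "assume, guess, compare, update" loop.

def beta_update(alpha, beta, flips):
    """Conjugate Bayesian update of a Beta(alpha, beta) prior on a coin's
    heads-probability, given observed flips (1 = heads, 0 = tails)."""
    heads = sum(flips)
    tails = len(flips) - heads
    return alpha + heads, beta + tails

# Start from a uniform prior, Beta(1, 1): "no idea what the bias is".
alpha, beta = 1, 1

# Observe the world: 8 heads in 10 flips.
alpha, beta = beta_update(alpha, beta, [1, 1, 1, 0, 1, 1, 0, 1, 1, 1])

# The posterior mean is the updated educated guess about the bias.
posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # (1 + 8) / (12) = 0.75
```

Each new batch of observations can be fed through the same function again, which is exactly the "repeat until it's close enough" step.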

[–] probably@beehaw.org 4 points 1 year ago (2 children)

Studies have shown we typically use pattern matching for our choices, not statistics. One such experiment had humans watch two light bulbs (I think one was red and one was green). One light would turn on at a time, and the subjects were shown a record of what had happened so far. Then they were asked to guess which light would turn on next for n steps. The same experiment was done with rats. Humans were rewarded with money for correct choices, and rats with food. Here's the thing: one light (let's say red) would light up with 70% probability and the other with 30%, but the sequence was randomized.

The optimal solution is to always pick red. Every time. But humans pick a pattern. Rats pick red. Humans consistently do worse than rats. So while we are using a form of updating, it certainly isn't proper Bayesian updating. And just because you think we function some way doesn't make it true. It will forever be difficult to describe any AI as conscious, because we have rather arbitrarily defined consciousness to fit ourselves. We can't truly say what it is, nor can we say why we function how we do, or whether we are all in a simulation or just a Boltzmann brain.
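The arithmetic behind "rats beat humans" in that experiment: always picking the 70% light is right 70% of the time, while matching the 70/30 frequencies is right only 58% of the time. A quick sketch, using the probabilities from the comment above:

```python
# Expected accuracy of the two strategies in the 70/30 light experiment.

p_red = 0.7          # red lights up 70% of the time, independently each trial
p_green = 1 - p_red  # green lights up the remaining 30%

# Rat-like maximizing: always guess red; correct exactly when red occurs.
maximizer_accuracy = p_red

# Human-like probability matching: guess red 70% of the time and green 30%,
# independently of what actually happens on that trial.
matcher_accuracy = p_red * p_red + p_green * p_green

print(maximizer_accuracy)  # 0.7
print(matcher_accuracy)    # 0.7*0.7 + 0.3*0.3 = 0.58
```

The 12-point gap is why probability matching, despite feeling like pattern detection, loses to the boring constant strategy.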

Honestly, what concerns me most about AI is that it could become sentient, but we would not know whether it truly is or is just cleverly programmed, so we would treat it only as a tool. While I don't think AI is inherently dangerous, becoming the slave owner of something that could be much more powerful than us probably is. And given its lack of chemical hormones, we will have even less understanding of what or how it feels.

[–] Ferk@kbin.social 1 points 1 year ago* (last edited 1 year ago)

It could still be Bayesian reasoning, but a much more complex kind, underlaid by a lot of preconceptions (which could themselves have been acquired in a Bayesian way).

Even if the result is random, a highly pre-trained Bayesian network that has seen many puzzles or tests before that do follow non-random patterns might expect a non-random pattern here too... so those people might have learned not to expect true randomness, since most things aren't random.

[–] Phroon@beehaw.org 1 points 1 year ago

All very fair points. It’s all wildly complicated, and I agree; we don’t really understand ourselves.

[–] Serdan@lemm.ee 3 points 1 year ago (1 children)

https://thegradient.pub/othello/

LLMs are neural networks and are absolutely capable of understanding.
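The linked article's evidence is "probing": if a small linear classifier can read a property (such as a board square's state) out of the network's hidden activations, that property is encoded in them. A minimal sketch of the methodology, using synthetic stand-in activations rather than a real model's hidden states:

```python
# Hedged sketch of a linear probe. The "activations" are fabricated:
# random noise plus one direction that encodes a binary semantic label,
# standing in for real hidden states from a trained network.
import random

random.seed(0)
DIM = 8

def fake_activation(label):
    """Stand-in for a hidden state: Gaussian noise plus a direction
    (coordinate 3) that encodes the binary 'semantic' label."""
    v = [random.gauss(0, 1) for _ in range(DIM)]
    v[3] += 4.0 if label else -4.0
    return v

labels = [random.randint(0, 1) for _ in range(200)]
data = [(fake_activation(y), y) for y in labels]
train, test = data[:150], data[150:]

# The probe: a plain perceptron trained on (activation, label) pairs.
w = [0.0] * DIM
for _ in range(10):
    for x, y in train:
        pred = sum(wi * xi for wi, xi in zip(w, x)) > 0
        if pred != y:                      # mistake-driven update
            sign = 1 if y else -1
            w = [wi + sign * xi for wi, xi in zip(w, x)]

accuracy = sum((sum(wi * xi for wi, xi in zip(w, x)) > 0) == y
               for x, y in test) / len(test)
print(accuracy)  # high held-out accuracy => the label is linearly decodable
```

In the Othello-GPT work, high probe accuracy on held-out games is the evidence that a board representation, not just surface statistics, lives in the activations.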

[–] conciselyverbose@kbin.social 7 points 1 year ago (1 children)

LLMs are criminally simplified neural networks, at minimum thousands of orders of magnitude less complex than a brain. Nothing we do with current neural networks resembles intelligence.

Nothing they do is close to understanding. The fact that you can train one exclusively on the rules of a simple game and get it to eventually infer a basic rule set doesn't imply anything like comprehension. It's simplistic pattern matching.

[–] Serdan@lemm.ee 1 points 1 year ago (1 children)

Does AlphaGo understand go? How about AlphaStar?

When I say LLMs can understand things, what I mean is that there's semantic information encoded in the network. A demonstrable fact.

You can disagree with that definition, but the point is that it's absolutely not just autocomplete.

[–] conciselyverbose@kbin.social 1 points 1 year ago (1 children)

No, and that definition has nothing in common with what the word means.

Autocorrect has plenty of information encoded as artifacts of how it works. ChatGPT isn't like autocorrect. It is autocorrect, and doesn't do anything more.

[–] Serdan@lemm.ee 1 points 1 year ago (1 children)

It's fine if you think so, but then it's a pointless argument over definitions.

You can't have a conversation with autocomplete. It's qualitatively different. There's a reason we didn't have this kind of code generation before LLMs.

Adversus solem ne loquitor. ("Do not speak against the sun" — don't argue against the obvious.)

[–] conciselyverbose@kbin.social 1 points 1 year ago (1 children)

If you just keep taking the guessed next word from autocomplete you also get a bunch of words shaped like a conversation.
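Greedy next-word guessing from a simple frequency model really does produce conversation-shaped loops. A toy sketch of that failure mode, with an invented mini-corpus (any bigram model decoded greedily behaves this way):

```python
# Greedily taking the single most likely next word from a bigram
# "autocomplete" model quickly falls into a repeating cycle.
from collections import Counter, defaultdict

corpus = ("the oppressed classes and the object of duping "
          "the oppressed classes and the object of power").split()

# Count bigram frequencies: nxt[word] -> Counter of successors.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

# Greedy decoding: always pick the most frequent successor.
word, out = "the", ["the"]
for _ in range(12):
    word = nxt[word].most_common(1)[0][0]
    out.append(word)

print(" ".join(out))
# the oppressed classes and the oppressed classes and the oppressed classes and the
```

Once the walk enters a high-frequency cycle there is nothing in greedy decoding to break it, which is exactly the repetitive output being mocked in the reply.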

[–] Serdan@lemm.ee 1 points 1 year ago

I am not sure of the relevance of the oppressed classes and with the object of duping the latter is the cravings of the oppressed classes and with the object of duping the latter

Yeah, totally. Repeating the same nonsensical sentence over and over is also how I converse. 🙄

[–] Ferk@kbin.social 1 points 1 year ago* (last edited 1 year ago)

Note that "real world truth" is something you can never accurately map with just your senses.

No model of the "real world" is accurate, and not everyone maps the "real world truth" they personally experience through their senses in the same way... or even necessarily in a way that's truly "correct", since the senses are often deceiving.

A person who is blind experiences the "real world truth" by mapping it to a different set of models than someone who has additional visual information to mix into that model.

However, that doesn't mean that the blind person can "never understand" the "real world truth"... it just means that the extent to which they experience that truth is different, since they need to rely on other senses to form their model.

Of course, the more different the senses and experiences of two intelligent beings, the harder it will be for them to communicate in a way they can truly empathize with. At the end of the day, when we say we "understand" someone, what we mean is that we have found enough evidence to believe that some aspects of our models are similar enough. It doesn't mean that what we modeled is truly accurate, nor that if we didn't understand them our model (or theirs) is somehow invalid. Sometimes two people are technically referring to the same "real world truth" and simply don't understand each other because they focus on different aspects or perceptions of it.

Someone (or something) not understanding an idea you hold doesn't mean that they (or you) aren't intelligent. It just means you both perceive/model reality in different ways.