this post was submitted on 27 Dec 2024
324 points (94.8% liked)

[–] Free_Opinions@feddit.uk 42 points 1 day ago (4 children)

We've had a definition for AGI for decades. It's a system that can do any cognitive task as well as a human can, or better. Humans are "Generally Intelligent"; replicate the same thing artificially and you've got AGI.

[–] IndustryStandard@lemmy.world 0 points 1 hour ago

Any or every task?

[–] LifeInMultipleChoice@lemmy.ml 15 points 1 day ago (2 children)

So if you give a human and a system 10 tasks, and the human does 3 correctly, 5 incorrectly, and fails to complete 2 altogether... and then you give those same 10 tasks to the software and it does 9 correctly and fails to complete 1, what does that mean? In general I'd say the tasks need to be defined, because I can give people plenty of tasks right now that language models can solve and they can't, but language models still aren't "AGI" in my opinion.
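To make the comparison concrete, here's a throwaway sketch of the tally being described (the task outcomes are made up for illustration, not from any real benchmark):

```python
# Hypothetical tally: the same 10 tasks given to a human and to a model,
# each attempt marked "correct", "incorrect", or "failed" (not completed).
from collections import Counter

human_results = ["correct"] * 3 + ["incorrect"] * 5 + ["failed"] * 2
model_results = ["correct"] * 9 + ["failed"] * 1

def summarize(name, results):
    counts = Counter(results)
    accuracy = counts["correct"] / len(results)
    print(f"{name}: {dict(counts)} -> accuracy {accuracy:.0%}")

summarize("human", human_results)
summarize("model", model_results)
```

The raw scores say nothing by themselves: the result depends entirely on which 10 tasks were chosen, which is exactly the problem.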

[–] Don_alForno@feddit.org 3 points 4 hours ago (1 children)

Any cognitive task. Not "9 out of the 10 you were able to think of right now".

[–] notfromhere@lemmy.ml 2 points 46 minutes ago

"Any" is very hard to benchmark, and it's also not how humans are tested.

[–] hendrik@palaver.p3x.de 7 points 23 hours ago (1 children)

Agree. And these tasks can't be tailored to the AI just so it has a chance. It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner. Or at least do something comparable to what a human does. Just drafting emails and writing boilerplate code isn't enough in my eyes, especially since it even struggles to do that. It's the "general" that is missing.

[–] Free_Opinions@feddit.uk 4 points 9 hours ago (1 children)

It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner.

This is more about robotics than AGI. A system can be generally intelligent without having a physical body.

[–] hendrik@palaver.p3x.de 1 points 3 hours ago* (last edited 2 hours ago)

You're - of course - right. Though I'm always a bit unsure about exactly that. We don't attribute intelligence to books either. Take an encyclopedia, or Wikipedia: it has a lot of knowledge stored, yet it is not intelligent. That makes me believe being intelligent has something to do with being able to apply knowledge and do something with it. And outputting text is just one very limited form of interacting with the world.

And since we're using humans as a benchmark for the "general" part in AGI... Humans have several senses, they're able to interact with their environment in lots of ways, and 90% of that isn't drawing and communicating with words. That makes me wonder: Where exactly is the boundary between an encyclopedia and an intelligent entity... Is intelligence a useful metric if we exclude being able to do anything useful with it? And how much do we exclude by not factoring in parts of the environment/world?

And is there a difference between being book-smart and intelligent? Because LLMs certainly get all of their information second-hand and filtered in some way. They can't really see the world itself, smell it, touch it and manipulate something and observe the consequences... They only get a textual description of what someone did and put into words in some book or text on the internet. Is that a minor or major limitation, and do we know for sure this doesn't matter?

(Plus, I think we need to get "hallucinations" under control. That's also not 100% "intelligence", but it also cuts into actual use if that intelligence isn't reliably there.)

[–] zeca@lemmy.eco.br 7 points 22 hours ago* (last edited 22 hours ago) (3 children)

It's a definition, but not an effective one, in the sense that it doesn't give us a way to test for and recognize AGI. Can we list all cognitive tasks a human can do? To avoid testing a probably infinite list, we would instead have to understand which basic cognitive abilities of humans compose all the other cognitive abilities we have, if that's even possible. Something like the equivalent of a Turing machine, but for human cognition. The Turing machine is based on a finite list of mechanisms and is considered the ultimate computer (in the classical sense of computing, but with potentially infinite memory). But we know too little about whether the limits of the Turing machine are also limits of human cognition.

[–] barsoap@lemm.ee 1 points 1 hour ago* (last edited 1 hour ago)

But we know too little about whether the limits of the Turing machine are also limits of human cognition.

Erm, no. Humans can manually step interpreters of Turing-complete languages, so we're Turing-complete ourselves. There is no more powerful class of computation: we can compute any computable function, and our silicon computers can do it as well (given infinite time and scratch space, yadayada theoretical wibbles).

The question isn't "whether"; the answer to that is "yes, of course". The question is first and foremost "what", and then "how", as in "is it fast and efficient enough".
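For what it's worth, "manually stepping an interpreter" really just means applying a finite rule table by hand. Here's a minimal sketch (a made-up two-state machine, nothing anyone in the thread proposed) of the kind of loop a person could execute with pencil and paper:

```python
# A tiny Turing-machine-style interpreter: a finite rule table, a tape, and
# a head. Each step is one table lookup plus a write and a move - exactly
# the kind of bookkeeping a human could do by hand, one step at a time.

# (state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

tape = {}            # sparse tape; unwritten cells read as 0
head, state = 0, "A"

while state != "HALT":
    symbol = tape.get(head, 0)
    write, move, state = rules[(state, symbol)]
    tape[head] = write
    head += move

print(sorted(tape.items()))  # final tape contents after halting
```

Anything a silicon computer can compute can, in principle (with unbounded patience and paper), be computed this way too - which is the sense in which human cognition is at least Turing-complete.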

[–] Free_Opinions@feddit.uk 1 points 9 hours ago* (last edited 9 hours ago)

As with many things, it’s hard to pinpoint the exact moment when narrow AI or pre-AGI transitions into true AGI. However, the definition is clear enough that we can confidently look at something like ChatGPT and say it’s not AGI - nor is it anywhere close. There’s likely a gray area between narrow AI and true AGI where it’s difficult to judge whether what we have qualifies, but once we truly reach AGI, I think it will be undeniable.

I doubt it will remain at "human level" for long. Even if it were no more intelligent than humans, it would still process information millions of times faster, possess near-infinite memory, and have access to all existing information. A system like this would almost certainly be so obviously superintelligent that there would be no question about whether it qualifies as AGI.

I think this is similar to the discussion about when a fetus becomes a person. It may not be possible to pinpoint a specific moment, but we can still look at an embryo and confidently say that it’s not a person, just as we can look at a newborn baby and say that it definitely is. In this analogy, the embryo is ChatGPT, and the baby is AGI.

I wonder if we'll get something like NP-complete for AGI, as in a set of problems that humans can solve, or that common problems can be reduced/converted to.

[–] ipkpjersi@lemmy.ml -1 points 19 hours ago* (last edited 19 hours ago) (2 children)

That's kind of too broad, though. It's too generic a description.

[–] Entropywins@lemmy.world 9 points 19 hours ago

The key word here is "general", friend. We can't define "general" any more narrowly, or it would no longer be general.

[–] CheeseNoodle@lemmy.world 6 points 17 hours ago

That's the idea: humans can adapt to a broad range of tasks, and so should AGI. Proof of a lack of specialization, as it were.