this post was submitted on 27 Dec 2024
282 points (95.2% liked)

Technology

top 50 comments
[–] FlyingSquid@lemmy.world 11 points 3 hours ago* (last edited 3 hours ago)

"It's at a human-level equivalent of intelligence when it makes enough profits" is certainly an interesting definition and, in the case of the C-suiters, possibly not entirely wrong.

[–] Free_Opinions@feddit.uk 26 points 9 hours ago (3 children)

We've had a definition of AGI for decades: a system that can do any cognitive task as well as a human, or better. Humans are "generally intelligent"; replicate the same thing artificially and you've got AGI.

[–] zeca@lemmy.eco.br 6 points 6 hours ago* (last edited 6 hours ago) (1 children)

It's a definition, but not an effective one in the sense that we can test for it and recognize it. Can we list all the cognitive tasks a human can do? To avoid testing a probably infinite list, we'd instead need to understand which basic cognitive abilities of humans compose all the other cognitive abilities we have, if that's even possible. Like the equivalent of a Turing machine, but for human cognition. The Turing machine is based on a finite list of mechanisms and is considered the ultimate computer (in the classical sense of computing, but with potentially infinite memory). But we know too little about whether the limits of the Turing machine are also the limits of human cognition.

I wonder if we'll get something like NP-complete for AGI: a set of problems that humans can solve, and that common problems can be simplified down to or converted into.
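To make that "finite list of mechanisms" concrete, here's a minimal Turing machine simulator sketched in Python. The machine itself (a bit-inverter) and the transition-table encoding are made up for illustration; the point is just how small the mechanism list is:

```python
# Minimal Turing machine: (state, symbol) -> (write, move, next_state).
# This example machine inverts every bit, halting at the first blank.
def run_tm(table, tape, state="s", blank="_"):
    tape = list(tape) + [blank]
    pos = 0
    while state != "halt":
        write, move, state = table[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

INVERT = {
    ("s", "0"): ("1", "R", "s"),
    ("s", "1"): ("0", "R", "s"),
    ("s", "_"): ("_", "R", "halt"),
}

print(run_tm(INVERT, "0110"))  # -> 1001
```

Everything a classical computer can do reduces to a table like `INVERT` plus a tape; the open question in the comment above is whether human cognition reduces to anything comparably small.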

[–] LifeInMultipleChoice@lemmy.ml 8 points 8 hours ago (1 children)

So if you give a human and a system 10 tasks, and the human completes 3 correctly, 5 incorrectly, and fails to complete 3 altogether... and then you give those 10 tasks to the software and it does 9 correctly and fails to complete 1, what does that mean? In general I'd say the tasks need to be defined, as right now I can give people very many tasks that language models can solve and they can't, but language models still aren't "AGI" in my opinion.

[–] hendrik@palaver.p3x.de 5 points 7 hours ago

Agree. And these tasks can't be tailored to the AI just so it has a chance. It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary, and return with some groceries and cook dinner. Or at least do something comparable to a human. Just wording emails and writing boilerplate code isn't enough in my eyes, especially since it even struggles to do that. It's the "general" that is missing.

[–] ipkpjersi@lemmy.ml 0 points 3 hours ago* (last edited 3 hours ago) (2 children)

That's kind of too broad, though. It's too generic of a description.

[–] CheeseNoodle@lemmy.world 2 points 1 hour ago

That's the idea: humans can adapt to a broad range of tasks, so AGI should too. Proof of lack of specialization, as it were.

[–] Entropywins@lemmy.world 5 points 3 hours ago

The key word here is general, friend. We can't define general any more narrowly, or it would no longer be general.

[–] Mikina@programming.dev 140 points 14 hours ago (51 children)

Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before, and it can't even get its facts straight without bullshitting.
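The "guess the next most likely token" loop being described can be sketched in a few lines. This is a toy bigram model, not how any production LLM works (the corpus and function names are invented for illustration): it simply picks whichever token most often followed the previous one in its training text.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # Count how often each token follows each other token.
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    # Greedy decoding: return the single most likely successor.
    return counts[token].most_common(1)[0][0]

counts = train_bigrams("the cat sat on the mat the cat ran")
print(predict_next(counts, "the"))  # -> cat
```

Real LLMs replace the count table with a neural network conditioned on the whole context, but the decoding loop (predict a distribution, emit a token, repeat) is the same shape.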

If we ever get it, it won't be through LLMs.

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.

[–] GamingChairModel@lemmy.world 14 points 4 hours ago

I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.

They did! Here's a paper that proves basically that:

van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5

Basically, it formalizes a proof that training any black-box algorithm on a finite universe of human outputs to prompts, such that it can take in any finite input and produce an output that seems plausibly human-like, is an NP-hard problem. And NP-hard problems at that scale are intractable: they can't be solved with the resources available in the universe, even with perfect/idealized algorithms that haven't yet been invented.

This isn't a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.

[–] rottingleaf@lemmy.world 4 points 4 hours ago* (last edited 3 hours ago)

I mean, human intelligence is ultimately also "just" something.

And 10 years ago people would often refer to "Turing test" and imitation games in the sense of what is artificial intelligence and what is not.

My complaint about what's now called AI is that it's as similar to intelligence as skin cells grown in the shape of a d*ck are to a real d*ck with all its complexity. Or as a full-size toy building is similar to a real building.

But I disagree that this technology will not be present in a real AGI if it's achieved. I think that it will be.

[–] technocrit@lemmy.dbzer0.com 4 points 6 hours ago

It's impossible to disprove statements that are inherently unscientific.

[–] suy@programming.dev 7 points 7 hours ago

Lol. We're as far away from getting to AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before, and it can't even get its facts straight without bullshitting.

This is correct, and I don't think many serious people disagree with it.

If we ever get it, it won’t be through LLMs.

Well... it depends. LLMs alone, no. But the researchers working on the ARC-AGI challenge are using LLMs as a basis. The entry that won this year is open source (all eligible entries are, since they need to run on the private data set) and was based on Mixtral. The "trick" is that they do more than that: all the attempts do extra compute at test time, so they can try to go beyond what their training data allows them to do. The key to generality is trying to learn after you've been trained, to solve something you weren't prepared for.

Even OpenAI's o1 and o3 do that, and so does the model Google released recently. They still rely heavily on an LLM, but they do more.

I hope someone will finally mathematically prove that it’s impossible with current algorithms, so we can finally be done with this bullshiting.

I'm not sure whether it's already proven or even provable, but I think this is generally agreed: deep learning alone can fit a very complex curve/manifold/etc., but nothing more; it can't go beyond what it was trained on. But the approaches aimed at generalizing all seem to do more than that: search, program synthesis, or whatever.
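A toy sketch of what "program synthesis at test time" can mean (the primitive set, examples, and names here are invented for illustration, and are far simpler than anything the ARC entries actually do): enumerate small compositions of primitives until one reproduces the given input/output pairs, then apply the found program to a new input.

```python
from itertools import product

# A tiny DSL of list-to-list primitives (chosen arbitrarily for the demo).
PRIMITIVES = {
    "reverse": lambda xs: xs[::-1],
    "sort": sorted,
    "inc": lambda xs: [x + 1 for x in xs],
}

def synthesize(examples, max_depth=2):
    # Enumerate compositions of primitives, shortest first, and return
    # the first program consistent with all given examples.
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(xs, names=names):
                for name in names:
                    xs = PRIMITIVES[name](xs)
                return xs
            if all(program(inp) == out for inp, out in examples):
                return names, program
    return None

examples = [([3, 1, 2], [2, 3, 4]), ([5, 4], [5, 6])]
names, program = synthesize(examples)
print(program([2, 9, 7]))  # -> [3, 8, 10]
```

The searching happens at test time, on the specific problem instance: nothing was "trained" on these examples, which is the distinction the comment above draws against pure curve-fitting.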

[–] TheFriar@lemm.ee 12 points 9 hours ago (1 children)

The only text predictor I want in my life is T9

[–] Edgarallenpwn@midwest.social 3 points 8 hours ago (1 children)

I still have fun memories of typing "going" in T9. Idk why, but 46464 was just fun to hit.

[–] BreadstickNinja@lemmy.world 3 points 7 hours ago

I remember that the keys for "good," "gone," and "home" were all the same, but I had the muscle memory to cycle through to the right one without even looking at the screen. Could type a text one-handed while driving without looking at the screen. Not possible on a smartphone!
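For the curious, that overlap is easy to check: on a standard phone keypad, "good", "gone", and "home" all map to the same key sequence, 4663. A quick sketch (the keypad table is written out by hand, function names are made up):

```python
# Standard phone keypad layout: each key covers a run of letters.
T9_KEYS = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
           "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
LETTER_TO_KEY = {ch: key for key, letters in T9_KEYS.items() for ch in letters}

def word_to_keys(word):
    # One keypress per letter, e.g. "good" -> "4663".
    return "".join(LETTER_TO_KEY[ch] for ch in word.lower())

print(word_to_keys("going"))                                 # -> 46464
print({word_to_keys(w) for w in ("good", "gone", "home")})   # -> {'4663'}
```

The set collapsing to a single sequence is exactly why T9 needed that cycle button: the keypad is a lossy encoding, and the dictionary has to disambiguate.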

[–] SlopppyEngineer@lemmy.world 29 points 12 hours ago

There are already a few papers about diminishing returns in LLMs.

[–] frezik@midwest.social 61 points 14 hours ago (2 children)

We taught sand to do math

And now we're teaching it to dream

All the stupid fucks can think to do with it

Is sell more cars

[–] technocrit@lemmy.dbzer0.com 3 points 6 hours ago (1 children)

I dunno, I don't do math very well when I dream.

[–] frezik@midwest.social 2 points 5 hours ago

Cars, and snake oil, and propaganda

[–] adarza@lemmy.ca 261 points 18 hours ago (24 children)

AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits

nothing to do with actual capabilities... just the ability to make piles and piles of money.

[–] NotSteve_@lemmy.ca 14 points 9 hours ago

That's an Onion level of capitalism
