[-] RecluseRamble@lemmy.dbzer0.com 101 points 3 weeks ago* (last edited 3 weeks ago)

More like:

Computer scientist: We have made a text generator

Everyone: tExT iS iNtElLiGeNcE

[-] jorp@lemmy.world 15 points 3 weeks ago

I get that it's cool to hate on how AI is being shoved in our faces everywhere, and I agree with that sentiment, but the technology is better than you're giving it credit for.

You don't have to diminish the accomplishments of the actual people who studied and built these impressive things in order to point out that businesses are bandwagoning and rushing to market to satisfy investors. Like with most technologies, it's capitalism that's the problem.

LLMs emulate neural structures and have natural language parsing capabilities far beyond anything we've come close to accomplishing before. The prompt hacks alone are an incredibly interesting glimpse at how close these things come to "understanding." They're more like social engineering than any other kind of hack.

[-] AppleTea@lemmy.zip 45 points 3 weeks ago

The trouble with phrases like 'neural structures' and 'language parsing' is that these descriptions still play into the "AI" narrative that's been used to oversell large language models.

Fundamentally, these are statistical weights randomly wired up to other statistical weights, tested and pruned against a huge database. That isn't language parsing, it's still just brute-force calculation. The understanding comes from us, from people assigning linguistic meaning to patterns in binary.

[-] CompassRed@discuss.tchncs.de 5 points 3 weeks ago* (last edited 3 weeks ago)

Language parsing is a routine process that doesn't require AI; it's something we have been doing for decades. That phrase in no way plays into the AI hype. Also, the weights may be random initially (though not uniformly random), but the way they are connected and relate to each other is not random, and after training the weights are no longer random at all, so I don't see the point in bringing that up. Finally, machine learning models are not brute-force calculators. If they were, they would take billions of years to respond to even the simplest prompt, because they would have to evaluate every possible response (even the nonsensical ones) before returning the best answer. They're better described as a greedy algorithm than a brute-force one.
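
To make the distinction concrete, here's a rough toy sketch in Python. The vocabulary and the scoring function are completely made up for illustration (a real LLM scores tokens with a trained network, not a rule like this); the point is only that greedy decoding scores a handful of candidates per step, while brute force has to score every possible sequence:

```python
"""Toy contrast between greedy decoding and brute-force search.

The vocabulary and next_token_scores() below are invented for the example;
they stand in for a trained model's next-token probabilities.
"""
from itertools import product

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]
MAX_LEN = 4

def next_token_scores(prefix):
    """Fake 'model': score each candidate next token given the prefix so far."""
    # Arbitrary rule: prefer unseen tokens, and favor ending once the
    # sequence is long enough. Purely illustrative.
    return {
        tok: (2.0 if tok not in prefix else 0.5)
             + (1.0 if tok == "<eos>" and len(prefix) >= 3 else 0.0)
        for tok in VOCAB
    }

def greedy_decode():
    """Pick the single best token at each step: O(MAX_LEN * |VOCAB|) scores."""
    seq = []
    for _ in range(MAX_LEN):
        scores = next_token_scores(seq)
        best = max(scores, key=scores.get)
        seq.append(best)
        if best == "<eos>":
            break
    return seq

def brute_force_decode():
    """Score every possible sequence: O(|VOCAB| ** MAX_LEN) candidates."""
    best_seq, best_score = None, float("-inf")
    for length in range(1, MAX_LEN + 1):
        for candidate in product(VOCAB, repeat=length):
            total, prefix = 0.0, []
            for tok in candidate:
                total += next_token_scores(prefix)[tok]
                prefix.append(tok)
            if total > best_score:
                best_seq, best_score = list(candidate), total
    return best_seq

if __name__ == "__main__":
    print("greedy:     ", greedy_decode())
    print("brute force:", brute_force_decode())
    print("sequences a brute-force search must score:",
          sum(len(VOCAB) ** n for n in range(1, MAX_LEN + 1)))
```

Even with 6 tokens and a max length of 4, brute force already has to score 1,554 sequences; with a real vocabulary of ~50,000 tokens and responses hundreds of tokens long, that number is astronomically large, which is why actual models generate one token at a time instead.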

I'm not going to get into an argument about whether these AIs understand anything, largely because I don't have a strong opinion on the matter, but also because that would require a definition of understanding, which is an unsolved problem in philosophy. You can wax poetic about how humans are the only ones with true understanding and how LLMs are encoded in binary (which relates to your point in some unspecified way); however, your comment reveals how little you know about LLMs, machine learning, computer science, and the relevant philosophy in general. Your understanding of these AIs is just as shallow as that of the people who claim LLMs are intelligent agents with free will, complete with conscious experience - you just happen to land closer to the mark.
