this post was submitted on 12 Oct 2023
33 points (97.1% liked)
Technology
you are viewing a single comment's thread
I kind of regret learning ML sometimes. Being one of the ten people per km² who understand how it works is so annoying. It’s just a fancy mirror, ffs; stop making weird faces at it, you baboons!
The best part is that it's not even that complicated conceptually. You don't need to study it formally to grasp the basic idea and some of its limitations.
Do you really understand how it works? What would you call a neural network with mirror neurons primed to react to certain patterns of stimuli as the network gets trained: a mirror, or a baboon?
Lol nice way to say you don’t understand shit about it :D
I'm afraid that's not correct, but clearly this discussion is over.
ANNs don’t have “mirror neurons” lol
What do you call a neuron "that reacts both when a particular action is performed and when it is only observed"? Current LLMs are made exclusively of mirror neurons, since their output (what they perform) is the same action as their input (what they observe).
I can’t even parse what you mean when you say their input is the same as their output; that would imply they don’t transform their input, which would defeat their purpose. This is nonsense.
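Since the thread hinges on whether an LLM "transforms" its input, a toy sketch may clarify what the generation loop actually does. This is a hypothetical bigram model, nothing like a real transformer, but the loop has the same shape: the model transforms a prefix into a next-token prediction, and the predicted token is appended back onto the input. Inputs and outputs share one vocabulary, yet the mapping from prefix to next token is still a genuine transformation.

```python
# Toy autoregressive "language model": a bigram lookup table.
# Hypothetical illustration only; real LLMs use deep transformers,
# but the sampling loop is structurally the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# "Train": count which token follows each token in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prefix):
    """Transform the prefix into a (greedy) next-token prediction."""
    last = prefix[-1]
    return follows[last].most_common(1)[0][0]

def generate(prompt, n_steps):
    tokens = prompt.split()
    for _ in range(n_steps):
        # The output token lives in the same vocabulary as the input,
        # and is fed back in as input on the next step.
        tokens.append(next_token(tokens))
    return " ".join(tokens)

print(generate("the cat", 3))  # → "the cat sat on the"
```

So "input and output are the same kind of thing" (tokens) and "the model transforms its input" (prefix → prediction) are both true; the disagreement above is mostly about which of the two is being claimed.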