That's certainly one theory, but we have largely run out of fresh training data, so there's not much new material to feed in for refinement. Using AI output to train future AI is just going to amplify the existing problems.
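To make that concrete, here's a minimal toy sketch (my own illustration, not anyone's actual pipeline) of why training on your own output goes wrong: each "generation" fits a simple model to samples drawn from the previous generation's model, and estimation error compounds instead of washing out.

```python
# Toy illustration of model collapse: each generation "trains"
# (fits a Gaussian) on samples produced by the previous generation.
import random
import statistics

random.seed(42)

mu, sigma = 0.0, 1.0  # the real data distribution
n_samples = 200       # training set size per generation

for generation in range(1, 11):
    # Draw a training set from the current model's distribution.
    data = [random.gauss(mu, sigma) for _ in range(n_samples)]
    # Refit the next model's parameters to that data.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")

# Typical output: sigma wanders and tends to shrink across generations,
# i.e. the model's notion of the data narrows and drifts away from the
# original distribution it was meant to capture.
```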
Just generate the training material, duh.
DeepSeek
This is certainly the pattern that is actively emerging.
I mean, the proof is sitting there wearing your clothes. General intelligence exists all around us. If it can exist naturally, we can eventually do it through technology. Maybe there need to be more breakthroughs before it happens.
"more breakthroughs" spoken like we get these once everyday like milk delivery.
I mean - have you followed AI news? This whole thing kicked off maybe three years ago, and now local models can render video and do half-decent reasoning.
None of it's perfect, but a lot of it's fuckin' spooky, and any form of "well it can't do [blank]" has a half-life.
Seen a few YouTube channels now that just churn out AI-generated content. Usually audio-only, with a generated picture on screen. Vast amounts can be made that cheaply; Google is going to have fun storing it all when each video only gets like 25 views. I think at some point they're going to have to delete stuff.
Dipshits going "I made this!" is not indicative of what this makes possible.
I kid you not, I took ML back in 2014 as an extra semester in my undergrad. The complaints then were the same as the complaints now: too much power required, too many false positives. The latter of the two has evolved into hallucinations.
If the stuff normal people go "I made this!" over is so easily identified, then who is this going to replace? You still need the right expert, right? All it creates is more work for experts who have to come in and fix broken AI output.
Despite results improving at an insane rate, very recently. And you think this is proof of a problem with... the results? Not the complaints?
People went "I made this!" with fucking Terragen. A program that renders wild alien landscapes which became generic after about the fifth one you saw. The problem there is not expertise. It's immense quantity for zero effort. None of that proves CGI in general is worthless non-art. It's just shifting what the computer will do for free.
At some point, we will take it for granted that text-to-speech can do an admirable job reading out whatever. It'll be a button you push when you're busy sometimes. The dipshits mass-uploading that for popular articles, over stock footage, will be as relevant as people posting seven thousand alien sunsets.
The results do keep improving, of course. But it's not some silver bullet. Yes, your enthusiasm is warranted... but you peddle it like the second coming of Christ, which I don't like encouraging.
If you follow AI news you should know that it's basically out of training data, that returns on extra training diminish sharply (so extra training data would only have limited impact anyway), that companies are starting to train AI on AI-generated data, both intentionally and unintentionally, and that hallucinations and unreliability are baked into the technology.
You also shouldn't take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was demonstrated answering, and isn't better than its predecessor or other LLMs at solving maths problems whose answers it doesn't already have hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) for each answer, for a marginal improvement in functionality.
The idea that "they've come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate" isn't really supported by the evidence.
We don't need leaps and bounds, from here. We're already in science fiction territory. Incremental improvement has silenced a wide variety of naysaying.
And this is with LLMs - which are stupid. We didn't design them with logic units or factoid databases. Anything they get right is an emergent property from guessing plausible words, and they get a shocking amount of things right. Smaller models and faster training will encourage experimentation for better fundamental goals. Like a model that can only say yes, no, or mu. A decade ago that would have been an impossible sell - but now we know data alone can produce a network that'll fake its way through explaining why the answer is yes or no. If we're only interested in the accuracy of that answer, then we're wasting effort on the quality of the faking.
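A hand-wavy sketch of that yes/no/mu idea, assuming a hypothetical score(prompt, answer) that stands in for a real model's log-probability of an answer: instead of letting the model free-generate an explanation, you only ever compare the three allowed answers and take the best one.

```python
# Sketch of constrained decoding for a yes/no/mu model.
# `score` is a stand-in for a real model's log-probability of an
# answer given a prompt - hypothetical, not a real library call.

ANSWERS = ("yes", "no", "mu")

def score(prompt: str, answer: str) -> float:
    # Placeholder scorer so the sketch runs: crude keyword heuristic.
    # A real system would query the language model's logits here.
    if "paradox" in prompt.lower():
        return 0.0 if answer == "mu" else -5.0
    return -1.0 if answer == "yes" else -2.0

def constrained_answer(prompt: str) -> str:
    # Only the three allowed answers are ever scored; the model is
    # never asked to generate (and therefore fake) an explanation.
    return max(ANSWERS, key=lambda a: score(prompt, a))

print(constrained_answer("Does this function terminate?"))   # yes
print(constrained_answer("Is the set of all sets a paradox?"))  # mu
```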
Even with this level of intelligence, where people still bicker about whether it is any level of intelligence, dumb tricks keep working. Like telling the model to think out loud. Or having it check its work. These are solutions an author would propose as comedy. And yet: it helps. It narrows the gap between "but right now it sucks at [blank]" and having to find a new [blank]. If that never lets it do math properly, well, buy a calculator.
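For concreteness, the two dumb tricks look roughly like this in code; ask_model is a hypothetical placeholder for whatever LLM call you're using, since the point is the prompt structure, not any particular API.

```python
# The two "dumb tricks": think out loud, then check your own work.
# `ask_model` is a hypothetical stand-in for an actual LLM client.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def answer_with_tricks(question: str) -> str:
    # Trick 1: ask the model to reason step by step before answering.
    draft = ask_model(
        f"Question: {question}\n"
        "Think through this step by step, then give a final answer."
    )
    # Trick 2: feed the draft back in and ask the model to check it.
    verdict = ask_model(
        f"Question: {question}\nProposed answer:\n{draft}\n"
        "Check this reasoning for mistakes. Reply CORRECT, or give a fix."
    )
    return draft if verdict.strip().startswith("CORRECT") else verdict
```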
I'm not saying they don't have applications. But the idea of them being a one-size-fits-all solution to everything is something being sold to VC investors and shareholders.
As you say - the issue is accuracy. And, as you also say - that’s not what these things do, and instead they make predictions about what comes next and present that confidently. Hallucinations aren’t errors, they’re what they were built to do.
If you want something which can set an alarm for you or find search results, then something that responds to set inputs correctly 100% of the time is better than something more natural-seeming which is right 99% of the time.
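As a toy contrast (the command names here are mine, not from any real assistant): the deterministic version of "set an alarm" is just a lookup that either matches exactly or refuses, so it never confidently does the wrong thing.

```python
# Toy contrast: a fixed-command handler vs. a natural-language one.
# Exact matches succeed 100% of the time; everything else is refused
# rather than guessed at.

COMMANDS = {
    "set alarm 07:00": lambda: "Alarm set for 07:00.",
    "cancel alarm":    lambda: "Alarm cancelled.",
}

def handle(command: str) -> str:
    action = COMMANDS.get(command.strip().lower())
    if action is None:
        # Refusing beats a fluent but wrong guess.
        return "Unrecognised command."
    return action()

print(handle("set alarm 07:00"))  # -> Alarm set for 07:00.
print(handle("wake me up at 7"))  # -> Unrecognised command.
```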
Maybe along the line there will be a new approach, but what is currently branded as AI is never going to be what it’s being sold as.
That's your interpretation.
That's reality. Unless you're deluded enough to think it's magic.
Everything is possible in theory. That doesn't mean everything has happened, or is just about to happen.