this post was submitted on 14 Mar 2025
1134 points (99.0% liked)
Technology
you are viewing a single comment's thread
view the rest of the comments
I find it odd that Lemmy users are so averse to tech.
I am opposed to shitty tech.
It's not shitty tech.
It is the shittiest tech. If you think this bullshit will actually lead to AGI, something that wouldn't be shit, you don't know much about LLMs or are incredibly delusional.
LLMs are an implementation on the way to AGI.
People are not averse to tech; they are averse to being treated like shit while rich businesses are not. If copyright doesn't apply to companies, it must not apply to individuals either.
In that case, I think most of us will agree to LLMs learning from all the written stuff.
It's not an opposition to tech. It's an opposition to billionaires changing the rules whenever it benefits them, while the rest of us just have to sit with it.
The billionaires are the ones with the resources to develop this tech. We could nationalize it, but then people would complain about that too for different reasons.
Billionaires control our government, so nationalizing it is no different.
Is that so? I don't find it odd at all when the only thing LLMs are good at so far is losing people their jobs and lowering the quality of essentially everything they get shoved into.
I agree with the other user that it sounds like user error. Or perhaps you've not really used them at all and have just joined the AI hate bandwagon.
Cry about it. Crypto bros make the same excuses to this day. Prove your bullshit works before you start shoving it in my face. And yes, LLMs are really unhelpful. There's extremely little value you can get out of them (outside of generating text that looks like a human wrote it, which is what they are designed to do) unless you are a proper moron.
You sound like an old man yelling about the TV. LLMs are NOT unhelpful. You'd know this if you actually used them.
I've used them and have yet to get a fully correct result on anything I've asked beyond the absolute basics. I always have to go in and correct some aspect of whatever it shits out. Scraping every bit of data they can get their hands on is only making the problem worse.
To say you've never gotten a fully correct result on anything has to be hyperbole. These things are tested. We know their hallucination rate, and it's not 100%.
In all of your replies, however, you fail to provide a single example. Are they writing code for you, or creating shitty art for you?
I have used them in a large variety of ways, from general knowledge seeking to specific knowledge seeking, writing code, generating audio, images, and video. I use it most days, if not essentially every day. What examples would you like me to provide? Tell me and I will provide them.
That sounds like user error, not the LLM's fault.
The issue isn't with AI, it's with how companies position it. When they claim it'll do everything and solve all your issues and then it struggles with some tasks a 10 year old could do, it creates a very negative image.
It also doesn't help that they hallucinate with a lot of confidence and people use them as a solution, not as a tool - meaning they blindly accept the first answer that comes out.
If the creators of models made more reasonable claims and the models were generally able to convey their confidence in the answers they gave maybe the reception wouldn't be so cold. But then there wouldn't be hype and AI wouldn't be actively shoved into everything.
I disagree with your take. I've found it extremely helpful in my life. I find using it and learning with it to be an enriching experience. I find following its development and seeing it grow to be exciting. I see the possibilities of all the positive things it could do for the future of humanity.
I don't think a 10 year old could explain subatomic particles and the fundamental forces of the universe to me. I don't think they could refresh my memory of how to do geometry to help my son with his homework. I don't think a 10 year old could write a program for me to keep track of all the ebooks I have saved to my hard drive.
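For what it's worth, that ebook-tracking program was nothing exotic. A rough sketch of that kind of script (the folder path and file extensions here are placeholders, not my actual setup) looks something like this:

```python
# Rough sketch of an ebook-cataloging script; paths and extensions are placeholders.
import csv
from pathlib import Path

EBOOK_DIR = Path.home() / "ebooks"                    # assumed location of the collection
EXTENSIONS = {".epub", ".mobi", ".pdf", ".azw3"}      # formats to include

def catalog_ebooks(root: Path) -> list[dict]:
    """Walk the directory tree and record basic details for every ebook found."""
    rows = []
    for path in root.rglob("*"):
        if path.is_file() and path.suffix.lower() in EXTENSIONS:
            rows.append({
                "title": path.stem,
                "format": path.suffix.lstrip("."),
                "size_mb": round(path.stat().st_size / 1_048_576, 2),
                "location": str(path),
            })
    return rows

if __name__ == "__main__":
    books = catalog_ebooks(EBOOK_DIR)
    with open("ebook_catalog.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["title", "format", "size_mb", "location"])
        writer.writeheader()
        writer.writerows(books)
    print(f"Cataloged {len(books)} ebooks to ebook_catalog.csv")
```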
It's fairly obvious what's happening here. A bunch of people complaining about that newfangled thing they don't understand or see the full potential of, just like for every new technology that has ever emerged. The automobile would never take off. Humans would never fly. TV was a fad. The Internet wouldn't flourish. Rinse and repeat.
The world doesn't allow us to disconnect tech and capitalism. Why should we be happy about the tech just for the tech's sake? People aren't averse to the tech. They are against its use to further our exploitation.
It's not tech for tech's sake, and it's not exploitation.
Technological advances are supposed to improve people's lives - to allow them to work less and enjoy things more.
It's why we invented the wheel. It's why we invented better weapons to hunt with.
"Tech for techs sake" is enjoying the technology and ignoring its impact on people's lives.
When a society creates a massive sum of information accessible to all, trains new technology on the data created by that society, and then a small subset of that society steals that data and uses it to profit themselves and themselves alone, I don't know what else to call that but exploitation.
Advances in AI should make our lives better. Not worse. Because of our economic model we have decided that technological advances no longer benefit everyone, but hurt a majority of the population for the profits of a few.
The AI is not the problem in this case. The economic model is. It is not an economic model suitable for the advancement of technology.
Yeah, it's crazy how intense the Lemmy hive mind is about some things. It's basically a cult.
lol, this is a human trait, not a Reddit/Twitter/Lemmy "thing".