this post was submitted on 09 Jun 2025
57 points (81.3% liked)

Technology


Hello, recent Reddit convert here and I'm loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.

One thing I can't understand is the level of acrimony toward LLMs. I see things like "stochastic parrot", "glorified autocomplete", etc. If you need an example, the comments section for the post on Apple saying LLMs don't reason is a doozy of angry people: https://infosec.pub/post/29574988

While I didn't expect a community of vibecoders, I am genuinely curious about why LLMs strike such an emotional response with this crowd. It's a tool that has gone from interesting (GPT3) to terrifying (Veo 3) in a few years and I am personally concerned about many of the safety/control issues in the future.

So I ask: what is the real reason this is such an emotional topic for you in particular? My personal guess is that the claims about replacing software engineers are the biggest issue, but help me understand.

top 50 comments
[–] ada@piefed.blahaj.zone 79 points 5 days ago* (last edited 5 days ago) (2 children)

It's a hugely disruptive technology that is harmful to the environment, being taken up and given center stage by a host of folk who don't understand it.

Like the industrial revolution, it has the chance to change the world in a massive way, but in doing so, it's going to fuck over a lot of people, and notch up greenhouse gas output. In a decade or two, we probably won't remember what life was like without them, but lots of people are going to be out of jobs, have their income streams cut off and have no alternatives available to them whilst that happens.

And whilst all of that is going on, we're being told that it's the best, most amazing thing that we all need, and it's being stuck into everything, including things that don't benefit from the presence of an LLM, and sometimes where the presence of an LLM can be actively harmful.

[–] Mwa@thelemmy.club 2 points 5 days ago

I'm mixed about LLMs and stuff, but I agree with this.

[–] cabbage@piefed.social 67 points 5 days ago

We're outsourcing thinking to a bullshit generator controlled by mostly American mega-corporations who have repeatedly demonstrated that they want to do us harm, burning through scarce resources and leaving creative humans robbed and unemployed in the process.

What's not to hate.

[–] Fontasia@feddit.nl 41 points 5 days ago* (last edited 4 days ago) (1 children)

I know there's people who could articulate it better than I can, but my logic goes like this:

  • Loss of critical thinking skill: This doesn't just apply for someone working on a software project that they don't really care about. Lots of coders start in their bedroom with notepad and some curiosity. If copilot interrupts you with mediocre but working code, you never get the chance to learn ways of solving a problem for yourself.
  • Style: code spat out by AI has a very specific style, and no amount of prompt modifiers will produce the kind of code someone designing for speed or low memory usage would write: nearly impossible to read, but solving a very specific case.
  • If everyone is a coder, no one is a coder: If everyone can claim to be a coder on paper, it will be harder to find good coders. Sure, you can make every applicant do FizzBuzz or a basic sort, but that does not give a good opportunity to show you can actually solve a problem. It will discourage people from becoming coders in the first place. A lot of companies can actually get by with vibe coders (at least for a while) and that dries up the market of the sort of junior positions that people need to get better and promoted to better positions.
  • When the code breaks, it takes a lot longer to understand and rectify when you don't know how any of it works, especially when you didn't even bother designing or completing a test plan because Cursor developed one, it all came back green, it pushed during a convenient downtime, and it has archived all the old versions in its own internal logical structure that can't be easily undone.

Edits: Minor clarification and grammar.

[–] mbtrhcs@feddit.org 8 points 4 days ago

I'm an empirical researcher in software engineering, and all of the points you're making are supported by recent papers in SE and/or education. We are also seeing a strong shift in the behavior of our students and a lack of ability to explain or justify their "own" work.

[–] cmnybo@discuss.tchncs.de 30 points 5 days ago

My main issue is that LLMs are being used to flood the internet with AI slop. Almost every time I search for something, I have to go through a lot of results to find one with any usable information. The SEO spam before AI was bad enough, now it's significantly worse.

[–] wolf@lemmy.zip 28 points 5 days ago (1 children)

I work in software as an engineer, but the least of my concerns is being replaced by an LLM any time soon.

  • I don't hate LLMs; they are just a tool, and it does not make sense at all to hate an LLM, the same way it does not make sense to hate a rock

  • I hate the marketing and the hype for several reasons:

    • You use the term AI/LLM in the post's title: there is nothing intelligent about LLMs if you understand how they work
    • The craziness about LLMs in the media, press and business brainwashes non-technical people into thinking that there is intelligence involved and that LLMs will get better and better and solve the world's problems (possible, but if you make an informed guess, the chances are quite low within the next decade)
    • All the LLM shit happening: automatic translations on websites w/o even asking me if stuff should be translated, job loss for translators, companies hoping to get rid of experienced technical people because of LLMs (and we will have to pick up the slack after the hype)
    • The lack of education in the population (and even among tech people) about how LLMs work, their limits and their usages...

LLMs are at the same time impressive (think of the jump to GPT-4), a showcase of the ugliest forms of capitalism (CEOs learning that every time they say AI, the stock price goes up 5%), helpful (generating short pieces of code, translating other languages), annoying (generated content) and even dangerous (companies with the money can now literally and automatically flood the internet/news/media with more bullshit, faster).

[–] doctorschlotkin@lemm.ee 6 points 4 days ago (1 children)

Everything you said is great except for the rock metaphor. It’s more akin to a gun in that it’s a tool made by man that has the capacity to do incredible damage and already has on a social level.

Guns ain’t just laying around on the ground, nor are LLMs. Rocks however, are, like, it’s practically their job.

[–] BestBouclettes@jlai.lu 3 points 4 days ago* (last edited 4 days ago)

LLMs and generative AI will do to us what social media did, but a thousand times worse. All that, plus the nightmarish capacity for pattern matching at an industrial scale. Inequalities, repression, oppression, disinformation, propaganda and corruption will skyrocket because of it. It's genuinely terrifying.

[–] Saleh@feddit.org 26 points 4 days ago

I recently had an online event about using "AI" in my industry, construction.

The presenter finished on "Now is not the time to wait, but to get doing, lest you want to stay behind".

She gave examples of some companies she had found that promised to help with "AI" in the process of designing constructions. When I asked her if any of these companies were willing to take on the legal risk that the designs are up to code and actually sound from an engineering perspective, she had to concede that none were.

This sums it up for me. You get sold a hype by people who don't understand (or don't tell) what it is and isn't, to managers who don't understand what it is and isn't, over the heads of people who actually understand what it is, or at least what it needs to be to be relevant. And these last people then get laid off or f*ed over in other ways, as they now have twice the work they had before: first they need to show management why the "AI" result is criminal, and then do all the regular design work anyway.

It is the same toxic dynamic as with any tech bro hype before. Just now it seems to look good at first, and it is more difficult to show why it is not.

This is especially dangerous when it comes to engineering.

[–] Dekkia@this.doesnotcut.it 19 points 5 days ago

I personally just find it annoying how it's shoehorned into everything, regardless of whether it makes sense for it to be there, without the option to turn it off.

I also don't find it helpful for most things I do.

[–] corsicanguppy@lemmy.ca 15 points 5 days ago

Emotional? No. Rational.

Use of AI is proving to be a bad idea for so many reasons that have been raised by people who study this kind of thing. There's nothing I can tell you that has any more validity than the experts' opinions. Go see.

[–] umbraroze@piefed.social 13 points 5 days ago

I'm not opposed to AI research in general and LLMs and whatever in principle. This stuff has plenty of legitimate use-cases.

My criticism comes in three parts:

  1. Society is not equipped to deal with this stuff. Generative AI was really nice when everyone could immediately tell what was generated and what was not. But when it got better, it turned out people's critical thinking skills go right out of the window. We as a society started using generative AI for utter bullshit. It's making normal life weirder in ways we could hardly imagine. It would do us all a great deal of good if we took a short break from this and asked what the hell we are even doing here, and whether some new laws would do any good.

  2. A lot of AI stuff purports to be openly accessible research software released as open source, and gets published in scientific journals. But it often comes with weird restrictions that fly in the face of the open source definition (like how some AI models are "open source" but have a cap on users, which makes them non-open by definition). Most importantly, this research is not easily replicable. It's done by companies with ridiculous amounts of hardware, shifting petabytes of data which they refuse to reveal because it's a trade secret. If it's not replicable, its scientific value is a little bit in question.

  3. The AI business is rotten to the core. AI businesses like to pretend they're altruistic innovators who take us to the Future. They're a bunch of hypemen, slapping barely functioning components together to try to come up with Solutions to problems that aren't even problems. Usually to replace human workers, in a way that everyone hates. Nothing must stand in their way - not copyright, not rules of user conduct, not the social or environmental impact they're creating. If you try to apply even a little bit of reasonable regulation to this - "hey, maybe you should stop downloading our entire site every 5 minutes, we only update it, like, monthly, and, by the way, we never gave you permission to use this for AI training" - they immediately whinge about how you're impeding the great march of human progress or some shit.

And I'm not worried about AI replacing software engineers. That is ultimately an ancient problem - software engineers come up with something that helps them, biz bros say "this is so easy to use that I can just make my programs myself, looks like I don't need you any more, you're fired, bye", and a year later, the biz bros come back and say "this software that I built is a pile of hellish garbage, please come back and fix this, I'll pay triple". This is just Visual Basic for Applications all over again.

[–] blackn1ght@feddit.uk 11 points 5 days ago (1 children)

I feel like it's more the sudden overnight hype than the technology itself. CEOs all around the world suddenly went "you all must use AI and shoehorn it into our product!". People are fatigued from constantly hearing about it.

But I think people, especially devs, don't like big changes (me included), which causes anxiety and then backlash. LLMs have caused quite a big change with the way we go about our day jobs. It's been such a big change that people are likely worried about what their career will look like in 5 or 10 years.

Personally I find it useful as a pairing buddy, it can generate some of the boilerplate bullshit and help you through problems, which might have taken longer to understand by trawling through various sites.

[–] taladar@sh.itjust.works 6 points 5 days ago (1 children)

It is really not a big change to the way we work unless you work in a language that has very low expressiveness like Java or Go and we have been able to generate the boilerplate in those automatically for decades.

The main problem is that it really does not produce genuinely beneficial results, and yet everyone keeps telling us it does but cannot point to a single GitHub PR or similar source as an example of a good piece of code created by AI without heavy manual post-processing. It also completely ignores that reading and fixing other people's (or worse, AI's) code is orders of magnitude harder than writing the same code yourself.

[–] chris@lemmy.grey.fail 12 points 5 days ago (1 children)

I think a lot of it is anxiety: being replaced by AI, the continued enshittification of the services I loved, and the ever-present notion that AI is "the answer." After a while, it gets old, and that anxiety mixes in with annoyance -- a perfect cocktail of animosity.

And AI stole em dashes from me, but that's a me-problem.

[–] MadMadBunny@lemmy.ca 7 points 5 days ago (1 children)

Yeah, fuck this thing with em dashes… I used them constantly, but now, it’s a sign something was written by an LLM!??!?

Bunshit.

[–] chris@lemmy.grey.fail 4 points 5 days ago (1 children)
[–] MadMadBunny@lemmy.ca 4 points 5 days ago

Fraking toaster…

[–] latenightnoir@lemmy.blahaj.zone 11 points 5 days ago* (last edited 5 days ago) (3 children)

To me, it's not the tech itself, it's the fact that it's being pushed as something it most definitely isn't. They're grifting hard to stuff an incomplete feature down everyone's throats, while using it to datamine the everloving spit out of us.

Truth be told, I'm genuinely excited about the concept of AGI and the potential of what we're seeing now. I'm also one who believes AGI will ultimately be a progeny of ours and should be treated as such, as a being in itself, and while we aren't capable of creating that yet, we should still keep it in mind and mould our R&D around that principle. So, in addition to being disgusted by the current-day grift, I'm also deeply disappointed to see these people behaving this way, like madmen and cultists. And as a further note, looking at our species' approach toward anything it sees as Other doesn't really make me think humanity, as we are now, would make adequate parents for any type of AGI either.

The people who own/drive the development of AI/LLM/what-have-you (the main ones, at least) are the kind of people who would cause the AI apocalypse. That's my problem.

[–] MalReynolds@aussie.zone 10 points 5 days ago (1 children)

Agree, the last people in the world who should be making AGI, are. Rabid techbro nazi capitalist fucktards who feel slighted they missed out on (absolute, not wage) slaves and want to make some. Do you want terminators, because that's how you get terminators. Something with so much positive potential that is also an existential threat needs to be treated with so much more respect.

Said it better than I did, this is exactly it!

Right now, it's like watching everyone cheer on as the obvious Villain is developing nuclear weapons.

[–] randon31415@lemmy.world 10 points 5 days ago

To me, it is the loss of meaningful work.

A lot of people have complained: "why take artists' and coders' jobs? Make AI take the drudgery-filled work first and leave us the art and writing!" The problem is, automation already came for those jobs. 90% of jobs today CAN be automated with no AI needed. It just costs more to automate them than to pay a minimum-wage worker. That means anyone working those jobs isn't ACTUALLY doing those jobs. They are instead saving their employer the difference between their pay and the cost of automating it.

Before genAI came, there were a few jobs that couldn't be automated. Those people thought that they not only had job security, but that they were the only people actually producing things of worth. They were the ones who weren't just saving a boss a buck. Then genAI came. Why write a book, code a program, or paint a painting if some program can do the same? Oh, a human's is better? More authentic? It is surprising how much of the population doesn't care. And AI is getting better, poisoned training data and the loss of its users' critical thinking skills notwithstanding.

Soon, the only thing a worker will be able to take pride in is how much money they saved their employer; and for most people that isn't meaning enough. Something's got to change.

[–] thanksforallthefish@literature.cafe 8 points 5 days ago (6 children)

Hello, recent Reddit convert here and I'm loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.

I am truly impressed that you managed to replace a desktop operating system with a mobile OS that doesn't even come in an x86 variant (Lineage, that is; I'm aware Android has been ported).

I smell bovine faeces. Or are you, in fact, an LLM ?

[–] hanke@feddit.nu 8 points 5 days ago

He dumped Windows (for Linux) and installed LineageOS (on his phone).

OP likely has two devices.

[–] chrisbtoo@lemmy.ca 5 points 5 days ago

Calm down. They never said anything about the two things happening on the same device.

[–] Fizz@lemmy.nz 7 points 5 days ago (1 children)

The main reason they invoke an emotional response: they stole everything from us (humans) illegally and then used it to make a technology that aims to replace us. I don't like that.

The second part is that I think they are shit at what people are using them for. They seem like they provide great answers, but they are far too often completely wrong and the user doesn't know. It's also annoying that they are being shoved into everything.

[–] ToastedRavioli@midwest.social 3 points 5 days ago* (last edited 5 days ago)

Google AI recently told me that capybaras and caimans have a symbiotic relationship where the caimans protect them so they can eat their feces

[–] MudMan@fedia.io 7 points 5 days ago (3 children)

My hypothesis from the start is that people were on a roll with the crypto hate (which was a lot less ambiguous, since there were fewer legitimate applications there).

Then the AI gold rush hit and both investors and haters smoothly rolled onto that and transferred over a lot of the same discourse. It helps that AIbros overhyped the crap out of the tech, but the carryover hate was also entirely unwilling to acknowledge any kind of nuance from the go.

So now you have a bunch of people with significant emotional capital baked into the idea that genAI is fundamentally a scam and/or a world-destroying misstep that have a LOT of face to lose by conceding even a sliver of usefulness or legitimacy to the thing. They are not entirely right... but not entirely wrong, either, so there you go, the perfect recipe for an eternal culture war.

Welcome to discourse and public opinion in the online age. It kinda sucks.

[–] WanderingThoughts@europe.pub 9 points 5 days ago (1 children)

It doesn't help with the hate that LLMs have some obvious flaws, but tech bros are dead set on declaring them the future and ramming them into every product, foie gras style.

[–] Alphane_Moon@lemmy.world 5 points 5 days ago

For me personally, the problem is not so much LLMs and/or ML solutions (both of which I actively use), but the fact this industry is largely led by American tech oligarchs. Not only are they profoundly corrupt and almost comically dishonest, but they are also true degenerates.

[–] dhork@lemmy.world 4 points 4 days ago

My biggest issue is with how AI is being marketed, particularly by Apple. Every single Apple Intelligence commercial is about a mediocre person who is not up to the task in front of them, but asks their iPhone for help and ends up skating by. Their families are happy, their co-workers are impressed, and they learn nothing about how to handle the task on their own the next time except that their phone bailed their lame ass out.

It seems to be a reflection of our current political climate, though, where expertise is ignored, competence is scorned, and everyone is out for themselves.

[–] Epzillon@lemmy.world 4 points 4 days ago

Ethics and morality do it for me. It is insane to steal the works of millions and re-sell them in a black box.

The quality is lacking. It literally hallucinates garbage information and lies, which scammers now weaponize (see Slopsquatting).

Extreme energy costs and environmental damage. We could supply millions of poor people with electricity, yet we decided a sloppy AI which can't even count the letters in a word was a better use case.

The AI developers themselves don't fully understand how it works or why it responds with certain things, which proves there can't yet be any guarantees for the quality or safety of AI responses.

Laws, judicial systems and regulations are way behind; we don't have laws that can properly handle the usage or integration of AI yet.

Do note: LLMs as a technology are fascinating. AI as a tool could become fantastic. But now is not the time.

[–] MagicShel@lemmy.zip 4 points 5 days ago

I think a lot of ground has been covered. It's a useful technology that has been hyped to be way more than it is, and the really shitty part is a lot of companies are trying to throw away human workers for AI because they are that fucking stupid or that fucking greedy (or both).

They will fail, for the most part, because AI is a tool your employees use, not a thing to foist onto your customers. Also, where does the next generation of senior developers come from if we replace junior developers with AI? Substitute in teachers, artists, copy editors, others.

Add to that people who are too fucking stupid to understand AI deciding it needs to be involved in intelligence, warfare, police work.

I frequently disagree with the sky is falling crowd. AI use by individuals, particularly local AI (though it's not as capable) is democratizing. I moved from windows to Linux two years ago and I couldn't have done that if I hadn't had AI to help me troubleshoot a bunch of issues I had. I use it all the time at work to leverage my decades of experience in areas where I'd have to relearn a bunch of things from scratch. I wrote a Python program in a couple of hours having never written a line before because I knew what questions to ask.

I'm very excited for a future with LLMs helping us out. But everyone is fixated on AI generation (image, voice, text), and that's not where it's great. What it excels at is very quickly giving feedback. You have to be smart enough to know when it's full of shit. That's why vibe coding is a dead end. I mean, it's cool that very simple things can be churned out by very inexperienced developers, but that has a ceiling. An experienced developer can also leverage it to do more, faster, at a higher level, but there is a ceiling there as well. Human input and knowledge never stop being essential.

So welcome to Lemmy and discussion about AI. You have to be prepared for knee-jerk negativity, and the ubiquitous correction when you anthropomorphize AI as a shortcut to make your words easier to read. There isn't usually too much overtly effusive praise here as that gets shut down really quickly, but there is good discussion to be had among enthusiasts.

I find most of the things folks hate about AI aren't actually the things I do with it, so it's easy to not take the comments personally. I agree that ChatGPT written text is slop and I don't like it as writing. I agree AI art is soulless. I agree distributing AI generated nudes of someone is unethical (I could give a shit what anyone jerks off to in private). I agree that in certain niches, AI is taking jobs, even if I think humans ultimately do the jobs better. I do disagree that AI is inherently theft and I just don't engage with comments to that effect. It's unsettled law at this point and I find it highly transformative, but that's not a question anyone can answer in a legal sense, it's all just strongly worded opinion.

So discussions regarding AI are fraught, but there is plenty of good discourse.

Enjoy Lemmy!

[–] hendrik@palaver.p3x.de 4 points 5 days ago* (last edited 5 days ago)

You'll find a different prevailing mood in different communities here on Lemmy. The people in the technology community (the example you gave) are fed up with talking about AI all day, each day. They'd like to talk about other technology at times and that skews the mood. At least that's what I've heard some time ago... Go to a different community and discuss AI there and you'll find it's a different sentiment and audience there. (And in my opinion it's the right thing to do anyway. Why discuss everything in this community, and not in the ones dedicated to the topic?)

[–] lmuel@sopuli.xyz 4 points 5 days ago

I’m not part of the hate crowd but I do believe I understand at least some of it.

A fairly big issue I see with it is that people just don’t understand what it is. Too many people see it as some magical being that knows everything…

I’ve played with LLMs a lot, hosting them locally, etc., and I can’t say I find them terribly useful, but I wouldn’t hate them for what they are. There are more than enough real issues, of course, both societal and environmental.

One thing I do hate is using LLMs to generate tons of online content, though, be it comments or entire websites. That’s just not what I’m on the internet for.

Not to be snarky, but why didn’t you ask an LLM this question?

[–] webghost0101@sopuli.xyz 3 points 5 days ago

I believe Lemmy naturally attracts many people who are sick of enshittification, including the prevalence of AI slop.

Those people make very good points and have very valid fears.

However, don't let the mob mentality get to you. Some people focus so hard on the current-day flaws that they fail to see the real dangers, while others demonise it so hard they can no longer distinguish good applications from bad.

There are still many pro-AI people of all flavors here.

My own take is that 99% of the AI I see and hear about is crap, because this is a transitional period of praise and disappointment. But I also see massive positive potential a few decades from now.

There's that negative potential too, of course, but the world already has so much negative potential being run by flawed, corruptible humans that it's a "maybe we win, or we definitely lose it all" situation.

[–] muelltonne@feddit.org 3 points 5 days ago

LLMs are an awesome technology. They have their flaws. The companies behind them are totally unethical. The hype is insane, and it is insane how many crappy AI integrations are popping up everywhere. The business models are in many cases not there. There is a real fear of job loss. But this tech is here to stay, and you can do awesome things with it. People totally misunderstand the whole energy usage issue. People are abusing ChatGPT & Co for things they are not built for, and OpenAI actively encourages them.

But I really think that this community has gone too far in the direction of AI hate. Even if somebody posts a great and substantial article, it will get downvoted because AI is in the title. And I really would like to discuss current AI here without people simply downvoting everything they do not like without having read the article.

[–] yesman@lemmy.world 2 points 4 days ago

It’s a tool that has gone from interesting (GPT3) to terrifying (Veo 3)

It's ironic that you describe your impression of LLMs in emotional terms.

[–] Luffy879@lemmy.ml 2 points 5 days ago (1 children)

how to fully dump Windows and install LineageOS.

Are you fucking Moses? Then how the fuck did you manage to turn your Windows Machine into an android phone?

is the level of acrimony toward LLMs.

Good, since you apparently aren't able to use your brain, I'm gonna speed-run it real quick:

  • It's frying the planet
  • It's generating way too much slop, making it impossible to find true, non-hallucinated information
  • It's a literal PsyOp
  • It's rotting people's critical thinking
  • It's being shoved down everyone's throat, though it can't even generate simple things
  • People are using it to flood FOSS projects

such an emotional response with this crowd.

It's not emotional, it's just having the same negative experience over and over and over again.

It's a tool that has gone from interesting (GPT3) to terrifying

The only thing that's terrifying about it is people's brains rotting away, along with their critical thinking.

[–] mbirth@lemmy.ml 3 points 5 days ago

it's rotting people's critical thinking

@gork is this real?

[–] Traister101@lemmy.today 2 points 5 days ago

Okay, so imagine for a second that somebody just invented voice-to-text, and everyone trying to sell it to you lies about it and claims it can read your thoughts and that nobody will ever type things manually ever again.

The people trying to sell us LLMs lie about how they work and what they actually do. They generate text that looks like a human wrote it. That's all they do. There are some interesting attributes of this behavior, namely that when prompted with text that's a question, the LLM will usually end up generating text that amounts to an answer. The LLM doesn't understand any part of this process any better than your phone's autocorrect; it's just really good at generating text that looks like stuff it's seen in training. Depending on what exactly you want this thing to do, it can be extremely useful or a complete scam. Take, for example, code generation. By and large they can generate code mostly okay; I'd say they tend to be slightly worse than a competent human. Genuinely really impressive for what it is, but it's not revolutionary. Basically the only actual use case for this tech so far has been glorified autocomplete. It's kind of like NFTs or crypto at large: there is actual utility there, but nobody who's trying to sell the idea to you is actually involved in or cares about that part; they just want to trick you into becoming their new money printer.
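The "glorified autocomplete" point can be sketched in a few lines. This is a toy bigram model that only ever emits the word that most often followed the current word in its tiny training text; real LLMs use learned weights over huge contexts rather than raw counts, so nothing here reflects their internals, only the "predict the next token" framing:

```python
from collections import Counter, defaultdict

# Tiny "training corpus": the model will only ever know these word pairs.
corpus = (
    "the cat sat on the mat "
    "the cat sat on the rug "
    "the dog ate the fish"
).split()

# Count, for each word, which words followed it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word, steps=4):
    """Greedily emit the statistically most common continuation."""
    out = [word]
    for _ in range(steps):
        if word not in follows:  # dead end: word never appeared mid-corpus
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # "the cat sat on the"
```

The output looks fluent because it mimics the statistics of its training text, not because anything "understood" the sentence, which is the whole argument in miniature.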

[–] INeedMana@lemmy.world 1 points 4 days ago

It might be interesting to cross-post this question to !fuck_ai@lemmy.world
but brace for impact

[–] 4am@lemm.ee 1 points 4 days ago

In addition to what everyone else has said: they're doing all this not-useful work of replacing humans based on unrealistic hype, and to do it they're using up SO MANY natural resources it's astonishing.

They have caused chip shortages. They are extracting all the water from aquifers in an area for cooling and then dumping it because it's no longer potable. Microsoft and Google are talking about building nuclear power plants dedicated JUST to LLMs.

They are doing this all for snake oil as others have pointed out. It’s destroying the world socially, economically, and physically. And not in a “oh cars disrupted buggy whips” kind of way; in a “the atmosphere is no longer breathable” kind of way.

[–] Brotha_Jaufrey@lemmy.world 1 points 4 days ago

AI becoming much more widespread isn't because it's actually that interesting. It's all manufactured, forcibly shoved into our faces. And given the negative things AI is capable of, I have an uneasy feeling about all this.
