this post was submitted on 18 May 2025
247 points (93.6% liked)

Ask Lemmy


Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

(page 4) 50 comments
[–] Sunflier@lemmy.world 3 points 2 weeks ago* (last edited 2 weeks ago)

Disable all AI by default. Offer me a way to opt in, but don't shove it down my throat. I don't want Google AI listening in on my calls without an option to disable it. I am an attorney, and many of my calls are privileged; having a third party listen in could cause that privilege to be lost.

I want AI that is stupid. I live in a capitalist plutocracy that is replacing workers with AI as fast and hard as possible, without UBI. I live in the United States, which doesn't even have universal health insurance, so UBI is fucked. This sets up an environment where a lot of people will be unemployable through no fault of their own, because of AI. Without UBI, we're back to starvation and Hoovervilles. But, fuck us. They got theirs.

[–] BananaTrifleViolin@lemmy.world 3 points 2 weeks ago

I'm not against AI itself; it's the hype and misinformation that frustrate me. LLMs aren't true AI (or at least not AGI, now that the meaning of "AI" has drifted), but they've been branded that way to fuel tech and stock-market bubbles. While LLMs can be useful, they're still early-stage software, causing harm through misinformation and widespread copyright issues. They're being misapplied to tasks like search, leading to poor results and damaging the reputation of AI.

Real AI lies in more advanced neural architectures, which are still a long way off. I wish tech companies would stop misleading the public. The bubble will burst eventually, though not before doing considerable harm.

[–] yarr@feddit.nl 3 points 2 weeks ago* (last edited 2 weeks ago)

My favorite one that I've heard is: "ban it". This has a lot of problems. Let's say that, despite the billions of dollars lobbyists already spend telling Congress every day what a great thing AI is, you manage to make AI (however you define the latest scary tech) punishable by death in the USA.

Then what happens? There are already AI companies in other countries busily working away. Even folks who are very against AI would recognize at least some limited use cases. Over time, the USA gets left behind in whatever the economic effects of AI turn out to be.

If you want to see a parallel to this, check out Japan's reaction when the rest of the world came knocking on their doorstep in the 1600s. All that scary technology, banned. What did it get them? Stalled out development for quite a while, and the rest of the world didn't sit still either. A temporary reprieve.

The more aggressive of you will say this is no problem: let's push for a worldwide ban. Good luck with that. I'm not sure we have total global alignment on almost any issue on Earth. The companies displaced from the USA would end up in some other country, even more determined not to get shut down.

AI is here. It's like electricity. You can choose not to wire your house, but that just leads to you living in a cabin in the woods while your neighbors have running water, heat, air conditioning, and so on.

The questions shouldn't be "how do we get rid of it?" or "how do we live without it?" They should be: how can we coexist with it? What's the right balance? The genie isn't going back in the bottle, no matter how hard you wish.

[–] CCAirWater@lemm.ee 3 points 2 weeks ago* (last edited 2 weeks ago)

Our current 'AI' is not AI. It is not.

It is a corporate tool to shirk labor costs and lie to the public.

It is an algorithm designed to lie and the shills who made it are soulless liars, too.

It only exists for corporations and people to cut corners and think they did it right because of the lies.

And again, it is NOT artificial intelligence by the standard I hold to myself.

And it pisses me off to no fucking end.

I personally would love an AI personal assistant that wasn't tied to a corporation listening to every fkin thing I say or do. I would absolutely love it.

I'm a huge sci-fi fan, so sure, I fear it to a degree. But if I'm being honest, AI would be amazing if it could analyze how I learned math wrong as a kid and provide ways to fix it. It would be amazing if it could help me routinely create schedules for exercise and food, and grocery lists with steps to cook, and show how all of those combine to affect my body. It would be fantastic if it could point me to novels and hold a critical debate about their inner workings, with a setting for being a contrarian or not, so I can seek to deeply understand them.

That sounds like our current state of AI, right? No. The current state is a lying machine. It cannot think critically. Sure, it can give me a food and exercise schedule, but it might tell me I need to lift 400 lbs and eat a thousand turkeys to meet a goal of weighing 0.02 grams. It might tell me 5+7 equals 547,032.

It doesn't know what the fuck it's talking about!

Like, ultimately, I want a machine friend who pushes me to better myself and helps me understand my own shortcomings.

I don't want a lying brick of a bullshit machine that gives me all the answers, all wrong, because it's just a guesswork framework built on "what's the next best word?"
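The "next best word" guesswork this commenter describes can be illustrated with a deliberately dumb toy: a bigram counter that always picks the most frequent follower of the current word. This is a hypothetical sketch for illustration only; real LLMs are vastly more sophisticated, but the core loop is still "predict the likeliest next token."

```python
# Toy illustration of "what's the next best word?":
# pick the most frequent follower from a tiny bigram table.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which word in the corpus.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_best_word(word):
    """Greedily return the most common next word; no understanding involved."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(next_best_word("the"))  # prints "cat": it follows "the" twice, vs. once for "mat"/"fish"
```

The point of the sketch is the commenter's complaint in miniature: the function never "knows" anything about cats or mats, it just replays frequency statistics.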

Edit: and don't even get me fucking started on the shady practices of stealing art. Those bastards trained it on people's hard work and are selling it as their own. And it can't even do it right, yet people are still buying it and using it at every turn. I don't want to see another shitty doodle with 8 fingers and overly contrasted bullshit in an ad or in a video game. I don't want to ever hear that fucking computer voice on YouTube again. I stopped using shortform videos because of how fucking annoying that voice is. It's low effort nonsense and infuriates the hell out of me.

[–] anachrohack@lemmy.world 2 points 2 weeks ago

License its usage

[–] PlzGivHugs@sh.itjust.works 2 points 2 weeks ago

I think two main things need to happen: increased transparency from AI companies, and limits on use of training data.

In regards to transparency, a lot of current AI companies hide information about how their models are designed, produced, weighted, and used. This causes, in my opinion, many of the worst effects of current AI. Lack of transparency around training methods means we don't know how much power AI training uses. Lack of transparency in training data makes it easier for companies to hide their piracy. Lack of transparency in weighting and use means that many of the big AI companies can abuse their position to push agendas, such as Elon Musk's manipulation of Grok and the CCP's use of DeepSeek. Hell, if issues like these were more visible, it's entirely possible AI companies wouldn't have as much investment, and thus power, as they do now.

In terms of limits on training data, I think a lot of the backlash is exaggerated. AI essentially averages its sources; while there is little creativity, the output is derivative and bland, not a direct copy. That said, if the works used for training were pirated, as many were, action obviously needs to be taken. Similarly, there needs to be some way for artists to protect or sell their work. From my understanding, they technically have the legal means to do so, but as it stands, enforcement is effectively impossible and non-existent.

[–] Chadus_Maximus@lemm.ee 2 points 2 weeks ago

Most importantly, I wish countries would start giving a damn about the extreme power consumption of AI and regulate the hell out of it. Why should we lower our monitors' refresh rates to save energy while useless AI agents, which we should be getting rid of, burn through a ton of it?

[–] kossa@feddit.org 2 points 2 weeks ago (1 children)

I would love to see regulation that any content created by AI cannot be used commercially.

I love, for example, that parents can make their own children's books, but nobody should profit from the stolen work of artists.

[–] StarMerchant938@lemmy.world 2 points 2 weeks ago (1 children)

Even that constitutes theft on some level. It devalues children's books and the talents required to create them. It disincentivizes parents from supporting actual authors and illustrators, and anything they make with AI is based on stolen intellectual property.

[–] kossa@feddit.org 3 points 2 weeks ago

Yep, it does. But then again, it's in line with how copyright works today: I can draw a Mickey Mouse comic for my child as much as I want; I just can't publish it.

And most parents wouldn't have the time anyway to replace all the children's books they'd otherwise buy. I love that I could create one my daughter wished for for her birthday, but it doesn't make a serious dent in our book spending or library rentals.

[–] MTK@lemmy.world 2 points 2 weeks ago

Not much, just don't build it over theft.

[–] nutsack@lemmy.dbzer0.com 2 points 2 weeks ago

i would use it to take a shit if they let me
