This post was submitted on 01 Apr 2025
-5 points (30.8% liked)


A specific example from me would be implementing LLM AI into my code (generically). Without my giving any more details than that, I'll get people demanding that I not do it and offering suggestions for what I should do instead.

Suggestions are cool, but when I ask, in a generic sense, why I shouldn't put an LLM in my code, my question gets ignored or I get lies and insults hurled my way.

It's cool if you want to answer that question, but I'm mainly curious about other people's similar stories of meeting resistance to follow-up questions: did you decide those people just weren't worth it, or do you feel like you missed something you shouldn't have in those situations?

top 11 comments
[–] That_Devil_Girl@lemmy.ml 4 points 21 hours ago

Yup, I work as a shipwright welder and have had to refuse to complete assigned tasks. When I'm tasked with welding two large steel plates together, end to end, they both need to be double beveled. If they're not, then all I can do is make thin surface welds which are easily broken.

That's dangerous as these steel plates are an inch and a half thick and weigh a lot. Their weight alone will break surface welds. So I refuse to do the job. They ask why I refuse and I tell them about the lack of double bevel.

I'm even willing to break out the oxy-acetylene torch and cut the bevels myself, but they refuse. They're in a hurry, they don't have time to do things correctly or safely, and they don't care about making it someone else's problem. That's the sort of shit that's likely to cause serious injury or death.

[–] rbn@sopuli.xyz 4 points 1 day ago* (last edited 1 day ago) (1 children)

Regarding your specific example, there are pretty good reasons not to use AI if there's an adequate alternative, so I can absolutely understand people arguing against it.

AI is resource-intensive and thus bad for the environment. Results usually aren't deterministic, so the behavior is no longer reproducible. If there is a defined algorithm that solves the problem correctly, AI will be less accurate. And if you use cloud services, you may run into privacy issues.

Not saying there aren't any use cases for LLMs or other forms of AI. But just applying them everywhere 'cause they're fancy is not a good idea.

In general, I appreciate it when people question my work or come up with proposals for improvement, as long as they're polite and at least somewhat qualified. However, that does not mean I immediately change my mind and follow their advice.

[–] PixelPilgrim@lemmings.world -4 points 1 day ago (1 children)

Yeah, if you have a better way of doing something with no drawbacks, you should do that; I'll just say that out of pure reason.

Thinking about deterministic results: I can imagine flawed code that deterministically gives a wrong result for 1 out of thousands of potential inputs, and you can decide that the 1 wrong answer is either A) not a big enough flaw to fix (the code is good enough) or B) not worth fixing since it's rare (too much effort). How that applies to LLMs is that you can look at what the LLM outputs and judge whether its execution is good enough or not, as in the sketch below.
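To make that concrete, here's a toy sketch. The failure rate, threshold, and `flaky_component` function are all made up for illustration; the flaky function is just a stand-in for an LLM call whose output you can check:

```python
# Toy sketch: judge a non-deterministic component by its observed
# error rate instead of demanding determinism. All numbers invented.
import random

ACCEPTABLE_ERROR_RATE = 0.001  # our "good enough" bar

def flaky_component(x: int) -> int:
    """Usually right, occasionally wrong -- a stand-in for an LLM step."""
    if random.random() < 0.0005:  # pretend 1-in-2000 failure mode
        return x + 1              # wrong answer
    return x * 2                  # correct answer

def is_good_enough(trials: int = 100_000) -> bool:
    errors = sum(1 for _ in range(trials) if flaky_component(21) != 42)
    rate = errors / trials
    print(f"observed error rate: {rate:.4%}")
    return rate <= ACCEPTABLE_ERROR_RATE

print("good enough" if is_good_enough() else "not worth shipping")
```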

Using a lot of resources at the cost of the environment is more of a values thing. Cyanobacteria didn't care about poisoning the environment with oxygen. Ironically, I don't think the electric grid should be restructured for AI, since I don't think AI is doing anything important enough to warrant changing the grid.

I would care if someone was rude or unqualified on an issue. I'd want to know why something I did was wrong, either technically or morally, or whether there's a better way of doing it and why it's better.

[–] crusa187@lemmy.ml 1 points 2 hours ago

I would care if someone was rude or unqualified on an issue

Would you? Your tone reads as fairly rude in this post, and your qualifications seem quite lacking if you don't even comprehend the dire environmental impact and obvious drawbacks of the vast majority of contemporary big-compute AI. For that matter, most LLM outputs are not deterministic, especially with certain configurations (e.g. high temperature), so I don't even follow your contrived example here; see the sketch below. Consider that cyanobacteria are unaware of their environmental impact. Humans are not so ignorant, unless they choose to be.
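If the temperature point isn't clear, here's a minimal sketch (toy logits, no real model involved) of sampling from a temperature-scaled softmax. Near-zero temperature collapses to a near-greedy, effectively deterministic pick; high temperature spreads probability out, so repeated runs disagree:

```python
# Toy sketch: temperature-scaled softmax over three candidate-token logits.
import math
import random

def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]  # made-up scores

for t in (0.1, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: {[round(p, 3) for p in probs]}")
# T=0.1 -> ~[1.0, 0.0, 0.0]: near-deterministic (greedy-like)
# T=2.0 -> ~[0.63, 0.23, 0.14]: repeated runs pick different tokens

# Sampling step: same "prompt", different output each run at high T.
pick = random.choices([0, 1, 2], weights=softmax_with_temperature(logits, 2.0))[0]
print("sampled token index:", pick)
```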

[–] Sir_Kevin@lemmy.dbzer0.com 2 points 1 day ago (1 children)

Yes, many times. Often it's because they don't understand what it is I'm trying to do. More often than not, they make wild assumptions about what I'm suggesting and then lose their fucking minds instead of asking for clarification. Ultimately it becomes an argument about what they think I'm talking about and I never get an actual answer.

[–] PixelPilgrim@lemmings.world 2 points 23 hours ago

I'm trying to think whether that has happened to me, but I try to keep it simple, like "I don't understand what you mean by X; are you saying Y?" That probably gets me a different set of interactions. One time I even tried humbling myself and just said "I'm still learning all of this and trying to figure out my mistakes..." and I still got berated (by strangers). I try to power through the insults and just ask what they mean, and it still comes off as if I'm being offensive.

[–] j4k3@lemmy.world 1 points 1 day ago* (last edited 18 hours ago) (1 children)

When tech changes quickly, some people always resist just as hard in the opposite direction. The bigger and more sudden the disruption, the bigger the pushback.

If you read some of Karl Marx's stuff, it was the fear of the machines. Humans always make up a mythos of divine origin; even the atheists of the present are doing it. Almost all of the stories about AI are much the same stories of god machines that Marx was fearful of, and there are many reasons why. Lemmy has several squeaky-wheel users on this front. It is not a very good platform for sharing stuff about AI, unfortunately.

There are many reasons why AI is not a super effective solution and is overused in many applications. Exploring uses and applications is the smart thing to be doing in the present. I play with it daily, but I will gatekeep over the use of any cloud-based service. The information that can be gleaned from any interaction with an AI prompt is exponentially greater than any datamining stalkerware that existed prior, and the real depth of this privacy-invasive potential is only reached with a large number of individual interactions. So I expect all applications to interact with my self-hosted, OpenAI-compatible server.
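As a rough sketch of what I mean by that last bit: anything that speaks the OpenAI wire format can be pointed at your own machine instead of the cloud. The port, key, and model name below are placeholders for whatever your local stack uses:

```python
# Point an OpenAI-format client at a self-hosted server instead of the cloud.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:5000/v1",  # local server, not api.openai.com
    api_key="not-needed-locally",         # local servers usually ignore this
)

reply = client.chat.completions.create(
    model="local-model",  # many local servers ignore or loosely match this
    messages=[{"role": "user", "content": "Say hi in one sentence."}],
)
print(reply.choices[0].message.content)
```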

The real frontier is in agentic workflows and developing effective, niche-focused momentum. Adding AI to general-use stuff is massively overdone.

Also, people tend to make assumptions about code as if all devs are equally capable. In some sense I am a dev, but not really; I'm more of a script kiddie who dabbles in assembly at times. I use AI more like Stack Exchange, to good effect.

[–] PixelPilgrim@lemmings.world 2 points 1 day ago (1 children)

Yeah, I see that Marx said:

[doing away] with all repose, all fixity and all security as far as the worker’s life-situation is concerned; how it constantly threatens, by taking away the instruments of labour, to snatch from his hands the means of subsistence, and, by suppressing his specialised function, to make him superfluous

But that's just Marx saying industrialization threatens the working class. I'm not seeing much myth, just boring explanations of the workers' relation to machinery.

I do use Perplexity and ChatGPT to code a lot. I'd really rather not go to Stack Overflow and try to understand three posts to figure out an implementation. I'm fine with that being automated.

[–] j4k3@lemmy.world 1 points 1 day ago* (last edited 18 hours ago) (1 children)

I use the term myth loosely in abstraction. Generalization of the tools of industry is still a mythos in an abstract sense. Someone with a new lathe they bought to bore the journals of an engine block has absolutely no connection or intentions related to class, workers, or society. That abstraction and assignment of meaning like a category or entity or class is simply the evolution of a divine mythos in the more complex humans of today.

Stories about Skynet or The Matrix are about a similar struggle of the human class against machine gods. These have no relationship to the actual AI alignment problem and are instead a battle with more literal machine gods. Point is that the new thing is always the boogie man. Evolution must be deeply conservative most of the time. People display a similar trajectory of conservative aversion to change. In this light, the reasons for such resistance are largely irrelevant. It is a big change and will certainly get a lot of push back from conservative elements that collectively ensure change is not harmful. Those elements get cut off in the long term as the change propagates.

You need a 16 GB or better GPU from the 30 series or newer, then run Oobabooga's text generation web UI with the API enabled and an 8×7B, 34B, or 70B coder model as a GGUF quant. Those are larger than most machines can run, but Oobabooga can pull it off by splitting the model between CPU and GPU. You'll just need the RAM to initially load the thing, or DeepSpeed to load it from NVMe.
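For a rough sense of how the split works, the arithmetic looks something like this. Every number here is a loose assumption, not a measured spec:

```python
# Back-of-envelope sketch of the CPU/GPU split: how many layers of a
# quantized model fit on the card, with the rest offloaded to system RAM.

def layers_on_gpu(params_billions: float, num_layers: int,
                  bits_per_weight: float, vram_gb: float,
                  overhead_gb: float = 2.0) -> int:
    """Estimate GPU-resident layers for a llama.cpp-style layer split."""
    model_gb = params_billions * bits_per_weight / 8  # billions of params * bytes each
    per_layer_gb = model_gb / num_layers
    usable_gb = max(vram_gb - overhead_gb, 0)         # leave room for KV cache etc.
    return min(num_layers, int(usable_gb / per_layer_gb))

# e.g. a 70B model at ~4.5 bits/weight with ~80 layers on a 16 GB card:
print(layers_on_gpu(70, 80, 4.5, 16))  # -> 28, so roughly a third stays on GPU
```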

Use a model with a long context and add a bunch of your chats into the prompt. Then ask for your user profile, and start asking it questions about you that seem unrelated to any of the previous conversations in the context. You might be surprised by the results; inference works in both directions. You're giving up a lot of information that is specifically related to the ongoing interchanges and language choices. If you instead add a bunch of your social media posts, what the model makes up about you in a user profile is totally different. There is information of some sort that the model is capable of deciphering. It is not absolute, or some kind of conspiracy or trained behavior (I think), but the accuracy seemed uncanny to me. It spat out surprising information across multiple unrelated sessions when I tried it a year ago.
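If you want to try it, the experiment is basically this sketch. The log directory and model name are placeholders, and it assumes the same self-hosted, OpenAI-compatible endpoint as above:

```python
# Sketch of the "user profile" experiment against a local server.
from pathlib import Path
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="none")

# Stuff a pile of your own chat logs into the context window...
history = "\n\n".join(p.read_text() for p in Path("my_chat_logs").glob("*.txt"))

# ...then ask the model to infer things you never stated directly.
reply = client.chat.completions.create(
    model="local-model",
    messages=[{
        "role": "user",
        "content": history + "\n\nBased only on the text above, write a "
        "profile of this user: interests, profession, rough age, writing quirks.",
    }],
)
print(reply.choices[0].message.content)
```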

[–] PixelPilgrim@lemmings.world 2 points 23 hours ago (1 children)

I actually didn't pursue an LLM AI project because the suggested model needed like 32 gigs of RAM (I don't have that, and I don't want to buy a machine just for that project).

I jokingly call LLM AI "dubious linear algebra." I do try to see the arguments against it. I sided with the Writers Guild in the strike, and I can sympathize with writers losing income and jobs they want to an AI trained on their own work. But I'm a socialist, so I believe the economy should provide them housing and food without their having to work; they shouldn't need to rely on writing gigs to survive.

[–] j4k3@lemmy.world 1 points 21 hours ago* (last edited 18 hours ago)

I like to write, but have never done so professionally. I disagree that it hurts writers. I think people reacted poorly to AI because of the direct and indirect information campaign Altman funded to try and make himself a monopoly. AI is just a tool. It is fun to play with in unique areas, but these often require very large models and/or advanced frameworks. In my science fiction universe I have to go to extreme lengths to get the model to play along with several aspects, like a restructuring of politics, economics, and social hierarchy. I use several predictions I imagine about the distant future that plausibly make the present world seem primitive in several ways, and with good reasons. This restructuring of society violates some of our present cultural norms and sits deep within areas of politics that are blocked by alignment. I tell a story where humans are the potentially volatile monsters to be feared. That is not the plot, but convincing a present-day model to collaborate on such a story ends up in the gutter a lot. My grammar and stream of thought are not great, and that is the main thing I use a model to clean up, but it is still collaborative to some extent.

I feel like there is an enormous range of stories to tell, and AI only makes them more accessible. I have gone off on tangents many times, exploring parts of my universe because of directions the LLM took. I limit the model to generating a sentence at a time, and I'm writing half or more of every sentence for the first 10k tokens. Then it picks up on my style so well that I can start a sentence with a word, or change one word in a sentence, and let it continue to great effect. It is most entertaining to me because it is almost as fast as telling a story as quickly as I can make it up. I don't see anything remotely bad about that. No one makes a career in the real world by copying someone else's writing. There are tons of fan works, but those do not make anyone real money, and they only increase the reach of the original author.

No, I think all the writers-and-artists hype was about Altman's plan for a monopoly, which got derailed when Yann LeCun covertly leaked the Llama weights after Altman went against the founding principles of OpenAI and made GPT-3 proprietary.

People got all upset about digital tools too, back when they first came on the scene, and about how they would destroy artists. Sure, it ended the era of hand-painted cel animation, but it created stuff like Pixar.

All of AI is a tool. The only thing to hate is this culture of reductionism, where people are handed free money in the form of great efficiency gains and choose to do the same things with fewer people and cash out, instead of using the opportunity to offer more, expand, and do something new. A few people could put a great toolchain together and create a franchise bigger, better planned, and richer than anything corporations have done to date. The only thing to hate is these regressive little people without vision, without motivation, and far too conservatively timid to take risks and create the future. We live in an age of cowards worthy of loathing. That is the only problem I see.