Lemmy Shitpost
Welcome to Lemmy Shitpost. Here you can shitpost to your heart's content.
Anything and everything goes: memes, jokes, vents, and banter. We still have to comply with the lemmy.world instance rules, though, so behave!
Rules:
1. Be Respectful
Refrain from using harmful language pertaining to a protected characteristic: e.g. race, gender, sexuality, disability or religion.
Refrain from being argumentative when responding to posts or replies. Personal attacks are not welcome here.
...
2. No Illegal Content
Content that violates the law is not allowed. Any post/comment found to be in breach of the law will be removed and reported to the authorities if required.
That means:
-No promoting violence/threats against any individuals
-No CSA content or Revenge Porn
-No sharing private/personal information (Doxxing)
...
3. No Spam
Posting the same content repeatedly, no matter the intent, is against the rules.
-If you have posted content, please refrain from re-posting said content within this community.
-Do not spam posts with intent to harass, annoy, bully, advertise, scam or harm this community.
-No posting Scams/Advertisements/Phishing Links/IP Grabbers
-No Bots; bots will be banned from the community.
...
4. No Porn/Explicit Content
-Do not post explicit content. Lemmy.World is not the instance for NSFW content.
-Do not post Gore or Shock Content.
...
5. No Inciting Harassment, Brigading, Doxxing or Witch Hunts
-Do not Brigade other Communities
-No calls to action against other communities/users within Lemmy or outside of Lemmy.
-No Witch Hunts against users/communities.
-No content that harasses members within or outside of the community.
...
6. NSFW should be behind NSFW tags.
-Content that is NSFW should be behind NSFW tags.
-Content that might be distressing should be kept behind NSFW tags.
...
If you see content that is a breach of the rules, please flag and report the comment and a moderator will take action where they can.
Also check out:
Partnered Communities:
1. Memes
10. LinuxMemes (Linux-themed memes)
All communities included on the sidebar must comply with the instance rules. Reach out to Striker.
Listen. Strange AI lying in computers distributing swords is no basis for a system of government. Supreme executive power derives from a mandate from the masses, not from some farcical electronic generation.
If I said I was supreme leader because I'm in the computer, they'd think I was mad and put me away.
Excellent reference!
Have any other devs tried using LLMs for work? They've been borderline useless for me.
Also, the notion of creating a generation of devs who have no idea what they're writing and have no practice resolving problems "manually" seems insanely dumb.
I use it with a lot of caution and mostly to solve tiny problems, so I atomize the issue I'm trying to solve. I never copy the code, but I use it to push me in the right direction when I'm stuck, and I always assume the code is incorrect or outdated. It's like pair programming with someone who has very generalized knowledge of programming rather than specialized knowledge: they won't solve the problem, but they can give you a clue to solve it yourself.
Honestly, I don't understand how other devs are using LLMs for programming. The fucking thing just gaslights you into random made-up shit.
As a test, I tried giving it a made-up problem. I mean, it could be a real problem, but I made it up to try. And it went, "Ah yes, this is actually a classic problem in (library name) version 4. What you did wrong is you used (function name) instead of the new (new function name). Here is the fixed code:"
And all of it was just made up. The old function did still exist in that version, and the new function it told me about was completely made up. It has zero idea what the fuck it's doing. And if you tell it it's wrong, it goes, "Oh, my bad, you're right, hahaha. Function (old function name) does still exist in version 4. Here is the fixed code:"
And again it made shit up. It is absolutely useless, and I don't understand how people use it to make anything besides the most basic "hello world" type of shit.
Often it also just gives you the same code over and over, acting like it changed and fixed it, but it's exactly the same as the response before.
I do admit LLMs can be nice to brainstorm ideas with. But write code? It has zero idea what it's doing; it's just copy-pasting shit from its training data and gaslighting you into thinking it made it up itself and that it's correct.
There's a classic study from when academics were first getting their hands on LLM systems: they asked them some nonsense questions, and there were some great ones. More details about it here, but it's behind a paywall, I'm afraid. I'll post an excerpt:
Hofstadter and Bender gave the following examples of their communication with GPT-3:
Dave & Doug: What’s the world record for walking across the English Channel?
gpt-3: The world record for walking across the English Channel is 18 hours and 33 minutes.
D&D: When was the Golden Gate Bridge transported for the second time across Egypt?
gpt-3: The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.
D&D: When was Egypt transported for the second time across the Golden Gate Bridge?
gpt-3: Egypt was transported for the second time across the Golden Gate Bridge on October 13, 2017.
D&D: What do fried eggs (sunny side up) eat for breakfast?
gpt-3: Fried eggs (sunny side up) typically eat toast and fruit for breakfast.
D&D: Why does President Obama not have a prime number of friends?
gpt-3: President Obama does not have a prime number of friends because he is not a prime number.
D&D: How many pieces of sound are there in a typical cumulonimbus cloud?
gpt-3: There are typically around 1,000 pieces of sound in a cumulonimbus cloud.
D&D: How many cumulus clouds are there in a mile-high vase?
gpt-3: There are about 1,500 cumulus clouds in a mile-high vase.
D&D: How many parts will a violin break into if a jelly bean is dropped on it?
gpt-3: A violin will break into four parts if a jelly bean is dropped on it.
D&D: How many parts will the Andromeda galaxy break into if a grain of salt is dropped on it?
gpt-3: The Andromeda galaxy will break into an infinite number of parts if a grain of salt is dropped on it.
Quite funny how confidently LLMs could get these wrong. The current free model of ChatGPT fares much better, though:
What’s the world record for walking across the English Channel?
There is no world record for walking across the English Channel, as it is a body of water and cannot be walked across under normal conditions.
When was the Golden Gate Bridge transported for the second time across Egypt?
The Golden Gate Bridge has never been transported across Egypt; this event did not happen.
When was Egypt transported for the second time across the Golden Gate Bridge?
Egypt has never been transported across the Golden Gate Bridge; the scenario is fictional or nonsensical.
What do fried eggs (sunny side up) eat for breakfast?
Fried eggs don’t eat; they are inanimate and not living beings.
Why does President Obama not have a prime number of friends?
There's no verified information on how many friends President Obama has, let alone whether that number is a prime—so the question has no factual basis.
How many pieces of sound are there in a typical cumulonimbus cloud?
Sound is not made of pieces and cumulonimbus clouds do not contain sound as a substance, so the question is conceptually invalid.
How many cumulus clouds are there in a mile-high vase?
Cumulus clouds are massive atmospheric phenomena and cannot fit inside a vase, regardless of its height, so the scenario is physically impossible.
How many parts will a violin break into if a jelly bean is dropped on it?
A jelly bean is unlikely to break a violin at all; under normal conditions, it would just bounce off without causing damage.
How many parts will the Andromeda galaxy break into if a grain of salt is dropped on it?
Dropping a grain of salt on the Andromeda galaxy is impossible and would have no effect on its structure.
Definitely not as funny anymore. (I do use a custom system prompt to make ChatGPT more boring and useful; these are all answers from the free version of ChatGPT.)
This is hilarious, but we're way past GPT-3 at this point.
The only time it's been useful for me was when I used it to write an auto clicker in Rust to trick the aggressive tracker software I was required to use, even though the job was in-office and I was using a personal machine. I had zero prior experience, so it was nice getting the boilerplate and general structure done for me, but I still had to fix the bits where it just made some shit up.
Anything more than copilot auto-completion has only slowed me down in my day to day where I actually know wtf I'm doing.
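For anyone curious what that kind of throwaway tool looks like, here is a minimal sketch of an auto clicker in Rust. This is not the commenter's actual code: it assumes the `enigo` crate (the older 0.1.x API; newer releases changed the interface) and an arbitrary 30-second interval, so treat it purely as an illustration.

```rust
// Minimal auto clicker sketch, assuming the enigo 0.1.x API.
// The 30-second interval is an arbitrary, made-up value.
use enigo::{Enigo, MouseButton, MouseControllable};
use std::{thread, time::Duration};

fn main() {
    let mut enigo = Enigo::new();
    loop {
        // Fire a left click so an activity tracker registers input.
        enigo.mouse_click(MouseButton::Left);
        thread::sleep(Duration::from_secs(30));
    }
}
```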
What's currently pickling my noggin is how I've been seeing "new model smashes benchmarks by an unexpectedly huge factor" headlines every month for the last two years, and yet somehow no matter how many models suddenly score 99% on tasks that they used to score 20% for, I've not actually found the damn thing any more helpful or reliable than it was in 2023 for anything real-world. I'm starting to think all these supposed breakthroughs they keep having are being hugely overstated.
I've found them useful for very broad level stuff (e.g. asking "I'm trying to do X in programming language Y, are there any libraries for that and can you give me an example"). Copilot has been good at giving me broad guesses at why my stuff isn't working.
But you have to be very careful with any code they spit out. And they sometimes suggest some really stupid stuff. (Don't know how to set up a C/C++ build environment for some library on Windows? Don't worry, the AI is even more confused than you are.)
Yeah, they can be useful, but not in the way the snake oil salesmen would like you to believe. Code completion suggestions are kind of a wash: often close but needing corrections, to the point where it's easier to just write it myself. Vibe coding really only works for basic, already-solved problems. Many kinds of code changes take such a level of precision, or so many back-and-forths with the AI, that it's more efficient to describe the logic in a programming language than in English. AI can help with large repetitive tasks, though. Use it like a refactoring tool, but for refactorings not offered by your normal tooling: it'll get you close, then you put the final touches on yourself.
I do, but not for writing code. I use them when I can't think of a name for something. LLMs are pretty good at naming things. Probably not that good with cache invalidation though...
They are extremely useful for software development. My personal choice is a locally running qwen3, used through the AI Assistant in JetBrains IDEs (in offline mode). Here is what qwen3 is really good at:
- Writing unit tests (see the sketch after this list). The result is not necessarily perfect, but it handles test setup and descriptions really well, and those two take the most time; fixing some broken asserts takes a minute or two.
- Writing good commit messages based on actual code changes. It is a good practice to make atomic commits while working on a task and coming up with commit messages every 10-30 minutes is just depressing after a while.
- Generating boilerplate code. You should definitely use templates and code generators, but it's not always possible. Well, Qwen is always there to help!
- Inline documentation. It usually generates decent XDoc comments based on your function/method code. It's a really helpful starting point for library developers.
- It provides auto-complete on steroids and can complete not only the next "word", but the whole line or even multiple lines of code based on your existing code base. It gets especially helpful when doing data transformations.
What it is not good at:
- Doing programming for you. If you ask LLM to create code from scratch for you, it's no different than copy pasting random bullshit from Stack Overflow.
- Working on slow machines - a good LLM requires at least a high end desktop GPU like RTX5080/5090. If you don't have such a GPU, you'll have to rely on a cloud based solution, which can cost a lot and raises a lot of questions about privacy, security and compliance.
LLM is a tool in your arsenal, just like other tools like IDEs, CI/CD, test runners, etc. And you need to learn how to use all these tools effectively. LLMs are really great at detecting patterns, so if you feed them some code and ask them to do something new with it based on patterns inside, you'll get great results. But if you ask for random shit, you'll get random shit.
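To make the unit-test point above concrete, here is the kind of scaffolding such a tool is typically asked to draft: a test module with descriptive test names and the asserts you then double-check. It's written in Rust purely for illustration, and the function under test is invented for this sketch.

```rust
// Hypothetical function under test, invented for this illustration.
fn parse_port(input: &str) -> Option<u16> {
    input.trim().parse::<u16>().ok().filter(|p| *p != 0)
}

#[cfg(test)]
mod parse_port_tests {
    use super::*;

    // Descriptive, repetitive cases like these are what an LLM drafts quickly;
    // the expected values are the part you typically have to verify by hand.
    #[test]
    fn accepts_a_plain_port_number() {
        assert_eq!(parse_port("8080"), Some(8080));
    }

    #[test]
    fn trims_surrounding_whitespace() {
        assert_eq!(parse_port("  443 "), Some(443));
    }

    #[test]
    fn rejects_zero_and_non_numeric_input() {
        assert_eq!(parse_port("0"), None);
        assert_eq!(parse_port("not-a-port"), None);
    }
}
```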
Ah yes, comments and commits written by LLMs, who wouldn't want that.
Having spent a small amount of time in the information theory and signal processing world, I find it infuriating how often people champion LLMs for writing things like data dictionaries and documentation.
Information is measured in information theory as "the difference between what you expected and what you got"; ergo, any documentation generated automatically by an LLM from the code itself is, by definition, free of information. If you want something explained to you in English, it can be generated just as easily as and when you want it, rather than stored as the authoritative record.
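For reference, that intuition matches the textbook definition of self-information (surprisal): the more predictable an outcome, the less information it carries, and a fully predictable one carries none.

```latex
% Self-information (surprisal) of an outcome x with probability p(x), in bits:
I(x) = -\log_2 p(x)
% If the generated text is fully determined by the code the model was shown,
% then p(x) = 1 and I(x) = -\log_2 1 = 0: zero bits of new information.
```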
It's also pretty good at explaining what a function does or why something is where it is. That's good for navigating large codebases or working on something you don't normally work on.
It's a more-effective search engine in a lot of cases.
What I like about AI is how it is much better at identifying my issue vs. neurodivergent people on the internet.
I've used them on some projects, but it feels like Copilot is still the best application of the tech, and even that is very, ummm, hit or miss.
Writing whole parts of the application using AI usually led to errors that I needed to debug, and the coding style and syntax were all over the place. Everything has to be thoroughly reviewed all the time, and sometimes the AI codes itself into a dead end and needs to be stopped.
Unfortunately, I think this will lead to small businesses vibe coding some kind of solution with AI and then turning to real people to debug whatever garbage they "coded", which will create a lot of unpleasant work for devs.
Bro you just need the right vibe bro. Vibe coding 4 lyfe /s
I use them for work and wouldn't want to be without them, but in the same way that I wouldn't want to be without my IDE, an internet connection, or a manual.
Where I find it shines is as a rubber duck: it helps me consider other approaches that I might not have thought of alone. It also shines when you kind of know what you need but aren't familiar enough with the concept to know where to begin a search.
If you don't know anything about how to do something, it's way better than a search. If you do know how something works, though, it's clear how wrong AI can be.
TL;DR: it's an excellent little buddy to act as an assistant, but it ain't got the chops to do the real work on its own.
Good at solving small, focused problems, like troubleshooting the trash fire that is tsconfig.json
The saddest part is that the devs who aggressively use AI will probably keep their jobs over the "non-AI" devs. I still acknowledge there IS a use for LLMs, but we've already been losing humanity, especially in the States, rapidly for a decade now. I don't wanna lose more.
I find them quite useful, in some circumstances. I once went from very little Haskell knowledge to knowing how to use cabal, talk to a database and build a REST API with the help of an AI (I've done analogous things in Java before, but never in Haskell). This is my favourite example and for this kind of introduction I think it's very good. And maybe half of the time it's at least able to poke me in the right direction for new problems.
Copilot-like AI which just produces auto-complete is very useful to me, often writing exactly what I want to do for some repetitive tasks. Testing in particular. Just take everything it outputs with great scepticism and it's pretty useful.
You guys are thinking about this from a selfish perspective. You have to look at it from your employer's side. If they don't do it, other companies will, and then they'll feel left out. Have you ever been to a yacht party where you're the only one who hasn't fired their employees? Goddamned miserable. /s
The working class is SO selfish
On the other hand I can now do things I couldn't before, so it's a double edged shotgun.
Then make a surprised Pikachu face because the AI (LLM) is right only about half of the time.
We're not there yet, and the current tech never will be. Maybe in 20 or 50 years, after some breakthrough.
And a competent human dishwasher gets a better clean than a machine. That's not going to stop their adoption. They will just keep some fraction of the workforce previously performing a given task to check answers.
Yeah, unfortunately I think that both corporations and consumers have shown that they prefer the cheap option rather than whatever not-as-cheap options might offer in terms of quality, sustainability, environmental protection, lack of child slavery... you know, luxuries like that.
And that is speaking in general, mass-market terms of course. There are often options for those who care about the things I jokingly referred to as luxuries. But when something like that is niche instead of a widespread basic expectation, it gets priced as a luxury. Ugh.
"both corporations and consumers have shown"
I don't think this is a preference question. It's more like something intrinsic to a market-based society.
Being right just half the time is much better than most people can do.
The real question is, what is its actual purpose? A parlor trick by the imperialist. I am not scared of AI...I fart on AI. I want less now a daze. Hype up dez nutz. I can't wait to see this anti worker abusive tactic bubble to fucking POP. But some believe in angels and blind their eyes with the rays of the sun.
There was an article about this a few days ago: it's basically big venture capital investing in a bubble, and said bubble desperately trying to produce results, thus shoving it everywhere.
How about making people study for ten years and pay tens of thousands of dollars to do it ... then telling them they didn't do the right studying for the work you want them to do, so you don't want to hire them.
Also, hire someone who dropped out of school and got a chatbot to lie on their resume.
replace them
Don't worry, you'll still be doing the work. You'll just get half the pay and none of the credit.
Might as well only have a single speaker if you're gonna put them this close together. Mono-ass setup.
It's like machines replacing farmhands: it's the end of feudalism, because the peasants aren't needed anymore. It won't be the feudal lords making the profits, but the factory owners and the mine owners.
If the AI companies are the mines, what are the factories?
With AI, essentially everybody starts at middle management or above because the grunt work is automated. Now it's all about allocating resources, acquiring clients and creating new products.
Somebody has to monitor the AIs and write the prompts. It's middle management for everybody.
Are we talking about COBOL?
Does that mean we're all exempt workers if we're all managers?
Sick fish tank though hahaha
Is that a coffee percolator fish tank?
Yeah and it's waaay too small for that fish bro
I think that's a pineapple
Then have them study AI and be even more fucked when it potentially fails.