this post was submitted on 22 Dec 2024
336 points (95.9% liked)

Technology

[–] sudneo@lemm.ee 30 points 10 hours ago (2 children)

> Even if they plateaued in place where they are right now it would lead to major shakeups in humanity's current workflow

Like which one? ChatGPT has been out for two years now, and we already have quite a lot of (good?) models. Which shakeup do you think is happening or going to happen?

[–] locuester@lemmy.zip 14 points 9 hours ago (3 children)

Computer programming has radically changed. It's a huge help having LLM autocomplete and chat built in, in IDEs like Cursor and Windsurf.

I’ve been a developer for 35 years. This is shaking it up as much as the internet did.

[–] Nalivai@lemmy.world 17 points 8 hours ago* (last edited 8 hours ago) (2 children)

I quit my previous job in part because I couldn't deal with the influx of terrible, unreliable, dangerous, bloated, nonsensical, not even working code that was suddenly pushed into one of the projects I was working on. That project is now completely dead; they froze it on some arbitrary version.
When a junior dev makes a mistake, you can explain it to them and they will not make it again. When they use an LLM to make a mistake, there is nothing to explain to anyone.
I compare this shakeup more to an earthquake than to anything positive you can associate with shaking.

[–] InnerScientist@lemmy.world 4 points 7 hours ago

And so the problem wasn't the AI/LLM; it was the person who said "looks good" without even looking at the generated code, and then the person who read that pull request and, again without reading the code, said "lgtm".

If you have good policies, then it doesn't matter how many bad practices are used; the code still won't be merged.

The only overhead is that you have to read all the pull requests, but if it's an internal project, then telling everyone to read and understand their code shouldn't be an issue.
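As a rough sketch of what such a policy can look like mechanically (the check names and `cargo` commands here are made up for illustration, not tied to any particular CI system), a merge gate just runs the required checks and refuses anything that fails, before a human review even starts:

```python
import subprocess
import sys

def gate(checks) -> bool:
    """Run each (name, command) check; refuse the merge if any fails."""
    for name, cmd in checks:
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            print(f"check '{name}' failed; refusing merge")
            return False
    return True

if __name__ == "__main__":
    # Illustrative checks: the code must at least build and pass tests.
    checks = [
        ("build", ["cargo", "build"]),
        ("tests", ["cargo", "test"]),
    ]
    sys.exit(0 if gate(checks) else 1)
```

It doesn't matter whether the code came from a junior dev or an LLM: if it doesn't build or doesn't pass tests, it never reaches review.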

[–] locuester@lemmy.zip -1 points 3 hours ago

This is a problem with your team/project. It’s not a problem with the technology.

[–] sudneo@lemm.ee 20 points 9 hours ago (2 children)

I hardly see it changed to be honest. I work in the field too and I can imagine LLMs being good at producing decent boilerplate straight out of documentation, but nothing more complex than that.

I often use LLMs to work on my personal projects and, for example, Claude or ChatGPT 4o often spit out programs that don't compile, use nonexistent functions, are bloated, etc. For languages with more training data (like Python) they possibly do better, but I can't see it as a "radical change"; it's more like a well-configured snippet plugin and autocomplete feature.

LLMs can't count, and by definition they can't analyze novel problems and provide innovative solutions... why would they radically change programming?

[–] locuester@lemmy.zip -1 points 3 hours ago

You’re missing it. Use Cursor or Windsurf. The autocomplete will help in so many tedious situations. It’s game changing.

[–] areyouevenreal@lemm.ee 0 points 8 hours ago (1 children)

ChatGPT 4o isn't even the most advanced model, yet I have seen it do things you say it can't. Maybe work on your prompting.

[–] sudneo@lemm.ee 7 points 7 hours ago

That is my experience: it's generally quite decent for small and simple stuff (as I said, distillation of documentation). I use it for Rust, where I am sure the training material was much smaller than for other languages. It's not a matter of prompting, though; it's not my prompt that makes it hallucinate functions that don't exist in libraries, or makes it write code that doesn't compile. That's a feature of the technology itself.

GPTs are statistical text generators, after all; they don't "understand" the problem.
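A toy bigram generator makes the point: it produces plausible-looking sequences purely from co-occurrence statistics, with no model of whether the result is correct. This is a deliberately minimal sketch of "statistical text generation", not how real GPTs are implemented:

```python
import random
from collections import defaultdict

# Tiny training corpus: the "model" only learns which word follows which.
corpus = "the code compiles the code runs the tests pass".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int, seed: int = 0) -> str:
    """Sample a word sequence from the bigram statistics alone."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: no statistics for this word
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 5))
```

Every output looks locally fluent, because each step is statistically likely; nothing in the process checks whether the whole is true, consistent, or compilable.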

[–] areyouevenreal@lemm.ee -1 points 8 hours ago

Exactly this. Things have already changed, and are changing as more and more people learn how and where to use these technologies. I have even seen teachers with a limited grasp of technology in general use this stuff.

[–] figjam@midwest.social -4 points 8 hours ago (1 children)
[–] sudneo@lemm.ee 5 points 7 hours ago (1 children)

Oh boy... what could possibly go wrong with documents where minutiae like wording can make a huge difference?

[–] figjam@midwest.social -2 points 6 hours ago (1 children)

Creating legal documents? No. Reviewing legal documents for errors and inaccuracies? Totally.

[–] sudneo@lemm.ee 3 points 6 hours ago (1 children)

I really can't see this being done by any sane person. Why would you have a text generator reviewing anything (besides grammar)? Do you have any reference to companies doing this, perhaps?

[–] figjam@midwest.social 1 points 4 hours ago

It's complex pattern matching and looking up existing case law online. This work has been outsourced to contracting companies for at least 7 years that I'm aware of. If it is something that can be documented in a runbook for non-professionals to do for twenty cents on the dollar, then there is no reason it can't be done by a script for 0.002.
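As a deliberately naive sketch of the "runbook as script" idea (the patterns, labels, and sample clause below are invented for illustration; real review tooling is far more involved than a keyword scan):

```python
import re

# A "runbook" of things a reviewer is told to flag, as regex patterns.
RUNBOOK_PATTERNS = {
    "unlimited liability": r"\bunlimited\s+liability\b",
    "auto-renewal": r"\bautomatically\s+renews?\b",
}

def review(text: str) -> list[str]:
    """Return the runbook findings for a piece of contract text."""
    findings = [
        label
        for label, pattern in RUNBOOK_PATTERNS.items()
        if re.search(pattern, text, re.IGNORECASE)
    ]
    # Runbooks also list required elements, not just forbidden ones.
    if "governing law" not in text.lower():
        findings.append("missing governing law")
    return findings

print(review("This agreement shall automatically renew each year."))
# -> ['auto-renewal', 'missing governing law']
```

If the task really is reducible to following a checklist like this, the script wins on cost; the open question is how much of legal review actually is.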