this post was submitted on 11 Sep 2024
66 points (76.2% liked)

Over the past few years, the evolution of AI-driven tools like GitHub’s Copilot and other large language models (LLMs) has promised to revolutionise programming. By leveraging deep learning, these tools can generate code, suggest solutions, and even troubleshoot issues in real time, saving developers hours of work. While these tools have obvious productivity benefits, there’s a growing concern that they may also have unintended consequences for the quality and skill set of programmers.

[–] DScratch@sh.itjust.works 12 points 1 month ago (5 children)

I’ll bet people said the same thing when IntelliSense started suggesting line completions.

And when errors were highlighted in the code rather than in the console output.

And when high-level languages started appearing.

[–] dinckelman@lemmy.world 18 points 1 month ago

This really isn’t a good comparison at all. One gives you a list of choices you can make, and the other gives you a blind answer.

If seeing what argument types a function takes makes me a worse engineer, so be it, I guess

[–] MajorHavoc@programming.dev 13 points 1 month ago (1 children)

I’ll bet people said the same thing when IntelliSense started suggesting line completions.

They did.

And when errors were highlighted in the code rather than in the console output.

Yep.

And when high-level languages started appearing.

And yes.

That said, if you believed my mentors, we were barrelling towards a 2025 in which nothing running on software ever really worked reliably.

So they may have been grumpy, but they were also right on that point.

[–] vrighter@discuss.tchncs.de 7 points 1 month ago

I mean, with the "move fast and break things" mentality of most companies nowadays, I'd say they were spot-on

[–] leisesprecher@feddit.org 8 points 1 month ago

And when people started writing books instead of memorizing epic poems.

[–] u_tamtam@programming.dev 8 points 1 month ago

I’ll bet people said the same thing when IntelliSense started suggesting line completions.

I'm sure many did, but I'm also pretty sure it's easy to draw a line between code assistance and LLM-infused code generation.

[–] JackGreenEarth@lemm.ee 2 points 1 month ago

And they may have been right. But getting working code is usually the end goal, not proving you're some better programmer, and useful tools can help you reach that goal.