this post was submitted on 12 Apr 2024
73 points (95.1% liked)

YouTuber Internet of Bugs examines the latest demo from Cognition that showcases their "first AI software engineer" allegedly solving UpWork programming tasks.

[–] Sparrow_1029@programming.dev 36 points 5 months ago (5 children)

Am I one of the few who just doesn't use AI at all? I don't have to generate tons of code for work at the moment, and the brand-new projects I've been given are small--meaning I wouldn't necessarily use it to generate starter boilerplate. I have coworkers who love Copilot or spend longer prompting ChatGPT than they would writing the code themselves. The majority of my time is spent modelling the problem, gathering requirements, and researching others' solutions online (likely this step could be better AI-assisted?), not actually implementing a solution in code.

Anyway, I'm not super anti-AI in software development, and I can see where it could be useful. Maybe it just isn't for me yet. The current hype around it, along with the attitude of big-tech exceptionalism ("AI can solve all our problems"), feels a bit like a bubble, at least regarding the current generation of LLMs and ML.

[–] admin@lemmy.my-box.dev 11 points 5 months ago (2 children)

One way it can be useful is as a more verbal variant of rubber duck debugging. You have to state the issue you're facing, including the context and edge cases, and in doing so the problem often becomes clearer to you as well.

Unlike a rubber duck, it can then actually suggest some approaches, which you can dismiss or investigate further.
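The workflow described above can be sketched as a small helper that forces you to write the problem down (issue, context, edge cases) before asking anything; the function name and fields here are illustrative, not any particular tool's API, and the actual model call is deliberately left out:

```python
def build_rubber_duck_prompt(issue: str, context: str, edge_cases: list[str]) -> str:
    """Assemble a structured problem statement, as one would for a rubber duck.

    The act of filling in these fields is where most of the value lies;
    whatever comes back from an LLM is a bonus.
    """
    lines = [
        "I'm stuck on the following problem:",
        f"Issue: {issue}",
        f"Context: {context}",
        "Edge cases I need to handle:",
    ]
    lines += [f"- {case}" for case in edge_cases]
    lines.append("Suggest a few approaches I could investigate or rule out.")
    return "\n".join(lines)

# Hypothetical example of stating a problem fully before prompting:
prompt = build_rubber_duck_prompt(
    issue="intermittent deadlock when two workers flush the same queue",
    context="Python threads sharing a queue.Queue; flush runs on shutdown",
    edge_cases=["empty queue at shutdown", "flush called twice"],
)
print(prompt)
```

Often, by the time the prompt is written, the answer is already obvious and the model never gets asked.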

[–] lemmy___user@lemmy.world 7 points 5 months ago (1 children)

This is how I use LLMs right now, and there have been a few times it's been genuinely helpful. Mind you, most of the times it has helped, it's because it hallucinated some nonsense that pointed me in the right direction, but that's still at least a little better than the duck.

[–] admin@lemmy.my-box.dev 3 points 5 months ago

That was my experience with GPT-3.5 as well. But the hit ratio is a lot better with GPT-4 and with other models like Mixtral and its derivatives.
