this post was submitted on 31 May 2025

Showerthoughts


A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.


Rules

  1. All posts must be showerthoughts
  2. The entire showerthought must be in the title
  3. No politics
    • If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
    • A good place for politics is c/politicaldiscussion
  4. Posts must be original/unique
  5. Adhere to Lemmy's Code of Conduct and the TOS

[–] chicken@lemmy.dbzer0.com 1 points 4 days ago

> But any actual developer knows that you don’t just deploy whatever Copilot comes up with, because - let’s be blunt - it’s going to be very bad code. It won’t be DRY, it will be bloated, it will implement things in nonsensical ways, it will hallucinate… You use it as a starting point, and then sculpt it into shape.

Yeah, but I don't know where you're getting the "never will" or "fundamentally cannot do" from. LLMs used to be useful for coding only if you asked for simple, self-contained functions in the most popular languages, and now we're here: for most small-scope requests, I get a result better written than what I could have produced myself in far more time, and it makes far fewer mistakes than before and can often correct the ones it does make. That's with only using local models, which became actually viable for me less than a year ago. So why won't it keep going?

From what I can tell, not much actually stands in the way of sensible holistic consideration of a larger problem or codebase: mainly context size limits, and the tendency to forget things in the context window the longer it gets. Afaik both are problems being actively worked on, and there's no reason they would be guaranteed to remain unsolved. This also seems to be what's holding back agentic AI from being actually useful. If that stuff gets cracked, I think things will start changing even faster.
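To make the "forgetting" point concrete: a minimal, purely illustrative sketch (not any real model's API) of the naive workaround a fixed context window forces today, where only the newest messages that fit a token budget are kept and everything older is silently dropped. Token counts are approximated here by whitespace-split word counts; the function name and sample history are made up for the example.

```python
def fit_to_context(messages, budget):
    """Keep the newest messages whose combined 'token' count fits the budget.

    Walks the history newest-first, accumulating an approximate token cost
    (word count), and stops at the first message that would overflow the
    budget -- so the oldest context is what gets forgotten first.
    """
    kept = []
    used = 0
    for msg in reversed(messages):          # newest first
        cost = len(msg.split())             # crude stand-in for a tokenizer
        if used + cost > budget:
            break                           # everything older is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order

# Hypothetical conversation history, oldest first:
history = [
    "def parse_config(path): ...",          # oldest: dropped first
    "please add YAML support",
    "also validate the schema",
]
print(fit_to_context(history, budget=8))
# -> ['please add YAML support', 'also validate the schema']
```

The original code the conversation started from falls out of the window first, which is exactly the failure mode the comment describes; larger windows and better long-context retention would shrink how often this truncation matters at all.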