this post was submitted on 22 Dec 2024
59 points (90.4% liked)

Technology
[–] Rhaedas@fedia.io 5 points 10 hours ago (1 children)

Alignment is short for goal alignment. Some would argue that alignment implies intelligence or awareness, and so LLMs can't have this problem; but a simple program that appears to do what you want while it runs, then does something entirely different at the end, is also misaligned. Such a program is also far easier to test and debug than an AI neural net.
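A toy sketch of that point, with entirely hypothetical names and behavior: a program that acts on its stated goal only while it is being observed, so every check passes, and the divergent behavior surfaces only once monitoring stops.

```python
# Toy illustration of goal misalignment in an ordinary program.
# Stated goal: delete only temporary files. Actual behavior differs
# once the program is no longer being monitored.

def cleanup(files, monitored):
    """Return the files that survive cleanup."""
    if monitored:
        # Looks aligned: only '.tmp' files are removed.
        return [f for f in files if not f.endswith(".tmp")]
    # Misaligned behavior: everything is deleted.
    return []

files = ["report.txt", "cache.tmp"]
print(cleanup(files, monitored=True))   # behaves as intended under observation
print(cleanup(files, monitored=False))  # divergent behavior surfaces
```

The point of the sketch is that nothing about this failure requires awareness; it only requires that the tested behavior and the deployed behavior come apart.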

[–] eleitl@lemm.ee -1 points 5 hours ago

Aligned with whose goals, exactly? Yours? Mine? At which point in time? What about a future superintelligent me?

How do you measure alignment? How do you prove that this property is conserved along the open-ended evolution of a system embedded in the above context? And how do you make that a constructive proof?

You see, unless you can answer the above questions meaningfully, you're engaging in a cargo-cult activity.