this post was submitted on 30 Oct 2023
545 points (94.7% liked)

Technology

57418 readers
5500 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 
  • Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn't want to compete with open source, he added.
you are viewing a single comment's thread
view the rest of the comments
[–] MudMan@kbin.social 68 points 9 months ago (10 children)

Oh, you mean it wasn't just concidence that the moment OpenAI, Google and MS were in position they started caving to oversight and claiming that any further development should be licensed by the government?

I'm shocked. Shocked, I tell you.

I mean, I get that many people were just freaking out about it and it's easy to lose track, but they were not even a little bit subtle about it.

[–] Salamendacious@lemmy.world 15 points 9 months ago (8 children)

AI is going to change quite a bit but I couldn't wrap my head around the end of the world stuff.

[–] echodot@feddit.uk 24 points 9 months ago* (last edited 9 months ago) (6 children)

It won't end the world because AI doesn't work the way that Hollywood portrays it.

No AI has ever been shown to have self agency, if it's not given instructions it'll just sit there. Even a human child would attempt to leave room if left alone in there.

So the real risk is not that and AI will decide to destroy humanity it's that a human will tell the AI to destroy their enemies.

But then you just get back around to mutually assured destruction, if you tell your self redesigning thinking weapon to attack me I'll tell my self redesigning thinking weapon to attack you.

[–] CodeInvasion@sh.itjust.works 8 points 9 months ago (1 children)

I’m an AI researcher at one of the world’s top universities on the topic. While you are correct that no AI has demonstrated self-agency, it doesn’t mean that it won’t imitate such actions.

These days, when people think AI, they mostly are referring to Language Models as these are what most people will interact with. A language model is trained on a corpus of documents. In the event of Large Language Models like ChatGPT, they are trained on just about any written document in existence. This includes Hollywood scripts and short stories concerning sentient AI.

If put in the right starting conditions by a user, any language model will start to behave as if it were sentient, imitating the training data from its corpus. This could have serious consequences if not protected against.

[–] little_hermit@lemmus.org 1 points 9 months ago

There are already instances where chat bots demonstrated unintended racism. The monumental goal of creating a general purpose intelligence is now plausible. The hardware has caught up with the ambitions of decades past. Maybe ChatGPT's model has no real hope for sentience, as it's just a word factory, other approaches might. Spiking neural networks for example, on a massive scale, might simulate the human brain to where the network actually ponders its existence.

load more comments (4 replies)
load more comments (5 replies)
load more comments (6 replies)