[–] halm@leminal.space 15 points 6 days ago (2 children)

According to the research mentioned in the article, the answer is yes. The big caveats are:

  • that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that's not likely to happen.
  • that you need a level of "AI" that isn't going to start hallucinating and instead reinforce the subjects' conspiracy beliefs. Despite techbros' hype of the technology, I'm not convinced we're anywhere close.
[–] Butterbee@beehaw.org 12 points 5 days ago (1 child)

It's not even fundamentally possible with current LLMs. It's like saying "Yes, it's totally possible to do that! We just need to invent something that can do that first!"

[–] halm@leminal.space 5 points 5 days ago

I think we agree on the limited capability of (what is currently passed off as) "artificial intelligence", yes.

[–] CanadaPlus@lemmy.sdf.org 4 points 5 days ago* (last edited 4 days ago)

> that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that's not likely to happen.

You overestimate how hard it is to get a conspiracy theorist to click on something. I don't know, it seems promising to me. I'm more worried that it could be used to sell things more nefarious than "climate change is real".

> that you need a level of "AI" that isn't going to start hallucinating and instead reinforce the subjects' conspiracy beliefs. Despite techbros' hype of the technology, I'm not convinced we're anywhere close.

They used a purpose-fine-tuned GPT-4 model for this study, and it didn't go off script in that way even once. I bet you could make it slip if you really tried, but if you're doing adversarial prompting, then you're not the target for this thing anyway.
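
For the curious, the basic shape of a setup like that is just a tightly constrained system prompt wrapped around a chat loop. Here's a minimal sketch using the OpenAI Python client; the model name, prompt, and function names are my own illustrative guesses, not the study's actual setup:

```python
# Hypothetical sketch of a debunking chatbot turn, in the spirit of the
# study's setup (model ID and prompt are illustrative, not the real ones).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a patient fact-checker. The user believes a conspiracy theory. "
    "Respond only with well-sourced counter-evidence; never agree with or "
    "embellish the theory, and say you don't know rather than speculate."
)

def debunk_turn(history: list[dict], user_msg: str) -> str:
    """Send one conversation turn and return the model's reply."""
    history.append({"role": "user", "content": user_msg})
    resp = client.chat.completions.create(
        model="gpt-4-turbo",  # stand-in for a purpose-tuned GPT-4
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *history],
        temperature=0.2,      # low temperature helps keep it on script
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(debunk_turn(history, "The moon landing was staged, right?"))
```

A prompt like that won't stop a determined adversarial user, but as you say, that user was never the treatment population anyway.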