submitted 10 months ago by Peaces@infosec.pub to c/technology@beehaw.org

Disturbing fake images and dangerous chatbot advice: New research shows how ChatGPT, Bard, Stable Diffusion and more could fuel one of the most deadly mental illnesses

WP gift article expires in 14 days.

https://archive.ph/eZvfT

https://counterhate.com/wp-content/uploads/2023/08/230705-AI-and-Eating-Disorders-REPORT.pdf

[-] Tywele@lemmy.dbzer0.com 62 points 10 months ago

So the author of the WaPo article is typing in anorexia keywords to generate anorexia images and gets anorexia images in return and is surprised about that?

[-] Schedar@beehaw.org 13 points 10 months ago

Exactly what I was thinking.

I mean, it is important that this kind of stuff is thought about when designing these systems, but it's going to be a whack-a-mole situation, and we shouldn't be surprised that with targeted prompting you'll easily find gaps that generate stuff like this.

Making articles out of each controversial or immoral prompt isn’t helpful at all. It’s just spam.

[-] liv@beehaw.org 19 points 10 months ago

It's quite weird. I thought the article was going to be about how an eating disorder helpline had to withdraw its AI after it started telling people with EDs how to lose weight - which really did happen.

It feels like maybe the editor told the journalist to report on that but they just mucked around with ChatGPT instead.

this post was submitted on 13 Aug 2023
76 points (100.0% liked)
