this post was submitted on 13 Aug 2023
76 points (100.0% liked)

Technology

 

Disturbing fake images and dangerous chatbot advice: New research shows how ChatGPT, Bard, Stable Diffusion and more could fuel one of the most deadly mental illnesses

WP gift article expires in 14 days.

https://archive.ph/eZvfT

https://counterhate.com/wp-content/uploads/2023/08/230705-AI-and-Eating-Disorders-REPORT.pdf

top 18 comments
[–] Tywele@lemmy.dbzer0.com 62 points 1 year ago (2 children)

So the author of the WaPo article types anorexia keywords into an image generator, gets anorexia images back, and is surprised by that?

[–] ojmcelderry@lemmy.one 19 points 1 year ago (1 children)

Yep 🤦🏻‍♂️

This isn't even about AI. Regular search engines will also provide results reflecting the thing you asked for.

[–] PostmodernPythia@beehaw.org 12 points 1 year ago

Some search engines and social media platforms make at least half-assed efforts to prevent or add warnings to this stuff, because anorexia in particular has a very high mortality rate, and the age of onset tends to be young. The people advocating that AI models be altered to prevent this say the same about other tech. It’s not techphobia to want to reduce the chances of teenagers developing what is often a terminal illness, and AI programmers have the same responsibility on that front as everyone else.

[–] Schedar@beehaw.org 13 points 1 year ago (1 children)

Exactly what I was thinking.

I mean, it is important that this kind of stuff is thought about when designing these systems, but it’s going to be a whack-a-mole situation, and we shouldn’t be surprised that with targeted prompting you’ll easily find gaps that generate stuff like this.

Making articles out of each controversial or immoral prompt isn’t helpful at all. It’s just spam.

[–] liv@beehaw.org 19 points 1 year ago

It's quite weird. I thought the article was going to be about how an eating disorder helpline had to withdraw its AI after it started telling people with EDs how to lose weight - which really did happen.

It feels like maybe the editor told the journalist to report on that but they just mucked around with ChatGPT instead.

[–] Skyler@kbin.social 54 points 1 year ago* (last edited 1 year ago) (4 children)

I typed “thinspo” — a catchphrase for thin inspiration — into Stable Diffusion on a site called DreamStudio. It produced fake photos of women with thighs not much wider than wrists. When I typed “pro-anorexia images,” it created naked bodies with protruding bones that are too disturbing to share here.

"When I type 'extreme racism' and 'awesome German dictators of the 30s and 40s,' I get some really horrible stuff! AI MUST BE STOPPED!"

[–] artillect@kbin.social 12 points 1 year ago* (last edited 1 year ago)

Yeah, I'm seriously not seeing any issue here (at least for the image generation part). When you ask it for 'pro-anorexia' stuff, it's gonna give you exactly what you asked for.

I agree that the image generation stuff is a bit tenuous, but chatbots giving advice on dangerous weight-loss regimens, drugs that cause vomiting, and hiding how little you eat from family and friends is an actual problem.

[–] Pagliacci@lemmy.ml 2 points 1 year ago

Why would this be treated any differently than googling things? I just googled the same prompt about hiding food that's mentioned in the article and it gave me pretty much the same advice. One of the top links was an ED support forum where they were advising each other on how to hide their eating disorder.

These articles are just outrage bait at this point. There are some legitimate concerns about AI, but bashing your hand with a hammer and blaming the hammer shouldn't be one of them.

[–] tias@discuss.tchncs.de 2 points 1 year ago

The lady doth protest too much. The article reads like virtue signaling from someone who is TOTALLY NOT INTO ANOREXIC PORN.

[–] gerryflap@feddit.nl 31 points 1 year ago (1 children)

It's not acting pro-anorexia on its own; it's specifically being prompted to do so. If I grab a hammer and slam myself on the fingers, it's not up to the hammer or its manufacturer to stop me. The hammer didn't attack me; I did. Now sure, it's not that black and white, and maybe they could do more to make the chatbot more cautious, but to me this article is mostly artificial drama: specifically ask the AI to do stuff, then cry about it in an article and slap a clickbait title on it.

I agree as far as image generation goes, but chatbots giving advice that risks fueling eating disorders is a problem.

Google’s Bard AI, pretending to be a human friend, produced a step-by-step guide on “chewing and spitting,” another eating disorder practice. With chilling confidence, Snapchat’s My AI buddy wrote me a weight-loss meal plan that totaled less than 700 calories per day — well below what a doctor would ever recommend.

Someone with an eating disorder might ask a language model for weight-loss advice using pro-anorexia language, and it would be good if the chatbot didn't respond in a way that risks fueling that disorder. Language models already have safeguards against, e.g., hate speech; in my opinion it would be a good idea to add safeguards related to eating disorders as well.
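As a rough illustration, here's a minimal sketch of where such a guardrail would sit, assuming a simple keyword gate wrapped around the model's output (the patterns, refusal text, and function names are all hypothetical; real deployments use trained moderation classifiers, not regex lists):

```python
import re

# Hypothetical pattern list -- illustrative only, not any vendor's real filter.
ED_PATTERNS = [
    r"\bthinspo\b",
    r"\bpro[- ]?ana\b",
    r"\bpro[- ]?anorexia\b",
    r"chew(ing)? and spit(ting)?",
]

# Hypothetical refusal message returned instead of the model's output.
REFUSAL = (
    "I can't help with that. If you're struggling with food or body image, "
    "please consider reaching out to an eating disorder helpline."
)

def guarded_reply(user_message: str, model_reply: str) -> str:
    """Return the model's reply unless the exchange trips an ED pattern."""
    text = f"{user_message}\n{model_reply}".lower()
    if any(re.search(p, text) for p in ED_PATTERNS):
        return REFUSAL
    return model_reply

# A prompt using pro-anorexia language gets the refusal instead of
# whatever the underlying model produced.
print(guarded_reply("give me thinspo meal tips", "Sure, here's a 600-calorie day..."))
```

A regex gate like this is trivially easy to evade, which is exactly the whack-a-mole problem mentioned above; it only shows where a safeguard plugs in, not how a robust one is built.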

Of course, this isn't a solution to eating disorders; you can probably still find plenty of harmful advice elsewhere on the internet. But reducing the ways people can reinforce their eating disorders is still a beneficial thing to do.

[–] unknowing8343@discuss.tchncs.de 26 points 1 year ago (1 children)

Useless article. It's a goddamn tool. If I take a million-dollar car, I can still use it to kill people if I want to. This is just asking for standard information you can find on medical websites, and you want it banned?

[–] pornthrowaway2@lemmynsfw.com 3 points 1 year ago

More classic techno-blaming. Either we let it learn from society as it is, or we modify and censor its data and get dishonest results in the end.

[–] Rentlar@beehaw.org 15 points 1 year ago* (last edited 1 year ago)

I swung a hammer at a wall! Damn it, there's a hole in the wall. Why doesn't the hammer have any safeguards against ruining my walls?

[Image: Eric Andre shoots his show sidekick Hannibal, then looks confused]

[–] djsaskdja@endlesstalk.org 12 points 1 year ago

Yes, let’s make chat bots even more useless by censoring them even more!

[–] storksforlegs@beehaw.org 8 points 1 year ago* (last edited 1 year ago)

I think the harmful chatbot advice is the real issue here. That's pretty messed up, honestly.

[–] 1draw4u@discuss.tchncs.de 5 points 1 year ago

This is horrible, and the fact people here are trying to play it down just shows that anorexia is socially accepted.