this post was submitted on 08 Sep 2024
434 points (98.0% liked)

Microblog Memes

[–] GardenVarietyAnxiety@lemmy.world 85 points 1 week ago* (last edited 1 week ago) (6 children)

This is being done by PEOPLE. PEOPLE are using AI to do this.

I'm not defending AI, but we need to focus on the operator, ~~not the tool.~~

The operator as much as the tool.

[–] TootSweet@lemmy.world 50 points 1 week ago (1 children)
[–] GardenVarietyAnxiety@lemmy.world 16 points 1 week ago (1 children)

I had the same thought after I posted it, lol

[–] TexasDrunk@lemmy.world 1 points 1 week ago (2 children)

Step one for gun control should be a fully functioning mental healthcare system. That's not the final step by any means, but if people are getting the mental help they need, there will be fewer shootings.

[–] merc@sh.itjust.works 5 points 1 week ago

Step one for gun control should be gun control.

Sure, a functioning mental healthcare system is important and should be pursued in parallel. But, clearly, there's a major issue with the availability of powerful guns. That needs to be addressed before, or at least at the same time as, mental health.

[–] kibiz0r@midwest.social 30 points 1 week ago* (last edited 1 week ago) (1 children)

Technology is not neutral.

Especially for a tool that’s specifically marketed for people to delegate decision-making to it, we need to seriously question the person-tool separation.

That alleged separation is what lets gig economy apps abuse their workers in ways no flesh-and-blood boss would get away with, what enabled RealPage’s decentralized price-fixing cartel, and what underlies any number of instances of “math-washing” used to justify discrimination.

The entire big tech ethos is basically to do horrible shit in such tiny increments that there is no single instance to meaningfully prosecute. (Edit: As always, Mike Judge is relevant: https://youtu.be/yZjCQ3T5yXo)

We need to take this seriously. Language is perhaps the single most important invention of our species, and we’re at risk of the social equivalent of Kessler Syndrome. And for what? So we can write “thank you” notes quicker?

[–] GardenVarietyAnxiety@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

Respect.

Also: I just realized I need a Mike Judge marathon night.

[–] zib@lemmy.world 27 points 1 week ago (1 children)

You bring up a good point. In addition to regulating the tool, we should also punish the people who maliciously abuse it.

[–] GardenVarietyAnxiety@lemmy.world 4 points 1 week ago* (last edited 1 week ago)

Regulate it because it's being abused, and hold the abusers accountable, yeah.

I always see the names of the models being boogey-manned, but we hardly ever see the names of the people behind the big, seemingly untouchable ones.

"Look at this scary model" vs "Look at this person being a dick"

We're being told what to be afraid of and not who is responsible for it, because fear sells and we can't do anything with it.

Just my perception, of course.

[–] Saleh@feddit.org 5 points 1 week ago (3 children)

I mean, the tool is also being made by people. And there are people who pointed out that a tool that is great at spouting out plausible-sounding things with no factual bearing could be badly abused for spreading misinformation. Now, there have been ethics boards among the people who make these tools who took those concerns on board and raised them in their companies, subsequently getting ousted for putting ethical concerns before short-term profits.

The question is: how much is it just a tool, and how much of it is intrinsically linked with the unethical, greedy people pushing it onto the world?

E.g., a Cybertruck is also just a car, and one could say the truck itself is not to blame. But it is the very embodiment of the problems of the people involved.

[–] Anticorp@lemmy.world 2 points 1 week ago

Corporate ethics only exist within the realm of the theoretical, and in training videos. Ethics will not be tolerated in actual practice.

It is all intrinsically linked. But we need to see who the people behind it are or it's just a boogey-man.

[–] merc@sh.itjust.works 1 points 1 week ago

> subsequently getting ousted for putting ethical concerns before short term profits.

The irony is that there are no profits. The companies selling generative AI are losing such vast sums of money that it's difficult to wrap your head around.

What they're focused on isn't short-term profits, it's being the biggest, most dominant firm whenever AI does eventually become profitable, which might take decades.

[–] saltesc@lemmy.world 1 points 1 week ago (1 children)

Yep. Machines will only ever do what they're told to do. This is AI literally doing the job it's been instructed to do under the rules it has been given.

[–] merc@sh.itjust.works 2 points 1 week ago* (last edited 1 week ago)

Machines are not designed by hermits with no knowledge of the outside world. They're tools, but tools designed with a purpose, and with or without safeties meant to keep them from maiming or killing people. The design of a machine can be used to talk about the responsibility and morals of its designer. And certain machines are so unsafe that, even if they theoretically have a useful purpose, the dangers of misuse are so great that they shouldn't be permitted to be sold.

In Arrested Development, George Bluth designs and sells the Cornballer, a machine to deep-fry cornballs. It was made illegal after it caused serious burns to anybody who used it. Part of the purpose of showing this device on the show is to reveal the character of George Bluth. It shows that he's the kind of guy who doesn't care enough to design a safe device, and who continues to try to sell it in Mexico even after it's made illegal in the US because of how unsafe it is.

Yes, in this case it is people who are submitting papers full of fabricated data, using ChatGPT as a tool. But that doesn't mean ChatGPT is simply “neutral” in all this. They've released a tool that lacks safeties and that is effectively “burning” science. The positive potential uses of ChatGPT are what, writing a dirty limerick in the style of Shakespeare? Meanwhile, the potential pitfalls are things like convincing a suicidal person to kill themselves, sowing confusion that makes it harder to find good science, or giving people unsafe medical diagnoses.

[–] otter@lemmy.dbzer0.com 1 points 1 week ago (1 children)

People seem to've already forgotten about Transmetropolitan. 🤷🏽‍♂️

I mean, sure, fuck Ellis, but still. Idiocracy came after, and even that's fading from modern awareness, it seems. 😶‍🌫️

[–] CitizenKong@lemmy.world 4 points 1 week ago

Ellis is like Gaiman: at some point you have to separate the work from the author.