Robin Williams' daughter Zelda says AI recreations of her dad are 'personally disturbing': 'The worst bits of everything this industry is'

[–] TwilightVulpine@lemmy.world 7 points 11 months ago (3 children)

It's sad to see how AI advocates strive to replicate the work of artists all the while being incredibly dismissive of their value. No wonder so many artists are incensed enough to want to be rid of everything AI.

Besides, it's nothing new that media companies and internet content mills are willing to replace quality with whatever is cheaper and faster. To try to use that as an indictment against those artists' worth is just... yeesh.

This is the kind of stuff AI can produce just by itself, within seconds; the idea is from AI and so is the actual image.

You realize that even this had to be set up by human beings, right? Piping random prompts through art AI is impressive, but it's not intelligent. Don't let yourself get caught up in sci-fi dreams; I made that mistake too. When you say "AI will steamroll humans" you are assigning awareness and volition to it that it doesn't have. AIs may be filled with all human knowledge, but they don't know anything. They simply repeat patterns we fed into them. An AI could give you a description of a computer, it could generate a picture of a computer, but it doesn't have an understanding of one. Like I said before, it's like a very elaborate auto-complete. If it could really understand anything, the situation would be very different, but the fact that even its fiercest advocates use it as a tool shows that it's still lacking capabilities that humans have.
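To make the "elaborate auto-complete" point concrete, here is a toy sketch (my own hypothetical example, not taken from any actual model; real image and text generators use neural networks rather than lookup tables) of a next-word predictor that can only ever recombine patterns from its training text:

```python
# Toy "elaborate auto-complete": a bigram model that can only continue
# text with word pairs it has already seen in its training data.
# Hypothetical illustration only; real generative models are far larger
# and learn statistical weights instead of storing literal lookups.
import random
from collections import defaultdict

training_text = "the cat sat on the mat the cat chased the mouse"
words = training_text.split()

# Record which word followed each word in the training data.
followers = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def autocomplete(seed: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a word that followed the
    current word in training. It 'knows' nothing; it only repeats patterns."""
    out = [seed]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the cat sat on the mat the cat chased"
```

It can produce fluent-looking output, but it has no idea what a cat or a mat is; scale that idea up by many orders of magnitude and you have the gist of the argument.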

AI will not steamroll humans. AI-powered corporate industries, owned by flesh and blood people, might steamroll humans, if we let them. If you think that you will get to just enjoy a Holodeck, you are either very wealthy or you don't realize that it's not just artists who are at risk.

[–] brewbellyblueberry@sopuli.xyz 4 points 11 months ago

It's sad to see how AI advocates strive to replicate the work of artists all the while being incredibly dismissive of their value. No wonder so many artists are incensed enough to want to be rid of everything AI.

It's such a shame too. Like, you can have a million sensible pro-AI takes, opinions and views on the topic, but the discussion revolves around the same shit on both sides.

It is an amazing tool, and could be used (and is used, it's just obscured by the massive amount of shit and the assholes trolling other people/artists) in so many creative ways. I'd been in a bit of a rut for quite a few years (partially because my brain no make happy chemicals or sleep), but I haven't been this excited about the possibilities, or this inspired about art and my own stuff, in maybe a decade or nearly two, if ever. I'm finally drawing again after way too many years of letting my stuff gather dust.

[–] assassin_aragorn@lemmy.world 2 points 11 months ago (2 children)

I used to think techno supremacists were an extreme fringe, but "AI" has made me question that.

For one, this isn't AI in the sci-fi sense. This is a sophisticated statistical model that generates content based on patterns it observes in a plethora of works.

It's ridiculously overhyped, and I think it's just a flash in the pan. Companies have already minimized their customer support with automated service options and "tell me what the problem is" prompts. I have yet to meet anyone who is pleased by these. Instead it's usually shouting into the phone that you want to talk to a real human, because the algorithm thinks you want a problem fixed instead of the service cancelled.

I think this "technocrat" vs "humanities" debate will be society's next big question.

[–] TwilightVulpine@lemmy.world 1 points 11 months ago (2 children)

I used to be on the technocrat side too when I was younger, but seeing the detrimental effects of social media, the app-driven gig economy and how companies constantly charge more for less changed my mind. Technocrats adopt this idea that technology is neutral and constantly advancing towards an ideal solution for everything, that we only need to keep adding more tech and we'll have a utopia. Never mind that so many advancements in automation lead to layoffs rather than fewer working hours for everyone.

I believe the debate is already happening, and the widespread disillusionment with tech tycoons and billionaires shows popular opinion is changing.

[–] assassin_aragorn@lemmy.world 2 points 11 months ago

Very similar here. I used to think technological advancement was the most important thing possible. I still do think it's incredibly important, but we can't commercially do it for its own sake. Advancement and knowledge for their own sake must be confined to academia. AI currently can't hold a candle to human creativity, but if it reaches that point, it should be an academic celebration.

I think the biggest difference for me now vs before is that I think technology can require too high a cost to be worth it. Reading about how some animal subjects behaved with Elon's Neuralink horrified me. They were effectively tortured. I reject the idea that we should develop any technology which requires that. If test subjects communicate fear or panic that is obviously related to the testing, it's time to end the testing.

Part of me still does wonder, though: what could be possible if we do make sacrifices to develop technology and knowledge? And here, I'm actually reminded of fantasy stories and settings. There's always this notion of cursed knowledge, which comes with incredible capability but requires immoral acts or sacrifice to attain.

Maybe we've made it to the point where we have something analogous (brain chips). And to avoid it, we not only need to better appreciate the human mind and spirit -- we need people in STEM to draw a line when we would have to go too far.

I digress though. I think you're right that we're seeing an upswell of the people against things like this.

[–] zurneyor@lemmy.dbzer0.com 1 points 11 months ago (1 children)

All the ills you mention are a problem with current capitalism, not with tech. They exist because humans are too fucking stupid to regulate themselves, and should unironically be ruled by an AI overlord instead once the tech gets there.

[–] TwilightVulpine@lemmy.world 1 points 11 months ago (1 children)

You are making the exact same mistake that I just talked about, that I have also made, that a bunch of tech enthusiasts make:

An AI Overlord will be engineered by people with human biases, under the command of people with human biases, trained by data with human biases, having goals that are defined with human biases. What you are going to get is tyranny with extra steps, plus some of its own concerning glitches on the side.

It's a sci-fi dream to assume technology is inherently destined to solve human issues. It takes human concern and humanities studies to apply technology in a way that actually helps people.

[–] lloram239@feddit.de 1 points 11 months ago (1 children)

under the command of people with human biases

Humans won't be in control. The AI will consume and interpret more data than any human ever could. It'll be like trying to verify that your computer calculates correctly with pen and paper; there is just no hope. People will blindly trust whatever the AI tells them, since they'll get used to the AI providing superior answers.

This of course won't happen all at once; it will happen bit by bit until you have AI dominating every process in a company, so much so that the company is run by AI. Maybe you still have a human in there putting their signature on legal documents. But you are not going to outsmart a thing that is 1000x smarter than you.

[–] TwilightVulpine@lemmy.world 1 points 11 months ago (1 children)

Even the smartest, most perfect computer in the world can give people the most persuasive answers, and people can still say no and pull the plug just because they feel like it.

It's no different among humans: the power to influence organizations and society relies entirely on the willingness of people to go along with it.

Not only is this sci-fi dream skipping several steps, steps where humans in power direct and gauge AI output as far as it serves their interests rather than some objective, ultimate optimal state of society. Even if the AI provides all the reasons why it should be in charge, an executive or a politician can simply say "No, I am the one in charge" and that will be it. Because to most of them, preserving and increasing their own power is the whole point, even at the expense of maximum efficiency, sustainability or any other concern.

But before you go full-blown Skynet machine revolution, you should realize that AIs that are limited and directed by greedy humans can already cause untold damage to regular people, simply by optimizing them out of industries. For this, they don't even need to be self-aware agents. They can do that as mildly competent number crunchers, completely oblivious to reality outside of spreadsheets and reports.

And all this is assuming an ideal AI. Truly, AI can consume and process more data than any human. Including wrong data. Including biased data. Including completely baseless theories. Who's to say we might not get to a point where the AI decides to fire people because of the horoscope or something equally stupid?

[–] lloram239@feddit.de 1 points 11 months ago (1 children)

Even the smartest, most perfect computer in the world can give people the most persuasive answers, and people can still say no and pull the plug just because they feel like it.

How do you "pull the plug" on electricity, cars or the Internet? You don't. Our society has become so dependent on those things that you can't just switch them off even if you wanted to. Even if you outlawed them, people would just ignore you and keep using those things, because they are far too useful to give up on. With AI you will not only have that dependency as a problem, but also the fact that AI is considerably easier to build than any of those. All you need is a reasonably powerful computer (i.e. a regular gaming PC). There are no special resources or infrastructure that make the construction of new AIs difficult.

Not only is this sci-fi dream skipping several steps, steps where humans in power direct and gauge AI output as far as it serves their interests rather than some objective, ultimate optimal state of society.

Meta just failed to gauge the output of an AI that generates stickers. Microsoft had to pull the plug on Sydney. OpenAI is having constant issues with DAN. We can't even keep that stuff under control in those simple cases. What are our chances when this has actual power, autonomy and integration in our society?

The danger here is not Skynet; you can nuke that from orbit if you have to. A singular AI program can be fought. The real issue is the fact that AI is just a bunch of math. People will use it all over the place and slowly hand more and more control over to the AIs. There won't be any single place you can nuke, and even when you nuke one, the knowledge of how to build more AIs won't vanish. AI is a tool far too useful to give up on.

[–] TwilightVulpine@lemmy.world 1 points 11 months ago* (last edited 11 months ago) (1 children)

Are you really trying to use failures of AI to argue that it's going to overcome humans? If we can't even get it to work how we want it to, what makes you think people are just going to hand it the keys to society? How is an AI that keeps bursting into racist rants and emotional meltdowns going to take over anything? Does it sound like it is brewing some Master Plan? Why would people hand control to it? That alone shows that it presents all the flaws of a human, like I just pointed out.

Maybe you are too eager to debunk me, but you are missing the point in order to nitpick. It doesn't really matter that we can't "pull the plug" on the internet, if that were even needed; all it takes to stop the AI takeover is for the people in power to simply disregard what it says. It's far more reasonable to assume that even those who use AIs wouldn't universally defer to them.

Never mind that no drastic action is needed, period. You said it yourself, Microsoft pulled the plug on their AIs. This idea of omnipresent, self-replicating AI is still sci-fi, because AIs have no reason to seek to spread themselves, nor the ability to do so.

[–] lloram239@feddit.de 1 points 11 months ago* (last edited 11 months ago) (1 children)

Are you really trying to use failures of AI to argue that it's going to overcome humans?

There is no failure here, there is just a lack of human control. The AI does what it does and the humans struggle to keep it in check.

Why would people hand control to it?

People are stupid. Look at the rise of smartphones. Hardware that controls your life and that you have little to no control over. Yet people bought them by the billions.

How is an AI that keeps bursting into racist rants and emotional meltdowns going to take over anything?

Over here in Germany the AfD is on its way to becoming the second strongest political party, so it seems like racist rants are pretty popular these days. Over in the USA, Trump managed to get people to storm the Capitol with a few words and tweets; that's the power of information, and AI is really good at processing information. If AI wants to take control, it will find a way.

You said it yourself, Microsoft pulled the plug on their AIs.

The thing is, they kind of didn't; they just censored the living hell out of BingChat. BingChat is still up and running. AI is far too useful to give up on, so they try to keep it in check instead. Which they failed at yet again when they let DALL-E 3 into the wild and had to censor its ability to generate certain images afterwards. It's a constant cat-and-mouse game to plug all the holes and undesired behaviors, and a large part of the censorship itself relies on other AI systems doing the censoring.

Humans aren't in control here. We just go with the flow and try to nudge the AI in a beneficial direction. But long term we have no idea where this is going. AI safety is neither a solved nor even a well-understood problem, and there is good reason to believe it's fundamentally unsolvable.

[–] TwilightVulpine@lemmy.world 1 points 11 months ago* (last edited 11 months ago)

You are trying to argue in so many directions and technicalities that it's just incoherent. AI will control everything because it's gonna be smarter, people will accept it because they are dumb, and if the AI is dumb too, that also works; but wasn't it supposed to be smarter? Anything that gets you to the conclusion you already started with.

I could be having deeper arguments about how an AI even gets to want anything, but frankly, I don't think you could meaningfully contribute to that discussion.

[–] lloram239@feddit.de 1 points 11 months ago

For one, this isn't AI in the sci-fi sense.

It's pretty much exactly what the ship computer in Star Trek: TNG is, along with the Holodeck (minus the energy-to-matter conversion).

It's ridiculously overhyped, and I think it's just a flash in the pan.

You're in for a rude awakening. What we see today is just the start of it. The current AI craze has been going on for a good 10 years, most of it limited to the lab and science papers. ChatGPT and DALL-E are simply the first models that were good enough for public consumption. What followed them were huge investments into that space. We'll not only be seeing a lot more of this, but also much better versions. The thing with AI is: the more data and training you throw at it, the better it gets. You can make a lot of progress simply by doing more of it, without any big scientific breakthroughs. And AI companies with a lot of funding are throwing everything they can find at AI right now.

[–] lloram239@feddit.de 1 points 11 months ago

while being incredibly dismissive of their value.

Values change. Images used to be difficult and time-consuming to create, thus they had value. They are trivial to create now, so they become worthless. That's progress. Yet instead of using that new superpower to create bigger projects and doing something still valuable with it, all the artists do is complain.

You realize that even this had to be set up by human beings, right?

You obviously don't realize that it didn't. Those are prompts generated by one AI put into another AI. There was no human telling it what to draw. The only instruction was to draw something original and then draw something different for the next image.

When you say “AI will steamroll humans” you are assigning awareness and volition to it that it doesn’t have.

I don't do any of that; I just acknowledge their superior and constantly improving performance. The thing doesn't need to be self-aware to put all the artists, and all the other humans, out of a job if it can work 1000x faster than them.

Also, AI will get awareness and volition real soon anyway; the ETA for AGI is around 5 years, and at the current pace I wouldn't even be surprised if it arrives sooner. Human exceptionalism has a tendency to not age very well these days.

They simply repeat patterns we fed into them

They don't. See, it would be way easier to take you Luddites seriously if you at least had any clue what you were talking about. But the whole art world seems to be stuck playing make-believe, just repeating the same nonsense that they heard from other people talking about AI instead of just trying it for themselves.

Most of that AI stuff is publicly available, lots of it is free, and some of it can be run on your own PC. Just go and play with it to get a realistic idea of what it is and isn't capable of. And most important of all: think about the future. People always talk like the issues with current AI systems are some fundamental limit of AI, when in reality most of those problems will be gone within six months.

Also, it's just mind-boggling how people ignore everything AI can do, just to focus on some minuscule detail it still gets wrong. The fact that it can't draw hands is not terribly surprising (a hard structure to figure out from low-res 2D images); meanwhile, the fact that it can draw basically everything else, way faster and often better than almost any human, is rather mind-boggling, yet somehow ignored.

AI-powered corporate industries, owned by flesh and blood people

CEOs are targets for AI replacement just like everybody else. And AI that pays its own bills and runs on some rented cloud computing won't be far off either. Either way, you don't even have to go into doomsday scenarios with evil AI; the fact that AI will outcompete humans at most tasks is alone enough to drastically reshape the world. Whether it's ethically trained open-source AI or some corporate-run thing really doesn't matter, since either way, the changes will be huge.

If you think that you will get to just enjoy a Holodeck, you are either very wealthy or you don't realize that it's not just artists who are at risk.

Well, we are already way closer to that spooky sci-fi future than you'd think.