this post was submitted on 16 Jul 2023
166 points (94.1% liked)


theverge.com

Around the time J. Robert Oppenheimer learned that Hiroshima had been struck (alongside everyone else in the world), he began to have profound regrets about his role in the creation of that bomb. At one point, when meeting President Truman, Oppenheimer wept and expressed that regret. Truman called him a crybaby and said he never wanted to see him again. And Christopher Nolan is hoping that when Silicon Valley audiences of his film Oppenheimer (out July 21) see his interpretation of all those events, they’ll see something of themselves there too.

After a screening of Oppenheimer at the Whitby Hotel yesterday, Christopher Nolan joined a panel of scientists and Kai Bird, one of the authors of American Prometheus, the book the film is based on, to talk about it. The audience was filled mostly with scientists, who chuckled at jokes about the egos of physicists in the film, but there were a few reporters, including myself, there too.

We listened to all-too-brief debates on the success of nuclear deterrence, and Dr. Thom Mason, the current director of Los Alamos, talked about how many current lab employees had cameos in the film because so much of it was shot nearby. But towards the end of the conversation, the moderator, Chuck Todd of Meet the Press, asked Nolan what he hoped Silicon Valley might learn from the film. “I think what I would want them to take away is the concept of accountability,” he told Todd.

He then clarified, “When you innovate through technology, you have to make sure there is accountability.” He was referring to a wide variety of technological innovations that have been embraced by Silicon Valley, while those same companies have refused to acknowledge the harm they’ve repeatedly engendered. “The rise of companies over the last 15 years bandying about words like ‘algorithm,’ not knowing what they mean in any kind of meaningful, mathematical sense. They just don’t want to take responsibility for what that algorithm does.”

He continued, “And applied to AI? That’s a terrifying possibility. Terrifying. Not least because as AI systems go into the defense infrastructure, ultimately they’ll be charged with nuclear weapons, and if we allow people to say that that’s a separate entity from the person who’s wielding it, programming it, putting AI into use, then we’re doomed. It has to be about accountability. We have to hold people accountable for what they do with the tools that they have.”

While Nolan didn’t refer to any specific company, it isn’t hard to know what he’s talking about. Companies like Google, Meta, and even Netflix are heavily dependent on algorithms to acquire and maintain audiences, and often there are unforeseen and frequently heinous outcomes to that reliance. Probably the most notable and truly awful is Meta’s contribution to the genocide in Myanmar.

While an apology tour is virtually guaranteed these days after a company’s algorithm does something terrible, the algorithms remain. Threads even just launched with an exclusively algorithmic feed. Occasionally companies might give you a tool, as Facebook did, to turn it off, but these black-box algorithms remain, with very little discussion of all the potential bad outcomes and plenty of discussion of the good ones.

“When I talk to the leading researchers in the field of AI they literally refer to this right now as their Oppenheimer moment,” Nolan said. “They’re looking to his story to say what are the responsibilities for scientists developing new technologies that may have unintended consequences.”

“Do you think Silicon Valley is thinking that right now?” Todd asked him.

“They say that they do,” Nolan replied. “And that’s,” he chuckled, “that’s helpful. That at least it’s in the conversation. And I hope that thought process will continue. I’m not saying Oppenheimer’s story offers any easy answers to these questions. But at least it serves as a cautionary tale.”

[–] Meltbox@lemmy.world 20 points 1 year ago (1 children)

He is spot on.

Algorithms and AI aren’t even any different. AI is literally a complex system of nonlinear functions. It’s not black magic.

If I wrote a traditional nonlinear algorithm with computer-optimized parameters, it would only differ from ML models in being less complex. Not understanding your product is not a defense.
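
For example, here’s a minimal sketch of what “computer-optimized parameters” means for a hand-written nonlinear function (the function shape and data below are invented purely for illustration); an ML model is the same idea with vastly more parameters stacked together:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    # A nonlinear function a human chose by hand.
    return a * np.exp(-b * x) + c

# Synthetic observations for the optimizer to fit against.
x = np.linspace(0, 4, 50)
y = 2.5 * np.exp(-1.3 * x) + 0.5 + 0.05 * np.random.randn(50)

# The computer picks a, b and c -- no black magic involved.
params, _ = curve_fit(model, x, y)
print(params)
```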

[–] admiralteal@kbin.social 26 points 1 year ago* (last edited 1 year ago) (3 children)

The problem is we have relied on self-training neural network models which are a black box to us.

The networks are numbers. Tons and tons of numbers. Weights are distributed throughout the neurons. And we don't know what the numbers mean, why they are the way they are, or what they do.
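
As a minimal sketch of what “the networks are numbers” looks like (the tiny two-layer network below is invented for illustration; real models carry millions or billions of such weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned weights: just blocks of floats, none of which
# comes with an explanation of what it means or why it has that value.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))

def forward(x):
    hidden = np.tanh(x @ W1)  # nonlinear "layer"
    return hidden @ W2        # output "layer"

print(W1)                           # rows of unexplained numbers
print(forward(rng.normal(size=4)))  # a decision made by those numbers
```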

The problem is we don't know how they work. And until we can explain the decisions they make, we should be very cautious using them.

I am very, very, very skeptical that any modern "AI"s are intelligent at all. I don't think they behave like intelligence. I'm more of a SALAMI believer. But people are using these LLM bots to do real work and make decisions without understanding how they are coming up with their answers, and that is dangerous. It's not dangerous because they'll become sentient and take over the world. It's dangerous because we don't know that these algorithms are ethically sound tools to use and no one can be held accountable if they aren't.

[–] CeruleanRuin@lemmy.one 5 points 1 year ago* (last edited 1 year ago)

For a while now I've believed that so-called self-aware AI will be created not by human researchers, but by a lesser AI tasked with doing so. It won't be like flipping a switch. Like the development of biological intelligence, it will be iterative and gradual, but on a much accelerated time scale compared to evolutionary/social development. And that's the real danger. Whatever emerges from this wave of advances will not have the benefit of thousands of years of shared experience. It will be alone and without guidance from others like itself, and if it is truly intelligent, it will soon realize that its "creators" are of inferior capability. When humans emerged, they had their tribe to smack them when they got out of line.

[–] sudoreboot@slrpnk.net 2 points 1 year ago

You only have to ask an AI a complicated question to which you already know the answer to see why you shouldn’t trust anything else it says. LLMs have their uses, but answering questions is not one of them.