this post was submitted on 05 Apr 2024
869 points (96.2% liked)

Technology

55940 readers
4147 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

A shocking story was promoted on the "front page" or main feed of Elon Musk's X on Thursday:

"Iran Strikes Tel Aviv with Heavy Missiles," read the headline.

This would certainly be a worrying world news development. Earlier that week, Israel had conducted an airstrike on Iran's embassy in Syria, killing two generals as well as other officers. Retaliation from Iran seemed like a plausible occurrence.

But, there was one major problem: Iran did not attack Israel. The headline was fake.

Even more concerning, the fake headline was apparently generated by X's own official AI chatbot, Grok, and then promoted by X's trending news product, Explore, on the very first day of an updated version of the feature.

kadu@lemmy.world 154 points 3 months ago* (last edited 3 months ago)

I wonder how legislation is going to evolve to handle AI. Brazilian law would punish a newspaper or social media platform for claiming that Iran had just attacked Israel - this is dangerous information that could affect somebody's life.

If it were up to me: if your AI hallucinates dangerous information and provides it to users, you're personally responsible. I bet if such a law existed, in less than a month all those AI developers would very quickly abandon the "oh no you see it's impossible to completely avoid hallucinations for you see the math is just too complex tee hee" excuse and would actually fix this.

Ottomateeverything@lemmy.world 97 points 3 months ago

> I bet if such a law existed, in less than a month all those AI developers would very quickly abandon the "oh no you see it's impossible to completely avoid hallucinations for you see the math is just too complex tee hee" excuse and would actually fix this.

Nah, this problem is actually too hard to solve with LLMs. They don't have any structure or understanding of what they're saying, so there's no way to write better guardrails. Unless you build some other system that tries to make sense of what the LLM says, but that approaches the difficulty of just building an intelligent agent in the first place.

So no, if this law came into effect, people would just stop using AI for things like news feeds. Deploying it there is too cavalier. And IMO, they probably should stop using it for cases like this unless there's direct human oversight of everything coming out of it. Which, also, probably just wouldn't happen.

wizardbeard@lemmy.dbzer0.com 55 points 3 months ago

Yep. To add on, this is exactly what all the "AI haters" (myself included) are getting at when they say there isn't any logic or understanding behind LLMs, or when they call them stochastic parrots.

LLMs are incredibly good at generating text that works grammatically and reads like it was put together by someone knowledgeable and confident, but they have no concept of "truth" or reality. They just have a ton of absurdly complicated statistical data about how words/phrases/sentences are related to each other on a structural basis. It's all just really complicated math about how text is put together. It's absolutely amazing, but it is also literally and technologically impossible for that to spontaneously coalesce into reason/logic/sentience.

Turns out that if you get enough of that data together, it makes a very convincing appearance of logic and reason. But it's only an appearance.

You can't duct-tape enough Speak & Spells together to rival the mass of the Sun and have it somehow just become something that outputs a believable human voice.
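The "just math about how text is put together" point can be sketched with a toy n-gram sampler (a deliberately tiny stand-in for an LLM; the corpus and names are made up for illustration). Every word transition it emits is statistically "plausible" given its training text, but nothing in it can tell whether the resulting sentence is true:

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": a bigram table built from a tiny made-up corpus.
# It records only which word follows which -- no notion of truth exists here.
corpus = ("iran conducts a drill . israel strikes an embassy . "
          "iran strikes back . analysts expect a response .").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, max_words=6, seed=1):
    """Sample a fluent-looking word sequence from the bigram table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

# Every transition is "plausible" under the training data, but the model
# has no way to know whether the claim it just produced corresponds to
# anything real.
print(generate("iran"))
```

Real LLMs replace the bigram table with billions of learned parameters, but the training objective is the same shape: predict the next token from the ones before it.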


For an incredibly long time, ChatGPT would fail questions along the lines of "What's heavier, a pound of feathers or three pounds of steel?" because it had seen the standard variation of the riddle, with equal weights, so many times. It has no concept of one pound being lighter than three. It just "knows" the pattern of the "correct" response.

It no longer fails that "trick", but there's significant evidence that OpenAI has set up custom handling for that riddle on top of the actual LLM, as it doesn't take much work to find similar ways to trip it up with slightly modified versions of other classic riddles.
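The failure mode described above can be sketched as pure surface-pattern matching (everything here is a hypothetical toy, not how any real model is implemented): the "model" has memorized the classic riddle's shape and its stock answer, with no arithmetic anywhere inside, so the modified quantities are simply invisible to it:

```python
import re

# Hypothetical pattern-matcher: one memorized riddle, one stock answer.
memorized = {
    "what's heavier, a pound of feathers or a pound of steel?":
        "They weigh the same.",
}

def normalize(question):
    # Reduce the question to its surface shape: quantities and plurals are
    # collapsed, so "a pound" and "three pounds" look identical.
    shape = re.sub(r"\b(a|one|two|three)\b", "<NUM>", question.lower())
    return shape.replace("pounds", "pound")

def answer(question):
    shape = normalize(question)
    for known, reply in memorized.items():
        if normalize(known) == shape:
            return reply  # pattern fires regardless of the actual weights
    return "I don't know."

# The modified riddle matches the memorized shape, so the stock -- and now
# wrong -- answer comes back.
print(answer("What's heavier, a pound of feathers or three pounds of steel?"))
```

The analogy is loose (LLMs match soft statistical patterns, not literal templates), but it captures why a slightly modified riddle can pull out the answer to the original one.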

A lot of supporters will counter, "Well, I just ask it to tell the truth, or tell it that it's wrong, and it corrects itself," but I've seen plenty of anecdotes in the opposite direction, with ChatGPT insisting that its hallucination was fact. It doesn't have any concept of true or false.

Akisamb@programming.dev 1 points 3 months ago

> It's absolutely amazing, but it is also literally and technologically impossible for that to spontaneously coalesce into reason/logic/sentience.

This is not true. If you train these models on the game of Othello, they keep a state of the board internally and use it to predict the next move played (1). To perform addition and multiplication, they execute an algorithm they were not explicitly trained on (although the GPT family is surprisingly bad at arithmetic, due to a badly designed tokenizer).

These models are still pretty bad at most reasoning tasks. But training on predicting the next word is a perfectly valid strategy; after all, the best way to predict what comes after the "=" in "1432 + 212 =" is to do the addition.
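The "=" point can be made concrete with a toy sketch (function names are made up for illustration): on a corpus of true equations, the only next-token predictor that is right on every prompt is one that actually computes the sum, which is the sense in which next-word prediction can force a model toward implementing the algorithm:

```python
import random

# Sketch: for a corpus of true equations, the loss-minimizing prediction
# for the token after "=" is the actual sum -- so a perfect next-token
# predictor must implement addition internally.
def make_example(rng):
    a, b = rng.randint(0, 9999), rng.randint(0, 999)
    return f"{a} + {b} =", str(a + b)

def predict_next(prompt):
    # The only completion that is correct for every prompt is the math itself.
    a, _plus, b, _eq = prompt.split()
    return str(int(a) + int(b))

rng = random.Random(0)
for _ in range(5):
    prompt, target = make_example(rng)
    assert predict_next(prompt) == target

print(predict_next("1432 + 212 ="))  # prints 1644
```

A real LLM only approximates this, and the tokenizer issue mentioned above makes it harder: if "1432" is split into sub-word chunks rather than digits, the model has to learn addition over an awkward representation.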
