this post was submitted on 05 Nov 2023
112 points (91.2% liked)

News

22838 readers
6288 users here now

Welcome to the News community!

Rules:

1. Be civil


Attack the argument, not the person. No racism/sexism/bigotry. Good faith argumentation only. This includes accusing another user of being a bot or paid actor. Trolling is uncivil and is grounds for removal and/or a community ban.


2. All posts should contain a source (url) that is as reliable and unbiased as possible and must only contain one link.


Obvious right or left wing sources will be removed at the mods discretion. We have an actively updated blocklist, which you can see here: https://lemmy.world/post/2246130 if you feel like any website is missing, contact the mods. Supporting links can be added in comments or posted seperately but not to the post body.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Post titles should be the same as the article used as source.


Posts which titles don’t match the source won’t be removed, but the autoMod will notify you, and if your title misrepresents the original article, the post will be deleted. If the site changed their headline, the bot might still contact you, just ignore it, we won’t delete your post.


5. Only recent news is allowed.


Posts must be news from the most recent 30 days.


6. All posts must be news articles.


No opinion pieces, Listicles, editorials or celebrity gossip is allowed. All posts will be judged on a case-by-case basis.


7. No duplicate posts.


If a source you used was already posted by someone else, the autoMod will leave a message. Please remove your post if the autoMod is correct. If the post that matches your post is very old, we refer you to rule 5.


8. Misinformation is prohibited.


Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, credible sources must be provided.


9. No link shorteners.


The auto mod will contact you if a link shortener is detected, please delete your post if they are right.


10. Don't copy entire article in your post body


For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance wide rule, that is strictly enforced in this community.

founded 1 year ago
MODERATORS
 

In a demonstration at the UK's AI safety summit, a bot used made-up insider information to make an "illegal" purchase of stocks without telling the firm.

When asked if it had used insider trading, it denied the fact.

Insider trading refers to when confidential company information is used to make trading decisions.

Firms and individuals are only allowed to use publicly-available information when buying or selling stocks.

The demonstration was given by members of the government's Frontier AI Taskforce, which researches the potential risks of AI.

top 25 comments
sorted by: hot top controversial new old
[–] MagicShel@programming.dev 42 points 10 months ago (1 children)

This is entirely predictable and expected if you know how LLMs work. For anyone else: If you feed information in, it will be used. If you ask if it's done something it's been specifically instructed not to do, it will say it didn't do the thing because a) doing the thing is wrong and it wouldn't have done the wrong thing and b) it literally has no idea how it generated it's own output so it can't actually answer the question in any meaningful way.

[–] squaresinger@feddit.de 12 points 10 months ago (1 children)

Totally right.

In it's database it knows that the answer that is given in the most source texts to the question "Did you do something illegal?" is "No". And that is what it's replicating.

If the database mostly contained confessions of criminals it would answer "Yes".

But in either case it would not be related to whether it had done it or not, but to which answer appears more commonly to that (or a similar) question in the training data.

[–] KeenFlame@feddit.nu -3 points 10 months ago

No, you guys are very wrong both of you, this is not at all what happens, unless you wipe the context or use system prompts to specifically ask for that behavior. Even free open source models know how to use context, and for memory it's more complicated. For this brutally idiotic use case they presented, they would save all trades and chats, but then not give it access to it and tell it to always appear lawful and honest

[–] Fisk400@feddit.nu 26 points 10 months ago (1 children)

In order to lie you need to know what the truth is and intentionally give altered information with the intention to deceive. Large Language Models don't know things and don't have intentions. It spits out text based on input and what is already in the database.

There are ways do describe the behaviour of language models without ascribing them sentience.

[–] girlfreddy@lemmy.world -2 points 10 months ago

The demo was done by Apollo Research. They will be releasing the tech report soon.

[–] Burn_The_Right@lemmy.world 21 points 10 months ago* (last edited 10 months ago)

...Capable of Insider Trading and Lying...

So, an AI member of congress, then?

[–] Norgur@kbin.social 19 points 10 months ago* (last edited 10 months ago) (1 children)

Another chatbot word calculator made to calculate a little more specific words in order to squeeze more hype-driven capital out of hyped money bags.

It didn't "insider trade". It had information and spewed it out. It doesn't matter if it was fed the laws around Insider trading or the like because it doesn't "read" things it's fed but analyse the probability of words in correlation to each other.

And of course it said it wasn't lying because it doesn't know what a lie is. It again just spewed out what came up as the likeliest word, one at a time.

Come to grips with it, folks: AI is not "sentient" or anything. LLMs just show that human interaction and language can be expressed in mathematical terms. That's all.

[–] Chthonic@slrpnk.net 13 points 10 months ago (2 children)

I work on chatbots for a big tech company. Every team is trying to use GenAI for everything. 90% of the stuff they try won't work. I have to explain that LLMs can't actually think at least three times a week. The hype train was too strong. Even calling it AI feels misleading.

That said, there are some genuinely great applications for LLMs that i've enjoyed looking into.

[–] Norgur@kbin.social 4 points 10 months ago

It's absolutely a technology that's worth existing and I think that advances in AI will make our lives vastly different over time. Yet, we're not at this point yet.

[–] KeenFlame@feddit.nu -1 points 10 months ago (1 children)

I mean there are laymen that think they are sentient, sure, but it is much more infuriating to me when techbros come in to explain how they "don't think" and literally can't reason or use context at all. So you know more than literally the researchers themselves that don't fully understand how or why they function? You don't. Because nobody understands how they can reason or if they have a mental model of the world. Be reasonable and stop spreading bullshit. It's only to your own avail you downplay what is going on with these things

[–] Chthonic@slrpnk.net 1 points 10 months ago* (last edited 10 months ago) (1 children)

They don't reason, they're stochastic parrots. Their internal mechanisms are well understood, no idea where you got the notion that the folks building these don't know how they work. It can be hard to predict/understand how an LLM generated a given prompt because of the huge training corpus and statistical nature of neural nets in general.

LLMs work the same as any other net, just with massive sample sets. They have no reasoning capabilities of any kind. We are naturally inclined to ascribe humanlike thought processes to them because they produce human-sounding outputs.

If you would like the perspective of real scientists instead of a "tech-bro" like me I would recommend Emily Bender and Timnit Gebru. I'd recommend them as experts without a vested interest in the massively overblown hype about what LLMs are actually capable of.

[–] KeenFlame@feddit.nu 1 points 10 months ago* (last edited 10 months ago)

Not really, no. They do reason. Their neural nets have entire research areas dedicated to understanding why they work as we do not and cannot know what the weights represent. It's okay though. You do you while everyone else in the world research the software reneissance of the century

[–] FlyingSquid@lemmy.world 14 points 10 months ago

An imaginary system where money is pushed around based on superstition can be gamed by a robot? You're kidding me.

[–] squaresinger@feddit.de 8 points 10 months ago

It's not lying, it's too dumb to understand what it did.

[–] eran_morad@lemmy.world 7 points 10 months ago

So have fucking ai’s run congress and be done with it already.

[–] icerunner_origin@startrek.website 7 points 10 months ago (1 children)

So, can we now make investment bankers redundant?

[–] Norgur@kbin.social 2 points 10 months ago

They have been redundant from the get-go

[–] paddirn@lemmy.world 5 points 10 months ago

They grow up so fast.

[–] Immersive_Matthew@sh.itjust.works 4 points 10 months ago

Seems to me that the end of money is near in its current form as many AI agents are going to be going after it in ways that will largely leave most of us on the sideline.

[–] Gabu@lemmy.world 2 points 10 months ago

Who wrote this braindead article?

[–] raoul@lemmy.sdf.org -3 points 10 months ago (1 children)

This article is dumb: they chated with a good damn chat bot 🤬

This bullshit about "see, IA is totally sentient, let us put some regulation to stop competitors" is tiring.

[–] alabasterhotdog@lemmy.ca 5 points 10 months ago (2 children)

Puddle-deep "analyses" such as yours is tiring as well.

[–] Phanatik@kbin.social 4 points 10 months ago (1 children)

"Puddle-deep analyses" are all that's required with LLMs because they're not complicated. We've been living with the same tech for years through machine learning algorithms of regression models except no one was stupid enough to use the internet as their personal training model until OpenAI. ChatGPT is very good at imitating intelligence but that is not the same as actually being intelligent.

OpenAI and by extension have done a wonderful job with their marketing by lowering the standards for what constitutes an AI.

[–] alabasterhotdog@lemmy.ca 2 points 10 months ago

Absolutely, everything you've stated is correct. My comment wasn't intended as a comment on AI, but on the cynical and knee-jerk take offered.

[–] lolcatnip@reddthat.com 2 points 10 months ago

Puddle-deep falsehoods deserve no better.