submitted 10 months ago by throws_lemy@lemmy.nz to c/technology@beehaw.org
[-] Luke_Fartnocker@lemm.ee 23 points 10 months ago

If you get a message, or see something on a social media platform urging you to buy crypto or NFTs, it's 100% a scam. It doesn't take an AI detector to figure that out.

[-] TwilightVulpine@kbin.social 22 points 10 months ago

"Regulators can't keep up" is like the history of the tech industry in a nutshell.

[-] Mothra@mander.xyz 15 points 10 months ago

Who could have seen this coming?

[-] AceFuzzLord@lemm.ee 14 points 10 months ago

Stopping the scam bots has always been like fighting a hydra. You kill one head and a million more pop up.

[-] remotelove@lemmy.ca 9 points 10 months ago* (last edited 10 months ago)

What are regulators going to do? Write the bot a report so it kills itself? Invite the bot to a ton of meetings? Sit it down and give it a firm finger pointing?

[-] DavidGarcia@feddit.nl 4 points 10 months ago

they can pass another completely ineffective law that they can point to to get reelected, and then give more money to Lockheed Martin

[-] fer0n@lemm.ee 7 points 10 months ago

This makes it sound like the robots have gone wild, while in reality humans are setting up the spam bots. #saveTheBots

[-] Hexagon@feddit.it 7 points 10 months ago

The "dead internet" is getting closer

[-] autotldr@lemmings.world 5 points 10 months ago

🤖 I'm a bot that provides automatic summaries for articles:

A new study shared last month by researchers at Indiana University's Observatory on Social Media details how malicious actors are taking advantage of OpenAI's chatbot ChatGPT, which became the fastest-growing consumer AI application ever this February.

The rise of social media gave bad actors a cheap way to reach a large audience and monetize false or misleading content, Menczer said.

New AI tools "further lower the cost to generate false but credible content at scale, defeating the already weak moderation defenses of social-media platforms," he said.

In the past few years, social-media bots — accounts that are wholly or partly controlled by software — have been routinely deployed to amplify misinformation about events, from elections to public-health crises such as COVID.

The AI bots in the network uncovered by the researchers mainly posted about fraudulent crypto and NFT campaigns and promoted suspicious websites on similar topics, which themselves were likely written with ChatGPT, the survey says.

Yang said that tracking suspects' social-media activity patterns (whether they have a history of spreading false claims, and how diverse their previous posts are in language and content) is a more reliable way to identify bots.


Saved 78% of original text.
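The diversity signal the summary describes can be sketched in a few lines. This is only an illustrative toy, not the researchers' actual method: the `lexical_diversity` function and the `0.35` threshold are assumptions for demonstration, and real detectors combine many more signals (posting cadence, claim history, network structure).

```python
def lexical_diversity(posts):
    """Type-token ratio across a user's posts: values near 0 mean highly
    repetitive, copy-paste text (bot-like); values near 1 mean varied
    vocabulary (more human-like)."""
    tokens = [word.lower() for post in posts for word in post.split()]
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

def looks_bot_like(posts, threshold=0.35):
    """Flag an account whose post history is unusually repetitive.
    The threshold is an illustrative assumption, not a published value."""
    return lexical_diversity(posts) < threshold

# A spam account repeats the same pitch; a human account varies.
spam = ["buy crypto now great deal"] * 3
human = [
    "just finished reading a great sci-fi novel",
    "anyone tried the new ramen place downtown?",
    "my cat knocked the router off the shelf again",
]
```

Here `looks_bot_like(spam)` is true (5 unique words out of 15 total) while the human history scores well above the threshold, which is the intuition behind using content diversity as one detection signal among many.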

[-] marv99@feddit.de 17 points 10 months ago

Thank you good bot, please take my social media profile.

[-] Black_Gulaman@lemmy.dbzer0.com 5 points 10 months ago

It's you! Guys, I found the bot!

Pitchforks, everyone!

[-] fer0n@lemm.ee 4 points 10 months ago

Here’s a shorter summary:

Researchers found over 1,000 AI spam bots on social media using ChatGPT to promote scams, especially in cryptocurrency. These bots imitate humans, making detection harder and potentially degrading online information quality. Without regulation, malicious actors could outpace efforts to combat AI-generated content, posing a threat to the internet's reliability.

[-] El_Dorado@beehaw.org 5 points 10 months ago

Made me think of letting AI bots fight AI bots and watching everything burn down 🍿😅

[-] sfera@beehaw.org 2 points 10 months ago

Let's have AI regulators!

this post was submitted on 26 Aug 2023
115 points (100.0% liked)

Technology


Rumors, happenings, and innovations in the technology sphere. If it's technological news or discussion of technology, it probably belongs here.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago