this post was submitted on 03 Feb 2025
508 points (98.3% liked)

Technology


Originality.AI looked at 8,885 long Facebook posts made over the past six years.

Key Findings

  • 41.18% of current Facebook long-form posts are Likely AI, as of November 2024.
  • Between 2023 and November 2024, the average percentage of monthly AI posts on Facebook was 24.05%.
  • This reflects a 4.3x increase in monthly AI Facebook content since the launch of ChatGPT. In comparison, the monthly average was 5.34% from 2018 to 2022.
top 50 comments
[–] FundMECFSResearch@lemmy.blahaj.zone 8 points 2 hours ago (1 children)

This kind of just looks like an ad for that company's AI detection software, NGL.

[–] AcesFullOfKings@feddit.uk 4 points 1 hour ago

This whole concept relies on the idea that we can reliably detect AI, which is just not true. None of these "AI detector" apps or services actually work reliably; their success rates are terrible. The whole point of LLMs is to produce text indistinguishable from human writing, so if they're working as intended you can't really "detect" them.

So all of these claims, especially the precision with which they're stated (24.05%, etc.), are almost meaningless unless the "detector" can be proven to work reliably.
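That precision point can be made concrete. If a detector's sensitivity and specificity were known, its raw positive rate could be corrected for them (the Rogan-Gladen estimator); without them, the decimal places carry no information. A minimal sketch, with the error rates purely hypothetical:

```python
def corrected_prevalence(observed_rate, sensitivity, specificity):
    """Rogan-Gladen correction: back out the true fraction of AI posts
    from a detector's raw positive rate and its known error rates."""
    return (observed_rate + specificity - 1) / (sensitivity + specificity - 1)

# A perfect detector reports the true rate unchanged.
perfect = corrected_prevalence(0.2405, 1.0, 1.0)

# With a (hypothetical) 90% sensitivity and 85% specificity, the same
# observed 24.05% implies a substantially different underlying rate.
plausible = corrected_prevalence(0.2405, 0.90, 0.85)

print(perfect, plausible)  # 0.2405 vs roughly 0.12
```

In other words, modest and entirely plausible error rates are enough to halve the headline figure, which is why quoting it to two decimal places without them is meaningless.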

[–] Opinionhaver@feddit.uk 4 points 3 hours ago (1 children)

Title says 40% of posts, but the article says 40% of long-form posts and doesn't in any way specify what counts as long-form. My understanding is that the vast majority of Facebook posts are about the length of a tweet, so I doubt the title is even remotely accurate.

[–] will_a113@lemmy.ml 1 points 14 minutes ago

Yeah, the company behind the article is plugging their own AI-detection service, which I'm sure needs a couple of paragraphs to be at all accurate. For something in the range of a sentence or two, it's usually not going to be possible to detect an LLM.

[–] morrowind@lemmy.ml 32 points 6 hours ago (1 children)

Keep in mind this is for AI generated TEXT, not the images everyone is talking about in this thread.

Also, they used an automated detection tool, and all such tools have very high error rates, because detecting AI text is a fundamentally impossible task

[–] addie@feddit.uk 2 points 4 hours ago (1 children)

AI does give itself away over "longer" posts, and if the tool produces about an equal number of false positives and false negatives, the errors should even out in the long run. (I'd have liked more than ~9K samples for it to average out, but even so.) If they had the edit history for each post, which they didn't, it would be more obvious: AI will either paste the whole thing in one go or generate a word at a time at a fairly constant rate, while humans stop and think, go back and edit things, all of that.

I was asked to do some job interviews recently; the tech test had such an "animated playback", and the difference between a human doing it legitimately and someone using AI to copy-paste the answer was surprisingly obvious. The tech test questions were nothing to do with the job role at hand and were causing us to select for the wrong candidates completely, but that's more a problem with our HR being blindly in love with AI and "technical solutions to human problems".
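The playback difference described above can be sketched as a crude heuristic over an edit log. The event format and the thresholds here are entirely made up for illustration:

```python
from statistics import mean, pstdev

def edit_signature(events):
    """events: list of (timestamp_seconds, chars_inserted) pairs from a
    hypothetical edit-history playback. Returns a rough label."""
    sizes = [chars for _, chars in events]
    # One event carrying almost the whole answer looks like a paste.
    if max(sizes) > 0.8 * sum(sizes):
        return "single paste"
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(events, events[1:])]
    # Near-constant inter-event timing looks like token-by-token generation.
    if gaps and pstdev(gaps) < 0.2 * mean(gaps):
        return "constant-rate stream"
    # Bursts, pauses, and backtracking look human.
    return "human-like editing"

print(edit_signature([(i * 0.1, 1) for i in range(50)]))  # constant-rate stream
```

Real candidate-screening tools would need to handle deletions, pauses mid-paste, and people who genuinely type at a steady clip, which is part of why this kind of signal is suggestive rather than conclusive.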

"Absolute certainty" is impossible, but balance of probabilities will do if you're just wanting an estimate like they have here.
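One wrinkle in the "evens itself out" intuition: what has to cancel is the *counts* of false positives and false negatives, not the *rates*, and that depends on the true prevalence. A quick simulation, with all the numbers hypothetical:

```python
import random

def observed_rate(n_posts, true_rate, fp_rate, fn_rate, seed=42):
    """Simulate a detector over n_posts and return the fraction flagged."""
    rng = random.Random(seed)
    flagged = 0
    for _ in range(n_posts):
        is_ai = rng.random() < true_rate
        if is_ai:
            flagged += rng.random() >= fn_rate   # detected unless missed
        else:
            flagged += rng.random() < fp_rate    # false alarm on human text
    return flagged / n_posts

# Equal 10% error RATES at 25% true prevalence do not cancel:
# expected observed rate is 0.25*0.9 + 0.75*0.1 = 0.30, not 0.25.
print(observed_rate(100_000, 0.25, 0.10, 0.10))
```

Equal counts would require fp_rate·(1−p) = fn_rate·p, so symmetric error rates only wash out when the true prevalence is 50%; otherwise the aggregate estimate is biased even over 9K samples.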

[–] morrowind@lemmy.ml 2 points 4 hours ago (2 children)

I have no idea whether the probabilities are balanced. They claim 5% was AI even before ChatGPT was released, which seems pretty off. Hardly anyone other than researchers was using LLMs before ChatGPT went viral.

[–] GenosseFlosse@feddit.org 1 points 1 hour ago

Chatbots don't necessarily hold real conversations. Some just spammed links from a list of canned responses, upvoted other chatbots to get more visibility, or simply reposted a comment from another user.

[–] szczuroarturo@programming.dev 1 points 2 hours ago

I'm pretty sure chatbots were a thing before modern AI. They certainly weren't as smart, but they did exist.

[–] ZILtoid1991@lemmy.world 16 points 6 hours ago (2 children)

> uses ai slop to illustrate it

[–] harmsy@lemmy.world 10 points 6 hours ago (1 children)

The most annoying part of that is the shitty render. I actually have an account on one of those AI image generating sites, and I enjoy using it. If you're not satisfied with the image, just roll a few more times, maybe tweak the prompt or the starter image, and try again. You can get some very cool-looking renders if you give a damn. Case in point:

[–] Petter1@lemm.ee 5 points 5 hours ago

😍this is awesome!

A friend of mine has made this with your described method:

PS: 😆 the laptop in the illustration in the article! Someone did not want to pay for a high-end model and did not want to take any extra time either…

[–] Draces@lemmy.world 5 points 6 hours ago

Seems like an appropriate use of the tech

[–] transfluxus@leminal.space 1 points 3 hours ago

Considering that they do automated analysis, 8k posts does not seem like a lot. But still very interesting.

[–] venusaur@lemmy.world 5 points 5 hours ago (1 children)

Probably on par with the junk human users are posting

[–] Treczoks@lemmy.world 1 points 5 hours ago

Hmm, "the junk human users are posting", or "the human junk users are posting"? We are talking about Facebook here, after all.

[–] Magister@lemmy.world 59 points 10 hours ago (2 children)

It's incredible. For months now I've been seeing suggested groups with an AI-generated picture of a pet/animal and the text always reading "Great photography". I block them, but still see new groups like this every day. Incredible...

[–] will_a113@lemmy.ml 24 points 8 hours ago (3 children)

I have a hard time understanding Facebook’s endgame here: if they just have a bunch of AI readers reading AI posts, how do they monetize that? Why on earth is the stock market so bullish on them?

[–] lepinkainen@lemmy.world 14 points 6 hours ago (1 children)

Engagement.

It’s all they measure: what makes people reply to and react to posts.

People in general are stupid, and can’t see or don’t care if something is AI-generated

[–] acosmichippo@lemmy.world 4 points 6 hours ago (3 children)

they measure engagement, but what they sell advertisers is human eyeballs.

[–] andallthat@lemmy.world 4 points 6 hours ago (1 children)

But if half of the engagement is from AI, isn't that a grift on advertisers? Why should I pay for an ad on Facebook that is going to be "seen" by AI agents? AIs don't buy products (yet?)

[–] acosmichippo@lemmy.world 3 points 6 hours ago

yes, exactly.

[–] WalrusDragonOnABike@reddthat.com 22 points 8 hours ago (1 children)

As long as they can convince advertisers that enough of the activity is real, or that enough of the bot-driven manipulation of public opinion serves Facebook's interest, bots aren't a problem at all in the short term.

[–] acosmichippo@lemmy.world 2 points 6 hours ago (1 children)

Surely at some point advertisers will put two and two together, when they stop seeing results from targeted advertising.

[–] SolarMonkey@slrpnk.net 4 points 3 hours ago

I think you give them too much credit. As long as it doesn’t actively hurt their numbers, like X, it’s just part of the budget.

[–] 1984@lemmy.today 2 points 6 hours ago* (last edited 6 hours ago)

AI can pull together all that personal data and automatically build very detailed profiles on everyone. From that data, an AI can infer a bunch of attributes that are very likely true as well, based on what the person does every day: work, education, gender, social life, mobile location data, bills, etc.

This is like having a person follow every user around 24 hours per day, combined with a psychologist to interpret and predict the future.

It's worth a lot of money to advertisers of course.

[–] spongebue@lemmy.world 5 points 6 hours ago (2 children)

For me it's some kind of cartoon with the caption "Great comic funny 🤣" and sometimes "funny short film" (even though it's a picture)

Like, Meta has to know this is happening. Do they really think this is what will keep their userbase? And nobody would think it's just a little weird?

[–] brucethemoose@lemmy.world 1 points 3 hours ago

Engagement is engagement, sustainability be damned.

[–] Petter1@lemm.ee 2 points 5 hours ago* (last edited 5 hours ago)

Well, maybe it suits the taste of the people still there… I mean, you have to be at least a little bit strange if you’re still on Facebook…

[–] Fandangalo@lemmy.world 28 points 10 hours ago (2 children)

I’ve posted a notice that I’m leaving next week. I need to scrape my photos off, grab any remaining contacts, and turn off any integrations. I was only there to connect with family; I can email or text instead.

FB is a dead husk fake-feeding some rich assholes. If it’s coin-flip AI, what’s the point?

[–] EveningPancakes@lemm.ee 14 points 10 hours ago* (last edited 9 hours ago) (2 children)

Back when I got off in 2019, there was a tool (Facebook-sponsored, somewhere in the settings) that let you save everything into an offline HTML file you could host locally, giving you access to things like picture albums complete with descriptions and comments. Not sure if it still exists, but it made getting off incredibly painless while still retaining things like pictures.

[–] Fandangalo@lemmy.world 12 points 9 hours ago (1 children)

Thank you real internet person. You make the internet great.

  • From Another Real Internet Person
[–] UltraGiGaGigantic@lemmy.ml 2 points 6 hours ago

Wait, you're not a dog using the internet while the humans are at work?

[–] bassomitron@lemmy.world 8 points 9 hours ago

It still existed when I did the same thing a year or so ago. They implemented it a while back to try to avoid antitrust lawsuits around the world. Though, now that Zuckerberg has formally started sucking this regime's dick, I wouldn't be surprised if it goes away.

[–] brucethemoose@lemmy.world 21 points 9 hours ago* (last edited 9 hours ago) (6 children)

The bigger problem is AI “ignorance,” and it’s not just Facebook. I’ve reported more than one Lemmy post where the user naively sourced it from ChatGPT or Gemini and took it as fact.

No one understands how LLMs work, not even on a basic level. Can’t blame them, seeing how they’re shoved down everyone’s throats as opaque products, or straight up social experiments like Facebook.

…Are we all screwed? Is the future a trippy information wasteland? All this seems to be getting worse and worse, and everyone in charge is pouring gasoline on it.

[–] Petter1@lemm.ee 2 points 5 hours ago* (last edited 5 hours ago) (1 children)

*where you think they sourced it from AI

You have no proof other than seeing ghosts everywhere.

Don't get me wrong, fact-checking posts is important, but you have no evidence of whether it's AI, a human brain fart, or targeted disinformation 🤷🏻‍♀️

[–] brucethemoose@lemmy.world 4 points 3 hours ago* (last edited 3 hours ago) (1 children)

No, I mean they literally label the post as “Gemini said this”

I see family do it too, type something into Gemini and just assume it looked it up or something.

[–] Petter1@lemm.ee 1 points 1 hour ago

I see no problem if the poster discloses that the source is AI. That automatically devalues the content of the post/comment, and should trigger the reaction that the information needs to be taken with a grain of salt and fact-checked before it can be treated as true.

An AI output is, most of the time, a decent indicator of what the truth is, and can bring new talking points to a discussion. But it is of course not a “killer argument”.

[–] brucethemoose@lemmy.world 9 points 9 hours ago* (last edited 9 hours ago) (1 children)

Also… the tremendous irony here is Meta is screwing themselves over.

They’ve bet their future on AI, and are smart enough to release the weights and fund open research, yet their advantage (a big captive dataset, aka Facebook/Instagram/WhatsApp users) is completely overrun with slop that poisons it. It’s as laughable as Grok (X’s AI) being trained on Twitter.

[–] SlopppyEngineer@lemmy.world 8 points 7 hours ago (1 children)

Meta is probably screwed already. Their user base is not growing like it used to, maybe even shrinking in some markets, and they need the padding to cover it up.

[–] brucethemoose@lemmy.world 1 points 2 hours ago

Very true.

But it's also so stupid, because their user base is already, what, a good fraction of the planet? How much more can they grow?
