Let's not pretend statistical models are approaching humanity. The companies that build these statistical models demonstrated as much themselves, in the papers OpenAI published in 2020 and DeepMind published in 2023.
To reiterate: even with INFINITE DATA AND COMPUTE TIME, the models cannot approach human error rates. They don't think, they don't emulate thinking; they statistically resemble thinking to some accuracy below 95%, and they completely and totally lack permanence in their statistical representation of thinking.
I think most people understand that these LLMs cannot think or reason; they're just really good tools that analyze data, recognize patterns, and generate relevant responses based on parameters and context. The people who treat LLM chatbots like they're people have much deeper issues than just ignorance.
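To make the "pattern completion" point concrete, here's a deliberately tiny sketch (a toy bigram word model, nothing like a real transformer, and not how any actual LLM is built): it produces plausible-looking continuations purely by sampling from frequency statistics, with no reasoning step anywhere in the loop.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram model that "generates responses" by
# sampling from observed word-pair statistics in its training text.
training_text = "the model predicts the next word the model saw most often"

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the model predicts the next word the model saw"
```

Scaled up by many orders of magnitude and swapped for a neural network, the output gets far more convincing, but the loop is still statistical continuation rather than thought.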
I don't know if it's an urban myth, but I've heard that about 20% of LLM inference time and electricity is spent on "hello" and "thank you" prompts. :)
It's a very real thing. So much so that OpenAI actually came out and publicly complained about how it's apparently costing the company millions.
https://www.vice.com/en/article/telling-chatgpt-please-and-thank-you-costs-openai-millions-ceo-claims/