this post was submitted on 27 Dec 2024
322 points (94.7% liked)
Technology
Nothing to do with actual capabilities, just the ability to make piles and piles of money.
That's an Onion level of capitalism
The same way these capitalists evaluate human beings.
For fun, I ran this idea through ChatGPT using similar logic:
You're correct to find this idea odd—it reflects a misunderstanding of how financial goals, like owning a house, relate to income. Here's why this thinking is flawed:
Profit vs. Net Worth: Making $1,000 in profit doesn't directly equate to being able to afford a house because buying a house requires not just income but also savings, creditworthiness, and a down payment. Profit is only a small piece of the puzzle.
Costs and Expenses: Owning a house involves more than just the purchase price. You need to consider closing costs, property taxes, maintenance, and potentially ongoing mortgage payments. $1,000 would likely not be enough to cover even a fraction of these.
Leverage and Financing: Most people don't buy houses outright; they use loans or mortgages. To secure a mortgage, lenders look at your total financial picture: income, savings, debt, and credit score. Simply generating $1,000 won't improve these factors significantly.
Specificity of the Goal: "Making $1,000" is a general financial milestone, not a specific strategy tailored to the requirements of owning a home. A more actionable goal would involve budgeting, saving for a down payment, and improving credit.
The Backwards Logic: The idea that reaching an arbitrary profit threshold suddenly makes you eligible for something like owning a house conflates short-term gains with long-term financial stability. A better approach is to reverse-engineer the goal: start by determining the total cost of the house, then work backwards to figure out how much you need to save monthly, how much income you need to earn, and what steps are required to secure financing.
Example of Goal-Oriented Thinking:
If you wanted to own a $200,000 house:
You might need a 10% down payment ($20,000).
Closing costs could add $5,000–$10,000.
Monthly income would need to support a mortgage payment (let’s say $1,200/month).
Steps would include saving regularly, reducing debt, and improving credit, rather than focusing on an isolated profit milestone like $1,000.
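The "reverse-engineer the goal" arithmetic above can be sketched in a few lines. This is only an illustration: the 7% interest rate, 30-year term, and $600/month savings rate are assumed figures not in the original, and the closing costs are taken as the midpoint of the $5,000–$10,000 range.

```python
def monthly_mortgage_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortization formula: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

house_price = 200_000
down_payment = 0.10 * house_price      # $20,000, per the 10% figure above
closing_costs = 7_500                  # assumed midpoint of $5,000-$10,000
principal = house_price - down_payment

payment = monthly_mortgage_payment(principal, 0.07, 30)  # assumed 7%, 30-year
monthly_savings = 600                  # assumed savings rate
months_to_save = (down_payment + closing_costs) / monthly_savings

print(f"Monthly payment: ${payment:,.0f}")
print(f"Months to save the upfront costs: {months_to_save:.0f}")
```

Under these assumptions the payment comes out near the $1,200/month figure quoted above, and the upfront costs take a few years of steady saving, which is the point of the example: an isolated $1,000 profit milestone tells you almost nothing about any of these numbers.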
Summary:
Focusing on a single, arbitrary profit number like $1,000 doesn’t align with the comprehensive planning required for significant financial goals like home ownership. Instead, success depends on a holistic view of your finances and structured goal-setting.
Guess we're never getting AGI then, there's no way they end up with that much profit before this whole AI bubble collapses and their value plummets.
AI (LLM software) is not a bubble. It’s been effectively implemented as a utility framework across many platforms. Most of those platforms are using OpenAI’s models. I don’t know when or if that’ll make OpenAI 100 billion dollars, but it’s not a bubble - this is not the .COM situation.
The vast majority of those implementations are worthless: mostly ignored by their intended users, seen as a useless gimmick.
LLMs have their uses, but at the moment companies are pushing them into every area to see what sticks.
Not the person you replied to, but I think you're both "right". The ridiculous hype bubble (I'll call it that for sure) put "AI" everywhere, and most of those are useless gimmicks.
But there's also already uses that offer things I'd call novel and useful enough to have some staying power, which also means they'll be iterated on and improved to whatever degree there is useful stuff there.
(And just to be clear, an LLM - no matter the use cases and bells and whistles - seems completely incapable of approaching any reasonable definition of AGI, to me)
I think people misunderstand what a bubble is. The .com bubble happened, but the internet was useful and stayed around. The AI bubble doesn't mean AI isn't useful, just that most of the chaff will disappear.
The dotcom bubble was based on technology that had already been around for ten years. The AI bubble is based on technology that doesn't exist yet.
Yeah, so it’s a question of whether OpenAI will lose too many of its investors once all the users who don’t stick drop off.
This is simply false.
To each his own, but I use Copilot and the ChatGPT app productively on a daily basis. The Copilot integration with our SharePoint files is extremely helpful. I’m able to surface data that would not show up in a standard search over file names and content indexing.
To be fair, a bubble is more of an economic thing and not necessarily tied to product/service features.
LLMs clearly have utility, but is it enough to turn them into a profitable business line?
You’re right about the definition, and I do think the LLMs will aid in a product offering’s profitability, if not directly generate profits. But OP didn’t mean economically, they meant LLMs will go the way of slap bracelets.
Sounds like they meant economics to me.
They said “AI bubble collapses” first then “their value” - meaning the product’s practical use stops functioning (people stop using it) first thus causing economic breakdown for the companies as a result.
It’s obvious that the OP is expecting LLMs to be a fad that people will soon be forgetting.
It's a bubble. It doesn't mean the tech does not have its uses. And it is exactly like the .com situation.
I think that "exactly like" is absurd. Bubbles are never "exactly" like the previous ones.
I think in this case there is clear economic value in what they produce (from the POV of capitalism, not humanity's best interests), but the cost is absurdly huge for it to be economically viable, hence, it is a bubble. In the dot com bubble, by contrast, many companies had very dubious value in the first place.
There is clear economic value in chains of bullshit that may or may not ever have a correct answer?
OpenAI doesn't produce LLMs only. People are gonna be paying for stuff like Sora or DallE. And people are also paying for LLMs (e.g. Copilot, or whatever advanced stuff OpenAI offers in their paid plan).
How many, and how much? I don't know, and I am not sure it can ever be profitable, but just reducing it to "chains of bullshit" to argue that it has no value to the masses seems insincere to me. ChatGPT gained a lot of users in record time, and we know it's used a lot (often more than it should be, of course). Someone is clearly seeing value in it, and it doesn't matter if you and I disagree with them on that value.
I still facepalm when I see so many people paying for fucking Twitter blue, but the fact is that they are paying.
Completely wrong.
Ever heard of the internet bubble?
Yeah. That’s what I just mentioned.
The context here is that OpenAI has a contract with Microsoft until they reach AGI. So it's not a philosophical term but a business one.
Right, but that's not interesting to anyone but themselves. So why call it AGI? Why not just say that once the company has made over X amount of money, it splits off into a separate company? Why lie and say you've developed something that you might not have developed?
Honestly, I agree. $100 billion in profit is incredibly impressive and would overtake basically any other software company in the world, but alas it doesn't have anything to do with "AGI". For context, Apple's net income is $90 billion this year.
I've listened to enough interviews to know that all of the AI leaders want this holy-grail title of "inventor of AGI" more than anything else, so I don't think the definition will ever be settled collectively until something so mind-blowing exists that it would render the definition moot either way.