
The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii and looked at 517 programming questions on Stack Overflow that were then fed to ChatGPT.

“Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose,” the new study explained. “Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”

Disturbingly, programmers in the study didn’t always catch the mistakes being produced by the AI chatbot.

“However, they also overlooked the misinformation in the ChatGPT answers 39% of the time,” according to the study. “This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.”
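To keep the denominators straight: a minimal sketch, with invented records and field names rather than the paper's actual data or code, of how headline rates like these could be tallied from hand-labeled answers (note that the "overlooked" rate is computed over incorrect answers only):

```python
# Hypothetical hand-labeled records, one per (question, ChatGPT answer) pair.
# Field names are invented for illustration; they are not the study's schema.
answers = [
    {"incorrect": True,  "verbose": True,  "preferred": False, "error_caught": False},
    {"incorrect": False, "verbose": True,  "preferred": True,  "error_caught": None},
    # ... 517 records in the real study ...
]

def rate(records, key):
    """Fraction of records with a non-None label where `key` is True."""
    labeled = [r[key] for r in records if r[key] is not None]
    return sum(labeled) / len(labeled)

print(f"incorrect:  {rate(answers, 'incorrect'):.0%}")   # study reports 52%
print(f"verbose:    {rate(answers, 'verbose'):.0%}")     # study reports 77%
print(f"preferred:  {rate(answers, 'preferred'):.0%}")   # study reports 35%

# "Overlooked" is computed over incorrect answers only: how often did
# participants miss the error?
wrong = [r for r in answers if r["incorrect"]]
print(f"overlooked: {1 - rate(wrong, 'error_caught'):.0%}")  # study reports 39%
```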

[–] NounsAndWords@lemmy.world 35 points 7 months ago (7 children)

GPT-2 came out a little more than 5 years ago; it answered 0% of questions accurately and couldn't string a sentence together.

GPT-3 came out a little less than 4 years ago and was kind of a neat party trick, but I'm pretty sure it answered ~0% of programming questions correctly.

GPT-4 came out a little less than 2 years ago and can answer 48% of programming questions accurately.

I'm not talking about morality, or creativity, or good/bad for humanity, but if you don't see a trajectory here, I don't know what to tell you.
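To make the "trajectory" concrete: a minimal sketch of the naive straight-line extrapolation being gestured at, using the comment's own ballpark years and percentages (they are rough guesses, not benchmark results):

```python
# Naive straight-line extrapolation of the comment's ballpark figures.
# Years and accuracies are rough guesses from the comment, not benchmarks.
import numpy as np

years = np.array([2019.0, 2020.0, 2023.0])   # GPT-2, GPT-3, GPT-4 (approx.)
accuracy = np.array([0.0, 0.0, 48.0])        # % of programming Qs correct

slope, intercept = np.polyfit(years, accuracy, 1)
for y in (2025, 2027, 2030):
    print(y, f"{slope * y + intercept:.0f}%")
# Prints roughly 72%, 98%, 137%: a straight line through three points
# happily sails past 100%, which is why the choice of curve matters.
```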

[–] 14th_cylon@lemm.ee 93 points 7 months ago (3 children)

Seeing a trajectory is not the ultimate answer to anything.

[–] NounsAndWords@lemmy.world 17 points 7 months ago (1 children)

Perhaps there's some middle ground between assuming infinite growth and declaring that this technology, which is not quite good enough right now, will therefore never be good enough?

Blindly assuming no further technological advancements seems equally as foolish to me as assuming perpetual exponential growth. Ironically, our ability to extrapolate from limited information is a huge part of human intelligence that AI hasn't solved yet.

[–] 14th_cylon@lemm.ee 0 points 7 months ago

will therefore never be good enough?

no one said that. but someone did try to reject the fact that it is demonstrably bad right now, because "there is a trajectory".

[–] otp@sh.itjust.works 13 points 7 months ago (3 children)

I appreciate the XKCD comic, but I think you're exaggerating that other commenter's intent.

The tech has been improving, and there's no obvious reason to assume that we've reached the peak already. Nor is the other commenter saying we went from 0 to 1 and so now we're going to see something 400x as good.

[–] stufkes@lemmy.world 5 points 7 months ago

I think the one argument for the assumption that we're near the peak already is the issue of AI learning from AI output. I think Numberphile discussed a maths paper which argued that, to achieve the accuracy we want, there simply isn't enough data to train on.

That's of course not to say that we can't find alternative approaches.
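A rough back-of-envelope version of that data-ceiling argument; every constant below is a loose, illustrative estimate (a Chinchilla-style tokens-per-parameter heuristic and a guessed supply of public text), not a figure from the paper the comment mentions:

```python
# Back-of-envelope data-ceiling check. Every constant here is a loose,
# illustrative estimate, not a figure from the paper mentioned above.
TOKENS_PER_PARAM = 20      # Chinchilla-style compute-optimal heuristic
AVAILABLE_TOKENS = 3e13    # ~30T tokens of decent public text (very rough)

for params in (7e10, 7e11, 7e12):   # 70B, 700B, 7T parameters
    needed = params * TOKENS_PER_PARAM
    print(f"{params:.0e} params -> {needed:.0e} tokens "
          f"({needed / AVAILABLE_TOKENS:.2f}x the assumed supply)")
# Somewhere past the hundreds-of-billions-of-parameters scale, "just add
# more data" stops being an option -- roughly the ceiling argued above.
```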

[–] 31337@sh.itjust.works 2 points 7 months ago

We're close to the peak using current NN architectures and methods. All of this started with the introduction of the transformer architecture in 2017. Advances in architecture and methods have been fairly small and incremental since then; the gains in performance have mostly come from throwing more data and compute at the models, and diminishing returns have been observed. GPT-3 cost something like $15 million to train. GPT-4 is a little better and cost something like $100 million to train. If the next model costs $1 billion to train, it will likely be a little better.
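A minimal sketch of that diminishing-returns pattern, assuming purely for illustration that quality grows with the log of training spend; the dollar figures are the comment's ballpark numbers and the "quality" units are arbitrary:

```python
import math

# Illustrative-only model: quality gains scale with log10 of training cost.
# Costs are the comment's ballpark figures; "quality" units are arbitrary.
def quality(cost_usd, base=0.0, gain_per_10x=1.0):
    return base + gain_per_10x * math.log10(cost_usd / 15e6)

for name, cost in [("GPT-3-ish", 15e6), ("GPT-4-ish", 100e6), ("next?", 1e9)]:
    print(f"{name:>9}: ${cost:>13,.0f} -> quality {quality(cost):+.2f}")
# Each ~10x increase in spend buys about the same fixed increment:
# 15M -> 100M is +0.82, 100M -> 1B is +1.00. 10x the money, "a little better".
```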

[–] 14th_cylon@lemm.ee 0 points 7 months ago* (last edited 7 months ago) (1 children)

I appreciate the XKCD comic, but I think you’re exaggerating that other commenter’s intent.

i don't think so. the other commenter clearly rejects the criticism (1) and implies that the existence of an upward trajectory means it will one day overcome the problem (2).

while (1) is a well-documented fact right now, (2) is just wishful thinking.

hence the comic, because "the trajectory" doesn't really mean anything.

[–] otp@sh.itjust.works 1 points 7 months ago (1 children)

In general, "The technology is young and will get better with time" is not just a reasonable argument, but almost a consistent pattern. Note that XKCD's example is about events, not technology. The comic would be relevant if someone were talking about events happening, or something like sales, but not about technology.

Here, I'm not saying that you're necessarily right or they're necessarily wrong, just that the comic you shared is not a good fit.

[–] 14th_cylon@lemm.ee 0 points 7 months ago (1 children)

In general, “The technology is young and will get better with time” is not just a reasonable argument, but almost a consistent pattern. Note that XKCD’s example is about events, not technology.

yeah, no.

try comparing horse speed with the Ford Model T and blindly extrapolating that into the future. look at Moore's law. technology does not just grow upwards if you give it enough time; most of it has some kind of limit.

and it is not out of the realm of possibility that LLMs, having already stolen all of human knowledge from the internet, found it is not enough, and started spewing out bullshit as a result of that monumental theft, have already reached that limit.

that may not be the case for every machine learning tool developed for a specific purpose, but the blind assumption that it will just keep growing indiscriminately, because "there is a trend", is overly optimistic.
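That objection can be made concrete with the same ballpark numbers from the "trajectory" comment above: a saturating S-curve passes through the three data points about as well as a straight line does, yet forecasts a completely different future. A sketch, with the ceiling assumed rather than known:

```python
import numpy as np

# Same ballpark data as the "trajectory" comment earlier in the thread.
years = np.array([2019.0, 2020.0, 2023.0])
accuracy = np.array([0.0, 0.0, 48.0])

def linear(y):
    """Straight-line fit: no ceiling, grows forever."""
    slope, intercept = np.polyfit(years, accuracy, 1)
    return slope * y + intercept

def logistic(y, ceiling=55.0):
    """Hand-tuned S-curve through roughly the same points, but saturating.
    The ceiling is an assumption -- which is the entire argument."""
    return ceiling / (1 + np.exp(-1.9 * (y - 2022.0)))

for y in (2025, 2030):
    print(y, f"linear: {linear(y):.0f}%  logistic: {logistic(y):.0f}%")
# 2025 -> linear 72%, logistic 55%; 2030 -> linear 137%, logistic 55%.
# Three points can't distinguish the curves; the (unknown) limit does.
```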

[–] otp@sh.itjust.works 0 points 7 months ago (1 children)

I don't think continuing further would be fruitful. I imagine your stance is heavily influenced by your opposition to, or dislike of, AI/LLMs.

[–] 14th_cylon@lemm.ee -1 points 7 months ago

oh sure. when someone says "you can't just blindly extrapolate a curve", there must be some conspiracy behind it; it absolutely cannot be because you can't just blindly extrapolate a curve 😂

[–] systemglitch@lemmy.world 5 points 7 months ago

That comes off as disingenuous in this instance.

[–] Eheran@lemmy.world 28 points 7 months ago (1 children)

The study used GPT-3.5, not GPT-4.

[–] phoneymouse@lemmy.world 1 points 7 months ago (1 children)

GPT-4 produces inaccurate programming answers too.

[–] Eheran@lemmy.world 6 points 7 months ago (1 children)

Obviously. But it is FAR better yet again.

[–] phoneymouse@lemmy.world 1 points 7 months ago (1 children)

Not really. I ask it questions all the time and it makes shit up.

[–] Eheran@lemmy.world 2 points 7 months ago

Yes. But it is better than 3.5 without any doubt.

[–] SnotFlickerman@lemmy.blahaj.zone 22 points 7 months ago* (last edited 7 months ago) (1 children)

https://www.reuters.com/technology/openai-ceo-altman-says-davos-future-ai-depends-energy-breakthrough-2024-01-16/

Speaking at a Bloomberg event on the sidelines of the World Economic Forum's annual meeting in Davos, Altman said the silver lining is that more climate-friendly sources of energy, particularly nuclear fusion or cheaper solar power and storage, are the way forward for AI.

"There's no way to get there without a breakthrough," he said. "It motivates us to go invest more in fusion."

It's a good trajectory, but when you have people running these companies saying that we need "energy breakthroughs" to power something that gives more accurate answers in the face of a world that's already experiencing serious issues arising from climate change...

It just seems foolhardy if we have to burn the planet down to get to 80% accuracy.

I'm glad Altman is at least promoting nuclear, but at the same time he has his fingers deep in a nuclear energy company, so this may well be something he's pushing because it benefits him directly. He's not promoting nuclear because he cares about humanity; he's promoting it because he has a deep investment in nuclear energy. That seems like just one more capitalist trying to corner the market for himself.

[–] AIhasUse@lemmy.world 7 points 7 months ago

We are running these things on computers not designed for this. Right now, there are ASICs being built that are specifically designed for it, and traditionally, ASICs give about 5 orders of magnitude of efficiency gains.
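Taking that figure at face value, the arithmetic looks like this; note that the 1e5 gain is the comment's claim and the baseline energy number is a made-up placeholder, not a measurement:

```python
# Illustrative only: the 1e5 efficiency factor is the comment's claim,
# and the 3.0 Wh/query baseline is a placeholder, not a measured figure.
BASELINE_WH_PER_QUERY = 3.0
ASIC_GAIN = 1e5

asic_wh = BASELINE_WH_PER_QUERY / ASIC_GAIN
queries_per_kwh_before = 1000 / BASELINE_WH_PER_QUERY
queries_per_kwh_after = 1000 / asic_wh

print(f"per query: {BASELINE_WH_PER_QUERY} Wh -> {asic_wh:.5f} Wh")
print(f"per kWh:   {queries_per_kwh_before:,.0f} -> {queries_per_kwh_after:,.0f} queries")
# If the claimed gain held, the "energy breakthrough" framing above would
# look very different; the open question is whether 1e5 is realistic for
# LLM inference rather than for fixed-function workloads.
```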

[–] gencha@lemm.ee 4 points 7 months ago

Given the data points you made up, I feel it's safe to assume that this plateau will now be a 10-year stretch.

[–] Knock_Knock_Lemmy_In@lemmy.world 3 points 7 months ago (1 children)

In what year do you estimate AI will reach 90% accuracy?

[–] NounsAndWords@lemmy.world 3 points 7 months ago

No clue? Somewhere between a few years (assuming some unexpected breakthrough) and many decades? The consensus among experts (of which I am not one) seems to be somewhere in the 2030s/40s for AGI. I'm guessing accuracy will probably be more of a topic-by-topic thing; LLMs might never even get there, or might only get there for things they've been heavily trained on. If predictive text doesn't do it, then I'd be betting on whatever Yann LeCun is working on.

[–] egeres@lemmy.world 3 points 7 months ago

Lemmy seems to be very near-sighted when it comes to the exponential curve of AI progress; I think that's partly because the community is very anti-corp.

[–] Thorny_Insight@lemm.ee 0 points 7 months ago

We only need to keep making incremental improvements to the technology and avoid destroying ourselves in the meantime. That's all it takes for us to find ourselves in the presence of superintelligent AI one day.