this post was submitted on 23 Nov 2023
183 points (91.8% liked)

Technology

[–] guitarsarereal@sh.itjust.works 51 points 9 months ago* (last edited 9 months ago) (7 children)

According to the article, they got an experimental LLM to reliably perform basic arithmetic, which would be a pretty substantial improvement if true. I.e., instead of stochastically guessing or offloading the work to an interpreter, the model itself was able to reliably perform a reasoning task that LLMs have struggled with so far.
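
For contrast, the "offloading it to an interpreter" approach mentioned above usually looks something like the following sketch: the model emits an arithmetic expression as text, and ordinary code computes the answer deterministically. This is an illustrative toy, not any particular vendor's implementation; `safe_eval` is a hypothetical helper name.

```python
import ast
import operator

# Minimal "calculator tool" an LLM pipeline might call instead of
# letting the model guess digits. It safely evaluates arithmetic
# expressions only, rejecting anything else.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("1234 * 5678"))  # 7006652
```

The reported advance would make this handoff unnecessary for basic cases: the model itself produces the correct digits, rather than delegating to a tool like this.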

It's rather exciting, tbh. It kicks open the door to a whole new universe of applications, if true. It's only technically a step toward AGI, though, since if AGI is possible at all, every improvement like this counts as a step toward it. If this development is really what triggered the board coup, it makes the coup group look even more ridiculous than they did before: this is step 1 toward a model that can be tasked with ingesting spreadsheets and doing useful math on them. And I say that as someone who leans pretty pessimistic in the AI safety debate.

[–] maegul@lemmy.ml 16 points 9 months ago (5 children)

Being a layperson in this, I'd imagine part of the promise is that once you've got reliable arithmetic, you can get logic and maths in there too, and so get the LLM to actually do more computer-y stuff, with the whole LLM/ChatGPT layer wrapped around it as the interface.

That would mean more functionality, and perhaps much more of it would work and scale, but also more control, predictability, and logical constraints. I can see how the development would get some people excited. It seems like a categorical improvement.

[–] perviouslyiner@lemm.ee 2 points 9 months ago* (last edited 9 months ago) (1 children)

Always wondered why the text model didn't just put its output through something like MATLAB or Mathematica once it got as far as producing something that requires domain-specific tools.

Like when Prof. Moriarty tried it on a quantum physics question and it got as far as writing out the correct formula before failing to actually calculate the result.
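
That handoff is straightforward once the formula exists as text: the model's job ends at writing the formula, and ordinary code (standing in for MATLAB or Mathematica here) does the arithmetic the model tends to flub. A minimal sketch, using the textbook particle-in-a-box energy formula as an assumed example (the actual question Moriarty asked isn't specified):

```python
# Hypothetical handoff: the model supplies the formula
# E_n = n^2 h^2 / (8 m L^2), and deterministic code evaluates it.
def particle_in_a_box_energy(n: int, L: float) -> float:
    """Energy (J) of level n for an electron in a 1-D box of width L (m)."""
    h = 6.626e-34   # Planck constant, J*s
    m = 9.109e-31   # electron mass, kg
    return n**2 * h**2 / (8 * m * L**2)

E1 = particle_in_a_box_energy(n=1, L=1e-9)
print(f"{E1:.3e} J")  # ~6.025e-20 J
```

A production system would parse the model's formula string and route it to a real CAS rather than hand-coding each function, but the division of labor is the same: the model does symbol manipulation, the tool does the number-crunching.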

[–] hamptonio@lemmy.world 3 points 9 months ago

There is definitely a lot of effort in this direction; it seems very likely that a hybrid system could be very powerful.
