[–] rumba@lemmy.zip 1 points 2 weeks ago (1 children)

Intel is totally missing the boat, honestly. Their mobile i9 with the built-in GPU shares DDR5 between the CPU and the graphics.

You can put 96 GB of RAM in a small form factor machine and load in a monster model. It's not super fast, but it works, and it's a lot faster than leaving every layer on the CPU.
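For anyone who wants to try it, here's a minimal sketch using llama-cpp-python (the Python bindings for llama.cpp); the model path and layer count are made-up placeholders you'd tune to whatever fits in the iGPU's share of system RAM:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-70b.Q4_K_M.gguf",  # hypothetical path
    n_gpu_layers=40,  # offload 40 layers to the GPU; the rest stay on the CPU
    n_ctx=4096,       # context window
)

out = llm("Summarize why shared DDR5 helps with big models:", max_tokens=128)
print(out["choices"][0]["text"])
```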

They should be selling NUC-sized PCs with built-in graphics and 128 GB of the fastest RAM they can put on the board.

[–] brucethemoose@lemmy.world 1 points 2 weeks ago (1 children)

IMO it's not really "enough" until the bus is 256-bit. That's when 32B-72B class models start to look even theoretically runnable at decent speeds.
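Rough math, assuming decode is memory-bandwidth bound (every generated token streams the whole set of weights once) and a ~40 GB quantized 70B; the numbers are illustrative:

```python
def theoretical_tps(bus_bits: int, mt_per_s: int, model_gb: float) -> float:
    # tokens/s <= memory bandwidth / model size
    bandwidth_gbs = bus_bits / 8 * mt_per_s / 1000  # bytes/transfer * MT/s -> GB/s
    return bandwidth_gbs / model_gb

# DDR5-5600 with a 40 GB quantized 70B model:
print(theoretical_tps(128, 5600, 40))  # 128-bit (dual channel): ~2.2 tok/s
print(theoretical_tps(256, 5600, 40))  # 256-bit: ~4.5 tok/s
```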

[–] rumba@lemmy.zip 2 points 2 weeks ago (1 children)

He was getting 1.4 tokens/s on a 70B model. Not setting the world on fire, but enough to load a 70B and script against it.

https://www.youtube.com/watch?v=xyKEQjUzfAk
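If you want to sanity-check a number like that yourself, here's a rough timing sketch against a local llama.cpp server (llama-server exposes an OpenAI-compatible endpoint); the URL and model name are placeholders:

```python
import time
import requests

t0 = time.time()
r = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "llama-70b",  # placeholder name
        "messages": [{"role": "user", "content": "Translate 'hello world' to German."}],
        "max_tokens": 64,
    },
)
resp = r.json()
tokens = resp["usage"]["completion_tokens"]
print(resp["choices"][0]["message"]["content"])
print(f"{tokens / (time.time() - t0):.2f} tokens/s")
```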

[–] brucethemoose@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Also, that is a very low-context test. A longer context will bog it down, even setting aside the prompt processing time.

...On the other hand, you could probably squeeze out a bit more by running OpenVINO instead of llama.cpp, so that's still respectable.
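The OpenVINO route would look something like this via the optimum-intel package (untested sketch; the model ID is a placeholder, and `export=True` converts the Hugging Face weights to OpenVINO IR on load):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder; pick whatever fits in RAM
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```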

[–] rumba@lemmy.zip 2 points 2 weeks ago

> that is a very low-context test. A longer context will bog it down

Yeah, it's definitely not good enough for user-facing work. But when I'm developing something like translations, being able to see the 70B's output next to other models is super useful before I send the job off to something that costs more money to run.

9/10 times, the bigger model isn't significantly better for what I'm trying to do, but it's really nice to confirm that.
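The comparison loop is basically just this (endpoints and model names are hypothetical, assuming two local OpenAI-compatible servers on different ports):

```python
import requests

def ask(port: int, model: str, prompt: str) -> str:
    r = requests.post(
        f"http://localhost:{port}/v1/chat/completions",
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
    )
    return r.json()["choices"][0]["message"]["content"]

prompt = "Translate into French: 'The meeting moved to Thursday.'"
for port, name in [(8080, "llama-70b"), (8081, "llama-8b")]:
    print(f"--- {name} ---\n{ask(port, name, prompt)}\n")
```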