this post was submitted on 14 Mar 2024

Technology

[–] kevincox@lemmy.ml 38 points 6 months ago (1 children)

This is pretty clever. As I understand it:

  1. Because LLMs are slow, most of them stream the response to the user.
  2. The response is streamed as text, but generated in tokens.
  3. This means that each streamed "chunk" leaks the length of the text corresponding to one token.
  4. You can then use heuristics to guess the text of the response based on the token lengths.
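The core of the side channel in steps 1–4 can be sketched in a few lines. All the numbers here are hypothetical; real per-record overheads depend on the cipher suite and framing:

```python
# Sketch of the size side channel: length-preserving encryption (e.g.
# AES-GCM) means each streamed chunk's ciphertext size reveals the
# plaintext token length, minus a fixed per-record overhead.
OVERHEAD = 28  # assumed: 16-byte auth tag + 12-byte nonce per record

def token_lengths(chunk_sizes):
    """Recover the plaintext length of each streamed token from
    the ciphertext sizes an eavesdropper observes on the wire."""
    return [size - OVERHEAD for size in chunk_sizes]

observed = [31, 29, 33, 30]     # hypothetical record sizes in bytes
print(token_lengths(observed))  # -> [3, 1, 5, 2]
```

The recovered length sequence is what the heuristics in step 4 would then match against likely responses.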

This is a good reminder that any time you send content in small chunks over an encrypted channel, you may be leaking information through message sizes: many encrypted channels don't provide protection against size leaks by default.

It seems there are a few easy solutions to this:

  1. Send the token IDs (as fixed-size integers) over the network rather than the text.
  2. Pad the text representations of the tokens to a fixed length.
  3. Batch the tokens more (and maybe add padding) to produce bigger chunks and obscure individual token size.
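Option 2 above could look something like this minimal sketch (the block size and framing are my assumptions, not any real API):

```python
BLOCK = 16  # assumed fixed on-the-wire chunk size in bytes

def pad_token(text: str) -> bytes:
    """Pad a token's UTF-8 bytes to a fixed size so every chunk
    sent over the wire has the same length."""
    raw = text.encode("utf-8")
    assert len(raw) < BLOCK, "token longer than block"
    # one length byte + payload + zero padding
    return bytes([len(raw)]) + raw + b"\x00" * (BLOCK - 1 - len(raw))

def unpad_token(block: bytes) -> str:
    """Strip the padding on the receiving side."""
    n = block[0]
    return block[1:1 + n].decode("utf-8")

for tok in ["the", " quick", "n't"]:
    padded = pad_token(tok)
    assert len(padded) == BLOCK and unpad_token(padded) == tok
```

Since every chunk is now the same size, an eavesdropper only learns the token count, not individual token lengths.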

All of these still leak the approximate length of the response, but that is probably acceptable.

[–] PlexSheep@feddit.de 7 points 6 months ago (1 children)

That actually is really interesting. Thanks for the TL;DR. Do token lengths vary that much?

[–] kevincox@lemmy.ml 6 points 6 months ago

Absolutely. Tokenization is sort of a compression scheme, so tokens contain different numbers of characters depending on how frequent each string is. Common words like "the" will typically be one token, and maybe even common phrases like "I am". On the other hand, rare punctuation such as "~" may be its own token. There are also tokens for many common prefixes and suffixes, such as "non" and "n't". The tokens of each model are different, but they definitely vary in length.
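As an illustration, here is a toy greedy longest-match tokenizer over a made-up vocabulary (the vocab entries are hypothetical, not from any real model) showing token lengths ranging from one character to a whole phrase:

```python
# Hypothetical vocabulary mimicking how BPE vocabularies assign
# frequent strings (words, even phrases) their own tokens while
# rare text falls back to short tokens.
VOCAB = ["I am", "the", "n't", "non", "~", " ", "c", "a", "t"]

def tokenize(text):
    """Greedy longest-match tokenization against the toy vocab."""
    tokens = []
    while text:
        match = max((v for v in VOCAB if text.startswith(v)), key=len)
        tokens.append(match)
        text = text[len(match):]
    return tokens

print(tokenize("I am the cat"))
# -> ['I am', ' ', 'the', ' ', 'c', 'a', 't']
```

The resulting token lengths (4, 1, 3, 1, 1, 1, 1) are exactly the kind of variation the size side channel exploits.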