This post was submitted on 30 Jul 2024
49 points (78.2% liked)

Technology

This isn't a joke, though it almost seems like one. It uses Llama 3.1, and supposedly the conversation data stays on the device and gets forgotten over time (through what the founder calls a rolling "context window").

The implementation is interesting, and you can see the founder talking about earlier prototypes and project goals in interviews from several months ago.

iOS only, for now.

Edit: Apparently, you can build your own for around $50 that runs on ChatGPT instead of Llama. I'm sure you could also figure out how to switch it to the LLM of your choice.
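
For the curious: if the DIY build really just pipes transcribed audio into a chat-completions endpoint, swapping the LLM is mostly a config change. A rough sketch of what I mean, not either project's actual code (the endpoint URL, model names, and system prompt below are my own guesses):

```python
# Minimal sketch: the same chat-completions client can point at OpenAI (ChatGPT)
# or at a local OpenAI-compatible server such as Ollama running a Llama model.
# Endpoint, model names, and prompt are illustrative assumptions, not project code.
from openai import OpenAI

cloud = OpenAI()  # hosted backend; reads OPENAI_API_KEY from the environment
local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")  # Ollama's default port

def reply(client: OpenAI, model: str, transcript: str) -> str:
    """Send a chunk of transcribed speech, get the 'friend' response back."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a friendly wearable companion."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content

# print(reply(cloud, "gpt-4o-mini", "Hey, how's it going?"))
# print(reply(local, "llama3.1", "Hey, how's it going?"))
```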

all 23 comments
[–] Warl0k3@lemmy.world 51 points 1 month ago (1 children)

LMFAO. The audacity of calling the token limit a "rolling context window" like it's a desirable feature and not just a design aspect of every LLM...
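
For anyone who hasn't seen it spelled out: a "rolling context window" is just the trimming every chat frontend already does so the prompt fits the model's context length. A toy sketch of the idea (word count standing in for a real tokenizer, and the budget number is made up):

```python
from collections import deque

def rough_token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: word count as a proxy for tokens.
    return len(text.split())

class RollingContext:
    """Keep only as much recent conversation as fits in a fixed token budget."""

    def __init__(self, max_tokens: int = 8192):
        self.max_tokens = max_tokens
        self.turns = deque()

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        # Drop the oldest turns until the window fits again -- this is all
        # the "forgetting over time" amounts to.
        while sum(rough_token_count(t) for t in self.turns) > self.max_tokens:
            self.turns.popleft()

    def prompt(self) -> str:
        return "\n".join(self.turns)
```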

[–] hoshikarakitaridia@lemmy.world 26 points 1 month ago (1 children)

Yeah that part tripped me up.

"Rolling context window"? You mean one of the universal properties of LLMs in it's current state? The one that is so big for Google's latest AI endeavors that they are flexing with it?

It's hilarious to say that's a privacy feature. It's like calling amnesia a learning opportunity.

These claims make me think this is worse than the Rabbit R1 or whatever it's called. Although it's very difficult to be worse, considering that CEO turned out to be a full-on crypto scammer.

[–] Telorand@reddthat.com 2 points 1 month ago

Check the edit for instructions on how to build your own. That project is even called "Friend," so this "Friend" is likely a modified version of it (ChatGPT in the DIY build vs. Llama here).

I would certainly feel better about it if I had full control over the encryption endpoints, at a minimum.

[–] Telorand@reddthat.com 14 points 1 month ago (2 children)

I will be waiting for the tech YouTubers and early adopters to render their judgement before I even consider yet another AI wearable, but this aims to be less of a personal assistant and more of a "Tamagotchi."

[–] Clusterfck@lemmy.sdf.org 11 points 1 month ago

I think that’s what sets this one apart (and makes it less expensive) from the other devices like this. This thing only needs a mic, an LLM and a Bluetooth radio. It won’t search the whole internet for answers or tell you what you’re looking at, but it will talk shit on that bitch Tonya in accounting with you.

[–] felixwhynot@lemmy.world 2 points 1 month ago

At least Tamagotchis had games.

[–] randon31415@lemmy.world 7 points 1 month ago (2 children)

$99? I just ordered the parts for $50 (including shipping and handling, plus a 100-count pack of on/off switches, of which I only need one).

Also confused why they say it is "always on" if it has an off switch. A TV can be always on until you turn it off. Once I build it, I'll see what can be switched around - I'm hoping to get something like the superbooga extension for oobabooga (RAG vectorization of documents) working with the transcripts.
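
(For context, the RAG step is basically: chunk the saved transcripts, embed them, and pull the most similar chunks back into the prompt. A hand-rolled sketch of that flow with sentence-transformers, not superbooga's actual code, and the transcript lines are made up:)

```python
# Rough sketch of RAG over saved transcripts: embed chunks once,
# then retrieve the nearest ones for each query.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

transcripts = [
    "Tuesday: talked about the weekend hiking trip.",
    "Wednesday: long rant about the printer being broken again.",
    "Thursday: planning a surprise party for Sam.",
]
chunk_vectors = model.encode(transcripts, normalize_embeddings=True)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the transcript chunks most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vectors @ q  # cosine similarity, since the vectors are normalized
    best = np.argsort(scores)[::-1][:top_k]
    return [transcripts[i] for i in best]

# These retrieved chunks would get prepended to the LLM prompt as extra context.
print(retrieve("what did we say about Sam's party?"))
```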

I was a bit worried about the Whisper STT, but I think it's the open-source, on-device version, not the one that runs on OpenAI's servers.
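
(That part is easy to verify, since the open-source whisper package runs entirely locally once the model weights are downloaded. Something like this, with the file name just an example:)

```python
# Local transcription with the open-source Whisper package; no API calls
# are made after the model weights have been downloaded.
import whisper

model = whisper.load_model("base")          # small enough to run on CPU
result = model.transcribe("recording.wav")  # returns a dict with "text" and segments
print(result["text"])
```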

[–] mosiacmango@lemm.ee 4 points 1 month ago

You're a champion. Post it up here when you get it rolling.

[–] felixwhynot@lemmy.world 2 points 1 month ago (1 children)
[–] randon31415@lemmy.world 5 points 1 month ago (1 children)
[–] Telorand@reddthat.com 2 points 1 month ago (1 children)

Added to the post! Great find.

[–] randon31415@lemmy.world 3 points 1 month ago (1 children)

Oh, crap! This is getting confusing. I think this is what happened:

The "Friend.com" AI friend was originally named "Tab". The Basedhardware.com wearable was originally named "Friend", but "AI Friend" was turning up to much stuff, so they added Based hardware to the name.

Then the creator of Tab renamed it "Friend" and bought Friend.com. Both are AI wearables, but the Friend.com one sounds more closed source than the Based Hardware Friend. So they are technically two different projects.

[–] Telorand@reddthat.com 1 points 1 month ago

I mean, taking something open source and turning it closed source isn't unheard of. It wouldn't surprise me if he borrowed ideas from the other project, at any rate.

But yeah, definitely different projects, though it remains to be seen how different they are at their core. I'm not spending $100 to find out; I'll let the whales do that.

[–] Rolando@lemmy.world 7 points 1 month ago (1 children)

Wait, is this the same thing we were ridiculing over on 196? https://lemmy.world/post/18120973

[–] Telorand@reddthat.com 3 points 1 month ago

Yep, that's the one. Check my edit if you want to build your own for ≈$50.

[–] Sanctus@lemmy.world 2 points 1 month ago (2 children)

I actually can't talk shit about this one. So far it seems to be what you'd want on the data side: all on-device, no subscription. It does come off as weird, and it's still probably a bad idea to take its advice. But at 99 bucks, it's gotta be one of the cheapest AI devices from a startup so far. I won't get one, but I don't absolutely hate it.

[–] Angry_Autist@lemmy.world 5 points 1 month ago (2 children)

Any AI that could run fully on-device on an iPhone would be terrible. It's sending tokens somewhere, I guaran-fucking-tee it.

[–] bandwidthcrisis@lemmy.world 6 points 1 month ago (2 children)

The FAQ says that it requires an Internet connection.

It also mentions E2EE, which isn't too reassuring when one of the ends is their servers.

[–] Angry_Autist@lemmy.world 4 points 1 month ago

Exactly. There are ways to make the tokens unreadable even with a server-hosted LLM, but I know for a fact that's not what's going to happen here.

And I fully expect all of our engagement data to be used in the 2028 election to target us.

[–] Telorand@reddthat.com 3 points 1 month ago

This was my big concern as well. E2EE only matters if you control each end. That's why I'll let the YouTubers and security analysts dissect it first.

Check my edit for instructions on how to build your own. It's even called "Friend," so it's probably the same thing tweaked for a different LLM.

[–] Aatube@kbin.melroy.org 0 points 1 month ago* (last edited 1 month ago) (1 children)

false advertising law: hello?

Anyone can test whether it's sending data by using a firewall. If it's not connected to the internet, it ain't sending. And don't forget that iPhone chips have been at the leading edge of Moore's law.

[–] Angry_Autist@lemmy.world 5 points 1 month ago

Do you have even the slightest idea how processing-intensive even five-year-old LLM models are?

And iPhones aren't magically immune to thermodynamics.