this post was submitted on 12 Mar 2025
47 points (82.2% liked)

Selfhosted

44146 readers
270 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.


Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago

Wondering about services to test on either a 16 GB RAM "AI-capable" ARM64 board or on a laptop with a modern RTX card. Only looking for open-source options, but curious to hear what people say. Cheers!

[–] kata1yst@sh.itjust.works 19 points 1 day ago (7 children)

I use Ollama & Open-WebUI: Ollama on my gaming rig and Open-WebUI as a frontend on my server.
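A split setup like that can be sketched as a Docker Compose file on the server side. This is a hedged sketch, not kata1yst's actual config: the gaming rig's IP is a placeholder, and it assumes Ollama is already listening on its default API port (11434) and reachable over the LAN. `OLLAMA_BASE_URL` is the Open-WebUI setting that points the frontend at a remote Ollama instance.

```yaml
# Hypothetical docker-compose.yml for the server running Open-WebUI only.
# 192.168.1.50 stands in for the gaming rig that runs Ollama.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"   # web UI reachable at http://<server>:3000
    environment:
      - OLLAMA_BASE_URL=http://192.168.1.50:11434
    volumes:
      - open-webui:/app/backend/data   # chats, users, settings
    restart: unless-stopped

volumes:
  open-webui:
```

Note that Ollama binds to localhost by default, so the rig would also need `OLLAMA_HOST=0.0.0.0` (or similar) for the server to reach it.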

It's been a really powerful combo!

[–] kiol@lemmy.world 5 points 1 day ago (6 children)

Would you talk more about it, please? I'd forgotten about Open-WebUI, but I'm intending to start playing with it. Honestly, what do you actually do with it?

[–] Oisteink@feddit.nl 5 points 1 day ago* (last edited 1 day ago) (3 children)

I have the same setup, but it's not very usable as my graphics card only has 6 GB of VRAM. I want one with 20 or 24 GB, as the 6B models are a pain and the tiny ones don't give me much.

Ollama was pretty easy to set up on Windows, and it's easy to download and test the models Ollama has available.
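For anyone who hasn't tried it, the download-and-test loop is just a few CLI commands (sketch only; the model name is an example, and the commands need a running Ollama daemon):

```shell
# Pull a small model from the Ollama library
ollama pull llama3.2

# One-shot prompt in the terminal to sanity-check it
ollama run llama3.2 "Why is the sky blue?"

# See what's installed locally, and clean up what didn't work out
ollama list
ollama rm llama3.2
```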

[–] kiol@lemmy.world 1 points 1 day ago (1 children)

Sounds like you and I are in a similar place of testing.

[–] Oisteink@feddit.nl 3 points 1 day ago* (last edited 1 day ago) (1 children)

Possibly. I've been running it since last summer, but like I say, the small models don't do much good for me. I have tried Llama 3.1, OLMo 2, DeepSeek-R1 in a few variants, Qwen2, Qwen2.5-Coder, Mistral, CodeLlama, StarCoder2, Nemotron-Mini, Llama 3.2, Gemma 2, and LLaVA.

I use Perplexity and Mistral as paid services, with much better quality. Open-WebUI is great though, but my hardware is lacking.

Edit: saw that my mate is still using it a bit, so I'll update Open-WebUI from 0.4 to 0.5.20 for him. He's a bit anxious about sending data to the cloud, so he doesn't mind the quality.

[–] Oisteink@feddit.nl 0 points 1 day ago

Scrap that: after upgrading, it went bonkers and will always use one of my «knowledges» (knowledge collections) no matter what I try. The web search fails even with DuckDuckGo as the engine. It's always seemed like the UI was made by unskilled labour, but this is just horrible. 2/10, not recommended.
