this post was submitted on 12 Mar 2025
39 points (81.0% liked)

Selfhosted

Wondering about services to test on either a 16 GB RAM "AI capable" ARM64 board or on a laptop with a modern RTX GPU. Only looking for open-source options, but curious to hear what people say. Cheers!

all 41 comments
[–] Smokeydope@lemmy.world 10 points 20 hours ago* (last edited 18 hours ago)

I run kobold.cpp, a cutting-edge local model engine, on my local gaming rig turned server. I like to play around with the latest models to see how they improve/change over time. The current chain-of-thought reasoning models, like the DeepSeek R1 distills and Qwen QwQ, are fun to poke at with advanced open-ended STEM questions.

STEM questions like "What does Gödel's incompleteness theorem imply about scientific theories of everything?" or "Could the speed of light be more accurately referred to as 'the speed of causality'?"
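
If you want to script against it, a rough sketch of querying a running kobold.cpp server from Python is below. The port, endpoint, and payload fields are assumptions based on kobold.cpp's usual KoboldAI-style API defaults, so check them against your own instance:

```python
# Hedged sketch: ask a question of a local kobold.cpp server over HTTP.
# Port 5001 and the /api/v1/generate payload are assumed defaults; verify locally.
import requests

KOBOLD_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "Could the speed of light be more accurately referred to as 'the speed of causality'?",
    "max_length": 300,    # tokens to generate
    "temperature": 0.7,
}

resp = requests.post(KOBOLD_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```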

As for actual daily use, I prefer using Mistral Small 24B and treating it like a local search engine with the legitimacy of Wikipedia. It's a starting point for asking questions about general things I don't know about or want advice on, before doing further research through more legitimate sources.

It's important not to take the LLM too seriously, as there's always a small statistical chance it hallucinates some bullshit, but most of the time it's fairly accurate and a pretty good jumping-off point for further research.

Let's say I want an overview of how to repair small holes forming in concrete, or general ideas on how to invest financially, how to change fluids in a car, how much fat and protein is in an egg, etc.

If the LLM says a word or related concept I don't recognize, I grill it for clarifying info and follow it through the infinite branching garden of related information.

I've used an LLM to help me go through old declassified documents and speculate on internal government terminology I was unfamiliar with.

I've used a text-to-speech model to get it to speak, just for fun. I've used a multimodal model to get it to see/scan documents for info.

I've used web search to get the model to retrieve information it didn't know via a DuckDuckGo search, again mostly for fun.

Feel free to ask me anything, I'm glad to help get newbies started.

[–] ikidd@lemmy.world 4 points 17 hours ago

LM Studio is pretty much the standard. I think it's open source except for the UI. Even if you don't end up using it long-term, it's great for getting used to a lot of the models.
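
As a hedged aside: LM Studio can expose an OpenAI-compatible local server (localhost port 1234 is the usual default), so a quick Python sanity check against it might look roughly like this; the model name and parameters are placeholders:

```python
# Hedged sketch: query LM Studio's OpenAI-compatible local server.
# Port 1234 is the usual default; LM Studio answers with whichever model is loaded.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # placeholder; the currently loaded model is used
        "messages": [{"role": "user", "content": "Summarise what self-hosting means."}],
        "temperature": 0.7,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```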

Otherwise there's Open WebUI, which I would imagine would work via Docker Compose, as I think there are ARM images for OWU and Ollama.

[–] kata1yst@sh.itjust.works 15 points 1 day ago (1 children)

I use Ollama & Open WebUI: Ollama on my gaming rig and Open WebUI as a frontend on my server.

It's been a really powerful combo!
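
A hedged sketch of what that split looks like on the wire: Open WebUI (or any script) just talks to Ollama's HTTP API on the gaming rig. The hostname and model tag below are placeholders; 11434 is Ollama's default port:

```python
# Hedged sketch: chat with a remote Ollama instance from another machine.
import requests

resp = requests.post(
    "http://gaming-rig.local:11434/api/chat",   # hypothetical hostname; 11434 is the default port
    json={
        "model": "mistral-small:24b",           # any model you have pulled
        "messages": [{"role": "user", "content": "Give me three uses for a homelab GPU."}],
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["message"]["content"])
```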

[–] kiol@lemmy.world 3 points 1 day ago (3 children)

Would you please talk more about it? I forgot about Open WebUI, but I'm intending to start playing with it. Honestly, what do you actually do with it?

[–] mac@lemm.ee 3 points 19 hours ago* (last edited 19 hours ago)

I have Linkwarden pointed at my Ollama deployment, so it auto-tags links that I archive, which is nice.

I've seen other people send images captured by their security cameras in Frigate to Ollama to get it to describe the image.
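
A hedged sketch of that idea, in case it helps: Ollama's /api/generate endpoint accepts base64-encoded images for vision-capable models. The file path, host, and model tag here are placeholders:

```python
# Hedged sketch: ask a vision model running in Ollama to describe a camera snapshot.
import base64
import requests

with open("driveway_snapshot.jpg", "rb") as f:   # hypothetical snapshot exported from the camera
    image_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llava",                        # any vision-capable model you have pulled
        "prompt": "Describe what is happening in this image in one sentence.",
        "images": [image_b64],
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])
```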

There are a bunch of other use cases I've thought of for coding projects, but I haven't started on any of them yet.

[–] Oisteink@feddit.nl 4 points 1 day ago* (last edited 1 day ago) (1 children)

I have the same setup, but it's not very usable as my graphics card has 6 GB of VRAM. I want one with 20 or 24 GB, as the 6B models are a pain and the tiny ones don't give me much.

Ollama was pretty easy to set up on Windows, and it's easy to download and test the models Ollama has available.

[–] kiol@lemmy.world 1 points 23 hours ago (1 children)

Sounds like you and I are in a similar place of testing.

[–] Oisteink@feddit.nl 3 points 23 hours ago* (last edited 23 hours ago) (1 children)

Possibly. I've been running it since last summer, but like I say, the small models don't do much good for me. I have tried Llama 3.1, OLMo 2, DeepSeek R1 in a few variants, Qwen2, Qwen2.5-Coder, Mistral, CodeLlama, StarCoder2, Nemotron-Mini, Llama 3.2, Gemma 2 and LLaVA.

I use Perplexity and Mistral as paid services, with much better quality. Open WebUI is great though, but my hardware is lacking.

Edit: saw that my mate is still using it a bit, so I'll update Open WebUI from 0.4 to 0.5.20 for him. He's a bit anxious about sending data to the cloud, so he doesn't mind the quality.

[–] Oisteink@feddit.nl 0 points 22 hours ago

Scrap that: after upgrading, it went bonkers and will always use one of my «knowledge» collections no matter what I try. The web search fails even with DuckDuckGo as the engine. It's always seemed like the UI was made by unskilled labour, but this is just horrible. 2/10, not recommended.

[–] 30p87@feddit.org 2 points 20 hours ago

Sex chats. For other uses, simple searches are better 99% of the time, and for the 1%, something like Kagi's FastGPT helps to find the correct keywords.

[–] truxnell@infosec.pub 3 points 19 hours ago (1 children)

I run Ollama and Auto1111 (Automatic1111's Stable Diffusion web UI) on my desktop when it's powered on. Open WebUI runs in my homelab, always on, and is also connected to OpenRouter. This way I can always use Open WebUI with OpenRouter models; it's pretty cheap per query and a little more private than using a big tech chatbot. And if I want local, I turn on the desktop and have local Llama and Stable Diffusion.
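
For reference, a hedged sketch of the OpenRouter side: it speaks the OpenAI-style chat API, so Open WebUI (or any script) can point at it with just an API key. The model slug below is only an example:

```python
# Hedged sketch: call OpenRouter's OpenAI-compatible chat endpoint directly.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "mistralai/mistral-small",   # example slug; pick any model OpenRouter lists
        "messages": [{"role": "user", "content": "Hello from my homelab."}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```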

I also get bugger all benefit out of it; it's a cute toy.

[–] kiol@lemmy.world 1 points 17 hours ago

How do you like Auto1111? I've never heard of it.

[–] rikudou@lemmings.world 2 points 18 hours ago (1 children)

Try running an AI Horde worker, it's a really great service!

[–] kiol@lemmy.world 1 points 17 hours ago (1 children)

Not sure I know what that is. As in Hoarder?

[–] rikudou@lemmings.world 2 points 9 hours ago

It's a cluster of workers: anyone can generate images/text using the machines connected to the service.

So if you ran a worker, people could generate stuff using your PC. For that you would gain kudos, which in turn you can use to generate stuff on other people's computers.

Basically you do two things: help people without access to powerful machines, and bank your spare capacity as kudos to spend whenever you want, even on the road where you can't turn on your PC.
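
For anyone curious what using the Horde looks like from the client side, here's a hedged sketch from memory of the public API at aihorde.net; the endpoints, fields, and the anonymous "0000000000" key (which earns no kudos) should be double-checked against the official docs:

```python
# Hedged sketch: submit an image request to the AI Horde and poll until a worker finishes it.
import time
import requests

BASE = "https://aihorde.net/api/v2"
HEADERS = {"apikey": "0000000000"}   # anonymous key; registered keys spend/earn kudos

job = requests.post(
    f"{BASE}/generate/async",
    headers=HEADERS,
    json={"prompt": "a cosy self-hosted server rack, watercolour", "params": {"n": 1}},
    timeout=60,
).json()
job_id = job["id"]

while not requests.get(f"{BASE}/generate/check/{job_id}", timeout=60).json().get("done"):
    time.sleep(5)                    # anonymous requests queue behind kudos holders

result = requests.get(f"{BASE}/generate/status/{job_id}", timeout=60).json()
print(result["generations"][0]["img"])   # link to (or data for) the finished image
```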

[–] colourlesspony@pawb.social 7 points 1 day ago (4 children)

I messed around with Home Assistant and the Ollama integration. I have passed on it and just use the default assistant with the voice commands I set up. I couldn't really get Ollama to do or say anything useful. Like, I asked it what's a good time to run on a treadmill for beginners and it told me it's not a doctor.

[–] Starfighter@discuss.tchncs.de 5 points 20 hours ago* (last edited 20 hours ago)

There are some experimental models made specifically for use with Home Assistant, for example home-llm.

Even though they are tiny (1-3B), I've found them to work much better than even 14B general-purpose models. Obviously they suck at general-purpose questions just by their size alone.

That being said they're still LLMs. I like to keep the "prefer handling commands locally" option turned on and only use the LLM as a fallback.

[–] metoosalem@feddit.org 11 points 1 day ago (1 children)

Like I asked it what's a good time to run on a treadmill for beginners and it told me it's not a doctor.

Kirkland-brand Meeseeks energy.

[–] psmgx@lemmy.world 2 points 14 hours ago

Hey now, Kirkland brand is respectable; it's usually premium brands repackaged, such as how Costco vodka was secretly ("secretly") Grey Goose.

[–] Smokeydope@lemmy.world 2 points 19 hours ago

Sounds like Ollama was loaded up with either an overly censored or a plain brain-dead language model. Do you know which model it was? Maybe try Mistral if it fits on your computer.

[–] kiol@lemmy.world 2 points 1 day ago (1 children)

Haha, that is hilarious. Sounds like it gave you some snark. AFAIK you have to clarify by asking again when it says such things: "I'm not asking for medical advice, but..."

[–] RonnyZittledong@lemmy.world 7 points 1 day ago (2 children)

None currently. Wish I could afford a GPU to play with some stuff.

[–] state_electrician@discuss.tchncs.de 1 points 4 hours ago (1 children)

Yeah. I have a mini PC with an AMD GPU. Even if I were to buy a big GPU I couldn't use it. That frustrates me, because I'd love to play around with some models locally. I refuse to use anything hosted by other people.

[–] moomoomoo309@programming.dev 1 points 3 hours ago (1 children)

Your M.2 port can probably fit an M.2-to-PCIe adapter, and you can use a GPU with that. Ollama supports AMD GPUs just fine nowadays (well, as well as it can; ROCm is still very hit or miss).

Oh, then I need to give it another try.

[–] kiol@lemmy.world 2 points 1 day ago

Well, let me know your suggestions if you wish. I took the plunge and am willing to test on your behalf, assuming I can.

[–] superglue@lemmy.dbzer0.com 1 points 17 hours ago (1 children)

Can anyone suggest a model for light coding? I'm on a 3070 mobile.

[–] ikidd@lemmy.world 1 points 16 hours ago

Claude is the standard that all others are judged by. But it's not cheap.

Gemini is pretty good, and Qwen-coder isn't bad. I'd suggest you watch a few vids on GosuCoder's YT channel to see what works for you; he reviews a pile of them and it's quite up to date.

And if you use VS Code, I highly recommend the Roo Code extension. GosuCoder also goes into revising the Roo Code prompt to reduce costs for Claude. Another extension is Cline.

[–] Grandwolf319@sh.itjust.works 4 points 23 hours ago (1 children)

I have Immich, which has AI search for my photos. Pretty useful for finding stuff, actually.

[–] gdog05@lemmy.world 1 points 4 hours ago

Once I changed the default model, Immich search became amazing. I want to show it off to people, but alas, there are way too many NSFW pics in my library. I would create a second "clean" version to show off to people, but I've been too lazy.

[–] acockworkorange@mander.xyz 1 points 17 hours ago (1 children)

I am curious about trying an application-specific AI, like one just for coding, for instance. I assume the memory requirements would be much lower.

[–] kiol@lemmy.world 1 points 17 hours ago (1 children)

AFAIK Ollama would fit that bill, but perhaps others can chime in. You could probably run it on your local computer with a small model on CPU alone.
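
As a hedged sketch of what that could look like (the model tag is just an example of a small coding-focused model; Ollama's official Python client is a pip install away):

```python
# Hedged sketch: run a small coding-oriented model on CPU through Ollama's Python client.
# pip install ollama
import ollama

response = ollama.chat(
    model="qwen2.5-coder:1.5b",   # example small model; tolerable on CPU alone
    messages=[{
        "role": "user",
        "content": "Write a Python one-liner that reverses the words in a sentence.",
    }],
)
print(response["message"]["content"])
```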

[–] acockworkorange@mander.xyz 1 points 10 hours ago

I haven't sunk much time into it, but I'm not aware of any training data focusing on code only. There's nothing preventing me from running a model trained on general-purpose data, but I imagine I'd get a snappier response from a smaller, focused model without losing accuracy.

[–] neatobuilds@lemmy.today 3 points 23 hours ago

I have Immich machine learning, and Ollama with Open WebUI.

I use Immich search a lot to find things like pictures of the side of the road to post on my community !sideoftheroad@lemmy.today

I almost never use Ollama though; I'm not really sure what to do with it other than ask it dumb questions just to see what it says.

I use the DuckDuckGo one when it automatically has an answer to something I searched, but it's not too reliable.

[–] y0shi@lemm.ee 2 points 21 hours ago (2 children)

I've an old gaming PC with a decent GPU lying around, and I've thought of doing that (I currently use it for Linux gaming and GPU-related tasks like photo editing, etc.). However, I'm currently stuck using LLMs on demand locally with Ollama. The energy cost of having it powered on all the time for on-demand queries seems a bit overkill to me…

[–] pezhore@infosec.pub 1 points 20 hours ago (1 children)

I put my Plex media server to work running Ollama - it has a GPU for transcoding that's not awful for simple LLMs.

[–] y0shi@lemm.ee 2 points 20 hours ago

That sounds like a great way of leveraging existing infrastructure! I host Plex together with other services on a server with an Intel CPU capable of transcoding. I'm quite sure I would get much better performance with the GPU machine; I might end up following this path!

[–] kiol@lemmy.world 1 points 21 hours ago

Have to agree on that. It certainly only makes sense to have it up when you are using it.

[–] Helmaar@lemmy.world 2 points 22 hours ago (1 children)

I was able to run a distilled version of DeepSeek on Linux. I ran it inside a Podman container with ROCm support (I have an AMD GPU). It wasn't super fast, but for a locally deployed and self-hosted option the performance was okay. Apart from that, I have deployed Fooocus for image generation in a similar manner. Currently, I am working on deploying Stable Diffusion with either ComfyUI or Automatic1111 inside a Podman container with ROCm support.

[–] kiol@lemmy.world 2 points 21 hours ago

Didn't know about these image generation tools, besides Stable Diffusion. Thanks!