this post was submitted on 22 Jul 2023
Selfhosted


https://github.com/noneabove1182/text-generation-webui-docker (updated to 1.3.1 and has a fix for gqa to run llama2 70B)

https://github.com/noneabove1182/lollms-webui-docker (v3.0.0)

https://github.com/noneabove1182/koboldcpp-docker (updated to 1.36)

All should include up-to-date instructions. If you find any issues, please ping me immediately so I can take a look, or open an issue :)

top 7 comments
noneabove1182@sh.itjust.works 8 points 1 year ago (last edited 1 year ago)

lollms-webui is the jankiest of the images, but it's newish to the scene and I'm working with the dev a bit to get it nicer (the main current problem is the requirement for CLI prompts, which he'll be removing). Koboldcpp and text-gen are in a good place though; I'm happy with how those are running.

Speculater@lemmy.world 5 points 1 year ago

Thanks! I'll check these out when I get to my server. I host a small LLM that helps bots sound more human while doing trivial tasks on Twitch.

fhein@lemmy.world 5 points 1 year ago

Awesome work! Going to try out koboldcpp right away. Currently running llama.cpp in Docker on my workstation, because it would be such a mess to get the CUDA toolkit installed natively...

Out of curiosity, isn't conda a bit redundant in docker since it already is an isolated environment?

noneabove1182@sh.itjust.works 2 points 1 year ago (last edited 1 year ago)

Yes, that's a good candidate for an FAQ, because I get this question a lot and it's a very good one, haha. The reason I use it is image size: the base nvidia devel image is needed for a lot of compilation during Python package installation and is huge, so instead I build the environment with conda and transfer it to the nvidia-runtime image, which is... also pretty big, but it saves several GB of space, so it's a worthwhile hack :)

But yes, avoiding CUDA messes on my bare machine is definitely my biggest motivation.
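The trick described above can be sketched as a multi-stage Dockerfile: heavy Python packages get compiled in the large -devel image, and only the finished conda environment is copied into the smaller -runtime image. Everything here (image tags, package list, entrypoint) is an illustrative guess, not the actual files from these repos:

```dockerfile
# Build stage: the -devel image carries the CUDA headers and compilers
# that pip needs when building GPU packages from source.
FROM nvidia/cuda:11.8.0-devel-ubuntu22.04 AS build
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates \
    && curl -fsSL -o /tmp/miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && bash /tmp/miniconda.sh -b -p /opt/conda
ENV PATH=/opt/conda/bin:$PATH
# Hypothetical package set; installed while the build toolchain is available
RUN pip install --no-cache-dir torch transformers

# Final stage: the -runtime image has no compilers, so it is several GB smaller.
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04
# Only the finished conda environment is carried over
COPY --from=build /opt/conda /opt/conda
ENV PATH=/opt/conda/bin:$PATH
CMD ["python", "server.py"]
```

The devel toolchain never reaches the final image, which is where the several-GB saving mentioned above comes from.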

fhein@lemmy.world 1 point 1 year ago

Ah, nice.

Btw, perhaps you'd like to add:

build: .

to docker-compose.yml, so you can just write "docker-compose build" instead of having to do it with a separate docker command. I would submit a PR for it, but I've made a bunch of other changes to that file, so it's probably faster if you do it.
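For reference, the suggestion would look roughly like this in docker-compose.yml (service name, image tag, and port are illustrative, not taken from the actual repo):

```yaml
services:
  koboldcpp:
    build: .                      # build from the Dockerfile in this directory
    image: koboldcpp-docker:latest
    ports:
      - "5001:5001"
```

With build: set, docker-compose build (or docker-compose up --build) rebuilds the image directly, with no separate docker build invocation needed.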

Madiator2011@lm.madiator.cloud 2 points 1 year ago

I would love to have some GUI with optional vector database support that I could feed my docs into.

Falcon@lemmy.world 1 point 8 months ago

You want h2oGPT, or just use LangChain with a CLI.
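As a toy illustration of the "feed my docs into a vector database" idea from the comment above: index documents as vectors, then rank them by similarity to a query. This is a pure-stdlib sketch with a fake bag-of-words "embedding"; a real setup like h2oGPT or a LangChain pipeline would use an actual embedding model and vector store.

```python
import math

def embed(text):
    # Toy "embedding": bag-of-words counts (a real setup uses a neural model)
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical document collection
docs = [
    "docker compose builds the image locally",
    "conda keeps python environments isolated",
    "koboldcpp serves ggml models over http",
]
index = [(d, embed(d)) for d in docs]

def search(query, k=1):
    # Return the k documents most similar to the query
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

print(search("isolated python environments"))
# → ['conda keeps python environments isolated']
```

The same indexing/ranking shape is what a vector database does at scale, just with learned embeddings and approximate nearest-neighbor search instead of exact cosine over a list.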
