Ask Lemmy
A Fediverse community for open-ended, thought provoking questions
Rules:
1) Be nice and have fun
Doxxing, trolling, sealioning, racism, and toxicity are not welcome in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them.
2) All posts must end with a '?'
This is sort of like Jeopardy: please phrase all post titles in the form of a proper question ending with a '?'.
3) No spam
Please do not flood the community with nonsense. Suspected spammers will be banned on sight. No astroturfing.
4) NSFW is okay, within reason
Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed; please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com.
NSFW comments should be restricted to posts tagged [NSFW].
5) This is not a support community.
It is not a place for 'how do I?' type questions.
If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.
6) No US Politics.
Please don't post about current US Politics. If you need to do this, try !politicaldiscussion@lemmy.world or !askusa@discuss.online
Reminder: The terms of service apply here too.
I'm using local open-source LLMs like DeepSeek, Gemma, and Phi for translation, and they are very similar to ChatGPT.
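For anyone who wants to script this kind of thing, here's a minimal sketch of a translation call, assuming the model is served through something like Ollama on its default port (the model name and prompt are just placeholders, swap in whatever you actually run):

```python
# Minimal translation call against a local Ollama server
# (assumes Ollama is running on the default port 11434 and a model
# has been pulled, e.g. `ollama pull gemma2`).
import requests

def translate(text, target_lang="English", model="gemma2"):
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": f"Translate the following text into {target_lang}. "
                      f"Reply with only the translation:\n\n{text}",
            "stream": False,  # return one JSON object instead of a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

print(translate("Bonjour, comment allez-vous ?"))
```

Nothing leaves your machine; the request only goes to the local server.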
What does your hardware setup look like, if you don’t mind me asking?
I’m thinking of building something, but I don’t want to spend a fortune if I can help it. I run Llama on a Mac Mini, which works fine, but I’m not able to run the bigger models on that.
I have an RTX 3060 because it has 12 GB of VRAM, so I can run 14B models on it quickly.
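As a rough back-of-the-envelope check on why 12 GB is enough for a 14B model, assuming ~4-bit quantization (which most local runners default to; real usage also depends on context length and the runtime's overhead):

```python
# Rough VRAM estimate for a 4-bit quantized 14B model.
# Ballpark numbers, not exact measurements.
params = 14e9           # 14 billion parameters
bytes_per_param = 0.5   # ~4-bit quantization (Q4)
overhead = 1.2          # fudge factor for KV cache / runtime overhead

vram_gb = params * bytes_per_param * overhead / 1024**3
print(f"~{vram_gb:.1f} GB")  # roughly 8 GB, comfortably under 12 GB
```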
Couldn't tell you, I've never used any of them.
Very easy to get started. Make sure you have a graphics card with ideally more than 6 GB of VRAM (the more the better), grab LM Studio (https://lmstudio.ai/), and then under the Discover section you can grab a local model.
These run locally on your PC and don't touch the internet, hence the more VRAM you have, the faster they go and the larger the models you can run.
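If you later want to call it from your own scripts, LM Studio can also expose a local OpenAI-compatible server (you enable it in the app). A minimal sketch, assuming the server is on its default port 1234; the model name below is just an example, use whichever one you loaded:

```python
# Query LM Studio's local OpenAI-compatible server.
# Requires `pip install openai`; the API key can be any string.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

reply = client.chat.completions.create(
    model="gemma-2-9b-it",  # example name; use whatever model you loaded
    messages=[
        {"role": "system", "content": "You are a careful translator."},
        {"role": "user", "content": "Translate 'guten Morgen' into English."},
    ],
)
print(reply.choices[0].message.content)
```

Everything still runs locally, the same as the chat window in the app.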
I have a 10-year-old laptop with integrated graphics running Debian Stable, so I don't think I'll be using a local LLM any time soon, haha. I tell my students I don't want them to use any of these tools, so I don't use them either.
I have no idea how you use this and stay sane, but 🫡 to you, sir.
It's what I've been using since the early 2000s: whatever laptop I can get for free, running boring Linux. I teach all my classes across multiple establishments with it. The battery still lasts over 9 hours. Beats the Raspberry Pi I used as a computer for 6 months!
I do have a colleague who installed one of the LLMs on their computer to play around with translation and live subtitles, and another who claims ChatGPT taught him French. Maybe there is something to it, but I draw the line at using AI because, as I said, I forbid it in my classes.
You probably don't need to translate things around 15 times a day like I do.