this post was submitted on 26 Aug 2023
80 points (96.5% liked)

Seems like an extremely useful tool; it just needs to not be Google. Any alternative recommendations are welcome!

j4k3@lemmy.world 6 points 1 year ago

I'm just a hobbyist and not familiar with anything specific or prepackaged in an app, but there are probably examples posted in the projects section of Hugging Face (roughly the GitHub-plus-dev-social platform for AI). I'm not sure what is really possible locally as far as machine vision + text recognition + translation goes; I think it would be very difficult to build an accurate model that fits in the limited system memory of a phone. I'm also not sure what, if anything, Google is offloading onto their servers to make this happen, or whether they are tuning the hell out of a model to get it small enough. There is a reason the Pixel line has an SoC called Tensor (it is designed for AI workloads), but I haven't explored models or toolchains for mobile deployment.
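
If you want to experiment with the general idea on a desktop first, here is a minimal sketch of chaining local OCR with a local translation model. It assumes a system install of the Tesseract engine plus the `pytesseract`, `pillow`, `transformers`, and `sentencepiece` packages, and the Helsinki-NLP opus-mt German-to-English weights from Hugging Face; the file name and language pair are placeholders, and this is not how Google Lens works internally.

```python
# Hypothetical sketch: offline "photo of text -> translated text" pipeline.
# Assumes a system install of Tesseract OCR plus:
#   pip install pytesseract pillow transformers sentencepiece
from PIL import Image
import pytesseract
from transformers import pipeline


def translate_image(path: str) -> str:
    # Step 1: OCR the photo into plain text (Tesseract runs fully offline;
    # "deu" is its language code for German).
    text = pytesseract.image_to_string(Image.open(path), lang="deu")

    # Step 2: translate with a small local seq2seq model from Hugging Face.
    # Helsinki-NLP/opus-mt-de-en downloads once and then runs locally.
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")
    return translator(text, max_length=512)[0]["translation_text"]


if __name__ == "__main__":
    # Placeholder image path for illustration.
    print(translate_image("menu_photo.jpg"))
```

Even this two-step toy pulls in a few hundred megabytes of weights plus the OCR engine, which is roughly why I'd expect an on-device phone version to need aggressive quantization (TensorFlow Lite / ONNX style) or some server offload.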